{"id":4234,"date":"2023-08-26T00:41:54","date_gmt":"2023-08-26T00:41:54","guid":{"rendered":"https:\/\/www.clouddatainsights.com\/?p=4234"},"modified":"2023-08-26T00:42:00","modified_gmt":"2023-08-26T00:42:00","slug":"whats-so-amazing-about-chatgpt-a-quick-recap-of-large-language-models","status":"publish","type":"post","link":"https:\/\/www.clouddatainsights.com\/whats-so-amazing-about-chatgpt-a-quick-recap-of-large-language-models\/","title":{"rendered":"What’s So Amazing About ChatGPT? A Quick Recap of Large Language Models"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1000\" height=\"582\" src=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/08\/Depositphotos_202181990_S.jpg\" alt=\"large language models\" class=\"wp-image-4235\" srcset=\"https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/08\/Depositphotos_202181990_S.jpg 1000w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/08\/Depositphotos_202181990_S-300x175.jpg 300w, https:\/\/www.clouddatainsights.com\/wp-content\/uploads\/2023\/08\/Depositphotos_202181990_S-768x447.jpg 768w\" sizes=\"(max-width: 1000px) 100vw, 1000px\" \/><figcaption class=\"wp-element-caption\"><em>Large language models allow machines to reach human-like generation of content and language understanding.<\/em><\/figcaption><\/figure><\/div>\n\n\n<p>ChatGPT is all over the news and heavily featured in new business capabilities in company newsletters everywhere. Its capabilities seem astounding \u2014 writing emails that sound like you, reading more books in one minute than humans could read in a lifetime. It can draft reports and answer complex queries. This is the era of the large language model (LLM).<\/p>\n\n\n\n<p>We’re now at the cutting-edge intersection of business, technology, and linguistics. For companies seeking a competitive edge, understanding these behemoths of the AI world isn’t just beneficial\u2014it’s essential. Dive in as we unravel the magic behind these digital wordsmiths and the new paradigms they’re setting for future business operations.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What are large language models?<\/h3>\n\n\n\n<p>LLMs are machine learning models designed to understand and generate human language. While “understand” should probably go in quotations, these models do understand one thing: patterns. Thanks to a massive number of parameters ranging from hundreds of millions to hundreds of billions, these algorithms can capture and process intricate patterns and nuances of language like never before.<\/p>\n\n\n\n<p>If you’ve ever used <a href=\"https:\/\/www.clouddatainsights.com\/first-steps-toward-leveraging-enterprise-chatgpt\/\">ChatGPT<\/a> or a similar LLM, you might be fooled into thinking the machine is sentient. That isn’t true yet. But these LLMs are doing something truly remarkable by understanding and predicting patterns in language that read as close to human comprehension and generation as possible without sentience. It’s a new world.<\/p>\n\n\n\n<p>There are different types of large language models:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Tr<strong>ansformer Models<\/strong>: Introduced in the paper “Attention is All You Need” by Vaswani et al, these use self-attention mechanisms to weigh input tokens differently, allowing for dynamic relationships between different parts of an input sequence. 
### What's the difference between LLMs and natural language processing (NLP)?

NLP and LLMs are related concepts in the field of artificial intelligence.

Natural language processing is a broad field focused on the interaction between computers and humans through natural language. The primary goal is to enable computers to understand, interpret, and generate human language in ways that are meaningful and useful. LLMs are specific types of machine learning models designed to understand and generate human language; they're a subset of the models and techniques used in NLP.

NLP encompasses a wide range of tasks and also covers foundational topics like linguistics, semantics, and syntax. LLMs primarily focus on understanding context from vast amounts of text and generating coherent, contextually relevant content. Where NLP can be applied to anything from simple to complex tasks, LLMs are typically used for more complex understanding and generation on par with a well-informed human.

NLP also has a long history, tracing back to the early days of computer science. Early NLP relied on hand-crafted rules and, later, statistical or neural methods. LLMs are a more recent evolution of NLP, using deep learning models to mimic human communication and understanding.
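For a feel of how far the field has come, the toy sketch below mimics the hand-crafted, rule-based style of early NLP; it's an invented example, and the `llm` helper in the final comment is a hypothetical stand-in for whichever model client you use.

```python
# A hand-crafted, rule-based sentiment check in the spirit of early NLP systems.
POSITIVE = {"great", "excellent", "love", "wonderful"}
NEGATIVE = {"terrible", "awful", "hate", "broken"}

def rule_based_sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(rule_based_sentiment("I love this phone, the camera is excellent"))  # positive
print(rule_based_sentiment("not terrible but not great either"))           # neutral, though a human hears the hedging

# A modern LLM handles the same request from a plain-language prompt instead of rules:
# reply = llm("Classify the sentiment of: 'not terrible but not great either'")
```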
### How do large language models mimic humans?

The ability of an LLM to mimic human text, speech, and understanding comes from extensive training. It's important to note that while it may seem like LLMs think like humans, they don't "understand" language or concepts the way humans do. Their "knowledge" is pattern recognition derived from vast amounts of data, devoid of true consciousness, emotions, or innate understanding.

That said, these models mimic human language understanding and generation through a combination of vast amounts of data, intricate architecture, and advanced training methods. Here's how they approach human-like linguistic capabilities.

#### Massive training

LLMs are trained on enormous datasets, much of which comes from the internet: books, articles, websites, social media, and other forms of written input. These inputs expose the model to a diverse range of topics, contexts, and writing styles. By processing this data, models learn grammar, idioms, facts, reasoning patterns, and even some of the biases present in the texts.

Models are "pre-trained" on this material to learn fundamental language tasks. Once successfully pre-trained, they can be adapted to task-specific use cases:

- **Fine-tuning**
  - **Concept**: Once an LLM has been pre-trained on a large corpus, it can be further trained (fine-tuned) on a smaller, task-specific dataset.
  - **Use**: A standard approach for adapting a general-purpose model to a specific task, like sentiment analysis or named entity recognition.
- **In-context learning**
  - **Concept**: Instead of fine-tuning, the model uses the context provided in the prompt to guide its responses. Essentially, you give the model a bit of guidance through the input to achieve the desired output.
  - **Use**: Useful when you want to steer the model's behavior without fine-tuning it on new data, e.g., asking GPT-3 to "Translate the following English text to French: …"
- **Zero-/one-/few-shot learning**
  - **Concept**: The model's ability to perform a task with no examples (zero-shot), one example (one-shot), or a handful of examples (few-shot) to guide it.
  - **Zero-shot**: You ask the model to perform a task without providing any examples, e.g., "Translate the following text into French: …"
  - **One-shot**: You provide one example to guide the model.
  - **[Few-shot](https://www.rtinsights.com/few-shot-domain-adaptation-could-make-ai-deployment-easier/)**: You give multiple examples to help the model generalize the task, e.g., providing several translation pairs before asking for a new translation.
  - **Use**: Allows the model to tackle tasks it wasn't explicitly fine-tuned on.
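In practice, zero-shot and few-shot use often comes down to nothing more than how the prompt is written. Here's a minimal sketch; the `generate` call is a hypothetical placeholder for whichever LLM client you use.

```python
# Zero-shot: describe the task and nothing else.
zero_shot_prompt = "Translate the following English text to French: 'Where is the train station?'"

# Few-shot: show a handful of worked examples so the model can infer the pattern.
few_shot_prompt = """Translate English to French.

English: Good morning -> French: Bonjour
English: Thank you very much -> French: Merci beaucoup
English: Where is the train station? -> French:"""

# Hypothetical client call; substitute openai, transformers, or any other LLM API here.
# print(generate(zero_shot_prompt))
# print(generate(few_shot_prompt))
```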
#### Deep learning architecture

One of the more common architectures, the [Transformer architecture](https://developers.google.com/machine-learning/resources/intro-llms), excels at handling sequential data like text. Instead of processing each word in isolation, it employs attention mechanisms that let the model focus on different parts of an input text. This is a lot like how humans pay attention to specific words or phrases when comprehending language.

##### The role of context

The ability to understand context is a feature of specific deep learning architectures. The Transformer architecture, for example, uses self-attention mechanisms that weigh input tokens (a token is typically a chunk or unit of text the model processes) differently, allowing the model to focus on different parts of the input for different tasks. This mechanism is central to models like BERT, GPT, and their derivatives, enabling them to achieve state-of-the-art performance on many NLP tasks.

In contrast, earlier NLP approaches like static word embeddings offered a fixed representation for each word, irrespective of its context. Modern Transformer-based models provide dynamic word representations based on context, capturing nuances like polysemy, where a word can have multiple meanings depending on its usage.

##### The role of transfer learning

[Transfer learning](https://vitalflux.com/large-language-models-concepts-examples) is where these models took a real turn toward human-like performance. Transfer learning is a technique in which a model developed for one task is reused (or "transferred") as the starting point for a model on a second task, leveraging the knowledge gained from the initial task to improve learning on the new one. Sound familiar? That's how humans learn to do new things, too.

In previous iterations of artificial intelligence, a system would need to start from scratch every time it learned a new task. Human children, in contrast, learn to hold something (a bottle, maybe, or a rattle) and then transfer that knowledge to holding other things. It's this ease that researchers wanted to replicate in large language models.

Transfer learning isn't inherently a part of the architecture but a training strategy. However, deep learning architectures, especially large neural networks, have made transfer learning particularly effective. For instance, models like BERT are pre-trained on a massive corpus to learn general language understanding and can then be fine-tuned on smaller, task-specific datasets.

This approach has become standard in many NLP tasks because training large models from scratch is computationally expensive and may require data resources that aren't always available. Now that these models are capable of transfer learning, we're getting domain-specific models.
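Fine-tuning a pre-trained model on a small labeled dataset is the textbook transfer-learning move. Below is a minimal sketch using the Hugging Face `transformers` and `datasets` libraries; the model, dataset, and hyperparameters are illustrative choices, not recommendations from this article.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Start from a model pre-trained on general text, then adapt it to sentiment classification.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")                        # small task-specific dataset (movie-review sentiment)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)
train_subset = tokenized["train"].shuffle(seed=42).select(range(2000))  # keep the demo small

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-imdb-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_subset,
)
trainer.train()                                       # the pre-trained weights are the starting point, not random ones
```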
### What are some examples of large language models?

Here are a few examples of LLMs making news right now, plus some that continue to transform how we approach natural language processing.

#### GPT-4

[OpenAI's GPT-4](https://openai.com/research/gpt-4) was unveiled in March 2023 and has astonished just about everyone. It shows a capacity for complex reasoning that goes beyond mere text, and it has demonstrated potential for sophisticated coding capabilities (albeit with some controversy). It's the first GPT model to incorporate multimodal capabilities, accepting both text and images. ChatGPT is the most salient example of GPT-4 in action, although without the multimodal capabilities. However, Bing Chat has rolled out this capability for select users.

And obviously, if we're talking about GPT-4, we need to give a nod to previous versions. OpenAI first released GPT-3 in 2020, and GPT-3.5 powers the current version of [ChatGPT](https://www.clouddatainsights.com/are-you-ready-is-your-business-ready-for-chatgpt/) for most users.

#### LaMDA

[Language Model for Dialog Applications](https://blog.google/technology/ai/lamda/) (LaMDA) is a family of LLMs developed by Google, built as decoder-only transformer language models. Google pre-trained LaMDA on a large corpus of text, but many people may remember it for more sensational reasons: a former Google engineer went public claiming the program wasn't just human-like but actually sentient.

#### BERT

[Bidirectional Encoder Representations from Transformers](https://research.google/pubs/pub47751/) (BERT) is another Google LLM family, one that turns sequences of text into rich contextual representations. It's famous for its bidirectional transformers, which consider context from both directions (left-to-right and right-to-left) at once. That gives BERT a deeper understanding of what words mean in a sentence and how they relate to each other, something handy in sentiment analysis. You might know it best from a [2019 update](https://blog.google/products/search/search-language-understanding-bert/) to Google Search.
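You can get a feel for BERT's bidirectional pre-training objective with a fill-mask query. Here's a minimal sketch using the Hugging Face `transformers` pipeline; the model choice and sentence are purely illustrative.

```python
from transformers import pipeline

# BERT was pre-trained to predict a hidden token using context from both directions.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("The river [MASK] overflowed after days of heavy rain."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Because the model reads the words on both sides of the mask, its top suggestions shift with the surrounding context, which is the bidirectionality described above.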
#### LLaMA

[Large Language Model Meta AI (LLaMA)](https://ai.meta.com/blog/large-language-model-llama-meta-ai/) was trained on a variety of public data sources, and its largest version has 65 billion parameters. Although originally released only to approved researchers and developers, it was leaked and is now treated as an open-source model.

#### Orca

Developed by Microsoft, [Orca's](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/) relatively small size (13 billion parameters) makes it light enough to (theoretically) run on something like a laptop. It's built on top of a smaller-parameter version of LLaMA but can imitate and learn from much [larger models like GPT-4](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/). It's open source and currently making waves in the research world.

#### PaLM 2

[Pathways Language Model (PaLM) 2](https://ai.google/discover/palm2/) is the next generation of PaLM, an LLM designed to generalize across domains and tasks. It's an expansive iteration, reportedly built with over 500 billion parameters, and shows more promise on traditionally tricky language tasks like riddles, idioms, and other nuanced texts across multiple languages. It powers Google's Bard, and users can also test it on Google's Vertex AI platform.

#### phi-1

[phi-1](https://arxiv.org/abs/2306.11644) is a Microsoft LLM making news for its miniature size of 1.3 billion parameters. It was trained in just four days on a collection of textbook-quality data, a testament to the power of truly high-quality data combined with synthetic data. It has fewer general capabilities but signals a trend toward LLMs scaling down.

#### BLOOM

[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom) is a massive, open-source model trained on text in 46 natural languages and 13 programming languages. It was a [large collaborative project](https://www.rtinsights.com/bloom-open-source-ai-model/) cofounded by teams from HuggingFace, NVIDIA, Microsoft, and others, and developed by over 1,000 researchers for the purpose of creating an open-source resource.

### The road ahead for LLMs

These are far from the only LLMs out there, and we'll continue to see more. Large language models are demonstrating a profound ability to grasp human language and generate content. We've already taken huge leaps in artificial intelligence and brought AI capabilities to the masses; we'll continue to see LLMs niche down, become more efficient, and integrate with more of our everyday tasks. However, ongoing questions about ethical use and a clear understanding of their limitations will define the years ahead.