Our guide to the words and abbreviations of generative AI

Over the last couple of years, you, like us, will have heard about ChatGPT as often as almost anything else: it has been prominent in the news cycle since its public launch in December 2022. Related terms, including “prompt engineering”, “large language model” and “GenAI”, were added to the Cambridge Dictionary last year, such was the speed at which the new technology, and the way we talk about it, permeated our consciousness.

But with so many new phrases — and, more intimidatingly, so many new abbreviations — to take in, the subject can become confusing if you are not working with it day-to-day.

So, welcome to our guide to GenAI, LLMs and NLP — plus everything else you need to know.

Introduction

By now, we have all had an abrupt introduction to generative AI, accompanied by the full range of media reaction: from awe at its seemingly impossible abilities to scare stories about it precipitating the end of civilisation.

Artificial intelligence, of course, has a much longer history than that. The concept of breaking down human capabilities like vision, communication and reasoning, and mechanising them so non-living objects can perform tasks for us, is one that stretches back to ancient times.

But in the middle of the 20th century, serious research into the topic began, as then-modern computers allowed for greater exploration of automating human tasks.

Early programs like ELIZA (an artificial psychotherapist, developed in the 1960s to explore communication between humans and machines) and prototype driverless cars showed the capacity of AI to begin mimicking human skills like communication and sight.

NLP

The ability to understand language is among the most essential of human capabilities — and the field of natural language processing (NLP) is the AI domain which specialises in this (and it’s what we do).

LLMs

Attempts to model language have taken many forms, from mathematically driven formal linguistics to training machines to learn from a body of text, resulting in the so-called language model. As increases in computing power have allowed that latter approach to encompass huge amounts of textual information, these large language models (LLMs) have reached levels of performance close to human understanding of language.

Extractive and generative tasks

Language models, whether small or large, use the patterns present in their training data to work with the input they are given and predict the words that follow. They are also tuned to avoid providing harmful responses.
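A toy illustration of that idea, using simple bigram counts in place of a neural network; real LLMs learn far richer patterns from vastly more text, but the core job, predicting the next word from what came before, is the same:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny
# training corpus, then predict the most likely next word.
corpus = (
    "the model reads the text and the model predicts the next word"
).split()

# Count bigram transitions: word -> Counter of the words seen after it
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "model": it follows "the" most often here
```

Swap the bigram counts for a neural network trained on a sizeable fraction of the internet, and you have the basic recipe behind today's LLMs.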

While language models can be useful for a wide range of tasks, they become much more relevant to industry-specific jobs when they are trained further on task-specific data. Doing this narrows the scope of what is expected of a model, and pushes it to do more focused work to a higher standard.

This could include extractive tasks, which use NLP to identify the most relevant information to answer a query. One example is identifying the people mentioned in a text, a task known as named entity recognition.

Or they could be generative tasks — so called because they are generating new content — like summarising emails.

Text classification

Building AI applications often works best when problems are broken down into smaller steps. For example, if you wanted to assist a user in identifying the diet plan which is best for them, you might want to break this down into different generative and extractive tasks: first, an extractive step to identify the most appropriate diet plan from a database; then a generative step to write a response to the user that presents and explains that plan.

The extractive task — alongside others like finding the appropriate category for a product — has been the most common use of AI and NLP in recent years, and jobs like this are known as text classification. Given a text input, we mark it as belonging to a given class — in this case, “suitable” or “not suitable” for each diet plan.

Generative AI

As LLMs have recently hit very high benchmarks in understanding human language, they have also enabled AI tools that perform a wide range of generative tasks. Taken together, this field is known as generative AI, and it covers the automated creation of new content across text, image, video, audio and even code.

RAG

In customising large models, one of the most practical use cases is tying them to your own data. Through this, the language capabilities of the LLM can be harnessed to make data easier to find and understand, keeping knowledge at a user’s fingertips.

This process has four stages: indexing, or preparing documents to be understood by an LLM; retrieval, in which the most relevant documents are identified; augmentation, with the proprietary data informing the LLM’s response; and generation, the preparation of a natural-language response containing the relevant information for the user. The final three stages give this process its name: retrieval-augmented generation, or RAG.
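The four stages can be sketched in a few lines, with a stub standing in for the real LLM call; the documents, the query and the `call_llm` function are all illustrative assumptions, not any particular product's API:

```python
# A minimal RAG sketch: index, retrieve, augment, generate.

# 1. Indexing: represent each document as a set of lowercase terms.
documents = {
    "holiday_policy": "Employees receive 25 days of paid holiday per year.",
    "expenses_policy": "Travel expenses are reimbursed within 30 days.",
}
index = {name: set(text.lower().split()) for name, text in documents.items()}

def retrieve(query):
    """2. Retrieval: pick the document sharing the most terms with the query."""
    terms = set(query.lower().split())
    return max(index, key=lambda name: len(index[name] & terms))

def call_llm(prompt):
    """Stub standing in for a real LLM; echoes the prompt for inspection."""
    return f"LLM answer based on: {prompt}"

def rag(query):
    doc = retrieve(query)
    # 3. Augmentation: splice the retrieved text into the prompt.
    prompt = f"Context: {documents[doc]}\nQuestion: {query}"
    # 4. Generation: the LLM writes a natural-language answer.
    return call_llm(prompt)

print(rag("how many days of holiday do employees get"))
```

Production systems replace the term-overlap retrieval with vector search over embeddings, but the division of labour between the four stages is the same.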

This differs from the specialised training that smaller models often require to achieve good performance: we don’t change the model itself, which saves a lot of up-front time and money, but only the input we give it.

Further reading

Hopefully, the glossary above has been of some use to you and given you a better understanding of how we talk about generative AI. If you would like to know more about how we think about AI, you might want to look at our blogs on the subject.

Or if all this talk of generative AI has whetted your appetite, why not check out our post on how to get started with it?

As experts in the field, we know what we’re talking about when it comes to applying generative AI to industry. Why not have a look at this example from healthcare? Or, if you have any questions, get in touch.


Our guide to the words and abbreviations of generative AI was originally published in MantisNLP on Medium, where people are continuing the conversation by highlighting and responding to this story.

