What Is Generative AI? | Meaning & Examples
Generative AI is the use of artificial intelligence (AI) systems to generate original media such as text, images, video, or audio in response to prompts from users. Popular generative AI applications include ChatGPT, Bard, DALL-E, and Midjourney.
Most generative AI is powered by deep learning technologies such as large language models (LLMs). These are models trained on a vast quantity of data (e.g., text) to recognise patterns so that they can produce appropriate responses to the user’s prompts.
This technology has seen rapid growth in sophistication and popularity in recent years, especially since the release of ChatGPT in November 2022. The ability to generate content on demand has major implications in a wide variety of contexts, such as academia and creative industries.
How does generative AI work?
Generative AI is a broad concept that can theoretically be approached using a variety of different technologies. In recent years, though, the focus has been on the use of neural networks, computer systems that are designed to imitate the structures of brains.
Highly complex neural networks are the basis for large language models (LLMs), which are trained to recognise patterns in a huge quantity of text (billions or trillions of words) and then reproduce them in response to prompts (text typed in by the user).
An LLM generates each word of its response by looking at all the text that came before it and predicting a word that is relatively likely to come next based on patterns it recognises from its training data. You can think of it as a supercharged version of predictive text. The fact that it generally works so well seems to be a product of the enormous amount of data it was trained on.
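The "supercharged predictive text" idea can be illustrated with a toy model that simply counts which word follows which in a tiny corpus. Real LLMs learn far richer patterns with neural networks trained on billions of words; the corpus and code here are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count, for each word in a tiny corpus,
# how often each other word follows it, then predict the most
# frequent follower. LLMs do something loosely analogous, but with
# deep neural networks over vastly more text and context.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # cat
print(predict_next("sat"))  # on
```

A model like this can only echo patterns it has seen, which is also why the prediction-based approach can "go wrong": the most statistically likely continuation is not always the factually correct one.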
LLMs, especially a specific type of LLM called a generative pre-trained transformer (GPT), are used in most current generative AI applications – including many that generate something other than text (e.g., image generators like DALL-E). This means that things like images, music, and code can be generated based only on a text description of what the user wants.
Types of generative AI
Generative AI has a variety of different use cases and powers several popular applications. The table below indicates the main types of generative AI application and provides examples of each.
| Type of application | Description | Examples |
|---|---|---|
| Text generation | Chatbots, text generators, or AI writing tools generate new text based on a prompt from the user, whether this is an answer to the user’s question or, for example, a summary, translation, or paraphrase of the prompt. Sometimes, these chatbots are integrated into search engines to provide a more advanced search experience. | ChatGPT, QuillBot Paraphraser, Scribbr Text Summariser, Bard, Bing AI, DeepL Translator |
| Code generation | These tools generate programming code based on a text description of what the user wants, or suggest completions for code the user is writing. | OpenAI Codex, GitHub Copilot |
| Image generation | LLMs have been found to be surprisingly versatile and can sometimes also be used in a modified form to generate images rather than text. These apps generally take a text-based prompt from a user (e.g., “The Mona Lisa in the style of Van Gogh”) and turn it into an image. Some instead modify user-submitted images. | DALL-E, Midjourney, Stable Diffusion, Prisma |
| Video generation | Generative AI applications that can create whole videos have also started to appear. These videos are not necessarily very smooth or coherent yet, but the technology is improving. | Synthesia, Make-a-Video, Gen-2 |
| Audio generation | Generative AI is starting to be used to generate music and synthesised voices. For example, these tools might create a song based on a text description or generate audio of a specific voice reading the words the user inputs. | MusicLM, MusicGen, MuseNet, Murf AI |
| Other applications | The application of generative AI is also being explored in other contexts. It has potential, for example, in the hard sciences (e.g., predicting protein structures) and in robotics (e.g., turning text prompts into actions carried out by the robot). Further applications will emerge over time. | – |
Strengths and limitations of generative AI
Generative AI is a powerful and rapidly developing field of technology, but it’s still a work in progress. It’s important to understand what it excels at and what it tends to struggle with so far.
Strengths
- Generative AI technology is often flexible and can generalise to a variety of tasks rather than specialising in just one. This opens up opportunities to explore its use in a wide range of contexts.
- This technology can make any business processes that involve generating text or other content (e.g., writing emails, planning projects, creating images) dramatically more efficient, allowing small teams to accomplish more and bigger teams to focus on more ambitious projects.
- Generative AI tools allow non-experts to approach tasks they would normally be unable to handle. This allows people to explore areas of creativity and work that were previously inaccessible to them.
Limitations
- Generative AI models often hallucinate – for example, a chatbot’s answers might be factually incorrect, or an image generator’s outputs might contain incongruous details like too many fingers on a person’s hand. Outputs should always be checked for accuracy and quality.
- These tools are trained on datasets that may be biased in various ways (e.g., sexism), and the tools can therefore reproduce those biases. For example, an image generator asked to provide an image of a CEO may be more likely to show a man than a woman.
- Although they’re trained on large datasets and draw on all that data for their responses, generative AI tools generally can’t tell you what sources they’re using in a specific response. This means it can be difficult to trace the sources of, for example, factual claims or visual elements.
Implications of generative AI
The rise of generative AI raises a lot of questions about the effects – positive or negative – that different applications of this technology could have on a societal level. Commonly discussed issues include:
- Jobs and automation: Many people are concerned about the effects of generative AI on various creative jobs. For example, will it be harder for illustrators to find work when they have to compete with image generators? Others claim that these tools will force various industries to adapt but also create new roles as existing tasks are automated.
- Effects on academia: Many academics are concerned about students using ChatGPT to cheat and about the lack of clear guidelines on how to approach these tools. University policies on AI writing are still developing.
- Plagiarism and copyright concerns: Some argue that generative AI’s use of sources from its training data should be treated as plagiarism or copyright infringement. For example, some artists have attempted legal action against AI companies, arguing that image generators use elements of their work and stylistic approach without acknowledgement or compensation.
- Fake news and scams: Generative AI tools can be used to deliberately spread misinformation (e.g., deepfake videos) or enable scams (e.g., imitating someone’s voice to steal their identity). They can also spread misinformation by accident if people assume, for example, that everything ChatGPT claims is factually correct without checking it against a credible source.
- Future developments: There is a lot of uncertainty about how AI is likely to develop in the future. Some argue that the rapid developments in generative AI are a major step towards artificial general intelligence (AGI), while others suspect that we’re reaching the limits of what can be done with current approaches to AI and that future innovations will use very different techniques.
Frequently asked questions about generative AI
- How does generative AI work?
Generative AI technology typically uses large language models (LLMs), which are powered by neural networks – computer systems designed to mimic the structures of brains. These LLMs are trained on a huge quantity of data (e.g., text, images) to recognise patterns that they then follow in the content they produce.
For example, a chatbot like ChatGPT generally has a good idea of what word should come next in a sentence because it has been trained on billions of sentences and “learnt” what words are likely to appear, in what order, in each context.
This makes generative AI applications vulnerable to the problem of hallucination – errors in their outputs such as unjustified factual claims or visual bugs in generated images. These tools essentially “guess” what a good response to the prompt would be, and they have a pretty good success rate because of the large amount of training data they have to draw on, but they can and do go wrong.
- What is generative art?
Generative art is art that has been created (generated) by some sort of autonomous system rather than directly by a human artist. Nowadays, the term is commonly used to refer to images created by generative AI tools like Midjourney and DALL-E. These tools use neural networks to create art automatically based on a prompt from the user (e.g., “an elephant painted in the style of Goya”).
However, the term has been in use since before this technology existed, and it can also refer to any technique used by an artist (or writer, musician, etc.) to create art according to a process that proceeds autonomously – i.e., outside of the artist’s direct control. Examples of generative art that do not involve AI include serialism in music and the cut-up technique in literature.
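The cut-up technique is simple enough to sketch in a few lines: cut a text into fragments and rearrange them at random to produce a new text. No AI is involved – the “autonomous system” here is just a seeded shuffle, and the sample text is purely illustrative.

```python
import random

def cut_up(text, fragment_size=3, seed=42):
    """Split `text` into fragments of a few words and shuffle them."""
    words = text.split()
    fragments = [
        words[i:i + fragment_size]
        for i in range(0, len(words), fragment_size)
    ]
    random.Random(seed).shuffle(fragments)  # seeded, so reproducible
    return " ".join(word for fragment in fragments for word in fragment)

source = "the old pond a frog jumps in the sound of water"
print(cut_up(source))
```

Every word of the source survives the process; only the order changes, which is what gives cut-up texts their recognisable collage-like quality.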
- How can I use AI writing tools?
AI writing tools can be used to perform a variety of tasks: generating new text in response to a prompt, paraphrasing or summarising existing text, or identifying grammar and punctuation mistakes. You can also use Scribbr’s free paraphrasing tool, summarising tool, and grammar checker, which are designed specifically for these purposes.
- How does ChatGPT work?
ChatGPT is a chatbot based on a large language model (LLM). These models are trained on huge datasets consisting of hundreds of billions of words of text, based on which the model learns to effectively predict natural responses to the prompts you enter.
ChatGPT was also refined through a process called reinforcement learning from human feedback (RLHF), which involves “rewarding” the model for providing useful answers and discouraging inappropriate answers – encouraging it to make fewer mistakes.
Essentially, ChatGPT’s answers are based on predicting the most likely responses to your inputs based on its training data, with a reward system on top of this to incentivise it to give you the most helpful answers possible. It’s a bit like an incredibly advanced version of predictive text. This is also one of ChatGPT’s limitations: because its answers are based on probabilities, they’re not always trustworthy.
- How can I detect AI writing?
Tools called AI detectors are designed to label text as AI-generated or human. AI detectors work by looking for specific characteristics in the text, such as a low level of randomness in word choice and sentence length. These characteristics are typical of AI writing, allowing the detector to make a good guess at when text is AI-generated.
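One of these characteristics – variation in sentence length, sometimes called “burstiness” – can be sketched with a few lines of code. Human writing tends to mix short and long sentences, while AI text is often more uniform. Real detectors combine many signals using trained models; this standalone measure and the sample sentences are purely illustrative.

```python
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths (in words) as a rough
    proxy for how much the writing varies sentence to sentence."""
    cleaned = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in cleaned.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat down. The dog ran off. The sun came up. The day went by."
varied = "Stop. The dog ran off into the field while the cat watched from the fence. Why?"

print(burstiness(uniform) < burstiness(varied))  # True: uniform text varies less
```

A low burstiness score alone proves nothing – plenty of human writing is uniform too – which is one reason detectors based on such signals cannot be fully reliable.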
But these tools can’t guarantee 100% accuracy. Check out our comparison of the best AI detectors to learn more.
You can also manually watch for clues that a text is AI-generated – for example, a very different style from the writer’s usual voice or a generic, overly polite tone.