What Are the Limitations of ChatGPT?
ChatGPT is a popular chatbot released by OpenAI in late 2022. Chatbots, or computer programs that simulate human interactions via artificial intelligence (AI) and natural language processing (NLP), can help answer many academic questions.
While using ChatGPT for your studies can be really useful, particularly for help with exam preparation, homework assignments, or academic writing, it is not without its limitations. It’s essential to keep in mind that AI language models like ChatGPT are still developing technologies and are far from perfect. Current limitations include:
ChatGPT limitation 1: Incorrect answers
Because ChatGPT is a constantly evolving language model, it will inevitably make mistakes. It’s critical to double-check your work while using it, as it has been known to make grammatical, mathematical, factual, and reasoning errors (such as logical fallacies).
It’s not always reliable for answering complicated questions about specialist topics like grammar or mathematics, so it’s best to keep these types of questions basic. Double-check the answers it gives to any more specialised queries against credible sources.
Perhaps more concerningly, the chatbot sometimes has difficulty acknowledging that it doesn’t know something and instead fabricates a plausible-sounding answer. In this way, it prioritises providing what it perceives as a more “complete” answer over factual correctness.
Some sources have highlighted several instances where ChatGPT referred to nonexistent legal provisions that it created in order to avoid saying that it didn’t know an answer. This is especially the case in domains where the chatbot may not have expertise, such as medicine or law, or anything that requires specialised knowledge in order to proceed beyond a general language understanding.
ChatGPT limitation 2: Biased answers
ChatGPT, like all language models, is at risk of inherent biases, and there are valid concerns that widespread usage of AI tools can perpetuate cultural, racial, and gender stigma. This is due to a few factors:
- How the initial training datasets were designed
- Who designed them
- How well the model “learns” over time
If biased inputs determine the pool of knowledge the chatbot draws on, biased outputs are likely to result, particularly in how it responds to certain topics and the language it uses. While this is a challenge faced by nearly every AI tool, bias in technology at large represents a significant ongoing issue.
ChatGPT limitation 3: Lack of human insight
While ChatGPT is quite adept at generating coherent responses to specific prompts or questions, it is ultimately not human. As such, it can only mimic human behaviour, not experience it itself. This has a variety of implications:
- It does not always understand the full context of a topic, which can lead to nonsensical or overly literal responses.
- It does not have emotional intelligence and does not recognise or respond to emotional cues like sarcasm, irony, or humour.
- It does not always recognise idioms, regionalisms, or slang. Instead, it may take a phrase like “raining cats and dogs” literally.
- It does not have a physical presence and cannot see, hear, or interact with the world like humans do. As a result, it understands the world only through textual sources, not direct experience.
- It answers questions very robotically, making it easy to see that its outputs are machine-generated and often flow from a template.
- It takes questions at face value and does not necessarily understand subtext. In other words, it cannot “read between the lines” or take sides. While a bias for neutrality is often a good thing, some questions require you to choose a side.
- It does not have real-world experiences or commonsense knowledge and cannot understand and respond to situations that require this kind of knowledge.
- It can summarise and explain a topic but cannot offer a unique insight. Humans need knowledge to create, but lived experiences and subjective opinions are also crucial to this process – ChatGPT cannot provide these.
ChatGPT limitation 4: Overly long or wordy answers
ChatGPT’s training datasets encourage it to cover a topic from many different angles, answering questions in every way it can conceive of.
While this is positive in some ways – it explains complicated topics very thoroughly – there are certainly topics where the best answer is the most direct one, or even a “yes” or “no”. This tendency to over-explain can make ChatGPT’s answers overly formal, redundant, and very lengthy.
Frequently asked questions about ChatGPT
- Is ChatGPT a credible source?
No, ChatGPT is not a credible source of factual information and can’t be cited for this purpose in academic writing. While it tries to provide accurate answers, it often gets things wrong because its responses are based on patterns, not facts and data. It falls short on several standard credibility criteria:
- Currency: The dataset that ChatGPT was trained on only extends to 2021, making it slightly outdated.
- Authority: It’s just a language model and is not considered a trustworthy source of factual information.
- Accuracy: It bases its responses on patterns rather than evidence and is unable to cite its sources.
So you shouldn’t cite ChatGPT as a trustworthy source for a factual claim. You might still cite ChatGPT for other reasons – for example, if you’re writing a paper about AI language models, ChatGPT responses are a relevant primary source.
- Where does ChatGPT get its information from?
ChatGPT is an AI language model that was trained on a large body of text from a variety of sources (e.g., Wikipedia, books, news articles, scientific journals). The dataset only went up to 2021, meaning that it lacks information on more recent events.
It’s also important to understand that ChatGPT doesn’t access a database of facts to answer your questions. Instead, its responses are based on patterns that it saw in the training data.
So ChatGPT is not always trustworthy. It can usually answer general knowledge questions accurately, but it can easily give misleading answers on more specialist topics.
Another consequence of this way of generating responses is that ChatGPT usually can’t cite its sources accurately. It doesn’t really know what source it’s basing any specific claim on. It’s best to check any information you get from it against a credible source.
- Can I trust ChatGPT results?
You should not trust ChatGPT’s results unconditionally. While you can use ChatGPT during your studies to clarify questions, you should always double-check the answers you receive against other credible sources, as it doesn’t always give correct information. Don’t cite ChatGPT as a source of factual information.
- Can I create citations using ChatGPT?
No, ChatGPT cannot reliably create citations. You can ask it to do so, but it isn’t designed for this task and tends to make up sources that don’t exist or present information in the wrong format. ChatGPT also cannot add citations to direct quotes in your text.
Instead, use a tool designed for this purpose, like the Scribbr Citation Generator.
But you can use ChatGPT for assignments in other ways, to provide inspiration, feedback, and general writing advice.
- Is ChatGPT biased?
Yes, ChatGPT can produce biased answers. For example, users have shown that it sometimes makes sexist assumptions, such as that a doctor mentioned in a prompt must be a man rather than a woman. Some have also pointed out political bias in terms of which political figures the tool is willing to write positively or negatively about and which requests it refuses.
The tool is unlikely to be consistently biased toward a particular perspective or against a particular group. Rather, its responses are based on its training data and on the way you phrase your ChatGPT prompts. It’s sensitive to phrasing, so asking it the same question in different ways will result in quite different answers.