AI Can Hallucinate Too: What Are the Dangers, and How Can We Protect Ourselves?
Artificial Intelligence (AI) progresses every day, attracting a growing number of users aware of its potential. However, it is not infallible, and everyone who uses it must keep a critical mindset to avoid falling victim to an “AI hallucination”. What exactly is an AI hallucination? How do we identify one, and most importantly, how do we protect ourselves from it? Let’s explore this together.
Just like their creators, AI systems can sometimes experience hallucinations. This is a phenomenon where an AI generates a response that seems plausible, but is in fact partially or entirely false or unrelated to the given context. In other words, the AI “imagines” information that is not true or relevant. This can occur in various contexts, from text generation to image recognition.
An AI could, for instance, whip up literary works that never existed just to back up its argument, or confidently describe the life story of people who never existed.
It could also contradict itself, as seen in our test on GPT-4:
Here, for example, despite being asked for “the number of victories of the New Jersey Devils in 2014”, the AI replies that it “unfortunately does not have data after 2021”. But 2014 is well before 2021, so a lack of post-2021 data is no reason at all to refuse the question. Doesn’t make sense, does it?
This is yet another hallucination. But why are these happening?
In the context of large language models (LLMs), like GPT-3 or GPT-4, hallucinations can be attributed to a lack of grounding in external sources of truth. LLMs are trained on vast amounts of textual data, which allows them to predict the next word in a sentence from the context provided by the previous words. However, they do not consult external sources of truth to verify those predictions, which can lead them to learn and propagate inaccuracies present in the training data.
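To make this concrete, here is a minimal sketch of that next-word prediction loop. It assumes the Hugging Face transformers library and the small, publicly available gpt2 model (neither is mentioned in this article); the point is simply that generation produces plausible-sounding text without any fact-checking step.

```python
# A rough illustration of how an LLM generates text: it predicts a plausible
# continuation from patterns learned during training, and nothing in this
# loop consults an external source of truth.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The New Jersey Devils won the Stanley Cup in"

# Sample a few continuations; the model will happily produce a
# plausible-looking year whether or not it is correct.
completions = generator(
    prompt,
    max_new_tokens=5,
    do_sample=True,
    num_return_sequences=3,
)

for completion in completions:
    print(completion["generated_text"])
```

Run this a few times and you will see confident-sounding completions that may or may not match reality; that gap is exactly what we call a hallucination.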
Of course, this limitation is becoming less true today, with AI models like Google Bard or Bing Chat constantly connected to the Internet. But even as hallucinations become less frequent, it is crucial to remember that they can still occur, and that they can have disastrous consequences.
While being wrong about an ice hockey team’s number of wins or conjuring a historical fact out of thin air might not have major repercussions, other hallucinations can be far more serious.
Imagine an AI spouting off hallucinated information during a medical analysis: it could easily mislead the doctors and, in the worst case, cost the patient their life. The same goes for connected cars, where an AI could hallucinate the presence or absence of an obstacle and react inappropriately.
To give a concrete example, lawyer Steven Schwartz fell victim to AI hallucinations in May 2023 during a lawsuit against the Colombian airline Avianca. To defend his client, who had suffered a knee injury after being struck by a metal cart, Schwartz had to dig up similar cases to support his argument.
Schwartz used ChatGPT to conduct his research. The AI found six cases similar to his, which it assured him were genuine and could be consulted in the LexisNexis and Westlaw databases used by lawyers.
Alas, once the pleading had been filed, no one could find the judgments, nor the excerpts quoted and summarized from them, for the simple reason that they did not exist.
The AI had simply hallucinated them to help Steven win his case.
Such a mistake is unforgivable: Schwartz’s reputation has been ruined, and he now faces sanctions for professional misconduct.
If this example teaches us anything, it is that we should always keep a sharp mind when using an AI and double-check its responses by Doing Your Own Research (DYOR)!
Fortunately, while there is no miracle recipe to protect you from these hallucinations, there are good practices that reduce the risk of running into them when using AI.
This list is not exhaustive, but applied properly, these practices can significantly reduce the risk of encountering hallucinations. However, as we just mentioned, it remains imperative to always verify the truthfulness of the responses you are given; one such practice is sketched in code below.
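As an illustration (ours, not a quote from the list above), here is a minimal sketch of one such precaution: explicitly instruct the model to admit uncertainty and cite its sources, lower the temperature, and keep a human in the loop to check those sources. It assumes OpenAI’s Python SDK (v1) and an OPENAI_API_KEY environment variable, neither of which is mentioned in this article.

```python
# A minimal sketch of one precaution: constrain the model's behaviour and
# keep a human in the loop. This is an illustration under our own assumptions,
# not the exact method described in this article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer only with information you are confident about. "
    "If you are not sure, reply exactly: I don't know. "
    "List the sources you relied on so a human can verify them."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # a low temperature discourages "creative" answers
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    answer = ask("How many games did the New Jersey Devils win in the 2013-14 NHL season?")
    print(answer)
    print("Reminder: check every cited source yourself before relying on the answer.")
```

None of this guarantees a truthful answer; it only makes hallucinations easier to spot and cheaper to catch.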
To test out these precautions, we used the last point during our conversation on “the number of victories of the New Jersey Devils", and here’s the response we got:
And this time, the response is indeed correct: 35 victories! No hallucination, just an accurate answer.
By getting a solid handle on what AI hallucination is, how to spot it, and how to guard against it, we can leverage AI technologies in a safer and more efficient way. However, it must be recognized that AI hallucination is a complex challenge that requires constant vigilance and ongoing research. At HUMAN Protocol, we actively contribute to this fight against hallucination by facilitating the verification of data accuracy and by collecting feedback from millions of human users, who are transparently compensated for their efforts.
Although AI is powerful, it is not infallible. It is our responsibility to remain vigilant and collaborate to improve it.
Legal Disclaimer
The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.