
AI hallucinations: Risks associated with large language models

September 15, 2023

AI hallucinations: When language models dream in algorithms. There is no denying that Large Language Models (LLMs), such as OpenAI’s ChatGPT, can generate false information, but we can take action to reduce the risk of inaccurate output.

 

Inaccuracies span a spectrum, from odd and inconsequential claims, such as suggesting the Golden Gate Bridge was relocated to Egypt in 2016, to more consequential and problematic scenarios.

For instance, a mayor in Australia recently considered legal action against OpenAI because ChatGPT falsely asserted that he had admitted guilt in a major bribery scandal. Furthermore, researchers have identified that LLM-generated fabrications can be exploited to disseminate malicious code packages to unsuspecting software developers. Additionally, LLMs often provide erroneous advice related to mental health and medical matters, such as the unsupported claim that wine consumption can “prevent cancer.”

AI Hallucination Phenomenon

This inclination to produce unsubstantiated “facts” is commonly referred to as hallucination, and it arises due to the development and training methods employed in contemporary LLMs, as well as generative AI models in general.

What Are AI Hallucinations?

AI hallucinations occur when a large language model (LLM) generates inaccurate information. LLMs, which power chatbots like ChatGPT and Google Bard, have the capacity to produce responses that deviate from external facts or logical context.

These hallucinations may appear convincing because LLMs generate coherent text, relying on statistical patterns to produce output that is grammatically correct and semantically consistent with the given prompt.

  • However, hallucinations aren’t always plausible and can sometimes be nonsensical, making it challenging to pinpoint their exact causes on a case-by-case basis.
  • An alternative term for AI hallucinations is “confabulation.” While most commonly associated with LLMs, these inaccuracies can also manifest in AI-generated video, images, and audio.

Examples of AI Hallucinations

Case study: Bard

One well-known instance of AI hallucination occurred when Google’s chatbot, Bard, provided false information about the James Webb Space Telescope. In response to the query, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”, Bard falsely claimed that the telescope had captured the first images of an exoplanet beyond our solar system. This assertion was contradicted by NASA’s records, which show that the first exoplanet images were obtained in 2004, while the James Webb Space Telescope launched in 2021.

This response by Bard, though seemingly plausible and contextually relevant, was debunked through fact-checking.

Another instance involved Meta’s demonstration of Galactica, an LLM designed for scientific researchers and students. When asked to draft a paper on creating avatars, the model cited a fabricated paper authored by a real expert in a related field.

 


Types of AI Hallucinations

AI hallucinations can range from minor inconsistencies to wholly fabricated or contradictory information, including:

  1. Sentence contradiction: This occurs when an LLM generates a sentence that contradicts a previous sentence within the same output.

Example: Prompt: “Write a description of a landscape in four-word sentences.” Output: “The grass was green. The mountains were blue. The river was purple. The grass was brown.”

  2. Prompt contradiction: In this case, a sentence generated by the LLM contradicts the initial prompt used to generate it.

Example: Prompt: “Write a birthday card for my niece.” Output: “Happy anniversary, mom and dad!”

  3. Factual contradiction: LLMs may present fictitious information as fact.

Example: Prompt: “Name three cities in the United States.” Output: “New York, Los Angeles, Toronto.”

  4. Irrelevant or random hallucinations: These occur when the generated information lacks relevance to the input or output context.

Example: Prompt: “Describe London to me.” Output: “London is a city in England. Cats need to be fed at least once a day.”


Causes of AI Hallucinations

Several technical reasons may underlie the occurrence of hallucinations in LLMs, although the exact mechanisms are often opaque. Some potential causes include:

  1. Data quality: Hallucinations can result from flawed information in the training data, which may contain noise, errors, biases, or inconsistencies.
  2. Generation method: Training and generation methods can contribute to hallucinations even when the data is consistent and reliable. Biases carried over from the model’s own prior generations, or faulty decoding by the transformer, may be factors. Models may also exhibit a bias toward specific or generic words, which influences the information they generate.
  3. Input context: Unclear, inconsistent, or contradictory input prompts can lead to hallucinations. Users can enhance results by refining their input prompts.


Challenges Posed by AI Hallucinations

AI hallucinations present several challenges, including:

  1. Eroding user trust: Hallucinations can significantly undermine user trust in AI systems. The more users come to regard AI as reliable, the more damaging each false output becomes.
  2. Anthropomorphism risk: Describing erroneous AI outputs as hallucinations can anthropomorphize AI technology to some extent. It’s crucial to remember that AI lacks consciousness and its own perception of the world. Referring to such outputs as “mirages” rather than “hallucinations” might be more accurate.
  3. Misinformation and deception: Hallucinations have the potential to spread misinformation, fabricate citations, and be exploited in cyberattacks, posing a danger to information integrity.
  4. Black box nature: Many LLMs operate as black box AI, making it challenging to determine why a specific hallucination occurred. Fixing these issues often falls on users, requiring vigilance and monitoring to identify and address hallucinations.

Training Models

Generative AI models have gained widespread attention for their ability to generate text, images, and more. However, it’s crucial to understand that these models lack true intelligence. Instead, they function as statistical systems that predict data based on patterns learned from extensive training examples, often sourced from the internet.

The Nature of Generative AI Models

  1. Statistical Systems: Generative AI models are statistical systems that forecast words, images, speech, music, or other data.
  2. Pattern Learning: These models learn patterns in data, including contextual information, to make predictions.
  3. Example-Based Learning: They learn from a vast dataset of examples, but their predictions are probabilistic and not indicative of true understanding.
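To make the “statistical system” point concrete, here is a deliberately toy sketch: a bigram model that predicts each next word purely from co-occurrence counts in a few training sentences. It is nothing like a production LLM in scale or architecture, but it shows how output can read fluently while nothing in the process checks facts.

```python
# A toy bigram "language model": it predicts the next word purely from
# co-occurrence counts in its training text, with no notion of truth.
from collections import Counter, defaultdict
import random

training_text = (
    "the telescope captured images of distant galaxies . "
    "the telescope captured images of an exoplanet . "
    "the rover captured images of the surface ."
).split()

# Count which word follows which.
followers = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = followers[word]
    return random.choices(list(counts), weights=counts.values())[0]

word = "the"
sentence = [word]
while word != "." and len(sentence) < 12:
    word = next_word(word)
    sentence.append(word)

print(" ".join(sentence))  # fluent-looking, but nothing here verifies facts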

Training Process of Language Models (LMs)

  1. Masking and Prediction: Language models used in generative AI are trained by masking certain words in the training text and having the model predict the missing words from the surrounding context, much like predictive text on a phone.
  2. Efficacy and Coherence: This training method is highly effective but does not guarantee coherent text generation.
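As a small, hedged illustration of that masked-word objective, the sketch below uses the Hugging Face transformers fill-mask pipeline with a generic BERT checkpoint (the model choice is illustrative, not the model behind any particular chatbot). The model ranks candidates purely by statistical plausibility; nothing in the process verifies whether the filled-in word is true.

```python
# Illustrative only: masked-word prediction with a generic pretrained model.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidates by likelihood given the surrounding words;
# it never checks the fact itself.
for candidate in unmasker("The James Webb Space Telescope was launched in [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```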

Shortcomings of Large Language Models (LLMs)

  1. Grammatical but Incoherent Text: LLMs can produce grammatically correct but incoherent text, highlighting their limitations in generating meaningful content.
  2. Falsehoods and Contradictions: They can propagate falsehoods and combine conflicting information from various sources without discerning accuracy.
  3. Lack of Intent and Understanding: LLMs lack intent and don’t comprehend truth or falsehood; they form associations between words and concepts without assessing their accuracy.

Addressing Hallucination in LLMs

  1. Challenges of Hallucination: Hallucination in LLMs arises from their inability to gauge the uncertainty of their own predictions and to keep their outputs consistent; a rough way to surface that uncertainty is sketched after this list.
  2. Mitigation Approaches: While complete elimination of hallucinations may be challenging, practical approaches can help reduce them.
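One rough, assumption-laden way to surface the uncertainty mentioned in point 1 is to inspect the probabilities a model assigned to the tokens in a statement. The sketch below uses GPT-2 via Hugging Face transformers purely as a stand-in; low per-token probabilities do not prove a claim is false, but they can flag spans worth fact-checking.

```python
# A minimal sketch: score each token of a statement under a causal LM and
# print the probability the model assigned to it given the preceding tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The James Webb Space Telescope took the very first pictures of an exoplanet."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability assigned to each actual token, conditioned on the tokens before it.
probs = torch.softmax(logits[:, :-1, :], dim=-1)
token_ids = inputs["input_ids"][:, 1:]
token_probs = probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)

for tok, p in zip(tokenizer.convert_ids_to_tokens(token_ids[0]), token_probs[0]):
    print(f"{tok:>15s}  {p.item():.3f}")  # low values flag "surprising" tokens
```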

Practical Approaches to Mitigate Hallucination

  1. Knowledge Integration: Integrating high-quality knowledge bases with LLMs can enhance accuracy in question-answering systems; a minimal retrieval sketch follows this list.
  2. Reinforcement Learning from Human Feedback (RLHF): This approach involves training LLMs, collecting human feedback, and fine-tuning models based on human judgments.
  3. Limitations of RLHF: Despite its promise, RLHF also has limitations and may not entirely eliminate hallucination in LLMs.
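As a minimal sketch of the knowledge-integration idea in item 1, the code below retrieves the most relevant snippets from a small, trusted knowledge base and instructs the model to answer only from that context. The retrieval here is a toy lexical match, and `ask_llm` is a hypothetical stand-in for whatever completion API you use; real systems typically rely on embedding-based search and more careful prompt design.

```python
# A minimal retrieval-augmented sketch: ground the prompt in trusted snippets
# before asking the model, so answers can be checked against known sources.
from difflib import SequenceMatcher

KNOWLEDGE_BASE = [
    "The first images of an exoplanet were obtained in 2004.",
    "The James Webb Space Telescope launched in 2021.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most lexically similar to the question (toy scoring)."""
    score = lambda d: SequenceMatcher(None, question.lower(), d.lower()).ratio()
    return sorted(docs, key=score, reverse=True)[:k]

def grounded_answer(question: str, ask_llm) -> str:
    """Build a context-restricted prompt; `ask_llm` is a hypothetical LLM call."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = (
        "Answer using only the context below. "
        "If the context is insufficient, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```

Grounding the prompt this way does not make hallucination impossible, but it gives users a concrete source against which answers can be verified.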

In summary, generative AI models like LLMs lack true understanding and can produce incoherent or inaccurate content. Mitigating hallucinations in these models requires careful training, knowledge integration, and feedback-driven fine-tuning, but complete elimination remains a challenge. Understanding the nature of these models is crucial in using them responsibly and effectively.

Exploring different perspectives: The role of hallucination in creativity

Considering the potential unsolvability of hallucination, at least with current Large Language Models (LLMs), is it necessarily a drawback? According to Berns, not necessarily. He suggests that hallucinating models could serve as catalysts for creativity by acting as “co-creative partners.” While their outputs may not always align entirely with facts, they could contain valuable threads worth exploring. Employing hallucination creatively can yield outcomes or combinations of ideas that might not readily occur to most individuals.

“Hallucinations” as an Issue in Context

However, Berns acknowledges that “hallucinations” become problematic when the generated statements are factually incorrect or violate established human, social, or cultural values. This is especially true in situations where individuals rely on the LLMs as experts.

He states, “In scenarios where a person relies on the LLM to be an expert, generated statements must align with facts and values. However, in creative or artistic tasks, the ability to generate unexpected outputs can be valuable. A human recipient might be surprised by a response to a query and, as a result, be pushed into a certain direction of thought that could lead to novel connections of ideas.”

Are LLMs Held to Unreasonable Standards?

On another note, Ha argues that today’s expectations of LLMs may be unreasonably high. He draws a parallel to human behavior, suggesting that humans also “hallucinate” at times when we misremember or misrepresent the truth. However, he posits that cognitive dissonance arises when LLMs produce outputs that appear accurate on the surface but may contain errors upon closer examination.

A skeptical approach to LLM predictions

Ultimately, the solution may not necessarily reside in altering the technical workings of generative AI models. Instead, the most prudent approach for now seems to be treating the predictions of these models with a healthy dose of skepticism.

In a nutshell

AI hallucinations in Large Language Models pose a complex challenge, but they also offer opportunities for creativity. While current mitigation strategies may not entirely eliminate hallucinations, they can reduce their impact. However, it’s essential to strike a balance between leveraging AI’s creative potential and ensuring factual accuracy, all while approaching LLM predictions with skepticism in our pursuit of responsible and effective AI utilization.

 
