Can AI hallucinate?

Sometimes, answers from Generative AI programs such as ChatGPT appear authoritatively correct when, in fact, they are wrong.  Why is that?

Hallucinations are logical mistakes in Generative AI answers: made-up but plausible facts with no basis in the underlying data, or answers that would only be correct in a different context or for a different question.  Hallucinations can be difficult to detect because they appear correct, and once discovered they are likely to erode trust in an AI initiative.  Consequently, it is important to detect hallucinations preemptively by doing one or more of the following:

  • Fact-check answers against trusted sources of truth.
  • Validate consistency by asking similar questions and comparing the answers (a minimal sketch of this check follows the list).
  • Compare results across other LLMs (Large Language Models).
  • Trace and analyze the logical reasoning behind answers.
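The consistency check in the second bullet can be automated.  The sketch below asks the same question several ways and measures how similar the answers are; widely diverging answers suggest the model may be guessing.  The `ask_model` function is a hypothetical stand-in for whatever client call sends a prompt to your LLM, and the similarity threshold is illustrative only.

```python
# Minimal consistency-check sketch (assumes a hypothetical ask_model client call).
from difflib import SequenceMatcher


def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: replace with a call to your LLM client.
    raise NotImplementedError("Wire this up to your LLM of choice.")


def consistency_score(question: str, rephrasings: list[str]) -> float:
    """Ask the same question several ways and return the average pairwise
    similarity of the answers.  Low similarity hints at hallucination."""
    answers = [ask_model(q) for q in [question, *rephrasings]]
    scores = []
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            scores.append(SequenceMatcher(None, answers[i], answers[j]).ratio())
    return sum(scores) / len(scores)


# Example usage (threshold of 0.6 is illustrative, not a recommendation):
# score = consistency_score(
#     "When was the company founded?",
#     ["In what year was the company established?",
#      "How old is the company?"],
# )
# if score < 0.6:
#     print("Answers diverge; verify against a trusted source before relying on them.")
```

The same pattern extends to the third bullet: send the identical prompts to a second LLM and compare its answers against the first model's before accepting either.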

How to prevent AI hallucinations

The cornerstone of Generative AI is the Large Language Model, or LLM for short.  Hallucinations occur when the model is trained on inadequate data or develops an incorrect or incomplete understanding of human language.  It is therefore imperative to ensure that the training data has the required relevance, quantity, and breadth to prevent bias, and to validate that the model truly understands the data.

The easiest way to avoid hallucinations is to use a trusted model from an established generative AI platform, but that can be expensive.  The alternative is to train a model ourselves, which requires:

  • Providing adequate data during training.
  • Policing the training process, which can last a long time, to ensure that new sources of misleading or false data are not introduced along the way.
  • Involving human oversight and validation throughout the training process.  Prompt Engineering is a notable technique here: humans craft queries that probe the model's grasp of context and nuance in human language, then adjust the model based on its responses (a small validation sketch follows this list).
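One lightweight way to keep humans in the loop is a curated suite of test prompts with known correct facts, rerun after every training or prompt change.  The sketch below is one possible shape for such a suite; `ask_model` is again a hypothetical stand-in for your LLM client, and the test cases are illustrative only.

```python
# Sketch of a human-in-the-loop validation suite (assumes a hypothetical ask_model call).

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: replace with a call to your LLM client.
    raise NotImplementedError("Wire this up to your LLM of choice.")


# Each entry pairs a prompt with a substring a correct answer must contain.
TEST_CASES = [
    ("What is the boiling point of water at sea level in Celsius?", "100"),
    ("Who wrote 'Pride and Prejudice'?", "Austen"),
]


def review_model(test_cases=TEST_CASES) -> list[str]:
    """Return the prompts whose answers failed the check, for human review."""
    needs_review = []
    for prompt, expected in test_cases:
        answer = ask_model(prompt)
        if expected.lower() not in answer.lower():
            needs_review.append(prompt)
    return needs_review
```

Reviewers inspect the flagged prompts, adjust the prompts or the training data, and rerun the suite until the answers hold up.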

Many pitfalls may be encountered when implementing generative AI, hallucinations being just one of them.  The most important step of the process is to design an AI strategy that accounts for them.
