
The Hallucination Problem: A Feature, Not a Bug

[Video: CASMI Hallucinations]

There are moments when large language models confidently generate information that is entirely wrong or fabricated, a phenomenon known as hallucination. Many people see this as a flaw that needs to be fixed. But what if we're looking at it the wrong way?

A recent study from Cornell, highlighted by TechCrunch, concluded that no matter how advanced language models become, hallucinations are inevitable. This might sound concerning, but it's important to understand that these hallucinations are not bugs; they're a fundamental part of how these models work. 

Language models are not built to be encyclopedias or databases of facts. Instead, they are designed to model the way humans use language. They encode how to structure sentences, connect words, and follow the rules of grammar. This ability comes from their exposure to vast amounts of text, allowing them to pick up on patterns and structures. But when it comes to factual accuracy, these models are only right when likelihood (the metric by which they choose the next word) happens to align with truth. If there's a gap in their knowledge, they'll fill it with whatever is most likely, regardless of whether it is true.
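To make that mechanism concrete, here is a toy sketch of greedy next-word selection. The words and probabilities are invented for the example; the point is that the decoding step consults likelihood alone, never truth.

```python
# Toy illustration, not a real model: the continuation probabilities
# below are invented. Decoding picks the most likely next word; nothing
# in this step checks whether that word is true.
next_word_probs = {
    "Paris": 0.62,      # likely and true
    "Lyon": 0.21,       # fluent but false
    "Marseille": 0.17,  # fluent but false
}

prompt = "The capital of France is"

# Greedy decoding: take the single most likely continuation.
best_word = max(next_word_probs, key=next_word_probs.get)
print(prompt, best_word)  # -> The capital of France is Paris

# When the training data is thin or silent on a question, the same rule
# still fires: the model emits whatever is most likely, which can be
# fluent, confident, and wrong.
```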

They are designed to write. They are just not designed to tell the truth. 

Think of them as skilled copywriters. Just because they can craft beautiful sentences and well-structured documents doesn’t mean they know all the details of your latest project. If you want them to get it right, you need to provide them with the facts you want them to communicate. The same is true for language models—they need reliable data to avoid hallucinating. 

If your copywriter is producing press releases that read great but are filled with errors of fact, you don’t fix the problem with more training. You fix the problem by providing them with better facts. The same holds true for language models. 

Many efforts to eliminate hallucinations focus on adjusting the language models themselves. But this approach misses the point. The real solution lies in improving the data that feeds these models. Retrieval-Augmented Generation (RAG) is one attempt to fix this by providing relevant documents as part of the input to the model, though it has its own issues.
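As a rough sketch of the idea, RAG simply folds retrieved text into the prompt before generation. The document store and model call below are hypothetical placeholders, not any particular library.

```python
# Minimal RAG sketch. `search_index` and `llm_generate` are hypothetical
# placeholders for a real document store and a real model call; the
# shape of the idea is what matters, not these names.

def retrieve(query: str, search_index, k: int = 3) -> list[str]:
    """Return the k documents most relevant to the query (placeholder call)."""
    return search_index.top_k(query, k)

def answer_with_rag(query: str, search_index, llm_generate) -> str:
    docs = retrieve(query, search_index)
    # The retrieved text is pushed into the prompt so the model writes
    # from these documents instead of filling gaps from memory.
    context = "\n\n".join(docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return llm_generate(prompt)
```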

A parallel approach is to use facts derived from data analytics to augment generation. CASMI's Satyrn project is an example of such a system: it analyzes structured data to produce facts that are then fed into language models, substantially reducing inaccuracy by guiding generation with truth. The power of this approach is that it leverages the structural encodings at the core of these models while providing them with the facts they need to fill in the details. In effect, the models are still hallucinating; they are just hallucinating the truth.
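A rough sketch of that pattern might look like the following. The records, the names, and the model call are invented placeholders meant to show the shape of the approach, not the actual Satyrn system.

```python
# Sketch of grounding generation in facts derived from data analytics.
# This illustrates the general idea, not the Satyrn codebase;
# `llm_generate` is a hypothetical placeholder for any model call, and
# the records are invented example data.

records = [
    {"region": "North", "quarter": "Q1", "sales": 120_000},
    {"region": "North", "quarter": "Q2", "sales": 150_000},
    {"region": "South", "quarter": "Q1", "sales": 90_000},
    {"region": "South", "quarter": "Q2", "sales": 84_000},
]

def compute_facts(rows):
    """Derive simple, verifiable statements from structured data."""
    facts = []
    for region in sorted({r["region"] for r in rows}):
        q1 = next(r["sales"] for r in rows
                  if r["region"] == region and r["quarter"] == "Q1")
        q2 = next(r["sales"] for r in rows
                  if r["region"] == region and r["quarter"] == "Q2")
        change = (q2 - q1) / q1 * 100
        facts.append(f"{region} sales changed {change:+.1f}% from Q1 to Q2.")
    return facts

def write_report(rows, llm_generate):
    # The analysis supplies the truth; the model supplies the prose.
    facts_text = "\n".join(compute_facts(rows))
    prompt = (
        "Write a short, plain-language summary using only these facts:\n"
        f"{facts_text}\n"
    )
    return llm_generate(prompt)
```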

Ultimately, hallucinations aren’t a problem—they're a tool. Our focus shouldn’t be on eliminating hallucinations but on providing language models with the most accurate and up-to-date information possible. When we do that, these models can continue to excel at what they were designed to do: structure language in a way that sounds human, while also staying as close to the truth as the data allows. 

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
