
AI Hallucinations: The Problem of Data Reliability (and How To Fight It)


19/07/2023 |

10 min read

Dawid Pacholczyk

As we’re in the midst of the AI boom, artificial intelligence (AI) is rapidly revolutionizing industries and changing how we live and work. However, as with any powerful tool, AI comes with its own set of challenges. Let’s take a closer look at the phenomenon of AI hallucinations.

And while the term hallucinations might suggest a level of consciousness that allows a self-sufficient technology to imagine (or perhaps even dream?) things by itself, we’re talking about a far more down-to-earth problem. 

In this article, we will discuss what AI hallucinations are, why they are dangerous, and – most importantly – how to stop them with data reliability. 

 

Table of contents:

  1. What are hallucinations in AI?
  2. The problem of data bias
  3. Why are hallucinations in AI dangerous?
  4. Preventing hallucinations with data reliability
  5. Conclusion: harnessing the power of AI with data reliability

What are hallucinations in AI?

Let's start by making a clear statement: AI does not understand whether its output is correct or not – it merely predicts the expected answer to your question based on the data it was trained on. This is a legacy of natural language processing, which evolved to operate on tokens. How does this process work?

When training data is complex – as is the case with language or domain-specific knowledge – it is tokenized, i.e., divided into smaller units such as characters, words, or single sentences. By working with these tokens, machine learning models can better capture the language's dependencies (that is, its context), resulting in more natural communication. 

As an illustration, let's say we want to train our model on a database that contains a single, longer sentence like: "We tend to think large language models are always telling the truth."

Our chosen method of tokenization is to split the sentence into two-letter bigrams and assess the importance of different tokens in a given context. Following our instructions, the model should begin to “understand” the database by using the self-attention mechanism. After some time, it'll end up with several two-letter clusters like "WE," "ET," "TE," "EN," "ND," etc. When the tokens are sorted, it can evaluate the results by assigning different weights to them. For example, as we can see, there are at least two LAs, ETs, TEs, and ARs (medium relevance); there are also three THs (high relevance). 

What’s important here is that each token is influenced by the tokens that came before or after it. Hence, the model learns connections between bigrams (as in: tokens), not words. 

So, as it learns the rules of the language, the model predicts the most likely next token based on the sequence of tokens it has already processed. This knowledge lets it recognize that – based on our example database – the prompt “we tend to think large l_” will undoubtedly be followed by “_anguage.” Or that if the prompt is "T," you're three times more likely to get "TH" than "TR."

But when you force it to react to a prompt that continues with “THO” or “we tend to think large lions_,” it has to go beyond the training data and still try to meet your requirements. This is the moment when a hallucination occurs – the model is designed to give back the most likely answer based on the data it has processed, whether or not that answer is grounded in anything real.

And if you don’t teach it to admit it doesn't know the answer, it will never say so. Instead, it'll improvise. Or at least try to. 

As you can see, the model does not consider whether its output makes sense or not. There is no deeper understanding. Instead, the outputs are generated from probabilities learned over vast training databases – which, ideally, are constantly updated. 
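
To make the mechanism tangible, here is a minimal sketch in Python of the bigram idea described above – a toy model that counts which character follows which, predicts the most likely continuation, and has to improvise when the prompt leads outside its training data. The corpus, function name, and fallback behavior are purely illustrative, not how production LLMs are built:

```python
from collections import Counter, defaultdict
import random

# Toy "training database": the single sentence from the example above.
corpus = "We tend to think large language models are always telling the truth."
text = "".join(ch for ch in corpus.upper() if ch.isalpha())  # keep letters only

# Count which character follows which – a tiny bigram model.
next_char_counts = defaultdict(Counter)
for current, following in zip(text, text[1:]):
    next_char_counts[current][following] += 1

def predict_next(prompt: str) -> str:
    """Return the most likely next character given the last character of the prompt."""
    last = prompt.upper().replace(" ", "")[-1]
    counts = next_char_counts.get(last)
    if counts:
        return counts.most_common(1)[0][0]
    # The prompt leads outside the training data: with no evidence to rely on,
    # the model "improvises" – the toy equivalent of a hallucination.
    return random.choice(sorted(next_char_counts))

print(next_char_counts["T"])                     # Counter({'H': 3, 'E': 2, ...}) – "TH" beats "TR" 3:1
print(predict_next("we tend to think large l"))  # 'A' – the start of "_anguage"
print(predict_next("123"))                       # unseen context: the model guesses anyway
```

Real LLMs rely on learned subword embeddings and attention rather than raw counts, but the failure mode is the same: when the statistics run out, the model still produces something.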

Based on the database it was trained on, AI predicts the expected response to your question – without considering whether or not its reply is accurate. 

Overall, the more limited the training data, the less coherent the outputs. Conversely, as the database grows in size, so does the quality of the generated answers. That’s why OpenAI’s ChatGPT can pride itself on such natural conversations. That’s also why Meta decided to make LLaMA 2 free to use – to gather more training data. 

But, given its massive global user base, does this mean that ChatGPT is free of hallucinations? Not exactly. 

The problem of data bias 

As you already know, a large language model (LLM) is a trained machine learning model that generates text based on the prompt you provide. However, as I’ve mentioned before, LLMs are prone to hallucinating, sometimes producing illogical or inaccurate text. Even though the answers may seem grammatically correct (or even logical), they are still wrong.

There could be many reasons for that, but today we address the most prominent one: unreliable and biased data.

It's crucial to realize that biases in AI systems are not consciously held beliefs. Instead, they are unintentional echoes of the training data. AI, like our memories, is built on prior experiences. However, when the data is flawed (too narrow, incomplete, or not representative enough), or the algorithm is rewarded for echoing a biased input, it will eventually learn to produce distorted results. Due to biases in its training data, the AI may display some instances of favoritism or prejudice. 

How does bias work: a simplified example

Imagine you are teaching an AI model to identify animals in pictures. To do so, you feed it thousands of pre-labeled pictures of cats, dogs, and birds. However, you gathered the data carelessly, and the training set ended up with 80% images of cats, 10% images of dogs, and 10% images of birds. 

Due to overrepresentation, the model is now very good at identifying cats, but it may also be prone to mistaking small dog breeds, dogs with long whiskers, or even dogs with larger eyes and pointy ears for cats. And if we tested it on an unfamiliar (to the model) animal, there's a good chance that – as long as it had four legs – the model would still identify the animal as a cat. 
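
If you want to see how quickly such an imbalance skews results, here is a minimal sketch assuming the 80/10/10 split from the example. It uses a deliberately silly “always predict the majority class” model just to show how skewed data rewards biased predictions:

```python
from collections import Counter

# Hypothetical, skewed training labels: 80% cats, 10% dogs, 10% birds.
train_labels = ["cat"] * 800 + ["dog"] * 100 + ["bird"] * 100

# A "model" that simply echoes the majority class already looks decent on skewed data...
majority = Counter(train_labels).most_common(1)[0][0]   # -> "cat"
train_accuracy = sum(label == majority for label in train_labels) / len(train_labels)
print(majority, train_accuracy)                          # cat 0.8

# ...but collapses on a balanced, real-world test set.
test_labels = ["cat"] * 100 + ["dog"] * 100 + ["bird"] * 100
test_accuracy = sum(label == majority for label in test_labels) / len(test_labels)
print(test_accuracy)                                     # ~0.33
```

A real classifier is far more nuanced, but the same pull toward the over-represented class is what makes it see cats everywhere.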

The same thing could happen to, let’s say, an LLM used to classify product reviews by sentiment based on their vocabulary. If only positive reviews are fed to the model, it may fail to classify neutral ones, or miss sarcasm and keywords stuffed in to deceive the algorithm.

It's extremely hard not to include any biases in databases, as we are generally unaware of the stereotypes we tend to believe. This usually shows in how we gather data – some groups might be under- or over-represented. It is also visible in how we label the outputs and, eventually, how we fine-tune the results. And the more bias is represented in the database, the better the chance that our model will eventually learn to reproduce it. 

This tendency can be exacerbated by feedback from real-world users interacting with your AI model. In other words, people may unintentionally (or for the sake of fun) reinforce bias that is already there. 

If you want to explore the topic of biases and ethics in AI further, this is a good read to learn more about boosting your company image with ethical AI (with a summary of Microsoft's ethical principles). 

Why are hallucinations in AI dangerous?

AI hallucinations can have far-reaching and potentially negative effects. The risk is especially acute in industries like legal counsel, healthcare, banking (and insurance), transportation (in the context of autonomous vehicles), and many others that deal with human dignity and safety. 

Let’s name a few potential risks:

  • Spread of misinformation. As the AI model is not a search engine, it strictly refers to the data given to it. In other words, it may appear to quote extremely specific research on a chosen topic or even a portion of a specific text (such as a poem), but upon closer examination, the source usually turns out to be nonexistent. Or – since the structure is learned from tokens – the model alters the original text based on the weights assigned to certain tokens. This is especially concerning in healthcare, legal counsel, and education.
  • Biased decision-making. AI-powered systems may generate inaccurate or deceptive results that reinforce preexisting biases, resulting in unfair and discriminatory decision-making. For example, an AI-powered recruitment tool that hallucinates and favors candidates based on their gender or race can jeopardize diversity and equality in hiring procedures.
  • Additional costs. Relying on AI systems that generate unreliable outputs can result in expensive mistakes, financial losses, and reputational damage.
  • Safety hazards. AI hallucinations can have life-threatening consequences. A self-driving car that hallucinates, misinterprets traffic signals, or ignores pedestrians poses a serious threat to road safety.

Can AI hallucinations be useful?

Even though AI hallucinations are typically viewed negatively, they occasionally serve a useful purpose. Hallucination stimulates creativity. Therefore, based on the training input, you can modify your model to, for instance, hallucinate some original ideas. Or to come up with a fresh book plot. This can be achieved by manipulating the randomness parameter, usually called the temperature. Basically, you decide whether the model can improvise or must stick closely to what it has learned. 
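
To show what that randomness parameter does, here is a minimal, library-agnostic sketch (my own illustration, with made-up logits, not tied to any specific model API) of how temperature reshapes the probabilities a model samples from:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a token index from model scores, scaled by a temperature parameter."""
    if temperature <= 0:                      # temperature 0: always pick the top token
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())     # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])      # hypothetical scores for four candidate tokens
print(sample_token(logits, temperature=0.0))  # deterministic: always token 0, the "safest" choice
print(sample_token(logits, temperature=1.5))  # flatter distribution: more creative, more likely to drift
```

At temperature 0 the model always picks the safest token; higher values flatten the distribution, which boosts creativity – and the risk of drifting away from the facts.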

Preventing hallucinations with data reliability

Language models are not search engines or databases. Thus, hallucinations are unavoidable.

If you want to fine-tune a model for the needs of your business, you can try to clean up the training data and retrain the model. It's best to take care of this in the early stages of development, because retraining a large open-source model later is beyond the reach of typical hardware.

There are also a few ways to prevent hallucinations through data preparation (a minimal sketch follows the list):

  1. Thoroughly clean and preprocess your data, removing outliers, duplicates, and irrelevant information. This step helps enhance the quality of your dataset and minimizes the chances of hallucinatory outputs.
  2. Prioritize gathering diverse and representative data that accurately reflects the real-world scenarios your AI system will encounter.  Ensure the data covers a wide range of variations, contexts, and demographics relevant to your AI system's intended application.
  3. Invest time and effort in meticulous data labeling to provide accurate and comprehensive annotations. Well-labeled data reduces ambiguity and improves the performance and reliability of AI models.
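
As a rough illustration of these steps, here is a minimal pandas sketch; the file and column names (reviews_raw.csv, text, label, verified) are hypothetical and stand in for whatever your dataset actually contains:

```python
import pandas as pd

# Hypothetical raw training data with a free-text column and a sentiment label.
df = pd.read_csv("reviews_raw.csv")

# 1. Clean and preprocess: drop duplicates, empty rows, and obvious junk.
df = df.drop_duplicates(subset="text").dropna(subset=["text"])
df = df[df["text"].str.len().between(10, 2000)]   # discard fragments and overlong outliers

# 2. Check that the classes are reasonably balanced before training.
print(df["label"].value_counts(normalize=True))

# 3. Keep only rows whose labels were confirmed by human reviewers (assumed 'verified' flag).
df = df[df["verified"]]
df.to_csv("reviews_clean.csv", index=False)
```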

When using an open-source solution, it's best to prevent hallucinations through controlled generation. With prompt engineering, you can provide sufficient details and constraints in the prompt. The more specific you are, the better the results you’ll achieve. The same rule applies to limiting the model's response length and narrowing its sources from “unknown” to a specific database of links, files, and other pre-chosen materials.
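
Here is a minimal sketch of what such a constrained prompt can look like. The helper function, source names, and wording are hypothetical; the resulting string can be passed to whichever model you use, ideally with a low temperature:

```python
# Pre-chosen materials the model is allowed to rely on (hypothetical names).
TRUSTED_SOURCES = ["internal_product_faq.md", "pricing_policy_2023.pdf"]

def build_prompt(question: str, context: str, max_words: int = 120) -> str:
    """Assemble a prompt that constrains sources, length, and the fallback answer."""
    return (
        "Answer the question using ONLY the context below.\n"
        "If the answer is not in the context, reply exactly: 'I don't know.'\n"
        f"Keep the answer under {max_words} words.\n\n"
        f"Context (from {', '.join(TRUSTED_SOURCES)}):\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    question="What is the refund window for annual plans?",
    context="Annual plans can be refunded within 30 days of purchase.",
)
print(prompt)
```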

And when you get the first results, start by fact-checking them! It’s best to accept the reality that (open-source) models are prone to hallucination. So, if you incorporate AI functionality into your own product, communicate to users that hallucinations can happen and add a flag button so they can report them. 

Conclusion: harnessing the power of AI with data reliability

As AI models advance, business leaders must consider the ethical implications of incorporating AI into their operations and ensure that AI systems are transparent, accountable, and unbiased. Hence, safeguarding against hallucinations is crucial for unlocking AI's benefits. 

Incorporating AI into your products and services can drive innovation, improve efficiency, and ultimately generate substantial cost savings – an invaluable advantage, especially in recessionary times, when every penny counts for businesses seeking to thrive amidst economic challenges.

Get in touch with Codete's AI experts >

At Codete, we understand the importance of data reliability and the potential of AI to revolutionize your operations. Our expertise lies in discovering industry-specific use cases for Generative AI applications that deliver accurate and reliable results and prioritize ethical considerations and inclusivity.  

If you need a service like that, we encourage you to reach out to us.


Dawid Pacholczyk

Consulting Manager at Codete with over 15 years of experience in the IT sector and a strong technical background. Seasoned in working with multinational companies. Ph.D. student and lecturer at Polish-Japanese Academy of IT, focused on software architecture, software development and management.
