Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.
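The formal definition sketched in the abstract can be written out compactly. Roughly, and in my own notation rather than the paper's exact symbols, hallucination is a single-disagreement condition:

```latex
% Paraphrase of the setup the abstract describes; the symbols here are mine.
% S : the set of all finite strings (prompts and answers)
% f : S -> S  -- a computable ground-truth function
% h : S -> S  -- any computable LLM
%
% h hallucinates with respect to f iff it disagrees with f on some prompt:
\[
  \exists\, s \in S :\; h(s) \neq f(s)
\]
% The paper's claim is that, for a suitably chosen computable f, every
% computable h satisfies this condition, i.e. no LLM matches the ground
% truth on all inputs.
```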

  • eldrichhydralisk@lemmy.sdf.org · 10 months ago

    For those of you who didn’t read the paper, the argument they’re making is similar to Gödel’s Incompleteness Theorem: no matter how you build your LLM, there will be a significant number of prompts that make that LLM hallucinate. If the proof holds up, then hallucinations aren’t a limitation of the training data or the structure of your particular model; they’re a limitation of the very concept of an LLM. That doesn’t make LLMs useless, but it does mean you shouldn’t ever use one as a source of truth.
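The flavour of that argument can be shown in miniature. Below is a toy Python sketch, not the paper's actual construction: the "qN" prompt encoding and the two stand-in models are invented for illustration. The point is that, given any list of candidate models, you can define a ground truth that each of them gets wrong on at least one prompt.

```python
# Toy diagonalization: given ANY finite list of candidate "LLMs" (here just
# plain functions from prompt to answer), build a ground truth that every
# one of them answers incorrectly on at least one prompt.

def make_ground_truth(models):
    """models: a list of str -> str functions standing in for LLMs."""
    def ground_truth(prompt: str) -> str:
        i = int(prompt.removeprefix("q"))                # prompt "qi" is question number i
        model_answer = models[i](prompt)                 # what model i would answer
        return "no" if model_answer == "yes" else "yes"  # answer the opposite
    return ground_truth

# Two stand-in "models": one always says yes, one always says no.
always_yes = lambda prompt: "yes"
always_no = lambda prompt: "no"

truth = make_ground_truth([always_yes, always_no])
print(truth("q0"), "vs", always_yes("q0"))  # no  vs yes -> model 0 hallucinates on q0
print(truth("q1"), "vs", always_no("q1"))   # yes vs no  -> model 1 hallucinates on q1
```

The paper's proof works over any enumerable family of computable models rather than a finite list, but the move is the same: the ground truth is built to answer differently from whatever the model would say.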

    • Daxtron2@startrek.website · 10 months ago

      Just want to point out that you shouldn’t use them as a single source of truth. They can and do give factual information, but just as with any other source, you should verify it and know what you’re actually doing with the output. Part of why it’s so useful to me as a programmer is that I can distinguish between good and bad outputs and use the good ones.

      • eldrichhydralisk@lemmy.sdf.org · 10 months ago

        Which is exactly what the paper recommends! As long as you have something that isn’t an LLM in the pipeline to vet the output, and you’re aware of the tech’s limitations, they can be useful tools. But some of those limitations might be a more solid barrier than some sales departments would like us to believe.
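That non-LLM vetting step can be very mundane. Here is a minimal sketch of the idea for generated code, where call_llm() is a hypothetical placeholder rather than any real API: the output is accepted only if it parses and passes known test cases, so the accept/reject decision never depends on the model itself.

```python
# Sketch of a non-LLM vetting step for LLM-generated code.
# call_llm() is a hypothetical placeholder, not a real client library.

import ast

def call_llm(prompt: str) -> str:
    # Placeholder: imagine this calls whatever model/provider you actually use.
    return "def add(a, b):\n    return a + b"

def vet_generated_function(source: str, name: str, tests) -> bool:
    """Accept generated code only if it parses, defines `name`, and passes the tests."""
    try:
        ast.parse(source)                  # check 1: syntactically valid Python
    except SyntaxError:
        return False
    namespace = {}
    try:
        exec(source, namespace)            # check 2: definition runs (sandbox this in real use)
    except Exception:
        return False
    fn = namespace.get(name)
    if not callable(fn):
        return False
    return all(fn(*args) == expected       # check 3: matches known input/output pairs
               for args, expected in tests)

candidate = call_llm("Write a Python function add(a, b) that returns the sum.")
ok = vet_generated_function(candidate, "add", tests=[((2, 3), 5), ((-1, 1), 0)])
print("accepted" if ok else "rejected; regenerate or write it by hand")
```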

        • Daxtron2@startrek.website · 10 months ago

          Yep! Figured I’d share for the 90% who won’t read it though, haha. Sales teams truly are the worst when it comes to that.

    • CanadaPlus@lemmy.sdf.org · 10 months ago

      “the argument they’re making is similar to Gödel’s Incompleteness Theorem”

      More like the conclusion. I will be shocked if I read this and they’re doing Peano arithmetic.