this post was submitted on 27 Feb 2024
105 points (100.0% liked)

Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.
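
For readers who want the skeleton of the impossibility argument, here is a minimal sketch of the diagonalization idea the abstract gestures at. The notation ($h_i$, $s_i$, $f$) is my own paraphrase, not necessarily the paper's exact construction:

```latex
% Sketch (paraphrased): enumerate all computable LLMs $h_0, h_1, h_2, \dots$
% and all input strings $s_0, s_1, s_2, \dots$, and define a ground truth $f$
% that disagrees with every model somewhere:
\[
  f(s_i) \;=\; \text{some answer } y \text{ such that } y \neq h_i(s_i).
\]
% $f$ is computable (simulate $h_i$ on $s_i$ and output anything else), yet
% every $h_i$ satisfies $h_i(s_i) \neq f(s_i)$; that is, every computable LLM
% "hallucinates" on at least one input relative to this ground truth.
```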

[–] solanaceous@beehaw.org 2 points 8 months ago

Sure, it’s hard to say whether a computer program can “know” anything or what that even means. But the paper isn’t arguing that. It assumes very little about how LLMs actually work, and it defines “hallucination” as “not giving the right answer”, with no option for the machine to answer “I don’t know”. Then the proof follows basically from the fact that the LLM-or-whatever can’t know everything.
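
That “can’t know everything” step is essentially a diagonalization, and it’s easy to make concrete. Here’s a toy sketch in Python (the names `llm` and `adversarial_ground_truth` are mine, purely for illustration, not from the paper):

```python
# Toy illustration: for ANY fixed, deterministic answering function,
# a computable "ground truth" can be defined to disagree with it,
# so the model is guaranteed to be wrong somewhere.

def llm(question: str) -> str:
    """Stand-in for an arbitrary computable LLM."""
    return "42"  # whatever answer the model produces

def adversarial_ground_truth(question: str) -> str:
    """Diagonalize against `llm`: simulate it, then answer differently."""
    return llm(question) + " (not!)"  # any output that differs works

q = "What is the airspeed of an unladen swallow?"
assert llm(q) != adversarial_ground_truth(q)  # the model is "wrong" here
```

Under the paper’s definition, that mismatch already counts as a hallucination, which is why the proof goes through so easily.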

The result is not very surprising, and saying that it means hallucination is inevitable is an oversell. It’s possible that hallucinations, or at least wrong answers, are inevitable for different reasons though.