this post was submitted on 27 Feb 2024
107 points (100.0% liked)

Technology

 

Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.
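In other words, the core argument is diagonalization-style: for any fixed computable LLM, there exists a computable ground-truth function that disagrees with it somewhere, so that model is guaranteed to hallucinate relative to that ground truth. A toy Python sketch of the flavour of the argument (not the paper's actual construction; `llm` here is just any callable from prompts to answers):

```python
# Toy sketch of the diagonalization idea: given any fixed computable "LLM",
# build a computable ground truth that the model must disagree with.

def adversarial_ground_truth(llm):
    """Return a ground-truth function that the given model is always wrong about."""
    def ground_truth(prompt: str) -> str:
        # Answer the opposite of whatever the model says, so the model
        # "hallucinates" (disagrees with the ground truth) on every prompt.
        return "no" if llm(prompt) == "yes" else "yes"
    return ground_truth

def toy_llm(prompt: str) -> str:
    return "yes"  # trivial stand-in model

truth = adversarial_ground_truth(toy_llm)
print(toy_llm("Is the sky green?"), truth("Is the sky green?"))  # "yes" vs. "no"
```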

[–] LanternEverywhere@kbin.social 3 points 6 months ago (21 children)

I strongly doubt that hallucination is a hard limitation on the final output. It may be an inevitable part of the process, but it's almost certainly a surmountable problem.

Just off the top of my head, I can imagine using two separate LLMs to produce the final output: the first generates an initial answer, and the second verifies whether that answer is accurate. The chance of two totally independent LLMs having the same hallucination is probably very low. And you can add as many additional independent LLMs for re-verification as you like. The chance of a hallucination making it through multiple LLM verifications probably gets close to zero.

While this would greatly multiply the resources required, it's just a simple example showing that hallucinations are not inevitable in the final output.
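Roughly what I mean, as a sketch (the `generate_answer` and `verify_claim` functions are placeholders for whatever two independent models/APIs you'd actually use):

```python
# Sketch of the generate-then-verify idea. generate_answer() and verify_claim()
# are hypothetical wrappers around two independent LLMs.

def generate_answer(prompt: str) -> str:
    raise NotImplementedError("call the first LLM here")

def verify_claim(claim: str) -> bool:
    raise NotImplementedError("ask a second, independent LLM whether the claim is true")

def answer_with_verification(prompt: str, max_retries: int = 3) -> str | None:
    for _ in range(max_retries):
        draft = generate_answer(prompt)
        if verify_claim(draft):   # only return answers the verifier accepts
            return draft
    return None                   # refuse to answer rather than return an unverified draft
```

The point isn't the exact loop, it's that the generator and the verifier are independent models, so they're unlikely to share the same hallucination.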

[–] blindsight@beehaw.org 22 points 6 months ago (16 children)

That's not how LLMs work.

Super short version is that LLMs probabilistically determine the next word most likely to occur in a sequence. They do this using Statistical Models (like what your cell phone's autocomplete uses); Transformers (rating the importance of preceding words, so the model can "focus" on the most important words); and Relatedness (a measure of how closely linked different words/phrases are to each other in meaning).

With increasingly large models, LLMs can build a more accurate representation of Relatedness across a wider range of topics. With enough examples, LLMs can generate effectively unlimited content that is closely Related to a query.

So a small LLM can make sentences that follow writing conventions but are nonsense. A larger LLM can write intelligibly about topics that are frequently included in the training materials. Huge LLMs can do increasingly nuanced things like "explain" jokes.

LLMs are not capable of evaluating truth or facts. It's not part of the algorithm. And it doesn't matter how big they get. At best, with enough examples to build a stronger Relatedness dataset, they are more likely to "stay on topic" and return results that are actually similar to what is being asked.
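A toy illustration of that next-word step, with a made-up vocabulary and probabilities (nothing to do with any real model's weights):

```python
import random

# Toy next-word sampler: the model only ever sees a probability distribution
# over candidate words; there is no notion of whether the result is true.
next_word_probs = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Mars": 0.03},
}

def sample_next_word(context: str) -> str:
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word("The capital of France is"))
```

Most samples say "Paris", but "Mars" comes out a few percent of the time. Nothing in the process checks the output against reality; "Mars" is just a lower-probability continuation, not a detected error.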

[–] LanternEverywhere@kbin.social 2 points 6 months ago* (last edited 6 months ago) (15 children)

No, I've used LLMs to do exactly this, and it works. You prompt it with a statement and ask "is this true, yes or no?" It will reply with a yes or no, and it's almost always correct. Do this verification through multiple different LLMs and it would eliminate close to 100% of hallucinations.

EDIT

I just tested it multiple times in chatgpt4, and it got every true/false answer correct.
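Something along these lines, where `ask_yes_no` is a hypothetical stand-in for however you'd call each model:

```python
# Sketch of cross-checking a claim against several independent LLMs and
# accepting it only if a majority agree. ask_yes_no() is a hypothetical wrapper
# around each model's API, returning True for "yes" and False for "no".

def ask_yes_no(model: str, claim: str) -> bool:
    raise NotImplementedError(f"prompt {model} with: 'Is this true, yes or no? {claim}'")

def majority_verified(claim: str, models: list[str]) -> bool:
    votes = [ask_yes_no(m, claim) for m in models]
    return sum(votes) > len(votes) / 2   # accept only if most models say "yes"

# Usage idea:
# majority_verified("The Eiffel Tower is in Paris.", ["model-a", "model-b", "model-c"])
```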

[–] kciwsnurb@aussie.zone 6 points 6 months ago

You seem very certain about this approach, but you haven't given any sources so far. Can you back this up with actual research, or is it just based on your personal experience with chatgpt4?
