I strongly doubt that hallucination is a fundamental limitation of the final output. It may be an inevitable part of the process, but it's almost certainly a surmountable problem.
Just off the top of my head, imagine using two separate LLMs to produce the final output: the first generates an initial answer, and the second verifies whether that answer is accurate. The chance of two totally independent LLMs producing the same hallucination is probably very low, and you can add as many additional separate LLMs for re-verification as you like. The chance of a hallucination surviving multiple rounds of verification should get close to zero.
While this would greatly multiply the resources required, it's just a simple example showing that hallucinations are not inevitable in the final output (a rough sketch of such a pipeline is below).
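A minimal sketch of that generate-then-verify idea, assuming the models are exposed as simple prompt-in, text-out callables. Every name here is a placeholder rather than a real API; you'd plug in whichever LLM clients you actually use:

```python
# Sketch of the generate-then-verify pipeline described above.
# The model callables are placeholders -- supply your own LLM clients.
from typing import Callable

LLM = Callable[[str], str]  # takes a prompt, returns the model's text reply


def generate_with_verification(prompt: str,
                               generator: LLM,
                               verifiers: list[LLM],
                               max_retries: int = 3) -> str | None:
    """Ask the generator for an answer, then have each (ideally independent)
    verifier check it. Retry with a fresh generation if any verifier objects."""
    for _ in range(max_retries):
        answer = generator(prompt)
        verification_prompt = (
            f"Question: {prompt}\n"
            f"Proposed answer: {answer}\n"
            "Is the proposed answer factually accurate? Reply YES or NO."
        )
        if all(v(verification_prompt).strip().upper().startswith("YES")
               for v in verifiers):
            return answer
    return None  # no answer survived verification within the retry budget
```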
How do you propose to get these independent LLMs? If both are trained using similar objectives, e.g. masked token prediction, then they won't be independent.
Also, assuming independent LLMs could be obtained, how do you propose to compute this hallucination probability? Without knowing this probability, you can’t know how many verification LLMs are sufficient for your application, can you?
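To make that point concrete: if each verifier did independently miss a hallucination with some probability p (exactly the quantity that's unknown here), then k verifiers would all miss it with probability p^k, and the required k follows directly. A back-of-the-envelope sketch, with both numbers purely illustrative:

```python
# If each verifier independently misses a hallucination with probability p_miss
# (a big assumption, and the very quantity the comment says we can't measure),
# then k verifiers all miss it with probability p_miss**k.
import math


def verifiers_needed(p_miss: float, target: float) -> int:
    """Smallest k such that p_miss**k <= target."""
    return math.ceil(math.log(target) / math.log(p_miss))


print(verifiers_needed(p_miss=0.3, target=1e-4))  # -> 8 verifiers
```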
There already exist multiple LLMs that are essentially completely different from one another. In fact, this is one of the major problems with LLMs: even a small change to a model can radically alter the output it returns across huge numbers of seemingly unrelated topics.
For your other point, I never said bouncing their answers back and forth for verification was trivial, but it's definitely doable.
Can you provide sources for a few of these completely different LLMs?
You mean perturbing the parameters of the LLM? That’s hardly surprising IMO. And I’m not sure it’s convincing enough to show independence, unless you have a source for this?