this post was submitted on 18 Jun 2024
94 points (100.0% liked)
TechTakes
1401 readers
139 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 1 year ago
you are viewing a single comment's thread
view the rest of the comments
oh but you see, it's "hallucination" when the LLM is wrong and it's hype cycle fuel when it's correct. no, LLMs don't "hallucinate", that implies this state is peculiar, isolated, triggered by very specific circumstances. LLMs bullshit all the time; sometimes they are right, sometimes not, but the process that produces both types of response is the same. pushing for "hallucination" tries to obscure that. use of "hallucination" also implies that LLMs know something; they don't, by design. it just so happens that when they "get" things right, it's because it appeared in the training material enough times to make an impression on the model.
Bullshitting, to me, is making intentionally wrong statements. LLMs do not generate intentionally wrong statements. Saying they do implies intelligence.
LLMs know nothing, nor are they intelligent. They are also neither right nor wrong; they generate output based on statistics.
"Hallucination" as a term for "AIs" making things up has been used since the early 2000s (even if its meaning has changed since then).
bullshitting as in giving a confident answer without regard for actual reality. as previously discussed there, LLMs do exactly that: they generate confident, authoritative-sounding text without regard for facts, because these things do not know facts, or anything for that matter.
maybe it's high time to change terms then
So you say there could be different meanings of the same word? Like “bullshitting” or “hallucination”?
mod post: please desist, it's just tiresome now
agreed
Absolutely.