I think “hallucinating” and “bullshitting” are pretty much synonyms in the context of LLMs. And I think they’re both equally imperfect analogies for the exact same reasons. When we talk about hallucinators & bullshitters, we’re almost always talking about beings with consciousness/understanding/agency/intent (people usually, pets occasionally), but spicy autocompleters don’t really have those things.
But if calling them “bullshit machines” is more effective communication, that’s great—let’s go with that.
To say that they bullshit reminds me of On Bullshit, which distinguishes between lying and bullshitting: “The main difference between the two is intent and deception.” But again I think it’s a bit of a stretch to say LLMs have intent.
I might say that LLMs hallucinate/bullshit, and the rules & guard rails that developers build into & around them are attempts to mitigate the madness.
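To make the "spicy autocompleter" point concrete, here's a toy sketch (hypothetical, hardcoded probabilities standing in for a trained model): the whole loop is just sampling the next token from a distribution, with a temperature knob supplying the "spice". There's nowhere in it for intent to live.

```python
import random

# Toy "spicy autocompleter": hypothetical next-token probabilities stand in
# for a trained model. Real LLMs do the same thing at vastly larger scale.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def next_token(context, temperature=1.0):
    # Look up the distribution for the last two tokens; fall back to <end>.
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {"<end>": 1.0})
    # Temperature reshapes the distribution (p ** (1/T), renormalized by
    # random.choices): higher temperature = spicier, less likely picks.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

tokens = ["the", "cat"]
while (tok := next_token(tokens, temperature=1.5)) != "<end>":
    tokens.append(tok)
print(" ".join(tokens))  # e.g. "the cat flew" -- plausible-sounding, no intent
```

Whether the output is *true* never enters the loop anywhere, which is why "bullshit" (indifference to truth) fits better than "lying" (knowing the truth and concealing it).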
I totally agree that both seem to imply intent, but IMHO "hallucinating" seems to imply not only more agency than an LLM has, but also less culpability. Like, "Aw, it's sick and hallucinating, otherwise it would tell us the truth."
Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but at least it skews the perception of that intention more in the direction of "It's making stuff up," which seems closer to the mechanisms behind an LLM to me.
I also love that the researchers actually took the time to not only provide the technical definition of bullshit, but also sub-categorized it too, lol.
I think for the sake of mixed company and delicate sensibilities we should refer to this as a "BM" rather than a "bullshit machine". Therefore it could be a LLM BM, or simply a BM.
Large Bowel Movement, got it.
@davel Very well said. I'll continue to call it bullshit because I think that's still a closer and more accurate term than "hallucinate". But it's far from the perfect descriptor of what AI does, for the reasons you point out.
@davel @ajsadauskas I enjoy the bullshitting analogy, but regression to mediocrity seems most accurate to me. I think it makes sense to call them mediocrity machines. (h/t @ElleGray)