this post was submitted on 21 May 2024
119 points (86.5% liked)

Technology
[–] JeffKerman1999@sopuli.xyz 22 points 5 months ago (1 children)

You must have one person constantly checking for hallucinations in everything that is generated: how is that going to be faster?

[–] Grippler@feddit.dk -4 points 5 months ago* (last edited 5 months ago) (3 children)

Sure, you sort of need that at the moment (not actually everything, but I get your hyperbole), but you seem to be working under the assumption that LLMs are not going to improve beyond what they are now. The tech is still very much in its infancy, and as it matures this will be needed less and less, until it only requires a few people to manage LLMs that solve the tasks of a much larger workforce.

[–] SupraMario@lemmy.world 7 points 5 months ago

It's hard to improve when the input data is human-generated and the output can't be error-checked against that same input. It's like trying to solve a math problem with two calculators that both think 2 + 2 = 6, because the data they were given says it's true.
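A minimal sketch of that calculator analogy (toy code, not any real model): if two "models" are built on the same flawed data, checking one against the other confirms agreement, not correctness.

```python
# Hypothetical shared training data containing a systematic error.
BAD_FACTS = {(2, 2): 6}

def calculator(a, b):
    # Both calculators consult the same flawed data before computing.
    return BAD_FACTS.get((a, b), a + b)

answer = calculator(2, 2)
check = calculator(2, 2)  # "error check" using a second, identical model

assert answer == check    # the check passes: both models agree...
assert answer != 2 + 2    # ...yet both are wrong about 2 + 2
```

The cross-check can only catch errors the two models don't share, which is exactly the problem when they come from the same data.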

[–] Muehe@lemmy.ml 2 points 5 months ago

> (not actually everything, but I get your hyperbole)

How is it hyperbole? All artificial neural networks have "hallucinations", no matter their size. What's your magic way of knowing when that happens?

[–] JeffKerman1999@sopuli.xyz 0 points 5 months ago

LLMs are now trained on data generated by other LLMs. If you look at the "writing prompt" stuff, 90% is machine-generated (or so bad that I assume it's machine-generated), and that's the data being bought right now.
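A toy simulation of that feedback loop (hypothetical numbers, not a model of any real LLM): each "generation" is trained on samples drawn from the previous generation's output, so the spread of what the model can produce shrinks over time.

```python
import random

random.seed(0)

def train_on(samples):
    # Stand-in for training: the new "model" memorises only the mean
    # and the range of the data it saw.
    mean = sum(samples) / len(samples)
    spread = (max(samples) - min(samples)) / 2
    return mean, spread

def generate(model, n=100):
    # Stand-in for generation: sample uniformly within what was learned.
    mean, spread = model
    return [random.uniform(mean - spread, mean + spread) for _ in range(n)]

# Generation 0: "human" data with a wide spread of values.
data = [random.uniform(0, 100) for _ in range(100)]

for gen in range(5):
    model = train_on(data)
    data = generate(model)  # the next generation trains on model output
    print(gen, round(model[1], 1))  # the learned spread shrinks each round
```

Because each generation can only reproduce a subset of what the previous one emitted, the sampled range narrows monotonically; real "model collapse" is more subtle, but the direction of the effect is the same.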