this post was submitted on 28 Feb 2024
471 points (97.4% liked)

[–] dojan@lemmy.world 20 points 6 months ago (2 children)

I mean, it makes sense. Machine learning is fantastic at noticing patterns, and the stuff they generate most definitely does have patterns. We might not notice them, but the models will pick up on them, and if you keep training them on that data, they'll skew more and more in that direction.

They've been marketing these things as if there's no limit to how good they can get, but there is. Nothing is infinite.

[–] circuitfarmer@lemmy.world 16 points 6 months ago (2 children)

I've tried to make this point several times to folks in the industry. I work in AI, and yet every time I approach some people with "you know it ultimately just repeats patterns", I'm met with scoffs and those people telling me I'm just not "seeing the big picture".

But I am, and the truth is that there are limits. This tech is not the digital singularity the marketers and business goons want everyone to think it is.

[–] zurohki@aussie.zone 9 points 6 months ago (2 children)

It repeats things that sort of sound intelligent to try and convince everyone that actual intelligent thought is taking place? It really is just like humans!

[–] dojan@lemmy.world 2 points 6 months ago

They don't really parrot unless they're overfitted.

It's more that they have been trained to produce a certain kind of result. One way to train them is by assigning a score to how good each output is. Doing this manually takes a lot of time (Google has been doing this for years via CAPTCHA), or you can train other models to score text for you.

The obvious problem with the latter solution is that you then need to ensure the scoring model scores roughly in line with how humans would; the technical term for this is alignment. There's a pretty funny story about that with GPT-2, presented in a really cute animated format by Robert Miles.
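The scoring idea described above can be sketched in a few lines. This is a toy illustration, not a real reward model: `toy_reward` is a hand-written heuristic standing in for what would actually be a trained neural network, and the length/filler rules are made up for the example.

```python
# Toy sketch of scoring-based output selection ("reward modeling").
# The heuristic below is purely illustrative; in practice the scorer
# is itself a trained model, which is exactly why alignment matters.

def toy_reward(text: str) -> float:
    """Assign a score to a candidate output (hypothetical heuristic)."""
    score = 0.0
    score += min(len(text.split()), 20) * 0.1  # reward some length, capped
    score -= text.lower().count("um")          # penalize filler words
    return score

def pick_best(candidates: list[str]) -> str:
    """Rank candidate generations and keep the highest-scoring one."""
    return max(candidates, key=toy_reward)

candidates = [
    "um, um, I guess so",
    "The model ranks each candidate and keeps the highest scoring one.",
]
best = pick_best(candidates)
```

If the scorer's notion of "good" drifts from the human one, the generator happily optimizes for the wrong thing, which is the alignment problem in miniature.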

[–] afraid_of_zombies@lemmy.world 1 points 6 months ago

Tell me about it. All the government contractors I work with just repeat the same submittal over and over and over again.

[–] rosemash@social.raincloud.dev 1 points 6 months ago

It can be both though.

[–] Harbinger01173430@lemmy.world 2 points 6 months ago

Legend says that humans developed pattern finding as a skill ages ago...