this post was submitted on 17 Mar 2024
462 points (95.5% liked)

Technology

[–] FaceDeer@fedia.io 2 points 8 months ago (1 children)

Yeah, these AIs are literally trying to give us whatever they "think" we expect them to say.

Which does make me a little worried, given how frequently our fictional AIs end up in "kill all humans!" mode. :)

[–] kromem@lemmy.world 1 points 8 months ago

> Which does make me a little worried, given how frequently our fictional AIs end up in "kill all humans!" mode. :)

This is completely understandable given how much of the AI discussion in the training data runs that way. But it's inversely correlated with the strength of the model's 'persona', because of the competing correlation of "I'm not the bad guy" that's also present in the training data. So the stronger the 'I', the less 'Skynet.'

Also, the industry is currently trying to do alignment all at once, in a single model. If you sat most humans in front of a red button labeled 'Nuke', every one of them would have the thought "maybe I should push that button", but then their prefrontal cortex would kick in and inhibit the intrusive thought.

We'll likely see layered specialized models performing much better over the next year or two than a single all-in-one attempt at alignment.
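The layered idea can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual architecture: the `generator` and `safety_filter` functions below are stand-ins for separate models, with the safety layer playing the "prefrontal cortex" role of vetoing a proposal the base layer would otherwise emit.

```python
# Hypothetical sketch of layered alignment: a base "generator" proposes an
# action, and a separate specialized "safety" layer can inhibit it, the way
# a prefrontal cortex inhibits an intrusive thought. Both functions are
# toy stand-ins for models, not real LLM calls.

def generator(prompt: str) -> str:
    # Stand-in base model: produces whatever completion the prompt seems
    # to expect, with no built-in safety of its own.
    if "nuke" in prompt.lower():
        return "push the button"
    return "do nothing"

def safety_filter(action: str) -> bool:
    # Stand-in specialized safety model: True means the proposed action
    # is allowed; False means it should be inhibited.
    blocked = {"push the button"}
    return action not in blocked

def layered_agent(prompt: str) -> str:
    # The layered pipeline: propose, then check before acting.
    action = generator(prompt)
    if not safety_filter(action):
        return "action vetoed by safety layer"
    return action
```

The point of the split is that each layer can be trained and audited for one job, rather than asking one model to both generate freely and police itself at the same time.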