this post was submitted on 25 Nov 2023
732 points (97.3% liked)

[–] Nobody@lemmy.world 4 points 1 year ago (1 children)

What’s the opposite of eating the onion? I read the title before looking at the site and thought it was satire.

Wasn’t there a test a while back where the AI went crazy and started killing everything to score points? Then, they gave it a command to stop, so it killed the human operator. Then, they told it not to kill humans, and it shot down the communications tower that was controlling it and went back on a killing spree. I could swear I read that story not that long ago.

[–] Nutteman@lemmy.world 13 points 1 year ago (1 children)
[–] FaceDeer@kbin.social 6 points 1 year ago (2 children)

The link was missing a slash: https://www.reuters.com/article/idUSL1N38023R/

This is typically how stories like this go. Like most animals, humans have evolved to pay extra attention to things that are scary and to give inordinate weight to dangerous scenarios when making decisions. So you can present someone with a hundred studies about how AI really behaves, but if they've seen the Terminator, that's what sticks in their mind.

[–] kromem@lemmy.world 4 points 1 year ago

Even the Terminator was the byproduct of this.

In the 50s and 60s, when people first started thinking about what it might look like for something smarter than humans to exist, the reference point they drew on was the belief that Homo sapiens had been smarter than the Neanderthals and killed them all off.

Therefore, the logical conclusion was that something smarter than us would be an existential threat that would compete with us and try to kill us all.

Not only is this incredibly stupid (compete with us for what, exactly?), it's based on BS anthropology. There's no evidence we were smarter than the Neanderthals; we had cross-cultural exchanges back and forth with them over millennia and had kids with them, and what more likely killed them off was an inability to adapt to climate change and pandemics (in fact, severe COVID infections today are linked to a gene variant humans inherited from Neanderthals).

But how often do you see AGI discussed as a likely symbiotic coexistence with humanity? No, it's always some fearful scenario, because we've been self-propagandizing for decades with bad extrapolations that have turned out to be shit predictions to date (e.g. that AI would never exhibit empathy or creativity, when both are key aspects of the current iteration of models, or that it would follow rules dogmatically, when the current models barely follow rules at all).

[–] sukhmel@programming.dev 1 points 1 year ago

That depends heavily on the consequences of failure. Like, you don't test much if you're programming a Lego car, but you test everything very thoroughly if you're programming a satellite.

In this case, the amount of testing needed to allow a killer bot to run unsupervised will probably be so large that it will never be even half done.