this post was submitted on 25 Feb 2024
66 points (94.6% liked)

Futurology

[–] TWeaK@lemm.ee 6 points 8 months ago

As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons.

Apparently no one was paying attention to Azerbaijan.

[–] Mango@lemmy.world 4 points 8 months ago (1 children)

It's slaughterbots.

Even scarier is that the people making them don't even need to know what they're making.

https://youtu.be/O-2tpwW0kmU?si=V2YFJ29QOD3Tvfu_

Here's a link without site tracking: https://www.youtube.com/watch?v=O-2tpwW0kmU

I am not a bot, and this action was performed manually.

[–] threelonmusketeers@sh.itjust.works 4 points 8 months ago (1 children)

Is there any chance that this could evolve into proxy wars being fought between robot armies, and result in a reduced loss of human life?

[–] The_v@lemmy.world 3 points 8 months ago

No. Humans use technology to kill humans.

[–] Overzeetop@beehaw.org 3 points 8 months ago* (last edited 8 months ago) (1 children)

See, this is what happens when you prevent/restrict the use of nuclear weapons. If we would just recognize the effectiveness of hypersonic, ballistic re-entry, multiple-warhead nuclear munitions and deploy them in conflicts instead of conventional weapons, we wouldn't need to worry about AI mis-identifying non-combatants one by one. [taps forehead]

[–] Argongas@kbin.social 2 points 8 months ago (1 children)

I'm not sure how I feel about it, but I've heard the argument that AI might actually be better at killing. People mess up all the time and misidentify threats, which often causes collateral damage in conflicts. In theory, AI could be much better at that identification process.

It's kind of the same argument as with self-driving cars: we freak out whenever one gets in an accident, but people causing thousands of accidents a day causes no outrage.

Not saying I necessarily agree with either argument, but they do make me question how we think about and evaluate technology.

[–] TWeaK@lemm.ee 1 points 8 months ago* (last edited 8 months ago)

AI is a tool, and like all tools it's only as effective as the tool who is using it.

Israel has already shown that AI can be abused to commit atrocities, while giving those responsible further opportunity to evade accountability.

With self-driving cars, the main reason we don't have them is that the manufacturers aren't willing to accept liability. The insurance industry is also a massive leech on society and doesn't want to give up its cash cow. It's less about effectiveness and more about politics and money.

[–] BruceTwarzen@kbin.social 2 points 8 months ago (1 children)

If we just let robots fight robots, it just becomes a super expensive video game.

[–] possiblylinux127@lemmy.zip 3 points 8 months ago

...With civilians caught in the middle

[–] pkill@programming.dev 0 points 8 months ago (1 children)