this post was submitted on 17 May 2024
251 points (96.0% liked)

Technology

[–] Thorry84@feddit.nl 14 points 5 months ago

No, the "AI" isn't a threat in itself. And treating generative algorithms like LLMs as if they were general intelligence is dumb beyond words. However:

It massively increases the reach and capacity of foreign (and sadly domestic) agents to influence people. All of those Russian trolls that helped bring about fascism, Brexit and the rise of the far right used to be humans. Now, using AI, a single human can do more than a whole army of people could in the past. Spreading misinformation has never been easier.

Then there's the whole issue of replacing people's jobs with AI. No, the AI can't actually do those jobs, not very well at least. But if management and the shareholders think they can increase profits using AI, they will certainly fire a lot of folks. And even if that ends up ruining the company down the line, that costs even more jobs and usually impacts the people lower in the organization the most.

Also, there's a risk of people literally becoming less capable and knowledgeable because of AI. If you can have a digital assistant you carry around in your pocket at all times answer every question for you, why bother learning anything yourself? Why take the hard road when the easy road is available? People are at risk of losing information, knowledge and the ability to think for themselves because of this. And it can get so bad that when the AI just makes shit up, people take it for the truth. On a darker note, if the people behind the big AIs want something to be unknown or misrepresented, they can make that happen. And people would be so reliant on the AI that they wouldn't even notice. This is already an issue with social media; AI is much, much worse.

Then there is the resource usage of AI. It makes the impact of cryptocurrency look like a rounding error. The energy and water usage is huge and growing every day. This has the potential to undo almost all of the climate wins of the past two decades and push the Earth beyond the tipping point. What people seem to forget about climate change is that once things start getting bad, it's way too late, and the situation deteriorates at an exponential rate.

That's just a couple of big things I can think of off the top of my head. I'm sure there are many more issues (such as the death of the internet). But I think this is enough to call the current level of "AI" a threat to humanity.