this post was submitted on 14 Feb 2024
1074 points (98.6% liked)

you are viewing a single comment's thread
[–] masonlee@lemmy.world 1 points 9 months ago (1 children)

Your worry at least has possible solutions, such as a global VAT funding UBI.

[–] glukoza@lemmy.dbzer0.com 1 points 9 months ago (1 children)

Yeah, I'm not that keen on UBI, and I don't see anyone working towards a global VAT. My point was that the worry about AI destroying humanity isn't a real possibility; it's just sci-fi.

[–] masonlee@lemmy.world 2 points 9 months ago

Seven years ago I would have told you that GPT-4 was sci-fi, and I expect you would have said the same, as would almost every AI researcher. The deep learning revolution came as a shock to most. We don’t know when the next breakthrough towards agentification will come, but given the funding now, we should expect it soon. Anyway, if you’re ever interested in learning more about unsolved fundamental AI safety problems, the book “Human Compatible” by Stuart Russell is excellent. Also, “Uncontrollable” by Darren McKee just came out (I haven’t read it yet) and is said to be a great introduction to the bigger fundamental risks. A lot to think about; I’m just saying I wouldn’t be quick to dismiss it. Cheers.