[–] sxan@midwest.social 2 points 1 month ago (2 children)

> Ray Kurzweil thinks he'll be able to upload his brain to a computer in 10 years and has thought so since the 1990s.

Kurzweil fervently wishes he'll be able to do this; existential angst drives many people, uneducated or not, to all sorts of religions. At least Kurzweil is making educated guesses based on technological progress - wrong guesses, but still within the realm of the reasonable.

There's no mysticism to the singularity. There's nothing preventing what he hopes for except engineering sophistication. We know most of the what, and maybe even a good chunk of the how, and we're making progress. Nothing in the idea of brain uploading depends on an ineffable spirit, or anything we can't already prove.

If we don't destroy ourselves or the planet, there's no reason we won't get there eventually. Just not soon enough for Ray or his loved ones, and probably not in time for anyone currently alive. If we never achieve it, it'll most likely be because we burned up the planet first and ran out of resources to continue frivolous research like immortality.

[–] FlyingSquid@lemmy.world 2 points 1 month ago (1 children)

I realize it's not mysticism. But it is a silly belief, considering he's been saying it's just around the corner for decades now.

Sure, maybe one day it will happen. But it's like space colonies or everyone using flying cars. It's always going to happen in the near future.

[–] sxan@midwest.social 3 points 1 month ago

Sure; I'm not saying you're wrong. Ray is unrealistically optimistic, and his predictions depend heavily on several iffy factors: that we'll create AGI; that it'll be able to improve itself exponentially; that it'll be benevolent; and that it'll see value in helping us attain immortality and decide that doing so is good for us.

I just don't think it's fair to lump him in with SovCits and homeopaths (or whatever Linus Pauling is). He's a different kind of "wrong"; not crazy or deluded, just optimistic.

[–] YourNetworkIsHaunted@awful.systems 2 points 1 month ago (1 children)

I wouldn't say there's no mysticism in the singularity, at least not in the sense you're implying here. While it uses secular and scientific aesthetics rather than being overtly religious, the roadmap it presents relies on specific assumptions about the nature of consciousness, intelligence, and identity that may sound plausible but aren't really any more rational than an immortal soul that gets judged at the end of days.

And it doesn't help that, when confronted with any questioning of how plausible any of this is, there's a tendency to assume a sufficiently powerful AI can solve the problem and that that's the end of it. It's no less of a Deus ex Machina if you call it an AI instead of a God; you've just focused on the Machina instead of the Deus.

[–] sxan@midwest.social 2 points 1 month ago

> While it uses secular and scientific aesthetics rather than being overtly religious, the roadmap it presents relies on specific assumptions about the nature of consciousness, intelligence, and identity that may sound plausible but aren't really any more rational than an immortal soul that gets judged at the end of days.

Do we have any scientific evidence that consciousness, intelligence, and identity reside anywhere other than the brain? People who lose all of their limbs don't become more stupid. People who get artificial hearts don't become soulless automatons. Certainly, the brain needs chemicals and hormones produced elsewhere in the body, but we already produce those chemicals artificially, all the time, for people whose natural production is faulty; it's only a matter of scale.

We certainly don't have a complete understanding of the brain, but we're not that far off. There are no great unknowns.

> there's a tendency to assume a sufficiently powerful AI can solve the problem and that that's the end of it.

I think that's entirely reasonable. We're limited by our biology; perhaps we'll find a limit to our technology that puts a ceiling on AI growth, but we haven't observed that horizon yet. Assuming there's no such limit is no more irrational than assuming the limit is so low that we can't reach super-intelligent general AI before we hit it.