this post was submitted on 07 Oct 2024
Technology

cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

[–] ChairmanMeow@programming.dev 21 points 2 weeks ago (14 children)

The actual paper is an interesting read. They present a computational proof showing that even if you had essentially infinite memory, a computer a billion times faster than anything we have now, perfect training data you could sample without bias, and you were only aiming for an AGI that performs slightly better than chance, it would still be completely infeasible to do within the next few millennia. Ergo, it's definitely not "right around the corner". We're light-years off still.

They prove this by showing that if you could train such an AI in a tractable amount of time, you would have proven P = NP; in other words, the training problem is NP-hard. Given the minimum amount of data that needs to be learned to do better than chance, this results in a ridiculously long training time, well beyond the realm of what's even remotely feasible. And that's before you deal with all the constraints that exist in the real world.
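To make the "billion times faster" point concrete, here is a toy back-of-the-envelope sketch (my own illustration, not from the paper): the 2**n cost and the ops/sec figures are assumed stand-ins for a generic exponential-time problem, not the paper's exact bound.

```python
# Toy arithmetic, not from the paper: why constant-factor hardware
# speedups don't rescue an exponential-time (NP-hard) problem.
# "today" and the billion-fold multiplier are illustrative assumptions.

def years_to_solve(n_bits, ops_per_second):
    """Years needed to enumerate 2**n_bits candidates at a given speed."""
    seconds = 2 ** n_bits / ops_per_second
    return seconds / (3600 * 24 * 365)

today = 1e15           # rough ballpark for a fast machine, ops/sec
sci_fi = today * 1e9   # "a billion times faster", as hypothesized above

for n in (100, 200):
    print(f"n={n}: {years_to_solve(n, today):.2e} yr today, "
          f"{years_to_solve(n, sci_fi):.2e} yr sci-fi")
```

The takeaway: a billion-fold speedup divides the runtime by 1e9, which against a 2**n cost only buys about log2(1e9) ≈ 30 extra bits of problem size. The wall barely moves.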

We perhaps need some breakthrough in quantum computing to get closer. That is not to say AI won't improve at all; it will get somewhat better. But there is a computationally proven ceiling here, and breaking through it is exceptionally hard.

It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we're not as smart as we think we are.

[–] zygo_histo_morpheus@programming.dev 9 points 2 weeks ago (6 children)

A breakthrough in quantum computing wouldn't necessarily help. QC isn't faster than classical computing in the general case; it just happens to be for a few specific algorithms (e.g. factoring integers). It's not impossible that a QC breakthrough might speed up training AI models (although to my knowledge we have no reason to believe it would), and maybe that's what you're referring to. But there's a widespread misconception that quantum computers are essentially non-deterministic Turing machines that "evaluate all possible states at the same time", which isn't the case.
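A small sketch of why even the most generic known quantum speedup doesn't collapse exponential costs: Grover's algorithm speeds up unstructured search, but only quadratically. The query counts below are illustrative arithmetic, not a claim about AI training specifically.

```python
import math

# Grover's algorithm: the generic quantum speedup for unstructured
# search is quadratic, O(sqrt(N)) oracle queries vs O(N) classically.
# Halving the exponent still leaves an exponential.

def classical_queries(n_bits):
    # Brute-force search over 2**n_bits candidates.
    return 2 ** n_bits

def grover_queries(n_bits):
    # Grover needs roughly (pi/4) * sqrt(2**n_bits) oracle queries.
    return (math.pi / 4) * math.sqrt(2 ** n_bits)

for n in (40, 80, 160):
    print(f"n={n}: classical ~2^{n} queries, "
          f"Grover ~{grover_queries(n):.2e} queries")
```

For n=160, the classical 2^160 drops to roughly 2^80 quantum queries, which is still astronomically out of reach. And NP-complete problems are not believed to be efficiently solvable on quantum computers either (BQP is not thought to contain NP).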

[–] ChairmanMeow@programming.dev 8 points 2 weeks ago (5 children)

I was hinting more at the idea that we're just not getting there through conventional computational means, and that some completely hypothetical breakthrough somewhere is required. QC is the best guess I have for where that might come from, but it's still far-fetched.

But yes, you're absolutely right that QC in general isn't a magic bullet here.

[–] Umbrias@beehaw.org 3 points 2 weeks ago (1 children)

the limitation applies specifically to the primary machine learning technique, the same one all the chatbots use at places claiming to pursue agi: statistical imitation. that's what is np-hard.

[–] ChairmanMeow@programming.dev 2 points 1 week ago (1 children)

Not just that: they've proven it's not possible using any tractable algorithm. If one existed, you'd run into a contradiction. Their example covers basically every machine learning algorithm we know of, but the proof generalizes.

[–] Umbrias@beehaw.org 1 points 1 week ago

via statistical imitation, yes. other methods, such as solving and implementing from first principles analytically, have not been shown to be np-hard. the difference is important, but the end result is still no agi-gpt in the foreseeable (or unforeseeable) future.
