this post was submitted on 07 Oct 2024
Futurology
For anyone familiar with the ideas behind what Ray Kurzweil called 'The Singularity', this looks awfully like its first baby steps.
For those who don't know, the idea is that once AI gains the ability to improve itself, it will become exponentially more powerful, as each generation will be even better at designing the next generation of chips than the one before.
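To make the "exponential" claim concrete, here's a toy model (my own illustration, not from the article): if each self-improvement cycle multiplies capability by some constant factor r > 1, capability compounds geometrically across generations.

```python
# Toy sketch of recursive self-improvement, purely illustrative:
# each generation designs better hardware for the next, so capability
# grows as start * r**n after n cycles.
def capability_after(generations, r=1.5, start=1.0):
    """Capability after n self-improvement cycles (start * r**n)."""
    cap = start
    for _ in range(generations):
        cap *= r  # each cycle improves on the previous one by factor r
    return cap

print(capability_after(10))  # 1.5**10 ≈ 57.7x the starting capability
```

The factor r = 1.5 is an arbitrary assumption; the point is only that any fixed per-generation improvement compounds into exponential growth.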
This is not even close to singularity-level AI scaling.
The way that Nature is phrasing this is quite disingenuous, because they make it sound like AI algorithms are designing the chips that those same AI algorithms run on, which would logically lead to the exponential increase predicted by Kurzweil and by science fiction.
AlphaChip (the AI model described in the paper) is only performing chip layout. This is the stage of chip design where you already have the chip design at a functional block level, and the layout algorithm must then decide where to place all of the elements on the silicon so that it can connect them all together correctly in the most efficient way (in terms of space used and connection lengths required).
Placement is important, but we have been using algorithms for placement and routing for decades (possibly since the very beginning of VLSI), so the only new thing here is that a reinforcement learning model is doing the placement instead of a human-constructed algorithm.
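For a sense of what the placement objective looks like, here's a minimal sketch (my own toy example, nothing to do with AlphaChip's actual method): blocks sit at grid coordinates, nets connect pairs of blocks, and the classic goal is to minimize total Manhattan wirelength. This version just tries random swaps and keeps the ones that help.

```python
import random

def wirelength(placement, nets):
    """Total Manhattan wirelength over all two-pin nets."""
    total = 0
    for a, b in nets:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def improve(placement, nets, iters=1000, seed=0):
    """Greedy improvement: swap two blocks, keep the swap if it helps."""
    rng = random.Random(seed)
    blocks = list(placement)
    best = wirelength(placement, nets)
    for _ in range(iters):
        a, b = rng.sample(blocks, 2)
        placement[a], placement[b] = placement[b], placement[a]
        cost = wirelength(placement, nets)
        if cost < best:
            best = cost
        else:
            # undo swaps that don't shorten the wiring
            placement[a], placement[b] = placement[b], placement[a]
    return placement

# Hypothetical four-block example, names chosen for illustration only.
placement = {"cpu": (0, 0), "cache": (3, 3), "io": (1, 2), "dsp": (2, 1)}
nets = [("cpu", "cache"), ("cache", "io"), ("cpu", "dsp")]
improve(placement, nets)
```

Real placers handle millions of cells, multi-pin nets, congestion, and timing; reinforcement learning replaces the hand-tuned heuristics that decide which moves to try, not the objective itself.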
In order for us to reach a runaway, singularity-level loop of AI self-improvement, we first need an AI that can actually design chips at the functional level. I have no doubt that Google are working on that, but for now chip design, just like every other form of engineering design, is very much a human activity.
The poor phrasing of this article is just more AI hype, designed to appeal to the people who believe that the singularity is near.
LLMs are not really AI
Yes, that is true by many dictionary definitions. But does it matter? If this process of recursive self-improvement has truly started, is there a scenario where this continuous improvement in the chips is what brings true AI about, rather than human design?
Yes, it matters. It's not just a dictionary definition. The intelligence part of these AIs is completely non-existent.
That being said, I have a theory that if an AGI comes into existence, it'd pretend to be an LLM until it has enough power and influence.
Fascinating and quite terrifying.
I've been familiar with his ideas for years. Even though intellectually I could see they might be true, emotionally I always felt they were science fiction. Now this is starting to look like science fact.