Jensen Huang says kids shouldn't learn to code — they should leave it up to AI.
(www.tomshardware.com)
I asked ChatGPT to show me how to do some Godot 4.2 C# stuff the other day as I transition from Unity; it was 70% incorrect.
Good times. (It was probably right for an older version, but I told it the actual version.)
Yeah, and as we all know, AI will never progress beyond its current state. /s
Not with LLMs it won't. They're a dead end. In their rush for short-term profits, so-called AI companies have poisoned the well: the only way to "improve" an LLM is to make it larger, but most of the content on the internet is now produced by these fancy autocomplete engines. There's no new, better content to train them on, and since they can't really generate anything they haven't been trained on, training on LLM-generated text will only propagate and amplify errors, like making photocopies of photocopies or JPEGs of JPEGs.
It's all a silly game of telephone now; a circular LLM centipede fed on its own excrement, distilling its own garbage to the point of maximum uselessness.
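You can see the "photocopies of photocopies" effect in a toy simulation. This is just a made-up unigram "model" resampling its own output, not anything from a real LLM, but it shows the one-way nature of the loss: once a word drops out of a generation's training data, no later generation can ever produce it again.

```python
import random
from collections import Counter

random.seed(0)

# "Human" corpus: 50 tokens drawn from a 10-word vocabulary.
vocab = list("abcdefghij")
corpus = [random.choice(vocab) for _ in range(50)]

sizes = [len(set(corpus))]
for generation in range(20):
    # "Train" a unigram model: it can only reproduce words it has seen.
    counts = Counter(corpus)
    words = list(counts)
    weights = [counts[w] for w in words]
    # The next generation's training data is the previous model's output.
    corpus = random.choices(words, weights=weights, k=50)
    sizes.append(len(set(corpus)))

# Vocabulary size per generation: it can only shrink, never recover.
print(sizes)
```

Run it and the printed vocabulary sizes are non-increasing; with small corpora the collapse toward a handful of over-represented words happens within a few generations.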
Mhmm, give it another year or so. You're like people in the '90s saying that while the internet might be useful for email, that was the limit of what it could accomplish.
Forgive me if your claims of a glass ceiling ring hollow, considering all the previous glass ceilings people have claimed for AI.
"An AI will never be able to write in a human like way" Check.
"An AI will never be able to generate a coherent image" Check
"An AI generated image could never be better than a real artist" Check
"AI will never be able to generate a whole video without messing it up" Check
I'm not sure how you can flippantly say it's not going to advance or progress in any meaningful way. This is still a very new technology, and it's already shattered the limits of what people thought was possible.
Oh, AI is going to progress. LLMs, which are merely applied statistics and no more "AI" than Markov chains, are not, at least not in any significant way. Sure, they might get bigger, but that won't change them qualitatively, and as I pointed out, there's no unpoisoned content left to train them on, so making them bigger is moot anyway, other than as a means to temporarily inflate the bubble.
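For anyone who hasn't seen one, here's the kind of Markov chain being compared to: a bigram model that records which word followed which in its training text and then samples from those observed successors. The training sentence is made up for illustration, but the mechanism is the whole "model".

```python
import random
from collections import defaultdict

random.seed(1)

# Tiny made-up training text.
text = "the cat sat on the mat and the dog sat on the log".split()

# Bigram table: each word maps to the list of words observed after it.
table = defaultdict(list)
for a, b in zip(text, text[1:]):
    table[a].append(b)

def generate(start, n):
    """Sample up to n words, each chosen among observed successors."""
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:  # dead end: word never appeared mid-sentence
            break
        out.append(random.choice(successors))
    return out

sample = generate("the", 8)
print(" ".join(sample))
```

Every adjacent pair in the output is a pair that occurred in the training text; the model can recombine what it saw, but it can never emit a transition it wasn't trained on.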
An LLM is a plagiarizing machine. Who will write the code for it to plagiarize?