After reading through Baldur's latest piece on how tech and the public view gen-AI, I've had some loose thoughts about how this AI bubble's gonna play out.
I don't have any particular structure to this; it's just a bunch of things I'm getting off my chest:
- AI's Dogshit Reputation
Past AI springs had the good fortune of having no obvious negative externalities to sour the public's perception (mainly because they weren't public-facing, going by David Gerard).
This bubble, by comparison, has been pretty much entirely public-facing, giving us, among other things:
- A veritable slop-nami of garbage-looking art, interesting only when it comes off as completely fucking insane (say hi, Biblically-accurate gymnasts)
- Copyright infringement and art theft on a Biblical scale, leading to basically every major AI company getting sued out the ass (with Suno and Udio being the latest targets)
- Colossal amounts of power consumption, and thus planet-cooking levels of CO2 emissions (for the latest example, Google missed its climate targets as a direct result of AI)
- High-profile public embarrassments left and right, with Google's pizza-glue pisstake being the most obvious one that comes to mind
- Scammers using voice-cloning tech to make their scams more convincing (with a particularly notorious flavour imitating a loved one under duress) (thanks to @mountainriver for pointing this one out)
- And probably a few more I'm missing
All of these have done a lot of damage to AI's public image, to the point where AI's absence is an explicit selling point - damage which I expect to last for at least a decade.
When the next AI winter comes in, I'm expecting it to be particularly long and harsh - I fully believe a lot of would-be AI researchers have decided to go off and do something else, rather than risk causing or aggravating shit like this.
- The Copyright Shitshow
Speaking of copyright, basically every AI company has operated under the assumption that copyright doesn't exist and they can yoink whatever they want without issue.
With Gen-AI being Gen-AI, getting evidence of their theft isn't particularly hard - as they're straight-up incapable of creativity, they'll puke out replicas of their training data with the right prompt.
Said training data has included, on the audio side, songs held under copyright by major record labels, and, on the visual side, movies and cartoons currently owned by the fucking Mouse.
Unsurprisingly, they're getting sued to kingdom come. If I were in their shoes, I'd probably try to convince the big firms my company's worth more alive than dead and strike some deals with them, a la OpenAI with News Corp.
Given they seemingly believe they did nothing wrong (or at least Suno and Udio do), I expect they'll try to fight the suits, get pummeled in court, and almost certainly go bankrupt.
There's also the AI-focused COPIED Act, which would explicitly ban these kinds of copyright-related shenanigans - between the bipartisan support and the backing of a lot of major media companies, chances are good it'll pass.
- Tech's Tainted Image
I feel the tech industry as a whole is gonna see its image further tainted by this as well - that image has been falling apart for a while already, but AI's sent the decline into high gear.
When the cultural zeitgeist is doing a 180 on the fucking Luddites and openly clamoring for AI-free shit, whilst Apple produces the tech industry's equivalent of the "face ad", it's not hard to see why I feel that way.
I don't really know how things are gonna play out because of this. Taking a shot in the dark, I suspect the "tech asshole" stench Baldur mentioned is gonna spread to the rest of the industry thanks to the AI bubble, and it's gonna turn a fair number of people away from working in tech as a result.