Hell, I could probably special-case that shit, and I'm barely a programmer.
Also, it probably helped kneecap the popularity of Tolkien-style "Always Chaotic Evil" (TV Tropes page) races/species, by virtue of making the racialised elements much more difficult to ignore.
As TV Tropes' analysis page notes, however, there are a fair few ways to make Evil Minions™ without throwing any racial baggage into the mix - ways that have let the base trope survive the change in political climate, even as its original version fell out of favour.
Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers
A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google's AI Overview.
Anyways, personal sidenote:
Beyond dealing another blow to AI's reliability, this will probably also make the public more wary of user-generated material - it's hard to trust something if you know the masses could be actively manipulating you.
In other news, Disney's apparently planning some kind of "major AI initiative".
Whatever it is, I'm expecting large-scale boycotts/strikes to kick off as a result of it, alongside AI's lack of copyright protection getting exploited to troll the shit out of Disney.
…but the game being free isn't stopping my brain, raised on the Pokémon TCG, from wanting to impulse-buy a print copy of the latest couple of expansion packs. Weird how that works.
I mean the expansion packs are cool, and Fantasy Flight deserves to get that bag
Remember when wizards magicking away their shits was the stupidest thing to come out of Rowling's mouth? Pepperidge Farm remembers.
(Seriously, I was not prepared for Rowling's TERFward Turn)
"garbage in, garbage out" my beloathed
Not the first time this has happened (Google's own AI overviews have misinterpreted u/fucksmith, eaten rocky onions and hallucinated cats on the moon before), but this is probably the worst such incident.
Anyways, sidenote time:
Right now, there's no legal precedent determining whether or not "AI overviews" like Google's are protected under Section 230, but between shit like this and the recent lawsuit against character.ai, I suspect there's gonna be plenty of effort to deny them Section 230 protection.
If that happens, I expect it will put an immediate end to public-facing autoplag like this, as such products become legal timebombs waiting to go off. I suspect it will also kill any attempts at AI for the foreseeable future, for similar reasons.
As for AI as a concept, which I've discussed previously, I expect this incident will help further a public notion of "artificial intelligence" being an oxymoron, and of intelligence as something that either cannot or should not be replicated by artificial means.
Quick update: The open letter on AI training (https://aitrainingstatement.org/) has reached 15k signatures:
‘They wish this technology didn’t exist’: Perplexity responds to News Corp’s lawsuit
“There are around three dozen lawsuits by media companies against generative AI tools. The common theme betrayed by those complaints collectively is that they wish this technology didn’t exist,” said the Perplexity team in the blog. “They prefer to live in a world where publicly reported facts are owned by corporations, and no one can do anything with those publicly reported facts without paying a toll.”
I wish the AI bros at Perplexity and elsewhere a very cope and fucking seethe.
Okay, quick personal sidenote:
With how much misinformation, manipulation, outright theft and other horrific shit this AI bubble has caused, I suspect we're gonna see some attempts at an outright ban on AI. How successful they're gonna be, I don't know, but at the bare minimum the idea will enjoy some popularity on the political fringe.
Glad I could be of help.
Update on the character.ai lawsuit:
Gizmodo just reported on the story - in addition to the suicide that kicked this litigation off, they've also discovered an hour-long screen recording where a test account (self-reported as thirteen years old) gets sexted relentlessly by the site's chatbots.
So, in addition to driving one specific teen to suicide, character.ai is also facing accusations that their bots are sexually harassing children.
Update: As a matter of fact, I did. Here's some Python code to prove it:
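Something along these lines, anyway - the snippet below is a rough sketch rather than the original, assuming the thing being special-cased was the letter-counting question chatbots keep fumbling; the function and variable names are just placeholders:

```python
# Rough sketch (not the original snippet): special-casing the classic
# "how many times does this letter appear in this word" question.
# Names here are placeholders, not anything from the original post.

def count_letter(word: str, letter: str) -> int:
    """Return how many times `letter` appears in `word`, ignoring case."""
    return word.lower().count(letter.lower())

if __name__ == "__main__":
    word = input("Word: ").strip()
    letter = input("Letter: ").strip()
    print(f"'{letter}' appears {count_letter(word, letter)} time(s) in '{word}'")
```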
There's probably a bug or two in this I missed, but hey, it still proves I'm more of a programmer than Sam Altman ever will be.