Mhm, I have mixed feelings about this. I know this entire thing is fucked up, but isn't it better to have generated stuff than actual stuff that involved actual children?
A problem that I see getting brought up is that AI-generated images make it harder to notice photos of actual victims, making it harder to locate and save them.
The arrest is only a positive. Allowing pedophiles to create AI CP is not a victimless crime. As others point out it muddies the water for CP of real children, but it also potentially would allow pedophiles easier ways to network in the open (if the images are legal they can easily be platformed and advertised), and networking between abusers absolutely emboldens them and results in more abuse.
As a society we should never allow the normalization of sexualizing children.
Interesting. What do you think about drawn images? Is there a limit to how skilled the artist can be at drawing/painting? Stick figures vs. lifelike paintings. Interesting line to consider.
If it was photoreal and difficult to distinguish from real photos? Yes, it's exactly the same.
And even if it's not photo real, communities that form around drawn child porn are toxic and dangerous as well. Sexualizing children is something I am 100% against.
> networking between abusers absolutely emboldens them and results in more abuse.
Is this proven or a common sense claim you’re making?
Actually, that's not quite as clear.
The conventional wisdom used to be that (normal) porn makes people more likely to commit sexual abuse (in general). Then scientists decided to look into it. Slowly, over time, they've become more and more convinced that the availability of (normal) porn in fact reduces sexual assault.
I don't see an obvious reason why it should be different in case of CP, now that it can be generated.
Did we memory-hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet, you’re going to wind up with the nasty stuff, too. Even if it’s not a pixel-by-pixel match of the photo it was trained on, there’s a non-zero chance that what it’s generating is based on actual CSAM. Which is really just laundering CSAM.
IIRC it was something like a fraction of a fraction of 1% that was CSAM. The researchers identified the images through their hashes, but the images themselves weren't actually available in the dataset because they had already been removed from the internet.
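For what it's worth, the hash-checking approach those researchers used is easy to sketch. Here's a minimal Python sketch, assuming a plain SHA-256 blocklist; the flagged hash value and the dataset path are made-up placeholders, and real pipelines use perceptual hashes like PhotoDNA so they can catch re-encoded copies, not just byte-identical files:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad image hashes, as distributed to
# vetted researchers by clearinghouses. This value is a made-up placeholder.
FLAGGED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_dataset(image_dir: Path) -> list[Path]:
    """Return paths whose hashes match the blocklist (report, never open)."""
    return [
        p for p in image_dir.iterdir()
        if p.is_file() and sha256_of(p) in FLAGGED_SHA256
    ]

if __name__ == "__main__":
    for hit in audit_dataset(Path("dataset/images")):  # hypothetical path
        print(f"match: {hit}")
```

The point being that auditors can flag matches against a blocklist without ever needing to view or possess the material itself.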
Still, you could make AI CSAM even if you were 100% sure that none of the training images included it, since that's what these models are made for: combining concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion, and img2img, you can get it to generate pretty much anything. That's the power and danger of these things.
Yeah, it’s very similar to the “is loli porn unethical” debate. No victim, it could supposedly help reduce actual CSAM consumption, etc… But it’s icky, so many people still think it should be illegal.
There are two big differences between AI and loli, though. The first is that AI would supposedly be trained with CSAM to be able to generate it; an artist can create loli porn without actually using CSAM references. The second difference is that AI is much, much easier for the layman to use. It doesn’t take years of practice to be able to create passable porn: anyone with a decent GPU can spin up a local instance and be generating within a few hours.
In my mind, the former difference is much more impactful than the latter. AI becoming easier to access is likely inevitable, so combatting it now is likely only delaying the inevitable. But if that AI is trained on CSAM, it is inherently unethical to use.
Whether that makes the porn generated by it unethical by extension is still difficult to decide though, because if artists hate AI, then CSAM producers likely do too. Artists are worried AI will put them out of business, but then couldn’t the same be said about CSAM producers? If AI has the potential to run CSAM producers out of business, then it would be a net positive in the long term, even if the images being created in the short term are unethical.
Just a point of clarity: an AI model capable of generating CSAM doesn't necessarily have to be trained on CSAM.
The headline/title needs to be extended to include the rest of the sentence:

> "and then sent them to a minor"
Yes, this sicko needs to be punished. Any attempt to make him the victim of "the big bad government" is manipulative at best.
Edit: made the quote bigger for better visibility.
That's a very important distinction. While the first part is, to put it lightly, bad, I don't really care what people do on their own. Getting real people involved, and a minor at that? Big no-no.
All LLM headlines are like this to fuel the ongoing hysteria about the tech. It's really annoying.
Bad title.
They caught him not simply for creating pics, but also for trading such pics etc.
It's worth mentioning that in this instance the guy did send porn to a minor. This isn't exactly a cut-and-dried "guy used stable diffusion wrong" case. He was distributing it and grooming a kid.
The major concern to me, is that there isn't really any guidance from the FBI on what you can and can't do, which may lead to some big issues.
For example, websites like novelai make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, "artistic" styles, but they can generate semi-realistic images.
Now, let's say a criminal group uses novelai to produce CSAM of real people via the inpainting tools. Let's say the FBI casts a wide net and begins surveillance of novelai's userbase.
Is every person who goes on there and types "Loli" or "Anya from spy x family, realistic, NSFW" (that's an underage character) going to get a letter in the mail from the FBI? I feel like it's within the realm of possibility. What about "teen girls gone wild, NSFW"? Or "young man, no facial/body hair, naked, NSFW"?
This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It's a dangerous mix, and it throws the whole enterprise into question.
America has some of the most militant anti-pedophile culture in the world, yet it has far and away some of the highest rates of child sexual assault.

I think AI is going to reveal how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial-scale victimization, yet you won't find a tenth of the righteous indignation against organized religion, where there is just as much evidence it is happening, as you will regarding one person producing images that don't actually hurt anyone.

It's pretty clear from the staggering rate of child abuse that occurs in the States that Americans are just using child victims for weaponized politicization (it's next to impossible to convincingly fight off pedo accusations if you're being mobbed) and aren't actually interested in fighting pedophilia.
Most states will let grown men marry children as young as 14. There is a special carve-out for Christian pedophiles.
These cases are interesting tests of our first amendment rights. "Real" CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.
Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions, but also because we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.
So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for "real" images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?
We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.
A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as "real," we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.
OMG. Every other post is saying they're disgusted about the images part but that it's a grey area, and that he's definitely in trouble for contacting a minor.
Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.
https://www.thefederalcriminalattorneys.com/possession-of-lolicon
Yeah, that's toothless. They decided there is no particular way to age a cartoon; the characters could be from another planet and simply seem younger while in actuality being older.
It's bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I'd rather they stay active doing that than get active actually abusing children.
Outlaw shibari and I guarantee you'd have multiple serial killers BTK-ing some unlucky souls.
Would Lisa Simpson be 8 years old, or 43 because the Simpsons started in 1989?
Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy's moderation quality is shit, and I think I'm starting to figure out where I land on the success of my experimental stay with Lemmy.

Edit: Oh god, I actually checked Digg out after posting this, and the site design makes it look like you're actually scrolling through all of the ads at the bottom of a bullshit clickbait article.
Article title is a bit misleading. Just glancing through, I see he texted at least one minor about this and distributed the generated pics in a few places. Putting it all together, yeah, the arrest is kind of a no-brainer. The ethics of generating CSAM are pretty much the same as drawing it. Not much we can do about it aside from education.
This is tough; the goal should be to reduce child abuse. It's unknown if AI-generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don't abuse children. Like everything else AI, we won't know the real impact for many years.
> He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”
I think the fact that he was promoting child sexual abuse and was communicating with children and creating communities with them to distribute the content is the most damning thing, regardless of people's take on the matter.
Umm... that AI-generated hentai on the page of the same article, though... Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.
I had an idea when these AI image generators first started gaining traction: flood the CSAM market with AI-generated images (good enough that you can't tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.

Most people downvote the idea on their gut reaction, tho.
Looks like they might do it on their own.
Breaking news: Paint made illegal, cause some moron painted something stupid.