this post was submitted on 22 May 2024
296 points (96.8% liked)

News


Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism, sexism, or bigotry. Good-faith argumentation only; this includes not accusing other users of being bots or paid actors. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts must contain a source (URL) that is as reliable and unbiased as possible, and must contain only one link.


Obviously right- or left-wing sources will be removed at the mods' discretion. We maintain an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the headline of the source article.


Posts whose titles don't match the source won't be removed outright, but the AutoMod will notify you; if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you; just ignore it, we won't delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, the AutoMod will leave a message. Please remove your post if the AutoMod is correct. If the matching post is very old, see rule 5.


8. Misinformation is prohibited.


Misinformation and propaganda are strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel your post was removed in error, provide credible sources.


9. No link shorteners.


The AutoMod will contact you if a link shortener is detected; please delete your post if it is correct.


10. Don't copy the entire article into your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

[–] Darkassassin07@lemmy.ca 16 points 5 months ago (4 children)

....no

That'd be like outlawing hammers because someone figured out they make a great murder weapon.

Just because you can use a tool for crime, doesn't mean that tool was designed/intended for crime.

[–] greentreerainfire@kbin.social 1 points 5 months ago (1 children)

That’d be like outlawing hammers because someone figured out they make a great murder weapon.

Just because you can use a tool for crime, doesn’t mean that tool was designed/intended for crime.

Not exactly. This would be more akin to a company that will 3D print metal parts and assemble them for you. You use this service to have them create and assemble a gun for you, then use that weapon in a violent crime. Should the company have known that you were having them create an illegal weapon on your behalf?

[–] FaceDeer@fedia.io 11 points 5 months ago

The person who was charged was using Stable Diffusion to generate the images on their own computer, entirely with their own resources. So it's akin to a company that sells 3D printers selling a printer to someone, who then uses it to build a gun.
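To illustrate what "entirely with their own resources" means in practice, here is a rough sketch of local generation with the open Stable Diffusion weights using Hugging Face's diffusers library. The model ID and prompt are placeholders for illustration, not details from the article; after the one-time weight download, inference runs on the user's own hardware with no external service involved:

```python
# Illustrative only: local image generation with open Stable Diffusion weights.
import torch
from diffusers import StableDiffusionPipeline

# Example model ID; any locally stored checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Everything from here on happens on the local GPU.
image = pipe("a watercolor painting of a lighthouse at sunset").images[0]
image.save("output.png")
```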

[–] Crismus@lemmy.world 0 points 5 months ago

Sadly, that's how most gun laws are designed. Book bans and anti-abortion laws both restrict tools because of what a small minority choose to do with them.

AI image generation shouldn't be covered by obscenity laws. His distribution of pornography to a minor should be the issue, because not everyone stuck with that disease should be deprived of tools that could keep them from hurting others.

Using AI images to increase the charges seems wrong. A pedophile contacting children and distributing pornography to them should be all it takes to charge a person. This will just set up new precedent that is beyond the scope of the judiciary.

[–] xmunk@sh.itjust.works -5 points 5 months ago (2 children)

It would be more like outlawing ivory grand pianos because they require dead elephants to make: the AI models in question here were trained on abuse.

[–] Darkassassin07@lemmy.ca 6 points 5 months ago* (last edited 5 months ago) (1 children)

A person (the arrested software engineer from the article) acquired a tool (a copy of Stable Diffusion, available on GitHub) and used it to commit a crime (trained it to generate CSAM and used it to generate CSAM).

That has nothing to do with the developer of the AI, and everything to do with the person using it. (hence the arrest...)

I stand by my analogy.

[–] xmunk@sh.itjust.works -3 points 5 months ago (1 children)

Unfortunately the developer trained it on some CSAM which I think means they're not free of guilt - we really need to rebuild these models from the ground up to be free of that taint.

[–] Darkassassin07@lemmy.ca 5 points 5 months ago

Reading that article:

Given it's a public dataset not owned or maintained by the developers of Stable Diffusion, I wouldn't consider that their fault either.

I think it's reasonable to expect a dataset like that to have had screening measures that prevent that kind of data from being imported in the first place (a rough sketch of what such screening could look like is below). It shouldn't be on the users of that data (here meaning the devs of Stable Diffusion) to ensure there's no illegal content within the billions of images in a public dataset.

That's a different story now that users have been informed of the content within this particular dataset, but I don't think it should have been assumed to be their responsibility from the beginning.
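For illustration, a minimal sketch of the kind of screening measure meant above: checking each image against a blocklist of known-bad hashes before it enters the dataset. The hash list, file layout, and function names here are hypothetical; real pipelines use perceptual hash matching services (e.g. PhotoDNA-style systems) rather than exact digests:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests of known illegal images.
# In practice this would come from a dedicated hash-matching service,
# and perceptual hashes would be used instead of exact digests.
KNOWN_BAD_HASHES: set[str] = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def is_flagged(image_path: Path) -> bool:
    """Return True if the image's digest matches the blocklist."""
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

def screen_dataset(image_dir: str) -> list[Path]:
    """Return only the images that pass screening."""
    return [p for p in Path(image_dir).glob("**/*.jpg") if not is_flagged(p)]
```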

[–] wandermind@sopuli.xyz 4 points 5 months ago (1 children)

Sounds to me like it would be more akin to outlawing grand pianos because of all the dead elephants, while some people claim that it is possible to make a grand piano without killing elephants.

[–] xmunk@sh.itjust.works -2 points 5 months ago (2 children)

There's CSAM in the training set[1] used for these models, so some elephants have been murdered to make this piano.

  1. https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
[–] FaceDeer@fedia.io 5 points 5 months ago

3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or it would have been caught earlier. I doubt it had any significant impact on the model's capabilities.
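A quick back-of-the-envelope check of that figure, using the 3,226 suspected images and the roughly 5.8 billion total cited above:

```python
suspected = 3_226             # suspected images per the cited report
total = 5_800_000_000         # approximate number of images in the dataset
print(f"{suspected / total:.7%}")   # -> 0.0000556%, i.e. roughly the 0.00006% quoted
```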

[–] wandermind@sopuli.xyz 1 points 5 months ago (1 children)

I know. So to confirm, you're saying that you're okay with AI generated CSAM as long as the training data for the model didn't include any CSAM?

[–] xmunk@sh.itjust.works 0 points 5 months ago (1 children)

No, I'm not - I still have ethical objections, and I don't believe CSAM could be generated without some CSAM in the training set. I think it's generally problematic to sexually fantasize about underage persons, though I know that's an extremely unpopular opinion here.

[–] wandermind@sopuli.xyz 0 points 5 months ago (1 children)

So why are you posting all over this thread about how CSAM was included in the training set, if that is, in your opinion, ultimately irrelevant to the topic of the post and discussion: the morality of using AI to generate CSAM?

[–] xmunk@sh.itjust.works 1 points 5 months ago (1 children)

Because all over this thread there are claims that AI CSAM doesn't require actual CSAM to generate. We currently don't have AI CSAM that is taint-free, and it's unlikely we ever will, given how generative AI works.

[–] wandermind@sopuli.xyz 1 points 5 months ago

So at best we don't know whether or not AI CSAM without CSAM training data is possible. "This AI used CSAM training data" is not an answer to that question. It is even less of an answer to the question "Should AI generated CSAM be illegal?" Just like "elephants get killed for their ivory" is not an answer to "should pianos be illegal?"

If your argument is that yes, all AI CSAM should be illegal whether or not the training used real CSAM, then argue that point. Whether or not any specific AI used CSAM to train is an irrelevant non sequitur. A lot of what you're doing now is replying to "pencils should not be illegal just because some people write bad stuff" with the equivalent of "this one guy did some bad stuff before writing it down". That is completely unrelated to the argument being made.