this post was submitted on 28 Nov 2023
91 points (96.9% liked)

[–] kibiz0r@lemmy.world 27 points 11 months ago (23 children)

So judges are saying:

If you trained a model on a single copyrighted work, then that would be a copyright violation because it would inevitably produce output similar to that single work.

But if you train it on hundreds of thousands of copyrighted works, that’s no longer a copyright violation, because the output won’t closely match any single work.

How is something a crime if you do it once, but not if you do it a million times?

It reminds me of the scheme from Office Space: https://youtu.be/yZjCQ3T5yXo

[–] magnetosphere@kbin.social 9 points 11 months ago* (last edited 11 months ago) (1 children)

How is something a crime if you do it once, but not if you do it a million times?

Because doing it a million times seriously dilutes the harm to any single content creator (assuming those million sources are from many, many different content creators, of course). Potential harm plays a major role in how copyright cases are determined, and in cases involving such a huge amount of sources, harm can be immeasurably small.

In addition to right and wrong, the practicality of regulation and enforcement is often a part of groundbreaking decisions like these, and I’m not certain this particular issue is something our legal system is equipped to handle.

I’m not sure I agree with the reasoning here, but I see their thinking.

[–] bioemerl@kbin.social 3 points 11 months ago

An AI trained on a single image would also probably be fine if it were somehow a generalist AI that didn't overfit on that single image. The quantity really doesn't matter.
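The overfitting point above can be illustrated with a deliberately toy sketch. Here "works" are just random vectors and the "model" is nothing but an element-wise average of its training data; real generative models are vastly more complex, so this only shows the scaling intuition, not how any actual system behaves:

```python
import random

def train(works):
    """Toy 'model': the element-wise average of its training set.
    Trained on one work it memorizes that work exactly (total
    overfitting); trained on many, its output is far from every
    individual work."""
    n = len(works)
    dim = len(works[0])
    return [sum(w[i] for w in works) / n for i in range(dim)]

def distance(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

random.seed(0)
one_work = [[random.random() for _ in range(64)]]
many_works = [[random.random() for _ in range(64)] for _ in range(10_000)]

# Single training work: the model's output IS that work.
print(distance(train(one_work), one_work[0]))  # 0.0 -- an exact copy

# Many training works: the output resembles no single work closely.
print(min(distance(train(many_works), w) for w in many_works))
```

With one training example the distance to that example is exactly zero, while with ten thousand examples even the *nearest* training work is well away from the model's output, which is the intuition behind "quantity doesn't matter as long as the model doesn't overfit."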
