this post was submitted on 06 Sep 2024
1719 points (90.1% liked)

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
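
To make the "vector space" point a little less hand-wavy, here's a deliberately crude toy sketch (my own illustration, not how any real model actually computes its embeddings): what gets kept is a short list of numbers, and the original wording can't be reconstructed from it.

```python
# Toy sketch: reduce a sentence to a fixed-length numeric vector.
# This is NOT a real LLM embedding - just a hashing trick to show that
# the stored representation is a handful of numbers, not the source text.

import hashlib

def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Hash each word into one of `dims` buckets and count how many land in each."""
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    total = sum(vec) or 1.0            # avoid dividing by zero on empty input
    return [round(v / total, 3) for v in vec]

print(toy_embed("The answer, my friend, is blowin' in the wind"))
# -> something like [0.111, 0.0, 0.222, ...]; the lyric itself is gone,
#    and countless different sentences map to similar vectors.
```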

This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

[–] mriormro@lemmy.world 36 points 2 months ago (14 children)

You know, those obsessed with pushing AI would do a lot better if they dropped the patronizing tone from every single comment defending it.

It's always fun reading "but you just don't understand".

[–] FatCrab@lemmy.one 7 points 2 months ago (13 children)

On the other hand, it's hard to have a serious discussion with people who insist that building an LLM or diffusion model amounts to copying pieces of material into an obfuscated database. And when the typical reply to an attempted explanation is "that isn't the point!" with no elaboration, it strongly implies to me that some people just want to be pissy and don't want to hear how they may have been manipulated into taking a pro-corporate, hyper-capitalist position on something.
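
If it helps, here's a deliberately tiny toy example of what I mean (my own sketch, nothing to do with any real model's training code): the "model" is just two numbers, and no matter how many training examples it sees, those two numbers are all that ever gets stored - the examples themselves aren't kept anywhere.

```python
# Toy sketch: fit y = w*x + b with plain stochastic gradient descent.
# The training data could be millions of points, but the learned "model"
# is only the two parameters w and b.

import random

def train(examples, epochs=50, lr=0.05):
    """Update w and b a little for each example; return only those two numbers."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b                        # everything the "model" remembers

# 5000 noisy samples of the underlying pattern y = 3x + 1
data = [(x, 3 * x + 1 + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(5000))]

w, b = train(data)
print(f"learned w={w:.2f}, b={b:.2f}")   # roughly 3.00 and 1.00
# The 5000 original samples cannot be reconstructed from (w, b);
# only the general pattern survives.
```

That's the distinction I'm drawing: a lossy statistical summary of patterns, not an obfuscated copy of the inputs.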

[–] mriormro@lemmy.world 12 points 2 months ago (8 children)

I love that the collectivist ideal of sharing all that we've created for the betterment of humanity is being twisted into this disgusting display of corporate greed and overreach. OpenAI doesn't need shit. They don't have an inherent right to exist; they must constantly make the case for their existence.

The bottom line is that if corporations need data that they themselves cannot create in order to build and sell a service, then they must pay for it. One way or another.

I see parallels here with how aquifers and water rights have been handled, and I'd argue we've fucked that up as well.

[–] VoterFrog@lemmy.world 0 points 2 months ago* (last edited 2 months ago)

They do, though. They purchase licensed data sets, use open-source data sets, and/or scrape publicly available data themselves. Worst case, they could download pirated data sets, but that's copyright infringement committed by the entity distributing the data without legal authority.

Beyond that, copyright doesn't protect the work from being used to create something else, as long as you're not distributing significant portions of it. Movie and book reviewers won that legal battle long ago.
