this post was submitted on 30 Apr 2024
Technology

[–] 0x1C3B00DA@fedia.io 2 points 6 months ago (1 children)

> What legislation like this would do is essentially let the biggest players pull the ladders up behind them

But you're claiming that there's already no ladder. Your previous paragraph was about how nobody but the big players can actually start from scratch.

> All this aside from the conceptual flaws of such legislation. You'd be effectively outlawing people from analyzing data that's publicly available

How? This is a copyright suit. Like I said in my last comment, the gathering of the data isn't in contention. That's still perfectly legal and anyone can do it. The suit is about the use of that data in a paid product.

[–] FaceDeer@fedia.io 2 points 6 months ago* (last edited 6 months ago)

> But you're claiming that there's already no ladder. Your previous paragraph was about how nobody but the big players can actually start from scratch.

Adding cost only raises that threshold further, which is the opposite of the direction things should be going.

> All this aside from the conceptual flaws of such legislation. You'd be effectively outlawing people from analyzing data that's publicly available

> How? This is a copyright suit.

Yes, and I'm saying that it shouldn't be. Analyzing data isn't covered by copyright; only copying data is. Training an AI on data isn't copying it, so copyright should have no hold here.

> Like I said in my last comment, the gathering of the data isn't in contention. That's still perfectly legal and anyone can do it. The suit is about the use of that data in a paid product.

That's the opposite of what copyright is for, though. Copyright is all about who can copy the data. One could try to sue some of these training operations for having made unauthorized copies of material, such as in the situation with BookCorpus, a collection of ebooks that many LLMs have been trained on and that is basically pirated. But even in that case, the copyright violation is not the training of the LLM itself; it's the distribution of BookCorpus.

And one detail of piracy that the big copyright holders don't like to talk about is that, generally speaking, downloading pirated material isn't the illegal part; uploading it is. So even there, an LLM trainer might be able to use BookCorpus, and it's whoever gave them the copy of BookCorpus that's in trouble.

Once you have a copy of some data, even if it's copyrighted, there's no further restriction on what you can do with that data in the privacy of your own home. You can read it. You can mulch it up and make papier-mâché sculptures out of it. You can search-and-replace the main character's name with your own and insert paragraphs with creepy stuff. Copyright is only concerned with you distributing copies of it, and LLM training is not doing that.

If you want to expand copyright in such a way that rights-holders can tell you what analysis you can and cannot subject their works to, that's a completely new thing, and it leads down a really weird and dark path for IP.