this post was submitted on 30 Apr 2024
133 points (100.0% liked)

Technology

top 18 comments
[–] SnotFlickerman@lemmy.blahaj.zone 41 points 6 months ago* (last edited 6 months ago) (2 children)

Good, I'm sick as fuck of these chucklefucks committing piracy on an absolutely massive scale, making a fucking mint off of it, and then hiding behind "fair use." (Last I checked, fair use was for actual fucking people, not for giant fucking corporations to copy the entire fucking internet and profit from it.)

You know what they did to The Pirate Bay when it committed mass piracy but didn't actually make much money? The music industry literally pushed to change the laws in Sweden so they could send them to jail. It feels like they did this simply because The Pirate Bay didn't make enough money.

It's like in the US, as long as you commit the biggest crime and make the biggest profit, they'll make up any worthless fucking excuse to justify you raping the public for a quick buck. Because in my eyes, these guys are doing nothing different from what The Pirate Bay did, but because Microsoft/OpenAI is Big Business, a blind eye is conveniently turned.

[–] sonori@beehaw.org 12 points 6 months ago

But if we don't feed the entire internet into Siri, China will, and you don't want China to have an advantage in the autocomplete wars, now do you? /s

[–] FaceDeer@fedia.io 7 points 6 months ago (2 children)

You realize that if cases like this are won, then only the "giant fucking corporations" are going to be able to afford the datasets to train AI with?

[–] 0x1C3B00DA@fedia.io 9 points 6 months ago (1 children)

Harvesting the dataset isn't the problem. Using copyrighted work in a paid product is the problem. Individuals could still train their own models for personal use.

[–] FaceDeer@fedia.io 8 points 6 months ago* (last edited 6 months ago) (1 children)

I don't think you're familiar with the sort of resources necessary to train a useful LLM from scratch. Individuals won't have access to that for personal use.
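
For a rough sense of scale, here's a back-of-envelope sketch using the widely cited ~6·N·D rule of thumb for training compute. Every number in it is an illustrative assumption (GPT-3-scale model, A100-class hardware, ~$2/GPU-hour), not a figure from this thread:

```python
# Back-of-envelope LLM training cost using the common ~6 * N * D
# approximation (total FLOPs ~= 6 x parameters x training tokens).
# All numbers below are illustrative assumptions.

params = 175e9        # N: GPT-3-scale parameter count
tokens = 300e9        # D: training tokens
total_flops = 6 * params * tokens        # ~3.15e23 FLOPs

gpu_peak = 312e12     # A100 bf16 peak, FLOPs/s (published spec)
utilization = 0.4     # assumed realistic fraction of peak
effective = gpu_peak * utilization

gpu_hours = total_flops / effective / 3600   # ~700,000 GPU-hours
cost = gpu_hours * 2.0                       # assumed ~$2 per GPU-hour

print(f"~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f} in raw compute")
# -> on the order of 700,000 GPU-hours and ~$1.4M, before data
#    pipelines, storage, engineering salaries, and failed runs.
```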

[–] 0x1C3B00DA@fedia.io 3 points 6 months ago (2 children)

I'm not familiar with the exact amount of resources, but I know it takes a lot. My point was about what specifically is in contention here.

Also, you were the one pointing out that this case could entrench "giant fucking corporations" in the space. But if they're the only ones who can afford the resources to train them, then this case won't have any effect on that entrenchment.

[–] SaltySalamander@fedia.io 5 points 6 months ago

No, it'll just completely lock the common man running an LLM in a homelab out of it. No biggie, I guess.

[–] FaceDeer@fedia.io 2 points 6 months ago (1 children)

They're the ones training "base" models. There are a lot of smaller base models floating around these days with open weights that individuals can fine-tune, but they can't start from scratch.
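
As a concrete illustration of that fine-tuning path, here's a minimal sketch using Hugging Face transformers plus peft/LoRA; the model name and hyperparameters are assumptions for the example, not anything from this thread:

```python
# Minimal sketch: fine-tuning an open-weight base model with LoRA,
# the kind of thing an individual can do without training from scratch.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"   # assumed example; any open-weight base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all the base
# weights, which is what makes this feasible on homelab-class hardware.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
# -> typically well under 1% of parameters are trainable; the expensive
#    from-scratch training was already paid for by whoever released the base.
```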

What legislation like this would do is essentially let the biggest players pull the ladders up behind them - they've got their big models trained already, but nobody else will be able to afford to follow in their footsteps. The big established players will be locked in at the top by legal fiat.

All this is aside from the conceptual flaws of such legislation. You'd be effectively outlawing people from analyzing data that's publicly available to anyone with eyes. There's no fundamental difference between training an LLM off of a website and indexing it for a search engine, for example. Both of them look at public data and build up a model based on an analysis of it. Neither makes a copy of the data itself, so existing copyright laws don't prohibit it. People arguing for outlawing LLM training are arguing to dramatically expand the concept of copyright in a dangerous new direction it has never covered before.
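
To make that search-engine analogy concrete, here's a toy sketch (purely illustrative, not anyone's production system) in which both artifacts are derived statistics about the text rather than copies of it:

```python
# Toy comparison: a search-style inverted index vs. a (toy) language
# statistic, both built by analyzing the same public text.
from collections import Counter, defaultdict

pages = {
    "page1": "the quick brown fox jumps over the lazy dog",
    "page2": "the lazy dog sleeps all day",
}

# Search-engine style: an inverted index mapping each term to the
# documents that contain it.
index = defaultdict(set)
for doc_id, text in pages.items():
    for word in text.split():
        index[word].add(doc_id)

# LLM-training style (toy): corpus-wide bigram counts, i.e. statistics
# about which word tends to follow which.
bigrams = Counter()
for text in pages.values():
    words = text.split()
    bigrams.update(zip(words, words[1:]))

print(sorted(index["lazy"]))      # ['page1', 'page2']
print(bigrams[("lazy", "dog")])   # 2
```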

[–] 0x1C3B00DA@fedia.io 2 points 6 months ago (1 children)

> What legislation like this would do is essentially let the biggest players pull the ladders up behind them

But you're claiming that there's already no ladder. Your previous paragraph was about how nobody but the big players can actually start from scratch.

> All this is aside from the conceptual flaws of such legislation. You'd be effectively outlawing people from analyzing data that's publicly available

How? This is a copyright suit. Like I said in my last comment, the gathering of the data isn't in contention. That's still perfectly legal and anyone can do it. The suit is about the use of that data in a paid product.

[–] FaceDeer@fedia.io 2 points 6 months ago* (last edited 6 months ago)

> But you're claiming that there's already no ladder. Your previous paragraph was about how nobody but the big players can actually start from scratch.

Adding cost only makes the threshold higher. The opposite of the way things should be going.

> All this is aside from the conceptual flaws of such legislation. You'd be effectively outlawing people from analyzing data that's publicly available

> How? This is a copyright suit.

Yes, and I'm saying that it shouldn't be. Analyzing data isn't covered by copyright; only copying data is. Training an AI on data isn't copying it. Copyright should have no hold here.

> Like I said in my last comment, the gathering of the data isn't in contention. That's still perfectly legal and anyone can do it. The suit is about the use of that data in a paid product.

That's the opposite of what copyright is for, though. Copyright is all about who can copy the data. One could try to sue some of these training operations for having made unauthorized copies of stuff, as in the situation with BookCorpus (a collection of ebooks that many LLMs have trained on and that is basically pirated). But even in that case, the copyright violation is not the training of the LLM itself; it's the distribution of BookCorpus. And one detail of piracy that the big copyright holders don't like to talk about is that, generally speaking, downloading pirated material isn't the illegal part; uploading it is. So even there, an LLM trainer might be able to use BookCorpus. It's whoever gave them the copy of BookCorpus that's in trouble.

Once you have a copy of some data, even if it's copyrighted, there's no further restriction on what you can do with that data in the privacy of your own home. You can read it. You can mulch it up and make papier-mâché sculptures out of it. You can search-and-replace the main character's name with your own and insert paragraphs with creepy stuff. Copyright is only concerned with you distributing copies of it. LLM training is not doing that.

If you want to expand copyright in such a way that rights-holders can tell you what analysis you can and cannot subject their works to, that's a completely new thing and it's going down a really weird and dark path for IP.

[–] davehtaylor@beehaw.org 2 points 6 months ago

So we shouldn't do anything about it, and just let big corps scoop up all the data they want, regardless of ownership?

[–] megopie@beehaw.org 6 points 6 months ago

People who make the information fed into the automatic plagiarism machine suing the automatic plagiarism machine company.

Wild to me how far this has gotten before some institutional actors realized that this “amazing new technology” is only financially viable if they don’t have to pay a fair price for the training data.

[–] Banzai51@midwest.social 3 points 6 months ago (1 children)

This is going to be a Pyrrhic victory, like when they sued Google: they won, but then the traffic and views dropped through the floor.

[–] darkkite@lemmy.ml 29 points 6 months ago

How? OpenAI isn't funneling views to the news sites it ripped from.

[–] FlashMobOfOne@beehaw.org 3 points 6 months ago (1 children)

This is going to go the same way Napster did.

[–] remington@beehaw.org 2 points 6 months ago (1 children)
[–] FlashMobOfOne@beehaw.org 3 points 6 months ago* (last edited 6 months ago)

Really, I just mean 'in the courts'.

When American business interests just decided to unite and collectively steal all of the IP on the Internet, it was always going to end up in the courts. Neither Congress nor the President will act because, by and large, the people doing the thieving are their golf buddies and fund their campaigns.

[–] darkphotonstudio@beehaw.org 2 points 6 months ago

Where was all that outrage from you all when everyone was downloading and pirating copyrighted music by groups like Metallica? If you were around back then, my guess is you were all firing up Kazaa and Napster and didn't give a shit.