this post was submitted on 05 Nov 2023
217 points (94.3% liked)


AI companies have all kinds of arguments against paying for copyrighted content

The companies building generative AI tools like ChatGPT say updated copyright laws could interfere with their ability to train capable AI models. Here are comments from OpenAI, StabilityAI, Meta, Google, Microsoft and more.

top 50 comments
[–] tabular@lemmy.world 59 points 10 months ago (2 children)

Then feel free to give your copyrighted AI code a free software license :3

[–] Sibbo@sopuli.xyz 31 points 10 months ago (1 children)

This. If the model and its parameters are open source and under an unrestricted license, they can scrape anything they want in my opinion. But if they make money with someone's years of work writing a book, then please give that author some money as well.

[–] abhibeckert@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (3 children)

But if they make money with someone’s years of work writing a book, then please give that author some money as well.

Why? I've read many books on programming, and now I work as a programmer. The authors of those books don't get a percentage of my income just because they spent years writing the book. I've also read (and written) plenty of open source code over the years, and learned from that code. That doesn't mean I have to give money to all the people who contributed to those projects.

[–] OrangeCorvus@lemmy.world 20 points 10 months ago (1 children)
[–] LufyCZ@lemmy.dbzer0.com 3 points 10 months ago

He might've borrowed them from a library.

OpenAI could've trained on borrowed ebooks as well

[–] Davin@lemmy.world 6 points 10 months ago

Like with most things, consent and intent matter. I went out on Halloween when I was a kid and got free candy, so why is it bad if I break in and steal other people's candy?

[–] makyo@lemmy.world 10 points 10 months ago

I will never be totally happy with this situation until they're required to offer a free version of all the models that were created with unlicensed content.

[–] sunbeam60@lemmy.one 37 points 10 months ago (2 children)

As I’ve said before: Sure, go ahead, train on copyrighted material, but anything AI generates shouldn’t be copyrightable either - it wasn’t created by a human.

[–] Dkarma@lemmy.world 14 points 10 months ago (1 children)

That's exactly the way it is now.

[–] sunbeam60@lemmy.one 16 points 10 months ago

Not where I live. In the U.K., copyright for AI-generated works is owned by the person who caused the AI to create the work. See s178 and s9(3) of the Copyright, Designs and Patents Act 1988.

[–] Black616Angel@feddit.de 10 points 10 months ago

Problem is that small modifications already make it copyrightable again.

[–] RanchOnPancakes@lemmy.world 33 points 10 months ago

Well I mean...so do I.

[–] FireTower@lemmy.world 24 points 10 months ago

Stock image companies probably have the strongest copyright claim here IMO. An AI trained on their images without paying for a licence could act as a market replacement for their service.

[–] Fisk400@feddit.nu 21 points 10 months ago (6 children)

Tough tits. Imagine all the books, movies and games that could have been made if copyright didn't exist. Nobody else gets to ignore the rules just because it's inconvenient.

[–] CosmicTurtle@lemmy.world 10 points 10 months ago (1 children)

Honestly, if tech companies have to battle it out in court with Disney, imma grab some popcorn and watch the money go brrrrr.

[–] Bgugi@lemmy.world 2 points 10 months ago

Epic lawsuits round 2... "There are no heroes here"

[–] Evotech@lemmy.world 4 points 10 months ago

And if it's OK, then what's the limit on what an AI is? Do you have to prove an AI made it? Or can you just write some repetitive work, say it's made by AI, and dodge copyright?

[–] uriel238@lemmy.blahaj.zone 16 points 10 months ago (1 children)

Here is what deliberative experts way smarter and more knowledgeable than I am are saying (on TechDirt).

TLDR: Letting AI be freely trained on human-made artistic content may be dangerous. We may decide to stop it so long as capitalists control who eats and lives. But copyright is not the means to legally stop it. This is a separate issue to how IP law is way, way broken. And precedents stopping software from training on copyrighted work will be used to stop humans from training on copyrighted work. And that's bad.

[–] ryannathans@aussie.zone 6 points 10 months ago (2 children)

Agree, it's not much different from a human learning from all these sources and then applying said knowledge

[–] realharo@lemm.ee 9 points 10 months ago* (last edited 10 months ago) (2 children)

Scale matters. For example:

  • A bunch of random shops having security cameras, where their employees can review footage

  • Every business in a country having a camera connected to a central surveillance network with facial recognition and search capabilities

Those two things are not the same, even though you could say they're "not much different" - it's just a bunch of cameras after all.

Also, the similarity between human learning and AI training is highly debatable.

[–] ChairmanMeow@programming.dev 4 points 10 months ago (1 children)

Both of your examples are governed by the same set of privacy laws, which talk about consent, purpose and necessity, but not about scale. Legislating around scale opens up the inevitable legal quagmires of "what scale is acceptable" and "should activity x be counted the same as activity y to meet the scale level defined in the law".

Scale makes a difference, but it shouldn't make a legal difference w.r.t. the legality of the activity.

[–] lollow88@lemmy.ml 2 points 10 months ago (2 children)

Scale makes a difference, but it shouldn't make a legal difference w.r.t. the legality of the activity.

What do you think the difference between normal internet traffic and a ddos attack is?

[–] ChairmanMeow@programming.dev 2 points 10 months ago (1 children)

Lack of consent and the intent to cause harm.

[–] lollow88@lemmy.ml 1 points 10 months ago (2 children)

Ok, then how about automated cold calling vs "live" cold calling?

[–] fsmacolyte@lemmy.world 2 points 10 months ago (3 children)

Intent is part of it as well. If you have too many people who want to use your service, you're not being attacked, you have an actual shortage of ability to service requests and need to adjust accordingly.

[–] topinambour_rex@lemmy.world 6 points 10 months ago* (last edited 10 months ago)

When Google trained their game-playing neural network, they trained it on StarCraft 2, and it ended up better at the game than professional players. It trained by watching 100 years of play. Or 36,500 days of play. Or 876,000 hours of play.

Can a human do that? We both know it's impossible. As the other person said, the issue is scale.
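A quick sanity check of those numbers (a sketch in Python; the 100-year figure is the commenter's, and the 8-hours-a-day comparison is an assumption added purely for illustration):

```python
# Back-of-the-envelope check of the training-scale numbers above.
YEARS_OF_PLAY = 100        # the commenter's figure
DAYS_PER_YEAR = 365
HOURS_PER_DAY = 24

days = YEARS_OF_PLAY * DAYS_PER_YEAR   # 36,500 days
hours = days * HOURS_PER_DAY           # 876,000 hours

# How long would a dedicated human need, playing 8 hours a day, every day?
human_years = hours / (8 * DAYS_PER_YEAR)

print(f"{days:,} days, {hours:,} hours ~ {human_years:.0f} years of 8-hour days")
# -> 36,500 days, 876,000 hours ~ 300 years of 8-hour days
```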

[–] Mnemnosyne@sh.itjust.works 16 points 10 months ago (3 children)

The way I see it, if training on copyrighted content is forbidden, then that should apply universally.

Since all people mix together ideas they've learned from their own input to create new things, just like AI does, then all people-produced content should also be inherently uncopyrightable, unless produced by a person who has never been exposed to copyrighted content.

Oh, also all copyrighted content should lose its copyright. The only copyrighted content should be the original cave paintings by the first cavemen to develop art, since all art since then uses its influence.

And if this sounds ridiculous, then it's no less so than arguments that AI shouldn't be allowed to learn.

[–] theluddite@lemmy.ml 19 points 10 months ago (2 children)

Copyright is broken, but that's not an argument to let these companies do whatever they want. They're functionally arguing that copyright should remain broken but also they should be exempt. That's the worst of both worlds.

[–] Koof_on_the_Roof@lemmy.world 10 points 10 months ago

Yes it seems they want copyright when it suits them and not when it doesn’t.

[–] abhibeckert@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (1 children)

Who said anything about "do whatever they want"? They should obviously comply with the law.

When a human reads a comment here on Lemmy and learns something they didn't know before - copyright law doesn't stop them from using that knowledge. The same rule should apply to AI.

In my opinion if you don't want AI to learn from your work, then you shouldn't allow humans to learn from it either. That's fine - everyone has the right to keep their work private if they choose to do so... but if you make it publicly available, then you don't get to control who learns from it.

You can control who makes exact replicas of it, and if AI is doing that then sure - charge the company with copyright infringement - but generally that's not how these systems work. They generally don't produce exact copies except for highly structured content where there isn't much creative flexibility (and those tend to not be protected under copyright by the way - they would be protected by patents).

[–] theluddite@lemmy.ml 4 points 10 months ago (1 children)

Computers aren't people. AI "learning" is a metaphorical usage of that word. Human learning is a complex mystery we've barely begun to understand, whereas we know exactly what these computer systems are doing; though we use the word "learning" for both, it is a fundamentally different process. Conflating the two is fine for normal conversation, but for technical questions like this, it's silly.

It's perfectly consistent to decide that computers "learning" breaks the rules but human learning doesn't, because they're different things. Computer "learning" is a new thing, and it's a lot more like creating replicas than human learning is. I think we should treat it as such.

[–] BURN@lemmy.world 2 points 10 months ago (1 children)

I'm so fed up trying to explain this to people. People think LLMs are real AGI and are treating them as such.

Computers do not learn like humans. They cannot, and should not, be regulated in the same way.

[–] theluddite@lemmy.ml 2 points 10 months ago

Yes 100%. Once you drop the false equivalence, the argument boils down to X does Y and therefore Z should be able to do Y, which is obviously not true, because sometimes we need different rules for different things.

[–] hellothere@sh.itjust.works 7 points 10 months ago* (last edited 10 months ago) (3 children)

Since all people mix together ideas they've learned from their own input to create new things, just like AI does, then all people-produced content should also be inherently uncopyrightable, unless produced by a person who has never been exposed to copyrighted content.

While copyright and IP law at present is massively broken, this is a very poor interpretation of the core argument at play.

Let me break it down:

  • Yes, all human-created art takes significant influence - purposefully and accidentally - from work which has come before it
  • To have been influenced by that piece, legally, the human will have had to pay the copyright holder to: go to the cinema, buy the blu-ray, see the performance, go to the gallery, etc. Works out of copyright obviously don't apply here.
  • To be trained in a discipline, the human likely pays for teaching by others, and those others have also paid copyright holders to view the media that influenced them as well
  • Even though the vast majority of art is influenced by all other art, humans are capable of novel invention - i.e. things which have not come before - but GenAI fundamentally isn't.

Separately, but related, see the arguments the Pirate Parties used to make about personal piracy being OK, which were fundamentally down to an argument of scale:

  • A teenager pirating some films to watch cos they are interested in cinema, and being inspired to go to film school, is very limited in scope. Even if they pirate hundreds of films, it can't be argued that those are hundreds of lost sales, because the person may never have bought them anyway.
  • A GenAI company consuming literally all artistic output of humanity, with no payment to the artists whatsoever, "learning" to create "new" art without paying for teaching, and spitting out whatever is asked of it, is massive copyright infringement on the consumption side, and an existential threat to the arts on the generation side

That's the reason people are complaining, cos they aren't being paid today, and they won't be paid tomorrow.

[–] echo64@lemmy.world 1 points 10 months ago (7 children)

AI legally can't create its own copyrightable content. Indeed, it cannot learn. It can only produce models that we tune on datasets - datasets made of copyrighted content. I'm a little tired of the anthropomorphizing of AIs. They are statistical models, not children.

No sir, I didn't copy this book. I trained ten thousand ants to eat cereal, but only after running through an ink well and then a maze that I got them to move through in a way that deposits the ink where I need it to be in order to copy this book.
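To make the "statistical models, not children" point concrete, here is a minimal toy sketch (Python, plain gradient descent on a two-parameter linear model) of what "training" amounts to: nudging numbers to reduce error on a dataset. It illustrates the general technique only, not any particular company's system:

```python
# Toy illustration: "learning" as parameter fitting, not understanding.
# Fit y = w*x + b to a tiny dataset by plain gradient descent.
data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9)]  # (x, y) pairs, roughly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01  # parameters and learning rate
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# The "model" is nothing but these two tuned numbers.
print(f"w = {w:.2f}, b = {b:.2f}")  # close to the underlying 2 and 1
```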

[–] abhibeckert@lemmy.world 1 points 10 months ago* (last edited 10 months ago)

The AI isn't being accused of copyright infringement. Nothing is being anthropomorphized.

Whether you write a copy of a book with a pen, or type it into a keyboard, or photograph every page, or scan it with a machine learning model is completely irrelevant. The question is - did you (the human using the pen/keyboard/camera/AI model) break the law?

I'd argue no, but other people disagree. It'll be interesting to see where the courts side on it. And perhaps more importantly, whether new legislation is written to change copyright law.

[–] Mahlzeit@feddit.de 10 points 10 months ago (2 children)

This thread is interesting reading. Normally, people here complain about capitalism left and right. But when an actual policy choice comes up, the opinions become firmly pro-capitalist. I wonder how that works.

[–] yoz@aussie.zone 5 points 10 months ago

Human beings are funny characters. They only care when it starts to affect them personally; otherwise they say all kinda shit.

[–] ThatWeirdGuy1001@lemmy.world 4 points 10 months ago (1 children)

Everyone's always up for changing things until it comes to making the actual sacrifices necessary to enact the changes

[–] Mahlzeit@feddit.de 5 points 10 months ago

That's the thing. I don't see how there is sacrifice involved in this. I would guess that the average user here has personally more to lose than to gain from expanded copyrights.

[–] eldrichhydralisk@lemmy.sdf.org 10 points 10 months ago (1 children)

Most of these companies are just arguing that they shouldn't have to license the works they're using because that would be hard and inconvenient, which isn't terribly compelling to me. But Adobe actually has a novel take I hadn't heard before: they equate AI development to reverse engineering software, which also involves copying things you don't own in order to create a compatible thing you do own. They even cited a related legal case, which is unusual in this pile of sour grapes. I don't know that I'm convinced by Adobe's argument, I still think the artists should have a say in whether their works go into an AI and a chance to get paid for it, but it's the first argument I've seen for a long while that's actually given me something to think about.

[–] autotldr@lemmings.world 2 points 10 months ago

This is the best summary I could come up with:


The US Copyright Office is taking public comment on potential new rules around generative AI’s use of copyrighted materials, and the biggest AI companies in the world had plenty to say.

We’ve collected the arguments from Meta, Google, Microsoft, Adobe, Hugging Face, StabilityAI, and Anthropic below, as well as a response from Apple that focused on copyrighting AI-written code.

There are some differences in their approaches, but the overall message for most is the same: They don’t think they should have to pay to train AI models on copyrighted work.

The Copyright Office opened the comment period on August 30th, with an October 18th due date for written comments regarding changes it was considering around the use of copyrighted data for AI model training, whether AI-generated material can be copyrighted without human involvement, and AI copyright liability.

There’s been no shortage of copyright lawsuits in the last year, with artists, authors, developers, and companies alike alleging violations in different cases.

Here are some snippets from each company’s response.


The original article contains 168 words, the summary contains 168 words. Saved 0%. I'm a bot and I'm open source!

[–] Luisp@lemmy.dbzer0.com 1 points 10 months ago

Tables turned
