In my opinion, copyright should be based on the training data. Scraped the internet for data? Public domain. Handpicked your own dataset, created completely by you? The output should still belong to you. Seems weird otherwise.
I think excluding all AI creations from copyright might be one part of a good solution to all this. But you’re right that something has to be done at the point of scraping and training. Perhaps training should be considered outside of fair use and a copyright violation (without permission).
Totally. And if scraped, they must be able to provide the source. I don't care if it costs them money or compute time; they're allowed to grow on fake money, after all.
Of course, just because material is on the internet does not mean that material is public domain.
So AI is likely the worst of both worlds: it can infringe copyright, leaving the publisher liable for the infringement, yet it offers no copyright protection of its own down the line.
I think the next big thing is going to be proving the provenance of training data. Kinda like being able to track a burger back to the farm(s) to prevent the spread of disease.
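To make that concrete, one very rough way to do it would be a manifest that records a hash, source, and license for every training sample, so a work's presence in the set can later be checked. A minimal Python sketch, with entirely made-up field names and example data, not how any real trainer actually stores this:

```python
import hashlib
import json

def provenance_record(sample_bytes, source_url, license_name):
    """Build a provenance entry for one training sample.

    The field names are hypothetical -- just one way such a
    manifest could be structured, not an existing standard.
    """
    return {
        "sha256": hashlib.sha256(sample_bytes).hexdigest(),
        "source": source_url,
        "license": license_name,
    }

# Example: a manifest covering a (tiny) training set.
manifest = [
    provenance_record(b"example image bytes", "https://example.com/photo1.png", "CC-BY-4.0"),
    provenance_record(b"another sample", "https://example.com/photo2.png", "All rights reserved"),
]

print(json.dumps(manifest, indent=2))
```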
There was an OnlyFans creator in a chat group for one of the less restricted machine-learning image generators a while ago.
They provided a load of their content, and there was a cash prize for generating content that was indistinguishable from theirs.
Provided they were sure the dataset was only their own content, they might be able to claim copyright under this.
I can start to imagine some ways that we might get a company like OpenAI to play nice, but this software is going to be in so many hands in the coming years, and most of them won’t be good actors with an enterprise business behind them.
The issue here is whether you'd need to prove where your data came from. So the default should be public domain unless you can prove the source of all the training data.
That's not the take (although in a sense I agree training data should influence it, especially if the model materially reproduces training samples).
Instead, the argument is that individual outputs from ML can only be copyrighted if they carry a human expression (because that's what the law is specifically meant to cover), i.e. if there's creative height in the inputs that results in an output carrying that expression.
Compare with photography: photographs aren't protected automatically just because a button is pressed and an image is captured; rather, you gain copyright protection as a result of your choice of subject and composition, which carries your expression.
Overly simple prompts to ML models would, under this ruling, be considered comparable to uncopyrightable lists of facts (like a recipe), and thus the corresponding output is also not protected.