this post was submitted on 23 Jan 2024
49 points (67.4% liked)

Technology


I fucked with the title a bit. What I linked to was actually a Mastodon post linking to an actual thing. But in my defense, I found it because Cory Doctorow boosted it, so, in a way, I am providing the original source here.

please argue. please do not remove.

top 50 comments
[–] charonn0@startrek.website 58 points 9 months ago (3 children)

I think we should have a rule that says if a LLM company invokes fair use on the training inputs then the outputs are public domain.

[–] Steve@communick.news 26 points 9 months ago* (last edited 9 months ago) (1 children)

That's already been ruled on once.

A recent lawsuit challenged the human-authorship requirement in the context of works purportedly “authored” by AI. In June 2022, Stephen Thaler sued the Copyright Office for denying his application to register a visual artwork that he claims was authored “autonomously” by an AI program called the Creativity Machine. Dr. Thaler argued that human authorship is not required by the Copyright Act. On August 18, 2023, a federal district court granted summary judgment in favor of the Copyright Office. The court held that “human authorship is an essential part of a valid copyright claim,” reasoning that only human authors need copyright as an incentive to create works. Dr. Thaler has stated that he plans to appeal the decision.

Why would companies care about copyright of the output? The value is in the tool that creates it. The whole issue, to me, revolves around the AI company profiting from its service, a service built on a massive library of copyrighted works. It seems clear to me that a large portion of their revenue should go, in equal shares, to the owners of the works in their database.

[–] Even_Adder@lemmy.dbzer0.com 11 points 9 months ago (1 children)

You can still copyright AI works, you just can't name an AI as the author.

[–] Steve@communick.news 9 points 9 months ago (1 children)

That's just saying you can claim copyright if you lie about authorship. The problem then is, you may step into the realm of fraud.

[–] Even_Adder@lemmy.dbzer0.com 8 points 9 months ago (1 children)

You don't have to lie about authorship. You should read the guidance.

[–] Aatube@kbin.social 4 points 9 months ago (1 children)

Well, what you initially said sounded like fraud, but the incredibly long page indeed doesn't talk about fraud. However, it also seems a bit vague. What counts as your contribution to the work? Is it being part of the input the model was trained on, "I wrote the prompt", or making additional changes based on the result?

[–] Even_Adder@lemmy.dbzer0.com 4 points 9 months ago

The vagueness surrounding contributions is particularly troubling. Without clearer guidelines, this seems like a recipe for lawsuits.

[–] NevermindNoMind@lemmy.world 35 points 9 months ago (1 children)

Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface of Google Books didn't lend themselves to actually reading the books, only to searching them for information. If that is fair use, then I don't see how training an LLM (which, at least in the vast majority of cases, doesn't retain an exact copy of the training data) isn't fair use. You aren't going to get an argument from me.

I think most people who will disagree are reflexively anti AI, and that's fine. But I just haven't heard a good argument that AI training isn't fair use.
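The Google Books ruling turned partly on the interface surfacing only search hits and short snippets rather than whole readable books. That distinction can be sketched, very roughly, as a snippet search over an inverted index (all names here are hypothetical, for illustration only):

```python
from collections import defaultdict

def build_index(books):
    """Map each word to the set of (title, position) pairs where it occurs."""
    index = defaultdict(set)
    for title, text in books.items():
        for pos, word in enumerate(text.lower().split()):
            index[word].add((title, pos))
    return index

def search_snippets(index, books, query, window=3):
    """Return short snippets around each hit -- never the full text."""
    results = []
    for title, pos in sorted(index.get(query.lower(), ())):
        words = books[title].split()
        snippet = " ".join(words[max(0, pos - window):pos + window + 1])
        results.append((title, snippet))
    return results

books = {"Example Book": "The quick brown fox jumps over the lazy dog"}
idx = build_index(books)
print(search_snippets(idx, books, "fox"))
# [('Example Book', 'The quick brown fox jumps over the')]
```

The point of the sketch: the index is derived from the full works, but users can only ever retrieve a few words of context, which is part of what made the use transformative in the courts' eyes.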

[–] commie@lemmy.dbzer0.com 5 points 9 months ago (2 children)

here's a side-channel attack on your position: every use, even an infringing one, is "fair use" until adjudicated, because what fair use means is that a court has agreed your infringing use is allowed. so of course AI training (broadly) is always fair use in that trivial sense. but particular instances of AI training may be found not to be fair use, so we can't be sure you will always be right for the specific AI models that come into question legally.

[–] semperverus@lemmy.world 10 points 9 months ago (1 children)

"It's perfectly legal unless you get caught!"

[–] runefehay@kbin.social 3 points 9 months ago (1 children)

I am no lawyer, but I suspect what will be considered either fair use or infringing will probably depend on how the programmed AI model is used.

For example, if you train it on a book of poetry, asking it questions about the poetry will probably be considered fair use. If you ask the AI to write poetry in the style of the book's poems and you publish the AI's poetry, I suspect it might be considered laundering copyright and infringing. Especially if it is substantially similar to specific poems in the book.

[–] commie@lemmy.dbzer0.com 11 points 9 months ago

If you ask the AI to write poetry in the style of the book’s poems and you publish the AI’s poetry, I suspect it might be considered laundering copyright and infringing.

Is the image of a cabin in a snowy landscape copyrighted by Thomas Kinkade? Fuck no. That's an idea, and ideas can't be copyrighted. A style isn't a discrete work; it is an idea, and it can't be copyrighted. If I produce something in the style of Keats or Stephen King or Rowling, they can't sue me for copyright unless I make a substantially infringing use of their work. The style alone isn't sufficient, because style can't be copyrighted.

[–] yuki2501@lemmy.world 21 points 9 months ago (7 children)

What constitutes fair use?

17 U.S.C. § 107

Notwithstanding the provisions of sections 17 U.S.C. § 106 and 17 U.S.C. § 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright.

GenAI training, at least regarding art, is neither criticism, comment, news reporting, scholarship, nor research.

AI training is not done by scientists but by engineers at a corporate entity with a long-term profit goal.

So, by elimination, we can conclude that none of the purposes covered by the fair use doctrine apply to generative AI training.

Q.E.D.

[–] General_Effort@lemmy.world 7 points 9 months ago (1 children)

"Such as" means that these are examples and not an exhaustive list.

Can you explain how the 3 factors you listed rule out scholarship or research purpose? Regarding the first factor, how do you determine that AI developers are all engineers and never computer scientists?

[–] TheFriar@lemm.ee 4 points 9 months ago (4 children)

I’d argue that the community-benefit aspect of the “scholarship or research purposes” language precludes for-profit AI companies from falling under fair use. These aren’t education programs. They’re not research for the greater good. They are private entities trying to create a machine that can copy until it creates, for their own needs, not the greater good. Education has a net positive effect on society, and those stipulations in the law are meant to better serve the whole.

If these generative AI machines were being built by students, it would fall under these specifications of fair use. But the profit motive changes everything.

I’d say “fair use” pretty much covers educational and community benefit. Private companies do neither. They are stealing and reproducing for themselves, not society.

[–] toast@retrolemmy.com 4 points 9 months ago (5 children)

You skipped right over "teaching".

Why is that?

[–] snooggums@kbin.social 14 points 9 months ago (19 children)

Selling an AI model (or usage of that model) that allows for producing works that are clearly based upon those copyrighted works and would be considered copyright infringement if a person did the same thing is not fair use.

If a person creating the same thing as generative AI would be infringing, then it isn't magically not infringing because it is on the internet or done by a program. Basically, AI needs to follow the same rules and restrictions as a person would. That does mean that the AI also needs to be trained to not create copyright infringing works if the use of the AI is being sold.

As a downloadable model that anyone can use at no cost? Sure, whatever is fine. Then it is on the person who uses it and tries to infringe. But if someone pays a company to use their AI to create infringing work, that is on the company and they are just as at fault as if they sold T shirts that infringed on copyright.
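The idea above — that a paid AI service needs to be trained or filtered so it doesn't emit infringing works — could be sketched, very crudely, as an n-gram overlap check on the output. All names and the threshold here are hypothetical illustrations; real infringement is a legal judgment, not a string metric:

```python
def ngrams(text, n=5):
    """Set of n-word shingles from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_infringing(output, protected_works, n=5, threshold=0.5):
    """Flag output whose n-gram overlap with any protected work is high.

    `threshold` is an arbitrary illustrative value, not a legal standard.
    """
    out = ngrams(output, n)
    if not out:
        return False
    return any(len(out & ngrams(work, n)) / len(out) >= threshold
               for work in protected_works)

protected = ["it was the best of times it was the worst of times"]
print(looks_infringing("it was the best of times it was the worst of times",
                       protected))  # True: verbatim copy
print(looks_infringing("an entirely new sentence that shares no long phrases",
                       protected))  # False: no shared 5-word runs
```

Filters like this catch verbatim regurgitation but not close paraphrase or style imitation, which is exactly where the legal line gets blurry.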

[–] MoogleMaestro@kbin.social 7 points 9 months ago* (last edited 9 months ago) (2 children)

It isn't fair use. See most of the questions in the fair use FAQ.

"Fair use" is often the subject of discussion when talking about online copyright with regard to video content or music sampling, but it's notably a flawed defense, as there is no legal definition of how much of a work can be used or referenced. The very first question in that FAQ reads:

How do I get permission to use somebody else's work?
You can ask for it. If you know who the copyright owner is, you may contact the owner directly. If you are not certain about the ownership or have other related questions, you may wish to request that the Copyright Office conduct a search of its records or you may search yourself. See the next question for more details.

All that artists, writers, and others are asking LLM producers to do is (a) ask for permission or (b) attribute the artist's work in some kind of ledger, respecting the copyright of their work. Every work you make (write/play/draw/whatever) has a copyright that should be respected by companies; it is not waived by a EULA or TOS (ever) and must be respected for author attribution as a concept to work at all. There is plenty of free, permissively licensed content on the internet that could be used instead to train an LLM, but simply asking for permission or giving attribution would at least be a step in the right direction for these companies and for the industry as a whole.

Defenders of AI will note that the "use" of art in LLMs is limited and thus protected by fair use, but that is debatable based on the content of the FAQ linked above.

How much of someone else's work can I use without getting permission?
Under the fair use doctrine of the U.S. copyright statute, it is permissible to use limited portions of a work including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports. There are no legal rules permitting the use of a specific number of words, a certain number of musical notes, or percentage of a work. Whether a particular use qualifies as fair use depends on all the circumstances. See, Fair Use Index, and Circular 21, Reproductions of Copyrighted Works by Educators and Librarians.

You can see that the use cases above (commentary, criticism, news reporting, and scholarly reports) do not qualify LLM companies to use copyrighted data to train their models for a privatized industry. Additionally, you'll note that "market disruptive" uses cannot be protected by fair use by its very definition, meaning that displacing artists with AI automatically makes LLM use of copyrighted material an infraction of copyright that is not protected by the fair use doctrine.

Regardless, this will need to be proven in court, and even if it passes certain criteria, it will not apply to all infractions. Fair use is a defense, not a protection, so LLM producers will have to spend time in court defending individual infractions. There's no way for them to settle all copyright infringement with one ruling; it needs to be proven on a case-by-case basis.

IANAL but this is my 2 cents on the matter.

[–] commie@lemmy.dbzer0.com 3 points 9 months ago

this will need to be proved in court

this is true of all fair use; it's almost the definition of fair use. Fair use only exists after a judge has adjudicated it. Before that, it is merely questionable.

[–] cyd@lemmy.world 4 points 9 months ago (4 children)

Agreed. I would also argue that trained model weights are not copyrightable.

[–] Ebby@lemmy.ssba.com 3 points 9 months ago* (last edited 9 months ago)

I agree with your statement as stated. (EDIT: title has changed since posting)

The part I have an issue with is the output, which tends to fall into 4 categories:

  1. Output that is creative and unique is acceptable.

  2. Output tricked into revealing its source is infringement.

  3. Output that is not creative, as in biometric identification, utilizes the entirety of a copyrighted work in a non-transformative way and needs to be licensed.

  4. Output of a creative work, while not copyrightable, can still infringe trademark.
