this post was submitted on 10 Jul 2023
210 points (100.0% liked)

Technology

[–] HughJanus@lemmy.ml 18 points 1 year ago (3 children)

This is what I never understood about the whole objection to training AI.

When a human creates an artwork, they don't do it in a vacuum. They've had a lifetime of inspiration from artwork they've discovered, which inspires them to create something wholly new. AI does the same thing.

[–] luciole@beehaw.org 27 points 1 year ago (4 children)

The AIs we are talking about are large language models. They take human work as input and produce facsimiles. They are owned by individuals or companies that have no permission to exploit intellectual property tied to other people's livelihoods in this way, just to copy it.

LLMs are not sentient, they don't have inspiration, and they are not creative, so they do not create in the sense an artist would. They are an elaborate mathematical function.

"Training" an AI has nothing to do with training an actual living being. It's just tuning: adjusting an algorithm's parameters incrementally until the operator is satisfied with the result. I think it's defensible to call this form of extraction plagiarism.
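The "tuning" described above can be sketched concretely. This is a hypothetical, minimal example (plain gradient descent fitting a one-parameter-pair linear model), not how any particular company trains its systems, but the principle is the same: nudge numeric parameters step by step until the error is small enough.

```python
# Minimal sketch of "training" as incremental parameter tuning:
# gradient descent on mean squared error for a toy linear model.
def train(xs, ys, steps=5000, lr=0.01):
    w, b = 0.0, 0.0  # model: y = w*x + b, parameters start arbitrary
    n = len(xs)
    for _ in range(steps):
        # gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # nudge each parameter a little toward lower error
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train([1, 2, 3, 4], [2, 4, 6, 8])  # data follows y = 2x
```

No step in this loop resembles understanding or inspiration; it is repeated arithmetic adjustment, which is the point being made above.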

Most likely, if you ask ChatGPT to summarize a famous book, it does not need to have ever trained on the book itself. The easiest way for an LLM to create a summary of something is to base its summary on existing summaries created by humans. If a court rules that ChatGPT infringes a book author's copyright merely by repeating information it acquired from summaries created by humans, what implications does that have for the humans who wrote those summaries?

[–] sunflower_scribe@beehaw.org 5 points 1 year ago

Intellectual property in general is a ridiculous concept.

[–] SinAdjetivos@beehaw.org 4 points 1 year ago

I partially agree with you, but I think you're missing the end goal of Facebook et al.

As HughJanus pointed out, it's not really any different from a person reading a book, and by that reasoning, using copyrighted material to train models like these falls well within the existing framework of "fair use".

However, that depends entirely on "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes." I agree completely with you that OpenAI's products and business (the most blatant violator) easily fail "fair use" because of that clause. However, they're doing it, at least partially, to force the issue on the open question of "how much can public information be privatized?", with the goal of further privatizing raw data and expanding its commercial applications.

As you pointed out, LLMs can only create facsimiles, not the original work, and by that logic they can't exactly replicate their inputs either.

No, I don't think artists can claim to own any and all "cheap facsimiles" of their works, but by the same reasoning, none of the models produced should be the enforceable property of any individual or company either.

For further reading check out:

  • Kelly v. Arriba Soft Corporation on why "thumbnails" (and by extension LLMs, "eigen-images", etc.) are inherently transformative and constitute fair use.
  • Bridgeport Music, Inc. v. Dimension Films for the negative impacts that ruling has had, and how it still doesn't protect artists from their work being used to train an LLM.
  • "Variational auto-encoders" for understanding how the latest models actually achieve a significant amount of "originality" and, I would argue, are able to be minimally creative.
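The variational auto-encoder point can be illustrated with a toy sketch of its core trick, reparameterized sampling. This is a hypothetical fragment (the `mu` and `log_var` values are made up, and there is no trained encoder or decoder here); it only shows that the decoder receives a *sampled* latent code, so the same input can yield different outputs, which is one mechanism behind the "originality" claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    # Reparameterization trick: z = mu + sigma * epsilon,
    # where epsilon is drawn fresh from N(0, 1) each time.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.zeros(4)       # encoder-predicted mean (toy values)
log_var = np.zeros(4)  # encoder-predicted log-variance (toy values)

z1 = sample_latent(mu, log_var)
z2 = sample_latent(mu, log_var)
# z1 and z2 differ even though the "input" (mu, log_var) is identical,
# so a decoder fed these codes would produce varying outputs.
```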
[–] kaizervonmaanen@reddthat.com 6 points 1 year ago (1 children)

Yeah, people are just trying to cash in on AI by suing companies that train AI.

[–] luciole@beehaw.org 19 points 1 year ago

It's the AI companies cashing in with other people's work so far.

[–] Dominic@beehaw.org 4 points 1 year ago

AIs are trained on the equivalent of thousands of human lifetimes' worth of material (if not more). There's no precedent for anything like this.