this post was submitted on 29 Nov 2023
435 points (97.4% liked)

Technology

59298 readers
4665 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

[–] KingRandomGuy@lemmy.world 16 points 11 months ago (1 children)

Not sure what other people were claiming, but normally the point being made is that it's not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (as shown here), but the dataset as a whole is far too large relative to the model's weights to be memorized.

[–] ayaya@lemdro.id 15 points 11 months ago* (last edited 11 months ago) (1 children)

And even then there is no "database" that contains portions of works. The network only stores weights between tokens: basically, groups of words and/or phrases and the likelihood that they appear next to each other. So if it is able to replicate anything verbatim, it is overfitted. Ironically, the solution is to feed it even more works so it is less likely to reproduce any single one.
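To make that concrete, here is a toy sketch (a hypothetical bigram counter, nothing like a real LLM's tokenizer or transformer, but the same idea of storing co-occurrence likelihoods rather than the text itself). Trained on only a single sentence, it is maximally overfit and can only replay spans of its training text:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each token follows another -- the model's only 'storage'."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n=10):
    """Greedily emit the most likely next token, repeatedly."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Trained on one sentence, the "model" can only regurgitate spans of it verbatim.
model = train_bigram("the quick brown fox jumps over the lazy dog")
print(generate(model, "quick", 4))  # quick brown fox jumps over
```

With a huge, varied corpus, the counts for any given token spread across many continuations, so no single source dominates; that is the sense in which more training data makes verbatim reproduction less likely.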

[–] Kbin_space_program@kbin.social 2 points 11 months ago* (last edited 11 months ago) (3 children)

That's a bald-faced lie. It absolutely can produce copyrighted works.
E.g. I can ask it what a Mindflayer is and it gives a verbatim description from copyrighted material.

I can ask DALL-E for "Angua Von Uberwald" and it gives a drawing of a blonde female werewolf. Oops, that's a copyrighted character.

[–] KingRandomGuy@lemmy.world 10 points 11 months ago

I think what they mean is that ML models generally don't directly store their training data, but that they instead use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from the latent space, but most won't be. It's not very surprising as a result that you can get it to reproduce copyrighted material word for word.
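A loose numerical analogy for that "compressed latent space" point (this is plain low-rank SVD compression, not an actual neural latent space, and the variable names are made up for illustration): if the data happens to fit within the compressed representation's capacity, it can be recovered exactly; if not, reconstruction is lossy for most samples.

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 10))   # 3 underlying "features"
coeffs = rng.normal(size=(50, 3))
X = coeffs @ basis                 # 50 samples with intrinsic rank 3

def lowrank(X, k):
    """Keep only the top-k singular directions -- a crude 'latent space' of size k."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Enough capacity (k=3): every sample is perfectly recoverable.
print(np.allclose(lowrank(X, 3), X))
# Too little capacity (k=2): reconstruction is lossy.
err = np.abs(lowrank(X, 2) - X).max()
print(err > 0.1)
```

An LLM is in the second regime with respect to its full training set: the weights are far smaller than the data, so most of it is unrecoverable, but heavily repeated or strongly structured pieces can still survive compression nearly intact.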

[–] ayaya@lemdro.id 7 points 11 months ago

I think you're confused. How does any of that make what I said a lie?

[–] TimeSquirrel@kbin.social 6 points 11 months ago (1 children)

I can do that too. It doesn't mean I directly copied it from the source material. I can draw a crude picture of Mickey Mouse without having a reference in front of me. What's the difference there?

[–] FlyingSquid@lemmy.world 1 points 11 months ago (1 children)

If you draw a crude picture of Mickey Mouse and make money from it, Disney definitely has a chance of going after you.

[–] brianorca@lemmy.world 2 points 11 months ago

That's due to trademark, not copyright.