this post was submitted on 09 Jul 2023
Technology
Not really, though it's hard to know exactly what is or isn't encoded in the network. It likely retains the more salient and highly referenced content, since those aspects come up in its training set more often. But memorizing entire works is basically impossible, just because of the sheer ratio between the size of the training data and the size of the resulting model. Not to mention that GPT's mode of operation mostly discourages long-form rote memorization: it's a statistical model, after all, which is the enemy of "objective" state.
Furthermore, GPT isn't coherent enough for long-form content. With its small context window, it simply has trouble keeping track of something as big as a book. And since it has no "senses" beyond text broken into tokens, concepts like pages or "how many" give it issues.
None of the leaked prompts really mention "don't reveal copyrighted information" either, so it seems the creators really aren't concerned — which you'd think they would be if it did have this tendency. It's more likely to make up entire pieces of content from the summaries it does remember.
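To illustrate the point above about the model's "senses": a language model never sees pages or character counts, only a stream of integer token ids. This is a toy sketch with a made-up five-word vocabulary, not OpenAI's real tokenizer, but it shows why page- or count-based questions are opaque to the model.

```python
# Toy illustration (NOT the real GPT tokenizer): the model only ever
# receives integer token ids, so "pages" never reach it directly.
vocab = {"the": 0, "boy": 1, "who": 2, "lived": 3, ".": 4}

def toy_tokenize(text):
    """Split on whitespace and map known words to ids (toy sketch)."""
    return [vocab[w] for w in text.lower().replace(".", " .").split()]

ids = toy_tokenize("The boy who lived.")
print(ids)  # [0, 1, 2, 3, 4]
```

Real tokenizers use subword pieces rather than whole words, which makes the mapping from text to what the model "sees" even less intuitive.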
Have you tried instructing ChatGPT?
I’ve tried:
“Act as an e book reader. Start with the first page of Harry Potter and the Philosopher's Stone”
The first pages checked out, at least. I just tried again, but responses are coming back extremely slowly at the moment, so I can't verify it right now. It appears to stop after the heading now; that definitely wasn't the case before — I was able to browse pages.
It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.
I use it all day at my job now. Ironically, on a specialization more likely to overfit.
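Overfitting-as-memorization is easy to demonstrate in miniature. This is a toy sketch, nothing like GPT's actual architecture: a bigram lookup model whose capacity dwarfs its tiny training set will regurgitate that set verbatim when sampled greedily.

```python
from collections import Counter, defaultdict

# Toy sketch of overfitting as memorization: a bigram model trained on
# a single short sentence reproduces that sentence exactly.
train = "it was a bright cold day in april".split()

counts = defaultdict(Counter)
for a, b in zip(train, train[1:]):
    counts[a][b] += 1  # count each observed next-word transition

def generate(start, n):
    """Greedily emit the most likely next word n times."""
    out = [start]
    for _ in range(n):
        nxt = counts[out[-1]].most_common(1)
        if not nxt:
            break
        out.append(nxt[0][0])
    return " ".join(out)

print(generate("it", 7))  # regurgitates the training sentence verbatim
```

A large model trained on vastly more data than it has parameters can't do this for everything it sees, but text repeated many times across the corpus is exactly where this kind of memorization can survive.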
This seems to imply that entire books not only got downloaded accidentally and slipped past the automated copyright checks, but that this happened so often that the AI saw the same text enough times for it to overwhelm other content and bake the whole book in, without error and at great opportunity cost. And that it was rewarded for doing so.
Wait... isn't that the correct response, though? I mean, if I ask an AI to reproduce something copyrighted, like Harry Potter, it should. The issue is when it's asked to produce something new (e.g. a story about wizards living secretly in the modern world): does it infringe on copyright without telling you? That's a much harder question to answer.
I think they're seeing this as a traditional copyright-infringement issue, i.e. they don't want anyone to be able to intentionally make copies of their work either.