this post was submitted on 17 Aug 2023
485 points (96.0% liked)
Technology
It's not repeating its training data verbatim because it can't do that. It doesn't have the training data stored away inside itself. If it did the big news wouldn't be AI, it would be the insanely magical compression algorithm that's been discovered that allows many terabytes of data to be compressed down into just a few gigabytes.
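The back-of-envelope arithmetic behind that point can be sketched like this (all sizes are illustrative assumptions, not measured figures for any real model):

```python
# Back-of-envelope comparison of training-corpus size vs. model size.
# Both numbers below are illustrative assumptions, not measured figures.

training_corpus_tb = 10          # assume roughly 10 TB of training text
model_size_gb = 100              # assume roughly 100 GB of model weights

corpus_gb = training_corpus_tb * 1024
ratio = corpus_gb / model_size_gb

print(f"corpus: {corpus_gb:.0f} GB, model: {model_size_gb} GB")
print(f"implied 'compression' ratio: {ratio:.0f}:1")
# A lossless text compressor like gzip typically manages around 3:1 on
# English prose, so a ratio in the hundreds can't be verbatim storage --
# the weights must be a lossy, statistical summary of the data instead.
```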
Do you remember quotes in English as ASCII? /s
Tokens are even denser than ASCII, similar to word "chunking". My guess is it's like lossy video compression, but for text: [Attacked] with [lazers] by [deatheaters] upon [margret]; [has flowery language]; word [margret] [comes first]. (This theoretical example has 7 "tokens".)
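The "denser than ASCII" idea can be shown with a toy count. Real LLM tokenizers (BPE and friends) are more sophisticated; splitting on whitespace is just a stand-in assumption here:

```python
# Toy illustration of why tokens are "denser" than raw ASCII characters.
# A real tokenizer uses subword units (BPE etc.); splitting on
# whitespace is only a simplified stand-in for this sketch.

sentence = "Attacked with lazers by deatheaters upon margret"

ascii_units = len(sentence)          # one unit per ASCII character
token_units = len(sentence.split())  # one unit per word-like chunk

print(f"characters: {ascii_units}, tokens: {token_units}")
# Each token stands for a whole chunk of text, so the same sentence
# needs far fewer token IDs than characters to represent.
```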
It may have actually internalized a really good copy of that book, as it's likely read it lots of times.
If it's lossy enough, then it's just a high-level conceptual memory, and that's not copyrightable.
It varies based on how much time it's been given with the media.