Hackworth

joined 4 months ago
[–] Hackworth@lemmy.world -1 points 1 week ago

How do you imagine those works are used?

[–] Hackworth@lemmy.world 0 points 1 week ago (4 children)

It's called learning, and I wish people did more of it.

[–] Hackworth@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (2 children)

This is an inaccurate understanding of what's going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and regurgitate full sections of training data is a fun parlor trick, but you're not going to use it to pirate a book. People do that the old-fashioned way, by just adding type:pdf to their common web search.
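For a rough sense of how aggressive that filtering was, here is a back-of-envelope sketch using the figures quoted in the comment (45 TB of compressed plaintext before filtering, 570 GB after); decimal units are assumed for simplicity:

```python
# Rough ratio of GPT-3's filtered training set to the raw crawl.
raw_gb = 45 * 1000       # 45 TB expressed in GB (decimal units)
filtered_gb = 570        # size after filtering
ratio = filtered_gb / raw_gb
print(f"{ratio:.1%} of the raw crawl survived filtering")  # 1.3%
```

So only about one percent of the raw crawl made it into the training set.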

[–] Hackworth@lemmy.world 7 points 1 week ago (2 children)

You've made a lot of confident assertions without supporting them. Just like an LLM! :)

[–] Hackworth@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Just taking GPT-3 as an example: its training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.

"Table 2.2 shows the final mixture of datasets that we used in training. The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before filtering and 570GB after filtering, roughly equivalent to 400 billion byte-pair-encoded tokens." — Language Models are Few-Shot Learners

*Did some more looking, and that model-size estimate assumes 32-bit floats. The weights are actually 16-bit, so the model size is about 350 GB... technically some compression after all!
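The model-size arithmetic above can be sketched out directly, assuming GPT-3's published count of 175 billion parameters (not stated in the comment itself):

```python
# Model-size estimate from parameter count and bytes per weight.
params = 175e9               # GPT-3's 175 billion parameters
fp32_gb = params * 4 / 1e9   # 32-bit floats: 4 bytes per weight
fp16_gb = params * 2 / 1e9   # 16-bit floats: 2 bytes per weight
print(fp32_gb, fp16_gb)      # 700.0 350.0
```

That reproduces both figures: ~700 GB at 32-bit precision, ~350 GB at 16-bit.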

[–] Hackworth@lemmy.world 2 points 1 week ago

Aye, flux [pro] via glif.app, though it's funny, sometimes I get better results from the smaller [schnell] model, depending on the use case.

[–] Hackworth@lemmy.world 11 points 1 week ago (1 children)

As a person with myopia, I find this comment tone deaf.
