I think I got the point just fine... you're wasting a ton of electricity, and potentially your own money, on producing text that isn't actually bad training data. Which is exactly what I said would happen.
LLMs are trained on billions of lines of text; the last figures we know are for GPT-3, with sources ranging from 570 GB to 45 TB of text. A short reddit comment is quite literally a drop in a swimming pool. Its word-prediction ability isn't going to change for the worse if you just post a readable comment. It will simply reinforce it.
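To put rough numbers on the "drop in a swimming pool" point, here's a back-of-envelope sketch using the corpus sizes quoted above and an assumed average comment size of ~200 bytes (the comment size is my assumption, not a sourced figure):

```python
# Back-of-envelope: how much of the training corpus is one comment?
comment_bytes = 200             # assumed average size of a short reddit comment
filtered_corpus_bytes = 570e9   # ~570 GB (GPT-3's filtered dataset figure)
raw_corpus_bytes = 45e12        # ~45 TB (the raw-sources figure)

print(f"Share of filtered corpus: {comment_bytes / filtered_corpus_bytes:.1e}")  # ~3.5e-10
print(f"Share of raw corpus:      {comment_bytes / raw_corpus_bytes:.1e}")       # ~4.4e-12
```

Even against the smaller, filtered figure, one comment is on the order of a billionth of the data.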
And sure, you can lie in it, but LLMs are trained on fiction too and already have to deal with that. There are supplementary techniques applied to make the AI less prone to hallucinations that don't involve the training data, such as RLHF (Reinforcement Learning from Human Feedback). But honestly, telling the truth is a dumb thing to use the AI for anyway. Its primary function has always been to predict words, not truth.
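If "predict words, not truth" sounds abstract, here's a toy sketch of the idea. This is just a bigram counter, nothing like the neural networks real LLMs use, but the objective is the same shape: given the text so far, output whichever word tends to come next, with no notion of whether it's true.

```python
# Toy next-word predictor: count which word follows which, then pick the most likely one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat slept on the couch".split()

# word -> counts of the words that followed it
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' (the most common continuation here)
print(predict_next("sat"))  # 'on'
```

Whether "the cat sat on the mat" is true never enters into it; only how often word sequences appear.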
You would have to do this at such a scale, and so successfully in terms of votes, that by the time you are represented significantly enough in the data to poison it, you'd be dead, banned, bankrupt, excluded from the dataset, or Google will have moved on from Reddit.
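For a sense of the scale involved, here's the same back-of-envelope arithmetic run the other way: how many ~200-byte comments it would take to make up even 0.1% of a 570 GB corpus (the 0.1% threshold is an arbitrary choice for illustration):

```python
# How many comments to reach even 0.1% of a 570 GB training corpus?
comment_bytes = 200       # assumed average comment size
corpus_bytes = 570e9      # ~570 GB
target_share = 0.001      # 0.1%, an arbitrary illustrative threshold

comments_needed = target_share * corpus_bytes / comment_bytes
print(f"{comments_needed:,.0f} comments")  # ~2,850,000 comments
```

Millions of comments, all of which would also have to survive voting, moderation, and dataset filtering.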
If you hate or dislike LLMs and want to stop them, let your voice be known. Talk to people about it. Successfully convincing one person will be worth more than a thousand reddit comments. Poisoning the data directly is a thing, but it's essentially impossible to inflict alone. It's more a consequence of bad data gathering, bad storage practices, and bad training. None of those are in your control through a reddit comment.