Now watch everyone jump to the defense of AI.
I do support AI.
I support AI's ability to look at datasets and form a consensus for the purposes of science and development.
Most of the mice in this experiment died due to a lack of oxygen...
"...fascinating."
I do not support AI's ability to exploit the poor for personal gain, or for marketing, or for... horrors beyond all human comprehension.
Why? No one ever accused chatbots of always being wrong. In fact, it would actually be better if they were. The biggest problem with LLMs is that they're right just often enough that it's hard to catch when they're wrong.
To be fair, I have actually seen fringe cases where people accuse AI of always being wrong.
You're right, it would be easier if we could just ignore it, but sadly it's correct often enough to be useful, which is why it's seeing widespread usage. Like always, trust but verify, or just don't trust it. lol
You can find fringe cases of anything. That's why they're fringe. I refuse to constantly add ten pages of fucking legal disclaimers to every comment I make just to account for the possibility that one idiot tweeted something one time to their ten followers.
So exactly like humans
Not even remotely, and it's really important to understand a) why there is a difference, and b) why that difference matters, or else you are going to hoover up every bit of propaganda these desperate conmen feed you.
People are not automated systems, and automated systems are not people.
Something that people are generally pretty good at is understanding that a process has failed, even if we can't understand how it has failed. As the adage goes, "I don't need to be a helicopter pilot to see one stuck in a tree and immediately conclude that someone fucked up."
LLMs can't do that. A human and an LLM will both cheerfully produce the wrong answer to "How many Rs in strawberry?" But a human, even one who knows nothing about cooking, will generally suspect that something might be up when asked to put glue on pizza. That's because the human is capable of two things the LLM isn't: reasoning and context. The human can use their reasoning to draw upon the context provided by their real-life experience and deduce that "Glue is not food, and I've never previously heard of it being used in food. So something here seems amiss."
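For what it's worth, the letter-counting task is trivial for ordinary deterministic code, which is exactly what makes the LLM failure so jarring. A throwaway Python sketch (the function name is just illustrative):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3, deterministically, every time
```

An LLM doesn't execute anything like this; it predicts tokens, which is why it can whiff on a question a five-line script answers perfectly.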
That's the first key difference. The second is in how these systems are deployed. You see, the conmen trying to sell us all on their "AI" solutions will use exactly the kind of reasoning that you bought - "Hey, humans fuck up too, it's OK" - in order to convince us that these AI systems can take the place of human beings. But that process requires us to place an automated system in the position of a human.
There's a reason why we don't do that.
When we use automation well, it's because we use it for tasks where the error rate of the automated system can be reduced to something far, far lower than that of a well-trained human. We don't expect an elevator to just have a brain fart and take us to the wrong floor every now and then. We don't expect that our emails will sometimes be sent to a completely different address from the one we typed in. We don't expect that there's a one in five chance that our credit card will be billed a different amount from what was shown on the machine. None of those systems would ever have seen widespread adoption if they had a standard error rate of even 5%, or 1%.
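To put rough numbers on that (a back-of-the-envelope sketch with made-up rates, not data about any real system): even a seemingly small per-use error rate compounds brutally over repeated use, assuming independent failures.

```python
# Chance that a system with a fixed per-use error rate gets through
# n independent uses without a single failure: (1 - rate) ** n
def p_all_clean(error_rate: float, uses: int) -> float:
    return (1 - error_rate) ** uses

for rate in (0.05, 0.01):
    print(f"{rate:.0%} error rate -> {p_all_clean(rate, 100):.1%} "
          f"chance of 100 consecutive clean uses")
# 5% error rate -> 0.6% chance of 100 consecutive clean uses
# 1% error rate -> 36.6% chance of 100 consecutive clean uses
```

That's why a 1% elevator would never have shipped.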
Car manufacturing is something that can be heavily automated, because many of the procedures are simple, repeatable, and controllable. The last part is especially important. If you move all the robots in a GM plant to new spots, they will instantly fail. If you move the humans to new spots, they'll be quite annoyed, but perfectly capable of moving themselves back to the correct places. Yet despite how automatable car manufacturing is, it still employs a LOT of humans, because so many of those tasks do not automate sufficiently well.
And at the end of the day, a fucked up car is just a fucked up car. Healthcare uses a lot less automation than car manufacturing. That's not because healthcare companies are stupid. Healthcare is one of the largest industries in North America. They will gladly take any automation they can get. I know this because my line of work involves healthcare companies regularly asking me for automation. But they also have a very, very low threshold for failure. If one of our systems fails even one time, they will demand a full investigation of the failure.
This is because automated systems, when they are employed, have to be load-bearing. They have to be reliable enough that people can stop thinking about them, even though that same level of reliability isn't demanded from the human components of these systems.
This is largely because, generally speaking, humans have much more ability to recognize and correct the failures of other humans. Medical facilities organise themselves around multiple layers of trust and accountability. One of the demands we get most is for more tools to give oversight into what the humans in the system are doing. But that's because a human is well equipped to recognize when another human is in a failure state. A human can spot that another human came into work hungover. A human can build a context for which of their fellow humans are reliable and which aren't. Human systems are largely self-healing. High risk work is doled out to high reliability humans. Low reliability humans have their work checked more often.
But it's very hard for a human to build context for how reliable an automated system is. This is because the workings of that system are opaque; they do not have the context to understand why the system fails when it fails. In fact, when presented with an automated system that sometimes fails, the way most humans will react is to treat the system as if it always fails. If a button fails to activate on the first press one or two times, you will come back to that same facility a year later to find that it has become common practice for every staff member to press the button five times in a row, because they've all been told that sometimes it fails on the first press.
When presented with an unreliable automated system, humans will choose to use a human instead, because they have assessed that they can better determine when the human has failed and what to do about it.
And, paradoxically, because we have such a low tolerance for failure in automated systems, when presented with an automated system that will be taking on the work of a human, humans naturally expect that system to be more or less perfect. They expect it to meet the threshold that we tend to set for automated systems. So they don't check its work, even when told to.
The lie that LLMs fuck up in the same way that humans do is used to get a foot in the door, to sell LLM-driven systems as a replacement for human labour. But as soon as that replacement is actually being sold, the lie goes away, replaced by a different lie (often a lie by omission): that this will be as reliable as every other automated system you use. Or, at the very least, that "It will be more reliable than a human." The sellers say this meaning, say, 5% more reliable (in reality the actual failure rate of humans in these tasks is often much, much lower than that of LLMs, especially when you account for false positives, which are usually ignored whenever someone touts numbers saying that an LLM did a job better than a human). But the people using the system naturally assume it means "More reliable in the way you expect automated systems to be reliable."
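To make the false-positive point concrete, here's a sketch with entirely made-up numbers (purely illustrative, not from any real evaluation): a screener that catches slightly more true problems can still be far less useful if it drowns you in false alarms.

```python
# Hypothetical screeners reviewing 1000 cases, 50 of which are real problems.
# tp = real problems caught, fp = clean cases wrongly flagged.
def precision(tp: int, fp: int) -> float:
    """Fraction of flagged cases that are actually problems."""
    return tp / (tp + fp)

# human: catches 45 of 50 problems, wrongly flags 5 clean cases
# model: catches 48 of 50 problems, wrongly flags 150 clean cases
print(f"human precision: {precision(45, 5):.0%}")    # 90%
print(f"model precision: {precision(48, 150):.0%}")  # 24%
```

Quote "the model caught more problems" and it sounds like a win; count the false alarms someone has to chase down and the picture flips.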
All of this creates a massive possibility for real, meaningful hazard. And all of this is before you even get into the specific ways in which LLMs fuck up, and how those fuck-ups are much more difficult to correct or control for. But that's a whole separate rant.
It's not wrong, it's just an asshole.
Just because it gets it right for a change doesn't mean we support it.