this post was submitted on 31 Aug 2024
130 points (95.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

 
top 17 comments
[–] pete_the_cat@lemmy.world 59 points 2 months ago (1 children)

Before I read this, I said "it's because they have no idea WTF AI actually is," and then the article said:

The most common cause of failure is that the people running the projects have no idea what “AI” even is or does. “In some cases, leaders understand AI only as a buzzword and do not realize that simpler and cheaper solutions are available.”

Called it! 🤣

[–] slacktoid@lemmy.ml 4 points 2 months ago

To some of these clowns (whom I have unfortunately interacted with), AI is some dark magic that can peer into the future.

[–] SGG@lemmy.world 15 points 2 months ago (1 children)

Most of the time, technology just makes things happen faster, or at a larger scale.

With "AI" we're getting both larger and faster at the same time as businesses try and cash in as quickly as possible once they find out that their "LLM" has been trained on data that means it is in permanent idiot mode, can be unlocked with a few words, hallucinates every second response (oh sorry you're correct raspberry only has 2 R's in it), or keeps generating completely racist images.

[–] leisesprecher@feddit.org 7 points 2 months ago (1 children)

And there's hardly any way to start small and improve upon it.

With regular code, I can write a small solution and improve it piece by piece. But with AI, it's more or less a gamble whether the results will ever get better at all. Maybe you only need to slightly rephrase the prompt, or maybe it's completely impossible. You don't know in advance; you can only try.

[–] lemon@sh.itjust.works 3 points 2 months ago (1 children)

Just fyi, that's not entirely true. If we're just focusing on LLMs, structured and guided generation exist. Combine them with an eval set (= unit tests) and you can at least track how well you're doing. For sure, prompt engineering misses the feeling of being in control. You'll also never be able to claim 100% coverage (although even with unit tests that's not something you can truly claim, as there are always blind spots). What you gain over traditional coding, however, is that you can tackle problems that might otherwise take an infinite number of years to express in code. For example, how would you define the rules for detecting whether an image shows a bird?
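To make that concrete, here's a minimal sketch of the eval-set idea in Python. Everything in it (`call_llm`, the prompt, the cases) is a hypothetical stand-in for whatever model client and guided-generation library you actually use; the point is just that every prompt change gets scored against a fixed set of labelled cases:

```python
import json

# Hypothetical stand-in for your LLM client / guided-generation library
# (e.g. one that constrains output to a JSON schema).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model here")

# Eval set: the LLM equivalent of a unit-test suite.
EVAL_SET = [
    {"text": "A sparrow perched on the fence.", "expected": True},
    {"text": "A bowl of raspberries on a table.", "expected": False},
]

PROMPT = (
    "Does the following description mention a bird? "
    'Answer with JSON like {{"is_bird": true}}.\n\n{text}'
)

def run_evals() -> float:
    """Return accuracy over the eval set; rerun after every prompt tweak."""
    correct = 0
    for case in EVAL_SET:
        raw = call_llm(PROMPT.format(text=case["text"]))
        try:
            answer = json.loads(raw)["is_bird"]  # structured output
        except (json.JSONDecodeError, KeyError):
            continue  # a schema violation counts as a failure
        correct += answer == case["expected"]
    return correct / len(EVAL_SET)
```

Rerun `run_evals()` after every prompt tweak and you get roughly the red/green signal unit tests give you, minus the guarantees.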

It’s just a tool like any other. Overuse is currently detestably rife. But its value is there.

Source: ML engineer who secretly hates a lot about ML but is also in awe at the developments of the last few years.

[–] leisesprecher@feddit.org 4 points 2 months ago (1 children)

And how often do you need to detect images of birds with an unknown accuracy?

That's what many tech bros don't seem to understand: much of the software in this world is boring business crap, and that software mainly needs reliability and explainability. You can't just throw around a product that poses an incalculable risk. And often enough, the specification of these apps is an amalgamation of decades of cruft, and it needs to be changed and tweaked often, in tiny ways.

I mean, there are certainly cases where AI products have their uses, but those seem to be very small niches.

[–] lemon@sh.itjust.works 1 points 2 months ago

I agree with you. I just wanted to share some nuance. The point I wanted to make is that it is in fact possible to incorporate LLMs in a fairly controlled way while calculating (estimates of) the risk of failure as well as the associated social and financial costs. I do it every day, but I’m no tech bro and dislike the ‘AI will fix everything’ types as much as everyone here.

[–] Lost_My_Mind@lemmy.world 6 points 2 months ago (2 children)

I have no idea if these numbers are accurate, but either way it's too low. Needs to be 100%.

[–] orcrist@lemm.ee 2 points 2 months ago

There are two ways to make that number come out the way it did. The first is to remember that failure is different from poor performance: maybe something kind of works, so the boss says "hey, it's not a failure," even though it's worse than what they had before, or worse than other options they could have chosen.

The second way to skew the data is to define "AI" so that things you were already doing count. And maybe that's legitimate, because what exactly is AI? If you're the project manager, you probably get to choose the definition, in which case you'll pick one that makes your successful project look magical, even if it's something that's been done for decades.

[–] Valmond@lemmy.world 1 points 2 months ago

80% failed, 20% just haven't failed yet!

[–] mozz@mbin.grits.dev 3 points 2 months ago

I think more than 40% of normal IT projects fail.

[–] HubertManne@moist.catsweat.com 2 points 2 months ago (1 children)

At what rate did crypto projects fail?

[–] JeeBaiChow@lemmy.world 0 points 2 months ago (1 children)

Wasn't this the rough number for any large IT project, e.g. ERP, CRM, Salesforce, data centers, etc.?

[–] iknowitwheniseeit@lemmynsfw.com 5 points 2 months ago (1 children)

The title literally says "twice the rate of other IT projects".

[–] JeeBaiChow@lemmy.world 0 points 2 months ago (1 children)

Yes it does. And I'm saying it's just business as usual in the IT industry.

[–] FlyingSquid@lemmy.world 5 points 2 months ago

If it's twice the rate of other IT projects, then it's not business as usual.