this post was submitted on 07 Jul 2024
1247 points (95.1% liked)
Fuck AI
you are viewing a single comment's thread
view the rest of the comments
Bullshit take. OP's screenshot isn't about AI in general, it's about LLMs. They are absolutely doing more harm than good. And the examples you are quoting are highly misleading at best:
You seem to be arguing against another strawman. OP didn't say they only dislike LLMs; the sub is even called "Fuck AI". And this thread is talking about AI in general.
Machine Learning is a subset of AI and always has been. LLMs, in turn, are a subset of Machine Learning. You are trying to split hairs, or at least pull a "No true Scotsman" on the above post.
My bad for not seeing the sub's name before commenting. My points still stand, though.
There's machine learning, and then there's "machine learning". Either way, pattern matching and statistics have nothing to do with intelligence beyond the pattern-matching logic itself. Only morons call LLMs "AI". A simple rule like "if value > threshold then doSomething" is more AI than an LLM, because there's actual logic there. An LLM has no such logic behind its word prediction, but thanks to statistics it is able to fool many people (including myself, depending on the context) into believing it is intelligent. That makes it dangerous, but not AI.
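The contrast this comment draws can be sketched in a few lines (a hypothetical toy, not anything from the thread): the rule carries explicit, inspectable logic, while the "model" only counts which word tends to follow which.

```python
from collections import Counter, defaultdict

# Hand-written rule: explicit logic that a human reasoned about.
def alarm(value, threshold=100):
    # "if value > threshold then doSomething"
    return "doSomething" if value > threshold else "idle"

# Toy "language model": no logic at all, just word-pair frequency counts.
corpus = "the cat sat on the mat the cat ate".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    # Returns the most frequent follower seen in the corpus;
    # there is no reasoning behind the choice, only statistics.
    return counts[word].most_common(1)[0][0]

print(alarm(150))           # -> doSomething (explicit logic)
print(predict_next("the"))  # -> cat (most common bigram)
```

Scaled up by many orders of magnitude, the second half is the gist of the "word prediction without logic" claim above.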
ML didn't aggressively claim the name AI as a buzzword to scam massive investment in trash. Someone talking about ML calls it ML.
Someone talking about "AI" is almost certainly not referring to ML.
Companies have always simplified smart things and called it AI. AI is hotter than ever now, not only LLMs.
And again: ML is a subset of AI, and LLMs are a subset of ML. By these definitions, everything is AI. Look up the definition of AI: it's just a collection of techniques for making computers do "smarter" things. That includes all of the above, e.g. "if this then that", but also more advanced mathematics, like statistical methods and ML. An LLM is one of those statistical models.
It doesn't matter how similar the underlying math is. LLMs and ML are wildly different in every way that matters.
ML takes a specific data set, in one specific problem space, to model a specific problem in that space. It is inherently a limited application, because that's what the math can do. It finds patterns better than our brains do. It doesn't reason. ML works.
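The "one data set, one problem space" idea can be sketched with a toy example (hypothetical; `fit_threshold` and the data are made up for illustration): the "learning" is just a search for the cut point that best fits one specific labeled data set, nothing more.

```python
# Toy ML: fit one model to one specific, well-scoped problem.
# Here: learn the threshold that separates two labeled groups of readings.
data = [(5, 0), (12, 0), (18, 0), (30, 1), (41, 1), (55, 1)]  # (value, label)

def fit_threshold(samples):
    # Try each observed value as a candidate cut point and keep the one
    # with the fewest misclassified samples. No reasoning, just search.
    best_t, best_err = None, len(samples) + 1
    for t, _ in samples:
        err = sum((v > t) != bool(y) for v, y in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t = fit_threshold(data)
print(t)  # -> 18: the cut that separates the 0-labeled from the 1-labeled values
```

The fitted model is only meaningful for this one problem; feeding it data from a different domain would produce a confident but useless cut point, which is the limitation described above.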
LLMs are taking a broad data set, that's primarily junk, and trying to solve far more complicated problems, generally, without any tools to do so. LLMs do not work. They confabulate.
ML has been used heavily for a long time (because it's not junk) and companies have never made a point of calling it AI. This AI bubble is all about the dumpster fire that is LLMs being wildly overused. Companies selling "AI" to investors aren't doing tried and true ML.
Yea, this bubble is mostly LLMs, but also deepfakes and other generative image algorithms. They are all ML. LLMs have some fame because people can't seem to realise that they're crap. They definitely pass the Turing test while still being pretty much useless.
There are many other useless ML algorithms. Just because you don't like something doesn't mean it doesn't belong. ML has some good stuff and some bad stuff. The statement "ML works" doesn't mean anything. It's like saying "math works".
There have been many AI bubbles in the past, as well as slumps; look up the term "AI winter". Most AI algorithms turn out not to really work except in a few niche applications. You are probably referring to those few when you say "ML works". Most AI projects fail, but some prevail. This goes for all tech, though. So... tech works.
What Microsoft is doing is casting a wide net to see if they hit one of the few actually good applications for LLMs. Most will fail, but there might be one or two really successful products. Good for them that they have the kind of capital to just haphazardly try new features everywhere.
No, they're not "all ML". ML is the whole package, not one part of the algorithm.
Obviously, if you apply any tech badly, it isn't magic. ML does what it's intended to do: find the best model to approximate a specific phenomenon. When it's applied correctly to an appropriately scoped problem, it does a good job.
LLMs do not do a good job at anything but telling you what language looks like, and all the investment is people trying to apply them to things they fundamentally cannot do. They are not capable of anything that resembles reasoning in any way, and that's how the scam companies are pretending to use them.
They are all ML. I don't know how to convince you of this so I give up. Bye. I have a Master's degree in Machine Learning, btw.
No, they absolutely are not. You should go get your money back, because you very clearly don't know what you're talking about.
Machine learning, by definition, targets a single problem space. Using similar techniques to shove any and all data at an algorithm and take whatever dogshit gets spit out is categorically not the same thing.