this post was submitted on 27 Jun 2023
172 points (100.0% liked)

For me, it's that a truck with a cab longer than its bed is not a truck, but an SUV with an overgrown bumper.

[–] CylustheVirus@beehaw.org 16 points 1 year ago (6 children)

Large Language Models and other related algorithms are not AI, and no amount of marketing will convince me otherwise. As a result, I refuse to call them AI when talking to people about them.

[–] DaleGribble88@programming.dev 3 points 1 year ago

As someone with published papers on machine learning: LLMs are artificially intelligent systems, at least according to the agreed-upon industry and academic definitions. I don't really care about your headcanon definition; I just want to be clear for anyone else who comes across this comment and doesn't know otherwise.

[–] tool@r.rosettast0ned.com 3 points 1 year ago (1 children)

They are AI though. They're just not Artificial General Intelligence.

[–] Kaldo@beehaw.org 0 points 1 year ago (1 children)

My definition of AI comes from books and media: unless it exhibits actual intelligence, it is not AI. Building sensible sentences from large amounts of data, while not understanding what it is actually saying or whether it's correct or consistent, does not make an intelligence.

[–] Viktorian@beehaw.org 1 points 1 year ago (1 children)

But it does understand it, since it's able to answer arbitrary questions, no?

[–] Kaldo@beehaw.org 0 points 1 year ago* (last edited 1 year ago) (1 children)

Nope, it's only matching the prompt with the most likely answer from its training set. Remember in the early days, when people asked it slightly tweaked riddles and it got them wrong? It would spew out something that sounded like the original answer but was completely wrong in the new context. Or how it made up nonexistent court cases for that lawyer who tried to use it without checking whether they were real?

LLMs are just guessing the answer based on the millions of similar answers they were trained on. An LLM is a language syntax generator; it has no clue what it is actually saying. They are extremely advanced and getting better at hiding their flaws, but at their core they are not actual intelligence.
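
To make the "guessing the most likely continuation" point concrete, here's a minimal toy sketch. The bigram table and names are invented for illustration and are vastly simpler than a real LLM, but the generation loop has the same shape: pick a statistically likely next token, append it, repeat. No meaning is involved at any step.

```python
# Toy sketch of autoregressive "most likely next token" generation.
# The counts below are made up; a real LLM learns billions of
# parameters instead of a lookup table, but generation still works
# by repeatedly choosing a likely next token.
from collections import Counter

# Hypothetical counts of which token followed which in "training data".
bigram_counts = {
    "the": Counter({"cat": 3, "dog": 2}),
    "cat": Counter({"sat": 4, "ran": 1}),
    "sat": Counter({"down": 5}),
}

def generate(token: str, max_len: int = 5) -> list[str]:
    out = [token]
    for _ in range(max_len):
        followers = bigram_counts.get(token)
        if not followers:
            break  # no statistics for this token: the "model" is stuck
        # Greedily pick the most frequent follower. This is pure
        # pattern-matching on the data, not understanding.
        token = followers.most_common(1)[0][0]
        out.append(token)
    return out

print(generate("the"))  # -> ['the', 'cat', 'sat', 'down']
```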

[–] Viktorian@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

I know this; I've worked on LLMs and other neural networks, so I was wondering what kind of difference you could make out. Humans do the same thing, just with more neurons and more sophisticated training modes, activation mechanisms, and propagation patterns.

So what I'm saying is that you can't tie intelligence to the fundamental mechanism, because it's the same; humans are just more developed. And maturity, on the other hand, is a highly subjective and arbitrary criterion: when is a system mature enough to be considered intelligent?

[–] Longtimelerker@beehaw.org 2 points 1 year ago (1 children)

Can you explain how your understanding of AI differs from LLMs?

[–] CylustheVirus@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

Something with a mind. The term floating around now is "artificial general intelligence." My primary objection is that a giant pile of poorly understood machine learning, trained on garbage scraped from social media, bears no resemblance to a thinking mind, and calling it "AI" makes the term practically useless. Where do we draw the line between a complex algorithm and an "AI"? What makes something an "AI" rather than just an algorithm?

[–] orphiebaby@lemm.ee 1 points 8 months ago

Thanks, been arguing this for ages.

[–] catchy_name@feddit.it 1 points 8 months ago

I recently saw another lemming call LLMs "spicy autocomplete" instead of AI, which seemed apt: calling them AI, while technically correct, leads some people to think that an LLM is intelligent. I plan to use that terminology.

[–] KeavesSharpi@lemmy.ml 0 points 1 year ago (1 children)

What do you say about LLMs being better at diagnosing diseases than real doctors? It may not be intelligence, but it's more than simply regurgitating information.

[–] bloodfart@lemmy.ml 1 points 1 year ago

You should know that the article that headline comes from glosses over the multiple-choice nature of the data.

ChatGPT didn't perform an examination and get the diagnosis right; it answered multiple-choice questions correctly more often than MDs did.