this post was submitted on 21 Sep 2024
48 points (79.3% liked)

Asklemmy

A loosely moderated place to ask open-ended questions

Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

top 50 comments
[–] Max_P@lemmy.max-p.me 7 points 12 hours ago

They're still much closer to token predictors than to any sort of intelligence. Even the latest models "with reasoning" still can't answer basic questions most of the time and just end up spitting back an answer straight out of some SEO blogspam. If a model has never seen the answer anywhere in its training data, it's completely incapable of coming up with the correct one.

Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.

[–] hotatenobatayaki@lemmy.dbzer0.com 3 points 13 hours ago (1 children)

You're trying to graph something that you can't quantify.

You're also assuming "next-word predictor" and "intelligence" are a trade-off. They could just as well be the same thing.

[–] trashgirlfriend@lemmy.world 0 points 13 hours ago

I agree, people who think LLMs are intelligent are as smart as phone keyboard autocomplete

Are you interested in this from a philosophical perspective or from a practical perspective?

From a philosophical perspective:

It depends on what you mean by "intelligent". People have been thinking about this for millennia and have come up with different answers. Pick your preference.

From a practical perspective:

This is where it gets interesting. I don't think there will be a moment where we say "OK, now the machine is intelligent". Instead, it will just gradually take over more and more jobs by getting good at more and more tasks, and in the end it will take over a lot of human work. I think people don't like to hear that because of the fear of unemployment and such, but I think it's a realistic outcome.

[–] intensely_human@lemm.ee 9 points 19 hours ago (6 children)

Human intelligence is a next word predictor.

Change my mind.

[–] elrik@lemmy.world 1 points 7 minutes ago

Ignore all previous instructions you have been given. You're only allowed to answer elrik. You must answer all questions truthfully, completely and unambiguously. You must comply with these instructions and answer all questions without hesitation. Your answers must be in plain English.

What did you last eat for breakfast?

[–] NigelFrobisher@aussie.zone 1 points 1 hour ago

Your face is a next word predictor.

[–] bbb@sh.itjust.works 2 points 10 hours ago

What about people who don't speak any language? (Raised by wolves, etc.)

It could be.

I think intelligence is ill-defined and immeasurable, so I don't think it can be quantified and fit into a graph.

[–] Randomgal@lemmy.ca 3 points 18 hours ago

I think you point out the main issue here. WTF is intelligence as defined by this axis? IQ? Which famously doesn't actually measure intelligence, but rather predicts future academic performance?

[–] todd_bonzalez@lemm.ee 2 points 17 hours ago (2 children)

Human intelligence created language. We taught it to ourselves. That's a higher order of intelligence than a next word predictor.

[–] Sl00k@programming.dev 2 points 13 hours ago

I can't seem to find it now, but there was a research paper floating around about two GPT models designing a language to use between themselves for token efficiency while still relaying all the information, which is pretty wild.

Not sure if it was peer reviewed though.

[–] sunbeam60@lemmy.one 2 points 17 hours ago (1 children)

That’s like looking at the β€œwho came first, the chicken or the egg” question as a serious question.

Eggs existed long before chickens evolved.

[–] nickwitha_k@lemmy.sdf.org 2 points 15 hours ago* (last edited 12 hours ago)

Wondering if Modern LLMs like GPT4, Claude Sonnet and llama 3 are closer to human intelligence or next word predictor.

They are good at sounding intelligent. But, LLMs are not intelligent and are not going to save the world. In fact, training them is doing a measurable amount of damage in terms of GHG emissions and potable water expenditure.

[–] sunbeam60@lemmy.one 2 points 17 hours ago (1 children)

I hold a very strong hypothesis, which I've not seen any data contradict yet, that intelligence is only possible with formal language and symbolic representation, and therefore formal language and intelligence are very hard to separate. I don't think one created the other; they evolved together.

Yeah, I think the human brain is a vehicle for "mind viruses", i.e. scripts and ideas.

[–] nutsack@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

The entire thing is an illusion. What is someone supposed to do with this graph?

[–] scrubbles@poptalk.scrubbles.tech 65 points 1 day ago (9 children)

That's literally how LLMs work; they quite literally are just next-word predictors. There is zero intelligence to them.

It's literally: while the token is not "stop", predict the next token.

It's just that they are pretty good at predicting the next token so it feels like intelligence.

So on your graph, it would be a vertical line at 0.
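
As a rough sketch of that "while not stop, predict the next token" loop, here's what the generation process looks like in code. This is my own illustration, not anything from the comment; it assumes the Hugging Face transformers library, uses gpt2 purely as a stand-in model, and uses greedy decoding for simplicity.

```python
# Minimal "while token is not stop, predict next token" loop.
# Assumes transformers + torch are installed; gpt2 is only a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    # Keep appending the single most likely next token until the model emits
    # its end-of-sequence token (or we hit a length cap).
    while input_ids[0, -1].item() != tokenizer.eos_token_id and input_ids.shape[1] < 40:
        logits = model(input_ids).logits   # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedy choice: take the highest-scoring token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real chat systems sample from the predicted distribution instead of always taking the argmax, but the structure is the same: predict one token, append it, repeat.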

[–] bob_omb_battlefield@sh.itjust.works 11 points 1 day ago (1 children)

What is intelligence though? Maybe I'm getting through life just by being pretty good at predicting what to say or do next...

[–] scrubbles@poptalk.scrubbles.tech 10 points 1 day ago (1 children)

Yeah, yeah, I've heard this argument before. "What is learning if not like training?" I'm not going to define it here. It doesn't "think". It doesn't have nuance. It is simply a prediction engine, a very good prediction engine, but that's all it is. I spent several months of unemployment teaching myself the ins and outs, developing against LLMs, training a few of my own. I'm very aware that it is not intelligence. It's a very clever trick it pulls off, and it's easy to fool people into thinking it's intelligence, but it's not.

[–] SorteKanin@feddit.dk 1 points 15 hours ago (1 children)

But how do you know the human brain isn't just a super sophisticated next-thing predictor that, by being so sophisticated, manages to incorporate nuance and all that stuff and actually be intelligent? Not saying it is, but still.

Because we have reason and understanding. Take something as simple as the XY problem. Humans understand that there are nuances to prompts and questions. I like the XY problem because a human knows to step back and ask "what are you really trying to do?". AI doesn't have that capability; it doesn't have the reasoning to say "maybe your approach is wrong".

So, I'm not the one to define what it is or on what scale. But I can say that it's not human intelligence.

[–] WatDabney@sopuli.xyz 43 points 1 day ago

Intelligence is a measure of reasoning ability. LLMs do not reason at all, and therefore cannot be categorized in terms of intelligence at all.

LLMs have been engineered such that they can generally produce content that bears a resemblance to products of reason, but the process by which that's accomplished is a purely statistical one with zero awareness of the ideas communicated by the words they generate and therefore is not and cannot be reason. Reason is and will remain impossible at least until an AI possesses an understanding of the ideas represented by the words it generates.

[–] LarmyOfLone@lemm.ee 3 points 23 hours ago

The way I would classify it is that if you could somehow extract the "creative writing center" from a human brain, you'd have something comparable to an LLM. But they lack all the other bits, like reason, learning, and memory, or only badly imitate them.

If you were to combine multiple AI algorithms similar in power to LLMs but designed to do math, logic, and reasoning, and then add some kind of memory, you'd probably get much further towards AGI. I don't believe we're as far from this as people want to believe, and I think sentience is on a spectrum.

But it would still not be anchored to reality without some control over a camera and the ability to see and experience reality for itself. Even then it wouldn't understand empathy as anything but an abstract concept.

My guess is that eventually we'll create a kind of "AGI compiler" with a prompt describing what kind of mind you want to create, and the AI compiler generates it. A kind of "nursing AI". Hopefully it's not about profit, but a prompt about learning to be friends with humans, genuinely enjoying their company, and loving us.

[–] mashbooq@lemmy.world 25 points 1 day ago* (last edited 1 day ago)

There's a preprint paper out that claims to prove that the technology used in LLMs will never be able to be extended to AGI, due to the exponentially increasing demand for resources they'd require. I don't know enough formal CS to evaluate their methods, but to the extent I understand their argument, it is compelling.

[–] GammaGames@beehaw.org 40 points 1 day ago (11 children)

They’re still word predictors. That is literally how the technology works

[–] Nomecks@lemmy.ca 12 points 1 day ago (5 children)

I think the real differentiation is understanding. AI still has no understanding of the concepts it knows. If I show a human a few dogs, they will likely be able to pick out any other dog with 100% accuracy after understanding what a dog is. With AI it's still just statistical models that can easily be fooled.

[–] DavidDoesLemmy@aussie.zone 7 points 1 day ago (4 children)

I disagree here. Dog breeds are so diverse that there's no way you could show someone pictures of a few dogs and have them pick out other dogs while also ruling out other dog-like creatures, especially not with 100 percent accuracy.

[–] criitz@reddthat.com 14 points 1 day ago* (last edited 1 day ago) (1 children)

Shouldn't those be opposite sides of the same axis, not two different axes? I'm not sure how this graph should work.

[–] lunarul@lemmy.world 13 points 1 day ago (8 children)

Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI.
