this post was submitted on 06 Aug 2023
1765 points (98.5% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] Poob@lemmy.ca 46 points 1 year ago (7 children)

None of it is even AI. Predicting desired text output isn't intelligence.

[–] freeman@lemmy.pub 27 points 1 year ago (3 children)

At this point I just interpret "AI" to mean "we have lots of SELECT statements and INNER JOINs."

[–] eestileib@sh.itjust.works 6 points 1 year ago

There are also threshold functions and gradient calculations.
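A minimal sketch of the two ingredients named above, a threshold (step) activation and a gradient calculation (here estimated numerically on a toy loss; all names and values are illustrative):

```python
def threshold(x, cutoff=0.0):
    """Step activation: fire (1) if the input clears the cutoff, else 0."""
    return 1 if x >= cutoff else 0

def numerical_gradient(f, x, eps=1e-6):
    """Central-difference estimate of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Gradient descent on a toy loss f(w) = (w - 3)^2, whose minimum is at w = 3.
loss = lambda w: (w - 3.0) ** 2
w = 0.0
for _ in range(200):
    w -= 0.1 * numerical_gradient(loss, w)

print(threshold(0.5))   # 1
print(round(w, 2))      # 3.0
```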

[–] dylanTheDeveloper@lemmy.world 1 points 1 year ago (1 children)

Pick a number from 1 to 2^63 - 1 ≈ 9.2 x 10^18 at random. See, AI is easy /s

[–] freeman@lemmy.pub 1 points 1 year ago

echo $RANDOM, the OG AI
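A tongue-in-cheek Bash sketch of the quip above (`$RANDOM` is Bash's built-in; the bit-gluing is purely illustrative):

```shell
#!/usr/bin/env bash
# $RANDOM gives a 15-bit integer (0..32767), far short of the
# 2^63 - 1 range joked about above, so glue five draws together.
echo "$RANDOM"
big=$(( (RANDOM << 48) ^ (RANDOM << 33) ^ (RANDOM << 18) ^ (RANDOM << 3) ^ RANDOM ))
echo "$big"
```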

[–] drekly@lemmy.world 14 points 1 year ago (2 children)

I do agree, but on the other hand...

What does your brain do while reading and writing, if not predict patterns in text that seem correct and relevant based on the data you have seen in the past?
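The "predicting patterns in text" idea can be sketched with a toy bigram model; this is nowhere near how an LLM works, just the flavor of next-word prediction from past data (corpus and names are made up for illustration):

```python
from collections import Counter, defaultdict

# Count which word follows which in some previously seen text.
corpus = "the cat sat on the mat the cat ate the fish".split()
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict(word):
    """Most frequent word observed after `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice after "the", vs. once each for "mat" and "fish")
```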

[–] fidodo@lemm.ee 15 points 1 year ago

I've seen this argument so many times and it makes zero sense to me. I don't think by predicting the next word; I think by imagining things, both physical and metaphysical, basically running a world simulation in my head. I don't think "I just said 'predicting', what's the next likely word to come after it?" That's not even remotely similar to how I think at all.

[–] cosmicboi@lemmy.world 2 points 1 year ago

Inject personal biases :)

[–] Noughmad@programming.dev 10 points 1 year ago (1 children)

AI is whatever machines can't do yet.

Playing chess was the sign of AI until a computer beat Kasparov; then it suddenly wasn't AI anymore. Then it was Go, then classifying images, then having a conversation, but whenever each of these was achieved, it stopped being AI and became "machine learning" or "a model".

[–] dan@upvote.au 5 points 1 year ago

Machine learning is still AI. Specifically, it's a subset of AI.

[–] HankMardukas@lemmy.world 2 points 1 year ago (2 children)

Always remember that it will only get better, never worse.

They said "computers will never do x" and now x is assumed.

[–] Poob@lemmy.ca 12 points 1 year ago (1 children)

There's a difference between "this is AI that could be better!" and "this could one day turn into AI."

Everyone is calling their algorithms AI because it's a buzzword that trends well.

[–] MajorHavoc@lemmy.world 1 points 1 year ago

It usually also gets worse while it gets better.

But I take your point. This stuff will continue to advance.

But the important argument today isn't over what it can be, it's an attempt to clarify for confused people.

While the current LLMs are an important and exciting step, they're also largely just a math trick, and they are not a sign that thinking machines are almost here.

Some people are being fooled into thinking general artificial intelligence has already arrived.

If we give these unthinking LLMs human rights today, we expand corporate control over us all.

These LLMs can't yet take a useful ethical stand, so we need to not rely on them that way if we don't want things to go really badly.

[–] HardlightCereal@lemmy.world 2 points 1 year ago (3 children)

Language is a method for encoding human thought, so mastery of language is mastery of human thought. The problem is that predictive text heuristics don't have mastery of language, and they cannot predict the desired output.

[–] cloudy1999@sh.itjust.works 2 points 1 year ago* (last edited 1 year ago) (1 children)

I thought this was an insightful comment. Language is a kind of 'view' (in the model-view-controller sense) of intelligence: it signifies a thought or meme. But language is imprecise and flawed, a poor representation, since it can be misinterpreted or distorted. I wonder if language-based AIs are inherently flawed, too.

Edit: grammar, ironically

[–] HardlightCereal@lemmy.world 1 points 1 year ago

Language-based AIs will always carry the biases of the language they speak. I am certain a properly trained bilingual AI would be smarter than a monolingual AI of the same skill level.

[–] MajorHavoc@lemmy.world 1 points 1 year ago (1 children)

"Mastery of language is mastery of human thought." is easy to prove false.

The current batch of AIs is an excellent data point. These things are very good at language, and they still can't even count.

The average celebrity provides evidence that it is false. People who excel at science often suck at talking, and vice-versa.

We didn't talk our way to the moon.

Even when these LLMs master language, it's not evidence that they're doing any actual thinking, yet.

[–] HardlightCereal@lemmy.world 1 points 1 year ago

I think the current batch of AIs and the Kardashians are bad at using language

[–] fidodo@lemm.ee 1 points 1 year ago

Depends on your definition of AI, and everyone's definition is different.