this post was submitted on 19 Jul 2023
183 points (84.0% liked)

James Cameron on AI: "I warned you guys in 1984 and you didn't listen"

you are viewing a single comment's thread
[–] orphiebaby@lemmy.world 135 points 1 year ago* (last edited 1 year ago) (12 children)

It's getting old telling people this, but... the AI that we have right now? Isn't even really AI. It's certainly not anything like in the movies. It's just pattern-recognition algorithms. It doesn't know or understand anything, and it has no context. It can't tell the difference between a truth and a lie, and it doesn't know what a finger is. It just paints amalgamations of things it's already seen, or throws together things that seem common to it— with no filter or sense of "that can't be correct".

I'm not saying there's nothing to be afraid of concerning today's "AI", but it's not comparable to movie/book AI.

Edit: The replies annoy me. It's just the same thing all over again— everything I said seems to have gone right over most people's heads. If you don't know what today's "AI" is, then please stop making assumptions about what it is. Your imagination is way more interesting than what we actually have right now. This is why we should never have called what we have now "AI" in the first place— same reason we should never have called things "black holes". You take a misnomer and your imagination goes wild, and none of it is factual.

[–] eee@lemm.ee 33 points 1 year ago

THANK YOU. What we have today is amazing, but there's still a massive gulf to cross before we arrive at artificial general intelligence.

What we have today is the equivalent of a four-year-old given a whole bunch of physics equations and then being told "hey, can you come up with something that looks like this?" It has no understanding besides "I see squiggly shape in A and squiggly shape in B, so I'll copy squiggly shape onto C".

[–] Immersive_Matthew@sh.itjust.works 9 points 1 year ago (1 children)

I really think the only thing to be concerned of is human bad actors with AI and not AI. AI alignment will be significantly easier than human alignment as we are for sure not aligned and it is not even our nature to be aligned.

[–] PopShark@lemmy.world 2 points 1 year ago

I’ve had this same thought for decades now, ever since I first heard of AI-takeover sci-fi stuff as a kid. Bots just perform set functions. People in control of bots can create mayhem.

[–] raltoid@lemmy.world 8 points 1 year ago

The replies annoy me. It’s just the same thing all over again— everything I said seems to have gone right over most people’s heads.

Not at all.

They just don't like being told they're wrong and will attack you instead of learning something.

[–] jeffw@lemmy.world 8 points 1 year ago (1 children)

Strong AI vs weak AI.

We’re a far cry from real AI

[–] Homo_Stupidus@lemmy.world 5 points 1 year ago

Isn't that also referred to as Virtual Intelligence vs Artificial Intelligence? What we have now is just very well-trained VI. It's not AI because it only outputs variations of what it's been trained on using algorithms, right? Actual AI would be capable of generating information entirely distinct from any inputs.

[–] terminhell@lemmy.world 4 points 1 year ago

GAI (General Artificial Intelligence) is what most people jump to. And, for those wondering, that's the beginning of the end-game type. That's the kind that will understand context. The ability to 'think' on its own with little to no input from humans. What we have now is basically autocorrect on super steroids.
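The "autocorrect on super steroids" comparison can be made concrete. Here's a minimal toy sketch (illustrative only, nothing like how a real LLM is built) of a bigram predictor that picks the next word purely from co-occurrence counts, with zero understanding of meaning:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower, or None if the word was never seen
    # as a predecessor. Pure frequency lookup — no comprehension involved.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # → "cat", chosen because it's the most common follower
```

Real models use vastly larger contexts and learned weights instead of raw counts, but the core move is the same: predict what plausibly comes next, not what is true.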

[–] stooovie@lemmy.world -1 points 1 year ago (1 children)

True but that doesn't keep it from screwing a lot of things up.

[–] orphiebaby@lemmy.world 6 points 1 year ago (1 children)

I’m not saying there’s nothing to be afraid of concerning today’s “AI”, but it’s not comparable to movie/book AI.

[–] stooovie@lemmy.world -3 points 1 year ago* (last edited 1 year ago) (1 children)

Yes, sure. I meant things like employment, quality of output

[–] eee@lemm.ee 8 points 1 year ago (1 children)

Yes, sure. I meant things like employment, quality of output

That applies to... literally every invention in the world. Cars, automatic doors, rulers, calculators, you name it...

[–] stooovie@lemmy.world -4 points 1 year ago (1 children)

With a crucial difference: the inventors of all those knew how their inventions worked. The inventors of current AIs do NOT know the actual mechanism of how they work. Hence, the output is unpredictable.

[–] drekly@lemmy.world 4 points 1 year ago (2 children)

Lol could you provide a source where the people behind these LLMs say they don't know how it works?

Did they program it with their eyes closed?

[–] vrighter@discuss.tchncs.de 2 points 1 year ago

They program it to learn. They can tell you exactly how it learns, but not what it learned (there are some techniques that give small insights, but nothing close to the full picture).

Problem is, how it behaves depends on how it was programmed and on what it learned during training. Since what it learned is a black box, we cannot explain its behaviour.

[–] stooovie@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

Yes I can. example

As opposed to other technologies, nobody knows the internal structure. Input A does not necessarily produce output B.

Whether you like it or not is irrelevant.

[–] drekly@lemmy.world 2 points 1 year ago (2 children)

"Whether you like it or not is irrelevant."

That's a very hostile take.

I just think it's wild they wouldn't know how it works when they're the ones who created it. How do you program something that you don't understand?! It's crazy.

[–] BURN@lemmy.world 2 points 1 year ago

Basically, with neural networks you program the way it ingests data and the way it outputs data. Everything else in between is constantly updating statistical algorithms. Developers can look at those algorithms, but it’s extremely hard to map that back out into human-readable content.
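A minimal sketch of what that split looks like (a hypothetical toy network, not any real framework's API): the ingest and output code is explicit and readable, but the behaviour lives entirely in numeric weight arrays, which training would adjust automatically rather than a developer writing them by hand:

```python
import math
import random

random.seed(0)

# The developer writes the *shape* of the computation: 2 inputs in,
# 4 hidden units, 1 output out. That part is ordinary, readable code.
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(2)]  # input -> hidden
W2 = [random.gauss(0, 1) for _ in range(4)]                      # hidden -> output

def forward(x):
    # What the network actually *does* depends on the numbers in W1/W2.
    # Inspecting those numbers tells you almost nothing about behaviour.
    hidden = [math.tanh(sum(xi * W1[i][j] for i, xi in enumerate(x)))
              for j in range(4)]
    return sum(h * w for h, w in zip(hidden, W2))

print(forward([0.5, -1.0]))  # some number; *why* it's that number is opaque
```

Scale the weight arrays up to billions of entries and you get the "black box" problem: the code in and out is known, the learned middle is not.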

[–] stooovie@lemmy.world 0 points 1 year ago

It is, sorry. It was a reaction to the downvotes. But at this point I'm a bit allergic to the "it's the same as every other invention" argument. It's not, precisely for this reason. It's a bit like "the climate is always changing"— yes, but not within decades or centuries. These details are crucial.

[–] Einstein@lemmy.world -3 points 1 year ago (1 children)

Sounds like you described a baby.

[–] orphiebaby@lemmy.world 5 points 1 year ago

Yeah, I think there's a little bit more to consciousness and learning than that. Today's AI doesn't even recognize objects, it just paints patterns.

[–] ButtholeAnnihilator@lemmy.world -3 points 1 year ago (1 children)

Regardless of whether it's true AI or not (I understand it's just machine learning), Cameron's sentiment is still mostly true. The Terminator in the original film wasn't some digital being with true intelligence; it was just a machine designed with a single goal. There was no reasoning or planning really, just an algorithm that said "get weapons, kill Sarah Connor". It wasn't far off from a Boston Dynamics robot using machine learning to complete a task.

[–] orphiebaby@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

You don't understand. Our current AI? Doesn't know the difference between an object and a painting. Furthermore, everything it perceives is "normal and true". You give it bad data and suddenly it's broken. And "giving it bad data" is way easier than it sounds. A "functioning" AI (like a Terminator) requires the ability to "understand" and scrutinize— not just copy what others tell it without any context or understanding, and combine results.

[–] adeoxymus@lemmy.world -5 points 1 year ago (1 children)

That type of reductionism isn't really helpful. You can describe the human brain as also just being pattern-recognition algorithms. But doing that many times, at different levels, apparently gets you functional brains.

[–] wizardbeard@lemmy.dbzer0.com 1 points 1 year ago

But his statement isn't reductionism.