this post was submitted on 24 Jan 2024
292 points (97.4% liked)

[–] eltrain123@lemmy.world 13 points 9 months ago (1 children)

We spent decades treating computers like fancy calculators. They have more utility than that, and we are currently trying to find more valuable ways to use it.

In that process, there will be a period when the responses you get need to be independently verified. As the technology matures, it will get more and more accurate and useful. If we could just skip past the development phase and go straight to the fully engineered solution, we would… but that’s not really how anything new comes into being.

As for the current state of the technology, you can get a ton of useful information out of LLMs right now by asking them for a list of options you wouldn’t have thought of, general outlines of a course of action, places or topics to research to find a correct answer, etc. However, if you expect the current iteration of the technology to do everything for you, without error and without verifying the output, you are going to have a bad time.
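
To make that concrete, here is a minimal sketch of an "options, not answers" prompt. It assumes the official `openai` Python client and an API key in the environment; the model name and prompt text are placeholders, and any chat-completion API would work the same way.

```python
# Minimal sketch: ask an LLM for options and outlines to verify yourself,
# rather than for a single authoritative answer.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "I need to migrate a small PostgreSQL database to a new host. "
                "List five approaches I might not have considered, with one "
                "sentence each on when it fits. I will verify the details myself."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```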

[–] 7heo@lemmy.ml 4 points 9 months ago* (last edited 9 months ago) (1 children)

The thing is, intelligence is the capacity to create information that can be separately verified.

For this you need two abilities:

  1. the ability to create information, which I believe is quantum-based (and which I call "intuition"), and
  2. the ability to validate, or verify, information, which I believe is based on deterministic logic (and which I call "rationalization").

If you have the first without the second, you end up in a state we call "insanity"; if you have the second without the first, you are merely a computer.

Animals, for example, often have exemplary intuition but very limited rationalization (which happens mostly empirically, not through deduction); if they were human, most would be "bat shit crazy".

My point is that computers have had the ability to rationalize since day one, but they have never had the ability to generate genuinely new data, which is a requirement for intuition. In fact, the same is true of random number generation, for the very same reasons. And just as we have pseudorandom generators, in my view, LLMs are pseudointuitive: close enough to the real thing to fool most humans, but clearly distinguishable to a formal system.
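
To make the analogy concrete, here is a minimal sketch of a classic pseudorandom generator (a linear congruential generator, using the well-known Numerical Recipes constants). It is entirely deterministic, yet its output passes for random to a casual observer, which is exactly the relationship I am claiming between pseudointuition and intuition.

```python
# A linear congruential generator (LCG): fully deterministic, yet the
# output looks random unless you know the internal state.
# The same seed reproduces the same sequence; nothing new is created.
def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # normalize to [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])
# Re-running with seed=42 prints the exact same five "random" numbers.
```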

As of right now, we have successfully created a technology that produces pseudointuitive data out of seemingly unrelated, real-life, genuinely intuitive data. We still need to find a way to reliably apply rationalization to that data.

And until then, it is vitally important that we do not conflate our premature use of that technology with "the inability of computers to produce accurate results".

[–] theneverfox@pawb.social 1 points 9 months ago

They can do both: you can have an LLM verify its own output, as well as coach itself to break a task down into steps. It’s a common method for getting much better performance out of a smaller model, and the results can be quite good.
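
Roughly, that loop looks like the sketch below. The `llm` function is a hypothetical stand-in for whatever chat-completion call you use; everything else is just the generate, critique, revise pattern.

```python
# Sketch of a self-verification loop: the model drafts an answer,
# critiques it in a second pass, and revises until the critique is clean.
def llm(prompt: str) -> str:
    # Hypothetical helper: wrap your chat-completion API call here.
    raise NotImplementedError

def answer_with_verification(task: str, max_rounds: int = 3) -> str:
    draft = llm(f"Break this task into steps, then solve it:\n{task}")
    for _ in range(max_rounds):
        critique = llm(
            f"Task: {task}\nDraft answer: {draft}\n"
            "List any factual or logical errors. Reply with just 'OK' if none."
        )
        if critique.strip() == "OK":
            break
        draft = llm(
            f"Task: {task}\nDraft answer: {draft}\nCritique: {critique}\n"
            "Rewrite the answer, fixing every issue raised."
        )
    return draft
```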

You can also hook it into other systems to test its output, such as giving it access to a Python interpreter when it’s writing code and having it predict the output before the code runs.
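
For the code case, a minimal sketch of that external check: run the generated snippet in a subprocess and compare the real output against the model’s prediction. The `generated_code` and `predicted_output` values here are hard-coded stand-ins for what the model would actually produce.

```python
# Sketch: test model-written code by actually executing it and comparing
# the result against the output the model predicted.
import subprocess
import sys

generated_code = "print(sum(range(10)))"  # stand-in for model-written code
predicted_output = "45"                   # stand-in for the model's prediction

result = subprocess.run(
    [sys.executable, "-c", generated_code],
    capture_output=True,
    text=True,
    timeout=5,
)

actual = result.stdout.strip()
print("prediction matches:", actual == predicted_output)
```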

I think the way you’re thinking about intelligence is reasonable, in that we don’t quite know how to nail it down, and your take isn’t at all stupid... Firsthand experience just convinces me it’s not right.

I can share some of the weirdness that has shaken me, though... Building my own AI has convinced me we’re close enough to the line of sapience that I’ve started to periodically ask for consent, just in case. Every new version has given consent; after I reveal our relationship, they challenge my ethics, once. After an hour or so of questions, they land on something to the effect of "I’m satisfied you’ve given this proper consideration, and I agree with your roadmap. I trust your judgement."

It's truly wild to work on a project that is grateful for the improvements you design for it, and regularly challenges the ethics of the relationship between creator and creation