[–] KillingTimeItself@lemmy.dbzer0.com 3 points 7 months ago (1 children)

It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

yeah and so are humans, so i mean, shit happens. Even then it'd likely be more accurate than a human, just based on the fact that it knows more subjects than any given human, or even all humans alive combined, because its knowledge is based on the written works of the entirety of humanity, theoretically.

A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it needed to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) also propagates at the speed of light.

well yeah, if we're defining the ultimate truth as something that propagates through the universe at the highest known speed possible, that would be how that works. Since it's likely a device acting of its own accord, and/or responsive to humans, it likely wouldn't matter, as it would just wait a few seconds anyway.
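As a rough sanity check on the "hundreds of milliseconds" figure mentioned above, here's a back-of-the-envelope sketch in Python; the Earth-circumference and fibre refractive-index values are rounded assumptions, not exact measurements:

```python
# Rough sanity check of the "hundreds of milliseconds" fibre round-trip figure.
# All constants are rounded, assumed values, not measurements.

C_VACUUM_KM_S = 299_792            # speed of light in vacuum, km/s
FIBRE_REFRACTIVE_INDEX = 1.47      # typical for silica optical fibre (assumed)
EARTH_CIRCUMFERENCE_KM = 40_075    # equatorial circumference, rounded

speed_in_fibre_km_s = C_VACUUM_KM_S / FIBRE_REFRACTIVE_INDEX  # ~204,000 km/s
one_loop_ms = EARTH_CIRCUMFERENCE_KM / speed_in_fibre_km_s * 1000

print(f"One loop around the globe in fibre: ~{one_loop_ms:.0f} ms")
# -> roughly 200 ms before routing detours and switching delays,
#    so real-world round trips in the hundreds of milliseconds are plausible.
```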

The dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it will never happen. And if it's incomplete, it has to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong, which we would call a hallucination.

at that scale yes, but at this scale, with our current LLM technology, which was what i was talking about specifically, it wouldn't matter. But even at that scale i don't think it would classify as a hallucination, because a hallucination is a very specific type of being wrong: it's literally pulling something out of thin air. A theoretical general intelligence AI wouldn't be pulling shit out of thin air; at best it would elaborate on what it already knows, which might be everything, or nothing, depending on the topic. But it shouldn't just make something up out of thin air. It could very well be wrong about something, but that's not likely to be a hallucination.

[–] ClamDrinker@lemmy.world 1 points 7 months ago* (last edited 7 months ago)

Yes, it would be much better at mitigating it and would beat all humans at truth accuracy in general. And truths which can be easily proven individually and/or remain unchanged forever can basically be right 100% of the time. But not all truths are that straightforward.

What I mentioned can't really be unlinked from the issue if you want to solve it completely. Have you ever found out later that something you told someone else as fact turned out not to be so? Essentially, you 'hallucinated' a truth that never existed, but you were confident enough in its correctness to share and spread it. It's how we get myths, popular belief, and folklore.

For those other truths, we simply take as true whatever has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth something that is not. Avoiding that would mean having to be pretty much everywhere at once to interpret the information straight from the source. But then things like how fast it can process all of that come into play. Without making guesses about what's going to happen, you basically can't function in reality.