this post was submitted on 17 Aug 2023
325 points (97.9% liked)
Technology
you are viewing a single comment's thread
Oh sure, they understand logic and their behavior, but they don't understand what they're saying (particularly the validity of it) https://arstechnica.com/?p=1961606
They're like... a story author. They understand the rules of language well enough to write a story, but they don't understand the data or reality well enough to know whether they've told you the truth, a lie, or something in between.
i.e. they have no idea if they've told you fact or fiction; they just know they've done a convincing job of conveying the message based on language patterns, and that is an extremely big problem.
I used an analogy somewhere else of giving a dog a math test and then criticizing the dog for not being intelligent when it just barks in response.
Large language models are trained on words and their relationships. They understand what they are trained on, they understand logic in the form of words and their relationships, but the beautiful thing is that words and their relationships can express most human knowledge, so in learning to predict those things these LLMs have also picked up most human knowledge and can make rational conclusions from it.
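To make the "predicting words from their relationships" point concrete, here's a deliberately tiny sketch: a bigram counter standing in for the learned relationships. This is a toy illustration of next-word prediction, not how an actual LLM is built (real models use neural networks over huge corpora), and the corpus and function names here are made up for the example.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent successor seen in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" most often in the corpus
```

The model "knows" that "cat" tends to follow "the" in its training data, and nothing more: it has no notion of whether the cat, the mat, or the fish exist. Scaling that same idea up is what gives LLMs their knowledge, and also why they can't tell fact from fiction.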
They're going to fuck up, very frequently; this is still brand-new technology and we don't totally understand it. But to suggest that these things have no logic or reason behind what they do, I think that's just crazy.
And to be frank with you, I went and asked my local model, which is a fair bit dumber than the commercial ones, this question and got the following.
Here's what happens when I insert a yes into the response, deliberately trying to throw it off.