this post was submitted on 03 Apr 2024
99 points (87.8% liked)

[–] abhibeckert@lemmy.world 37 points 7 months ago* (last edited 7 months ago) (20 children)

I don't think OpenAI should be offering ChatGPT 3.5 at all except via the API for niche uses where quality doesn't matter.

For human interaction, GPT-4 should be the minimum.

[–] kromem@lemmy.world 15 points 7 months ago (14 children)

Yeah, I've lost count of the articles and comments claiming "AI can't do X" where I immediately test it, see that the current models handle X with no issue, and then go back and spot the green ChatGPT icon or a comment about using the free version.

GPT-3.5 is a moron. The state-of-the-art models have come a long way since then.

[–] ReallyKinda@kbin.social 2 points 7 months ago (4 children)

I haven’t played around with them. Are the new models able to actually reason, rather than just being predictive text on steroids?

[–] realharo@lemm.ee 2 points 7 months ago* (last edited 7 months ago)

I gave GPT-4 a simple real-world question: how much alcohol by volume is there in a certain weight (I think 16 grams) of a 40% ABV drink (the rest being water)? It gave complete nonsense answers on some attempts and straight-up refused to answer on others.

So I guess it still comes down to how often things appear in the training data.

(the real answer is roughly 6.99 mL, weighing about 5.52 g)

After some follow-up prodding, it realized it was wrong and eventually provided a different answer (6.74 mL), which was also wrong. With more follow-ups or additional prompting tricks, it might eventually get there, but someone would have to first tell it that it's wrong.
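For reference, here's a minimal sketch of that arithmetic in Python, assuming an ethanol density of about 0.789 g/mL and ideal mixing (real ethanol-water mixtures contract slightly in volume, but this simplification matches the figures quoted above):

```python
# Alcohol volume in 16 g of a 40% ABV drink, assuming ideal mixing.
ETHANOL_DENSITY = 0.789  # g/mL
WATER_DENSITY = 1.0      # g/mL

abv = 0.40         # 40% alcohol by volume
drink_mass = 16.0  # grams

# Mixture density, weighted by volume fractions (no volume contraction).
drink_density = abv * ETHANOL_DENSITY + (1 - abv) * WATER_DENSITY  # ~0.916 g/mL

drink_volume = drink_mass / drink_density        # ~17.47 mL
alcohol_volume = abv * drink_volume              # ~6.99 mL
alcohol_mass = alcohol_volume * ETHANOL_DENSITY  # ~5.52 g

print(f"{alcohol_volume:.2f} mL of ethanol, weighing {alcohol_mass:.2f} g")
```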
