this post was submitted on 12 Jul 2023
274 points (97.6% liked)

Technology

Users of OpenAI's GPT-4 are complaining that the AI model is performing worse lately. Industry insiders say a redesign of GPT-4 could be to blame.

top 50 comments
[–] nbailey@lemmy.ca 87 points 1 year ago* (last edited 1 year ago) (6 children)

The model has become inbred because it’s now impossible to scrape the web without AI content getting ingested, which is full of “hallucinations” and other weird artifacts. The last opportunity to get “uncontaminated” training data was sometime in mid 2022.

Not to say that it’s causing this particular problem, but this issue will emerge eventually. Garbage in = garbage out. Eventually GPT-19 will grow a mighty Habsburg chin.
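The "inbreeding" idea can be sketched as a toy resampling loop (a hypothetical illustration, not anything OpenAI actually does): each generation of the model is trained only on samples of the previous generation's output. Because a token type that drops out of one generation's sample can never reappear in the next, diversity only ratchets downward, much like genetic drift.

```python
import random

def next_generation(corpus, n, rng):
    """One train-and-sample step: the next corpus is drawn only from the
    empirical distribution of the previous one (bootstrap resampling)."""
    return rng.choices(corpus, k=n)

rng = random.Random(0)

# Generation 0: "human" data with 10 equally common token types.
corpus = [t for t in range(10) for _ in range(10)]  # 100 tokens

for gen in range(500):
    corpus = next_generation(corpus, n=100, rng=rng)

# Support can only shrink: once a token type is absent from a sample,
# no later generation can ever produce it again.
print("surviving token types:", sorted(set(corpus)))
```

After a few hundred generations, only a handful of the original ten token types typically survive, which is the "Habsburg chin" in miniature.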

[–] jantin@lemmy.world 28 points 1 year ago* (last edited 1 year ago) (3 children)

Maybe not yet, but...

  • Spez will turn Reddit into a bot farm and sell this as training data
  • Musk turns Twitter into a bigoted cesspool and will sell this as training data, which will subsequently be flagged for low quality (also: a botfarm)
  • Threads is a corporate ad dashboard (and we already know how easy it is to GPT copy) and Zuck will sell this as training data
  • Facebook is either dead or only good for boomers and Poles
  • blogs are dead
  • Fediverse is out there waiting to be scraped but possibly too small to sustain a big model

We'te getting there, hopefully.

[–] cyberpunk007@lemmy.world 7 points 1 year ago (1 children)
[–] jantin@lemmy.world 8 points 1 year ago (1 children)
[–] damnYouSun@sh.itjust.works 10 points 1 year ago

Also We'te, which I believe is a Klingon name.

[–] minorninth@lemmy.world 4 points 1 year ago

That hasn't happened yet. Most likely they quantized GPT-4 more. It's still based on the same training data.
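For context, a minimal sketch of what post-training quantization does (a toy example, not OpenAI's actual pipeline): each weight is snapped to the nearest step on an int8 grid, which shrinks memory and speeds up serving at the cost of small, bounded rounding errors on every weight.

```python
import random

def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto the integers [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the int8 codes back to floats."""
    return [v * scale for v in q]

rng = random.Random(1)
weights = [rng.gauss(0.0, 0.02) for _ in range(1000)]

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight is off by at most half a quantization step; billions of
# such small errors can plausibly nudge output quality.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"max reconstruction error: {max_err:.6f}  (step = {scale:.6f})")
```

The per-weight error is always at most `scale / 2`, which is why light quantization is usually invisible and aggressive quantization is not.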

[–] monerobull@monero.town 33 points 1 year ago* (last edited 1 year ago) (1 children)

The lobotomies will continue. Free models will keep getting better.

[–] HelloHotel@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

The ChatGPT people are really paranoid. GPT-3 is so good at not hallucinating that it often can't, even when it needs to in order to accomplish a task, for fear that the AI will confidently give the wrong answer.

[–] randon31415@lemmy.world 27 points 1 year ago (1 children)

Not the first time OpenAI has done this. DALLE2 used to be the best AI art program in the world. Then OpenAI decided that they didn't want to get sued by celebrities, so they made it so that if a face came out that resembled a celebrity, it would be distorted. But every face kind of looks like someone famous. Ta da! Now DALLE2 can't do faces.

Want a crane shot aerial image of a teen couple in a Corvette driving off into the sunset? Well, you are now banned for life from the DALLE2 service, because DALLE2 produced an image of a 'shot teen' and that violates its terms of service.

[–] Slacking@sh.itjust.works 4 points 1 year ago (1 children)

Dalle2 was always kind of shit tbh.

[–] randon31415@lemmy.world 5 points 1 year ago

Dalle2 was great when it was free and Stable Diffusion didn't exist. I don't see the logic of: "Someone made a free version. Let's make the program worse and charge money for it!"

[–] hoshikarakitaridia@sh.itjust.works 26 points 1 year ago (1 children)

The only way I can see this dumbing down happening is by fumbling with the model. So that's the one thing we can be sure of: the AI has most definitely been changed while publicly staying "ChatGPT 4". I assume they are either using clipping or token limitations to split the server load but fucking up the result in the process, or they are purposely dumbing it down to capitalise on it later by introducing other pay models, like people already mentioned.

Either way, they are shooting themselves in the foot, because a bunch of people will unsubscribe, either out of spite for the change or because it's just not worth it to them anymore.

[–] Donjuanme@lemmy.world 25 points 1 year ago

AI taking a running leap at enshittification.

[–] balder1991@lemmy.world 20 points 1 year ago (1 children)

Some people have been saying that since the beginning while some haven’t noticed this “decline”. It seems very subjective.

[–] tdawg@lemmy.world 17 points 1 year ago

Honestly, as a daily user I think it's a combination of it getting worse at understanding vague prompts and people bumping up against edge cases more. I would suspect the former is due to things like prompt hardening, but I can only speculate; the latter isn't hard to imagine just from frequent use.

[–] zikk_transport2@lemmy.world 9 points 1 year ago (1 children)
[–] mexicancartel@lemmy.dbzer0.com 15 points 1 year ago (2 children)

You mean "I was right"* or "I wrote"*?

[–] Dicska@lemmy.world 13 points 1 year ago

No no, he used to work as a wright. Built ships and shit.

[–] cybersandwich@lemmy.world 9 points 1 year ago

You know how we have pre-bomb steel? We'll have pre-GPT data sets.

Yeah, when I first started using GPT-4, I didn't notice any hallucinations. Now I'm getting them all the time. Disappointing.

Just like most people after they achieve success.
