[–] agressivelyPassive@feddit.de 3 points 7 months ago (1 children)

That may be because we've reached the limits of what the current architecture of models can achieve on the current architecture of GPUs.

To create significantly better models without a fundamentally new approach, you have to increase the model size. And if all the accelerators available to you only offer, say, 24 GB of memory, you can't grow indefinitely. At least not within a reasonable timeframe.
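
As a rough, hypothetical sketch (the parameter counts and bytes-per-parameter figures below are illustrative assumptions, not numbers from this thread), here's how model size and precision translate into accelerator memory:

```python
def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory to hold just the weights, in gigabytes
    (ignores activations, optimizer state and KV cache)."""
    return num_params * bytes_per_param / 1e9

GPU_MEMORY_GB = 24  # a single accelerator with 24 GB, as in the comment above

for label, params, bytes_per_param in [
    ("7B  @ fp16 (2 bytes/param)", 7e9, 2),
    ("13B @ fp16 (2 bytes/param)", 13e9, 2),
    ("70B @ fp16 (2 bytes/param)", 70e9, 2),
    ("70B @ 4-bit (0.5 bytes/param)", 70e9, 0.5),
]:
    needed = model_memory_gb(params, bytes_per_param)
    verdict = "fits" if needed <= GPU_MEMORY_GB else "does not fit"
    print(f"{label}: ~{needed:.0f} GB -> {verdict} in {GPU_MEMORY_GB} GB")
```

Even with aggressive quantization, the weights alone of a sufficiently large model outgrow a single 24 GB card, which is the point about not being able to scale indefinitely on the hardware you have.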

[–] Kbin_space_program@kbin.social -3 points 7 months ago* (last edited 7 months ago) (1 children)

Will increasing the model size actually help? Right now we're dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.

Making a better way to process said model would be a much more substantive achievement, so that when particular details are needed it isn't just random chance whether it gets them right.

[–] agressivelyPassive@feddit.de 9 points 7 months ago (1 children)

That is literally a complete misinterpretation of how models work.

You don't "have the Internet as a model", you train a model using large amounts of data. That does not mean, that this model contains any of the actual data. State of the at models are somewhere in the billions of parameters. If you have, say, 50b parameters, each being a 64bit/8 byte double (which is way, way too much accuracy) you get something like 400gb of data. That's a lot, but the Internet slightly larger than that.

[–] Kbin_space_program@kbin.social -5 points 7 months ago* (last edited 7 months ago) (1 children)

It's an exaggeration, but it's not far off, given that Google literally parses the entire web at least once a day.

Reddit just sold off AI harvesting rights on all of its content to Google.

The problem is no longer model size. The problem is interpretation.

You can ask almost anyone on Earth a simple deterministic math problem and you'll get the right answer almost all of the time, because they understand the principles behind it.

Until you can show deterministic understanding in AI, you have a glorified chatbot.

[–] agressivelyPassive@feddit.de 8 points 7 months ago (1 children)

It is far off. It's like saying you have all the knowledge of physics because you skimmed a textbook once.

Interpretation is also a problem that can be solved; current models already understand quite a lot of nuance, subtext and implicit context.

But you're moving the goalposts here. We started at "models don't get better, they've hit a plateau" and now you're aiming for perfection.

[–] Kbin_space_program@kbin.social -3 points 7 months ago (1 children)

You're building beautiful straw men. They're lies, but great job.

I said originally that we need to improve how AI interprets the model, not just build ever bigger models that will invariably have the same flaws they have now.

Deterministic reliability is the end goal of that.

[–] agressivelyPassive@feddit.de 3 points 7 months ago

Will increasing the model size actually help? Right now we're dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.

Making a better way to process said model would be a much more substantive achievement, so that when particular details are needed it isn't just random chance whether it gets them right.

Where exactly did you write anything about interpretation? Getting "details right" by processing faster? I would hardly call that "interpretation"; that's just being wrong faster.