this post was submitted on 18 Mar 2024
130 points (93.3% liked)

[–] abhibeckert@lemmy.world 4 points 8 months ago* (last edited 8 months ago) (1 children)

Apple is working on models, but they seem to be focusing on ones that use tens of gigabytes of RAM, rather than the tens of terabytes the big cloud models use.
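(Rough back-of-the-envelope to show why that gap matters - these numbers are my own illustration, not anything Apple has published: a model's weight footprint is roughly parameter count × bytes per parameter, so a quantized ~7B model fits in phone-class RAM while a large fp16 model doesn't.)

```python
# Back-of-the-envelope model RAM footprints (illustrative only).
# Weight memory ~= parameter_count * bytes_per_parameter; this ignores
# overhead like activations and the KV cache.

def model_ram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a given size and precision."""
    return params_billions * bytes_per_param  # (B params * bytes) = GB

if __name__ == "__main__":
    for name, params, bytes_pp in [
        ("7B model, fp16", 7, 2.0),
        ("7B model, 4-bit quantized", 7, 0.5),
        ("70B model, fp16", 70, 2.0),
    ]:
        print(f"{name}: ~{model_ram_gb(params, bytes_pp):.1f} GB of weights")
```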

I wouldn't be surprised if Apple ships an "iPhone Pro" with 32GB of RAM dedicated to AI models. You can do a lot of really useful stuff with a model that size... but it can't compete with GPT-4 or Gemini today - and those are moving targets. OpenAI/Google will have even better models (likely using even more RAM) by the time Apple enters this space.

A split system, where some processing happens on device and some in the cloud, could work really well. For example, analyse every email/message/call the user has ever sent or received with the local model, but if the user asks how many teeth a crocodile has... send that one to the cloud.
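A minimal sketch of that routing idea (everything here is hypothetical - the LocalModel/CloudModel classes and the keyword heuristic are stand-ins, not any real Apple/OpenAI/Google API):

```python
# Hypothetical local/cloud split: queries touching personal, on-device data
# go to a small local model; general-knowledge queries fall through to a
# large hosted model. All classes and names are illustrative stand-ins.

PERSONAL_KEYWORDS = {"email", "message", "call", "calendar", "photo", "contact"}

class LocalModel:
    def answer(self, query: str) -> str:
        # Imagine a small on-device model with an index over the user's data.
        return f"[local] answered privately on device: {query!r}"

class CloudModel:
    def answer(self, query: str) -> str:
        # Imagine a GPT-4/Gemini-class hosted model.
        return f"[cloud] answered by a large hosted model: {query!r}"

def route(query: str, local: LocalModel, cloud: CloudModel) -> str:
    # Naive heuristic: anything mentioning personal data stays on device.
    words = set(query.lower().split())
    if words & PERSONAL_KEYWORDS:
        return local.answer(query)
    return cloud.answer(query)

if __name__ == "__main__":
    local, cloud = LocalModel(), CloudModel()
    print(route("summarize my email from yesterday", local, cloud))
    print(route("how many teeth does a crocodile have", local, cloud))
```

A real router would need something smarter than keyword matching (a classifier, or the local model deciding when to escalate), but the privacy boundary is the same: personal data never leaves the device.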

[–] Fubarberry@sopuli.xyz 2 points 8 months ago

Tbf, Google has versions of Gemini (Gemini Nano) that run locally on phones too, and their open-weights Gemma models run on around 16GB of RAM.