[–] abhibeckert@lemmy.world 7 points 8 months ago* (last edited 8 months ago) (1 children)

I don't see how it's any different to using Google as the default search engine in Safari.

Also, phones don't have terabytes of RAM. The idea that a (good) LLM can run on a phone is ridiculous. Yes, you can run small AI models on them, but they're about as intelligent as an ant... ants can do a lot of useful work, but they're not on the same level as Gemini or ChatGPT.

[–] AliasAKA@lemmy.world 2 points 8 months ago

It may be no different than using Google as the search engine in Safari, assuming I get an opt-out. If it's used for Siri interactions, though, it gets extremely tricky to verify that your interactions aren't being used to inform ads and/or train an LLM. Much harder to opt out of than a default search engine, perhaps.

LLMs do not need terabytes of RAM. Heck, you can run quantized 7-billion-parameter models on 16 GB or less (BLOOM, Falcon-7B; Falcon outperforms models with larger memory footprints, by the way, so there's room here for optimization). While not quite as good as OpenAI's offerings, they're still quite good.

There are Android phones with 24 GB of RAM, so it's quite possible for Apple to release an iPhone Pro with that much and run it much like you'd run any large language model on an M1 or M2 Mac. Hell, you could probably fit an inference-only model in less. Performance wouldn't be blazing, but depending on the task it could absolutely be sufficient. With Apple MLX and Ferret coming online, it's totally possible that you could, basically today, have a reasonable LLM running on an iPhone 15 Pro. People run OpenHermes 7B, for example, which uses ~4.4 GB to run, even without those frameworks. Battery life does take a major hit, but to be honest I'm at a loss for what I'd need an LLM for on my phone anyway.
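
For a rough sense of that memory claim, here's a back-of-the-envelope sketch in plain Python. The 0.8 GB overhead allowance (KV cache, activations, runtime) is an assumed figure for illustration, not a measurement:

```python
# Rough RAM estimate for running a quantized 7B-parameter model on-device.
# The overhead allowance (KV cache, activations, runtime) is an assumed
# figure for illustration, not a measured one.

def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead_gb: float = 0.8) -> float:
    """Weights stored at the quantized precision, plus a flat runtime allowance."""
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{model_memory_gb(7, bits):.1f} GB")

# 7B model at 16-bit: ~14.8 GB  (full fp16 -- out of reach for phones)
# 7B model at 8-bit:  ~7.8 GB
# 7B model at 4-bit:  ~4.3 GB   (in line with the ~4.4 GB cited for OpenHermes 7B)
```

So a 4-bit 7B model lands in a range that an iPhone 15 Pro's 8 GB of RAM could plausibly hold on paper; whether iOS would actually let a single app claim that much memory alongside everything else is a separate question.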

Regardless, I want a local LLM or none at all.