this post was submitted on 19 Jun 2023
8 points (100.0% liked)
ObsidianMD
4105 readers
Unofficial Lemmy community for https://obsidian.md/
founded 1 year ago
you are viewing a single comment's thread
How is this possible? I thought that local LLM models nearly all required ludicrous amounts of VRAM, but I don't see anything about system requirements on their website. Is this run in the cloud?
From what I understand, it supposedly runs on CPUs too.
It actually runs locally! I tried it just two days ago. It's amazing!
It's all based on research by many people who wanted to make LLMs more accessible, because gating them behind heavy computational requirements isn't really fair/nice.