this post was submitted on 19 Jun 2023
8 points (100.0% liked)

ObsidianMD

Unofficial Lemmy community for https://obsidian.md/

Would you like to see some plugins that integrate with local/self-hosted AI instead of sending everything to ChatGPT? Or do you not care about privacy there, as long as the results are good?

You might be interested in GPT4All (https://gpt4all.io/index.html), which is easy to download as a desktop GUI. Simply download a model (like Nous Hermes, about 7.5 GB) and run it right on your CPU, even without a GPU (albeit a bit slowly).
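For anyone who prefers scripting to the GUI, GPT4All also ships Python bindings. Here's a minimal sketch of loading a model and generating text on the CPU; the exact model file name changes between releases, so treat it as a placeholder:

```python
# pip install gpt4all
from gpt4all import GPT4All

# Model file name is a placeholder -- pick any model listed in the GPT4All app.
# The first call downloads it (a few GB); inference runs on the CPU by default.
model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Summarize the Zettelkasten method in two sentences.",
        max_tokens=200,
    )
    print(reply)
```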

It's amazing what's already possible with local AI instead of relying on large-scale, expensive, corporate-dependent AIs such as ChatGPT.

top 5 comments
[–] dethb0y@lemmy.world 6 points 1 year ago

I've not thought of a good use case for the technology myself, but if I were to use it I'd prefer it be local, just for convenience's sake.

[–] maiskanzler@feddit.de 1 point 1 year ago

I am generally interested in giving an LLM more context about my data and hobby project code, but I'd never give someone else such deep access. GPT4All sounds great, and it makes me hopeful that we won't have to rely entirely on commercial GPTs in the future. It's to AI what Linux and FreeBSD are to OSs.

But that still leaves the question of what to do with it. I see 2 main purposes:

  • Asking the GPT questions about your material
  • Writing more content for your vault

Both of those seem useful at first, but I don't think they're really necessary. A good fuzzy search like Obsidian's, combined with a good vault and note structure, makes the first point pretty irrelevant.
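To make the first point concrete: under the hood, such a plugin would boil down to roughly the following (a sketch against the GPT4All Python bindings; the note path, model file name and prompt are made up):

```python
# pip install gpt4all
from pathlib import Path
from gpt4all import GPT4All

# Hypothetical vault note -- path and model file name are only examples.
note = Path("Vault/Projects/solar-charger.md").read_text(encoding="utf-8")

model = GPT4All("nous-hermes-llama2-13b.Q4_0.gguf")
prompt = (
    "Here is one of my notes:\n\n"
    f"{note}\n\n"
    "What open questions does this note leave unanswered?"
)

# Everything stays on the local machine; nothing is sent to an external API.
print(model.generate(prompt, max_tokens=300))
```

In practice a plugin would also have to decide which notes fit into the model's limited context window, which is where a good search and vault structure come in anyway.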

Also, writing more content is really two things:

  • Text generation/completion
  • Research

I think a plugin might be a nice, user-friendly UI for the first point, but research is much better done in a chat-like environment. And for that I don't need an integration, as I probably have a web browser open anyway.

[–] DrakeRichards@lemmy.world 0 points 1 year ago (2 children)

How is this possible? I thought that local LLM models nearly all required ludicrous amounts of VRAM, but I don't see anything about system requirements on their website. Is this run in the cloud?

[–] maiskanzler@feddit.de 2 points 1 year ago

It supposedly runs on CPUs too, as far as I understand it.

[–] swnt@feddit.de 1 point 1 year ago

It actually runs locally! I did that just two days ago. It's amazing!

It's all based on research by many people who wanted to make LLMs more accessible, because gating them behind large computational requirements isn't really fair or nice.