this post was submitted on 23 Dec 2023

Free Open-Source Artificial Intelligence

Currently I'm using the ollama runner for messing around with the mistral 7b models (only on CPU, I have no discrete GPU >.<). I like that it has a very simple CLI and fairly minimal configuration; the Arch Linux package even comes with a systemd service, which is pretty neat.
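For anyone who hasn't tried it, the day-to-day workflow looks roughly like this. This is a sketch assuming the Arch `ollama` package (which ships the systemd unit mentioned above); it's guarded so it does nothing on a machine without ollama installed:

```shell
# Sketch of a typical ollama session on CPU.
# Guarded: a no-op on machines where ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
    sudo systemctl enable --now ollama   # start the local server (Arch package unit)
    ollama pull mistral                  # fetch the 7B weights from the registry
    ollama run mistral "Hello!"          # one-shot chat from the CLI
    status="ran"
else
    status="skipped (ollama not installed)"
fi
echo "$status"
```

`ollama run` with a prompt argument does a single completion; without one it drops you into an interactive chat.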

However, I don't know how sustainable that is. It hosts a database of models on its own here, but I don't know how dependent the code is on that central online repository.

Ideally, I'd love it if we had an AI runner (including the ability to use LoRA modules) that could natively pull from torrent files or something with a similar p2p architecture. I imagine this would be better for long-term sustainability and for the hosting/download costs of the projects ^.^
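The reason torrent-style distribution helps sustainability is that it is content-addressed: a model is identified by a hash of its bytes, so any peer or mirror can serve it and the client can verify what it received without trusting (or even reaching) a central server. A hypothetical sketch of just the verification half, using a stand-in file instead of a real multi-GB GGUF:

```shell
# A stand-in "model" file; in reality this would be a multi-GB GGUF.
printf 'fake model weights' > demo-model.bin

# The publisher announces the content id. This is essentially what a
# torrent infohash gives you, modulo piece-wise hashing:
cid=$(sha256sum demo-model.bin | awk '{print $1}')

# A client that fetched the file from *any* peer re-hashes and compares,
# so no central repository needs to stay online:
got=$(sha256sum demo-model.bin | awk '{print $1}')
if [ "$got" = "$cid" ]; then echo "verified"; else echo "corrupt"; fi
```

Real torrents hash per-piece so corrupt chunks can be re-fetched from other peers, but the trust model is the same.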

Thoughts on this, and any other suggestions/comparisons/etc?

top 4 comments
[–] midnight@kbin.social 6 points 10 months ago

I agree that some sort of decentralized model repository would be awesome, but ollama works with local files too, so I'm not too worried about it. I've used many LLM backends, and ollama is my favorite so far, but given how fast everything is moving, that could change in the future.
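For reference, running ollama from local files needs no registry at all: a Modelfile's `FROM` line can point at a GGUF on disk. A sketch (the `.gguf` file name is a placeholder for whatever weights you already have):

```shell
# Build an ollama model from local weights, no download involved.
cat > Modelfile <<'EOF'
FROM ./mistral-7b-instruct.Q4_K_M.gguf
PARAMETER temperature 0.7
EOF

# Register it under a local name (only if ollama is actually installed):
if command -v ollama >/dev/null 2>&1; then
    ollama create local-mistral -f Modelfile
fi
```

After that, `ollama run local-mistral` works the same as a model pulled from the registry.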

[–] icewall@lemmy.world 5 points 10 months ago
[–] silas@programming.dev 3 points 10 months ago

I’ve thought the same thing, actually, but I haven’t really looked into who’s behind Ollama or how the repository is managed yet. It’s a really great project.

[–] rufus@discuss.tchncs.de 2 points 10 months ago* (last edited 10 months ago)

What does ollama add on top of llama.cpp?

I use KoboldCPP, which also works very well on CPU.

And Oobabooga's UI (with llama.cpp as a CPU backend) is also easy to set up.
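Both of those boil down to llama.cpp-style CPU inference underneath. A hedged sketch of each (model paths are placeholders, and exact flag/binary names vary between versions; each step is guarded so it only runs where the tool is present):

```shell
# KoboldCPP: a single script that puts a web UI on top of llama.cpp.
if [ -f koboldcpp.py ]; then
    python koboldcpp.py --model ./mistral-7b.Q4_K_M.gguf --threads 8
fi

# Bare llama.cpp, which ollama, KoboldCPP and Oobabooga all build on
# (the CLI binary was `main` at the time; newer builds call it `llama-cli`):
if [ -x ./main ]; then
    ./main -m ./mistral-7b.Q4_K_M.gguf -p "Hello" -t 8
fi
done=1
```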