
Free and Open Source Software

[–] TheOtherJake@beehaw.org 21 points 1 year ago

Oobabooga's text-generation-webui is the main GUI used to interact with local models.

https://github.com/oobabooga/text-generation-webui

FYI, you need to find checkpoint models yourself; naming in the chat-model space can be ambiguous for a few reasons I won't ramble about here. The main source of models is Hugging Face. Start with this model (or get the censored version):

https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML
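For reference, a file from that repo can be fetched with the huggingface_hub package. This is a minimal sketch, not the only way to do it; the exact filename is an assumption based on TheBloke's usual naming, so check the model card's file list for the real one:

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# The filename below is assumed from TheBloke's usual naming scheme; verify it
# against the "Files and versions" tab on the model card.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/llama2_7b_chat_uncensored-GGML",
    filename="llama2_7b_chat_uncensored.ggmlv3.q4_0.bin",  # assumed q4_0 file
)
print(path)  # local cache path; point Oobabooga's models/ folder at this file
```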

First, let's break down the title.

  • This is a model based on Meta's Llama 2.
  • It is not "FOSS" in the GPL/MIT sense. The Llama 2 license is quite broad in scope, but its key stipulation is that the model cannot be used commercially by services with more than 700 million monthly active users.
  • Next, it was quantized by a popular user going by "TheBloke." I have no idea who this is IRL, but I imagine it's a pseudonym or corporate alias given how much content the account uploads to HF.
  • The model has 7 billion parameters and is fine-tuned for chat applications.
  • It is uncensored, meaning it will respond to most inputs as best it can. It can get NSFW, or talk about almost anything. In practice there are still some minor biases, likely just overarching morality inherent in the datasets used, or possibly coded somewhere obscure.
  • The last part of the title says this is a GGML model, which means it can run on the CPU, on the GPU, or split between the two.

As for the options on the landing page, or "model card":

  • you need to get one of the older-style files with a plain "q(number)" quantization type, like q4_0 or q5_1. Do not get the ones with "K" in the quant name (e.g. q4_K_M), as these won't work with the llama.cpp version that ships with Oobabooga.
  • look at the table at the bottom of the model card that lists how much RAM each quantization type needs. If you have an Nvidia GPU with CUDA, enabling GPU layers makes the model run faster and uses quite a bit less system memory than the model card states (see the sketch after this list).
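To make the GPU-layers point concrete, here is a minimal sketch using the llama-cpp-python bindings rather than Oobabooga itself; the model path, layer count, and prompt template are assumptions, so check the model card for the template this model expects:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# This illustrates CPU/GPU splitting; it is not Oobabooga's internal code.
# n_gpu_layers only has an effect if the package was built with CUDA support.
from llama_cpp import Llama

llm = Llama(
    model_path="llama2_7b_chat_uncensored.ggmlv3.q4_0.bin",  # file from above
    n_gpu_layers=32,  # assumed value; offloads part of the model, 0 = CPU-only
)
out = llm("### HUMAN:\nWhat is GGML?\n\n### RESPONSE:\n", max_tokens=128)
print(out["choices"][0]["text"])
```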

Talking to a 7B model is about like having a conversation with your average teenager. Asking technical questions yielded around 50% accuracy in my experience. A 13B model got around 80% accuracy, and the 30B WizardLM is around 90-95%. I'm still working on getting a 70B running on my computer; a lot of the larger models require compiling tools from source, and they won't work directly with Oobabooga.

[–] redw0rm@kerala.party 13 points 1 year ago* (last edited 1 year ago) (1 children)

If you want a completely offline (local) one, you can take a look at gpt4all.
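For what that looks like in practice, here is a minimal sketch with the gpt4all Python bindings; the model name is an assumption, so pick any entry from GPT4All's model list:

```python
# Minimal offline-chat sketch with the gpt4all bindings (pip install gpt4all).
# The model name is an assumption; the file downloads once, then every
# generation after that runs fully locally.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")  # hypothetical model choice
print(model.generate("Name three FOSS licenses.", max_tokens=128))
```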

[–] tubbadu@lemmy.kde.social 1 points 1 year ago (1 children)
[–] redw0rm@kerala.party 1 points 1 year ago

It usually depends on the model. When I tried the nous-hermes model, it used roughly 5 GB of extra RAM. I'm on a system with 16 GB of RAM.
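That ~5 GB figure lines up with a back-of-envelope estimate: RAM use is roughly the quantized weight size (parameters x bits per weight / 8) plus some runtime overhead. The 1 GB overhead here is an assumption:

```python
# Rough RAM estimate for a quantized GGML model. The overhead constant is an
# assumed ballpark for context and runtime buffers, not a measured value.
def approx_ram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # the 1e9 params and bytes->GB factors cancel
    return weights_gb + overhead_gb

# A 7B model at ~4.5 bits/weight (q4_0-style) lands near the ~5 GB reported above:
print(f"{approx_ram_gb(7, 4.5):.1f} GB")  # -> 4.9 GB
```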

[–] xilliah@beehaw.org 8 points 1 year ago
[–] shreddy_scientist@lemmy.ml 8 points 1 year ago

I know www.open-assistant.io was looking for help putting the final touches on their AI a little while back. I haven't kept up recently, but it'll be a solid option once it's live.

[–] catsup@lemmy.one 5 points 1 year ago

LLaMA / Alpaca

[–] gutter564@feddit.uk 5 points 1 year ago* (last edited 1 year ago)

HuggingFace chat