Large Language Models

A place to discuss large language models.

y2u.be/aVvkUuskmLY

Llama 3.1 (405B) seems 👍. It and Claude 3.5 Sonnet are my go-to large language models. I use chat.lmsys.org. OpenAI may be scrambling now to release ChatGPT 5?

I'm an avid Marques fan, but honestly he didn't need to make that vid. It was just a set of comparisons: no new info, no interesting discussion. He should've just shared that Wired podcast episode on his X instead.

I wonder if Apple is making its own large language model (LLM) and whether it'll be released this year or next. Or are they still musing over the cost-benefit analysis? If they think an Apple LLM won't earn much profit, they may not make one.

Click Here to be Taken to the Megathread!

from !fosai@lemmy.world

Vicuna v1.5 Has Been Released!

Shoutout to GissaMittJobb@lemmy.ml for catching this in an earlier post.

Given that Vicuna was a widely appreciated member of the original Llama family, it'll be exciting to see how this model evolves with fresh datasets and new fine-tuning approaches.

Feel free to use this megathread to chat about Vicuna and share any of your experiences with Vicuna v1.5!

Starting off with Vicuna v1.5

TheBloke is already sharing models!

Vicuna v1.5 GPTQ:

  • 7B
  • 13B
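
If you want to try the GPTQ quants from Python, here's a minimal sketch using the AutoGPTQ library. The repo id below is an assumption based on TheBloke's usual naming, so check his Hugging Face page for the exact name:

```python
# Minimal sketch: load a GPTQ-quantized Vicuna v1.5 with AutoGPTQ.
# Assumes: pip install auto-gptq transformers, a CUDA GPU, and that the
# repo id below matches TheBloke's actual upload (verify on his HF page).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/vicuna-7B-v1.5-GPTQ"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", use_safetensors=True)

prompt = "USER: What is GPTQ quantization? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```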


Vicuna Model Card

Model Details

Vicuna is a chat assistant fine-tuned from Llama 2 on user-shared conversations collected from ShareGPT.

  • Developed by: LMSYS
  • Model type: An auto-regressive language model based on the transformer architecture
  • License: Llama 2 Community License Agreement
  • Finetuned from model: Llama 2

Model Sources

Uses

The primary use of Vicuna is for research on large language models and chatbots. The target userbase includes researchers and hobbyists interested in natural language processing, machine learning, and artificial intelligence.

How to Get Started with the Model
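
The upstream card points to FastChat for serving. As a stand-in, here's a minimal sketch using plain Hugging Face transformers; the repo id and the prompt template are assumptions carried over from the Vicuna v1.1 convention, so verify them against the LMSYS model card:

```python
# Minimal sketch: run Vicuna v1.5 with Hugging Face transformers.
# Assumes: pip install torch transformers accelerate, and enough VRAM
# for the 7B weights in fp16. The repo id is assumed from LMSYS's
# naming; verify on huggingface.co/lmsys.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lmsys/vicuna-7b-v1.5"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna v1.1-style prompt template (assumed to carry over to v1.5).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: What makes Vicuna different from base Llama 2? ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```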

Training Details

Vicuna v1.5 is fine-tuned from Llama 2 with supervised instruction fine-tuning. The model was trained on approximately 125K conversations collected from ShareGPT.com.

For additional details, please refer to the "Training Details of Vicuna Models" section in the appendix of the linked paper.
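
To make "supervised fine-tuning on conversations" concrete, here's a rough sketch of the usual recipe: each ShareGPT conversation is flattened into one token sequence, and the labels are masked so only the assistant's tokens contribute to the loss. The template and masking below illustrate the general pattern, not LMSYS's actual training code:

```python
# Rough sketch of conversational SFT data prep (the general pattern,
# not LMSYS's actual code): flatten a ShareGPT-style conversation and
# set labels to -100 on user tokens so cross-entropy only trains the
# model to produce assistant replies.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

conversation = [  # toy ShareGPT-style record
    {"from": "human", "value": "What is a transformer?"},
    {"from": "gpt", "value": "A neural network architecture built on attention."},
]

input_ids, labels = [], []
for turn in conversation:
    if turn["from"] == "human":
        ids = tokenizer.encode(f"USER: {turn['value']} ", add_special_tokens=False)
        labels += [-100] * len(ids)  # mask: no loss on user tokens
    else:
        ids = tokenizer.encode(f"ASSISTANT: {turn['value']}", add_special_tokens=False)
        ids.append(tokenizer.eos_token_id)  # end each assistant turn with EOS
        labels += ids  # loss on assistant tokens
    input_ids += ids

# input_ids/labels now form one training example for a causal LM trainer.
```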

Evaluation Results

Vicuna is evaluated using standard benchmarks, human preferences, and LLM-as-a-judge. For more detailed results, please refer to the paper and leaderboard.
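
As a concrete illustration of the LLM-as-a-judge part (in the MT-Bench spirit), here's a minimal pairwise-comparison sketch using the OpenAI Python client. The judge prompt and model choice are my assumptions, not the exact rubric from the paper:

```python
# Minimal LLM-as-a-judge sketch: ask a strong model to pick the better
# of two answers. Assumes: pip install openai and OPENAI_API_KEY set;
# the prompt wording is illustrative, not the paper's exact rubric.
from openai import OpenAI

client = OpenAI()

def judge(question: str, answer_a: str, answer_b: str) -> str:
    prompt = (
        "You are an impartial judge. Compare the two assistant answers to the "
        "user question below for helpfulness, accuracy, and detail. "
        "Reply with exactly 'A', 'B', or 'tie'.\n\n"
        f"Question: {question}\n\nAnswer A: {answer_a}\n\nAnswer B: {answer_b}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed judge model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(judge(
    "Explain overfitting.",
    "Memorizing noise.",
    "When a model fits its training data too closely and generalizes poorly.",
))
```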

I noticed there didn't seem to be a community about large language models, akin to r/localllama. So maybe this will be it.

For the uninitiated, you can easily try a bleeding-edge LLM in your browser here.

If you loved that, some places to get started with local installs and execution are:

https://github.com/ggerganov/llama.cpp

https://github.com/oobabooga/text-generation-webui

https://github.com/LostRuins/koboldcpp

https://github.com/turboderp/exllama

And for models in general, the renowned TheBloke provides some of the best and fastest quantized releases:

https://huggingface.co/TheBloke
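
To tie those pieces together, here's a minimal sketch using the llama-cpp-python bindings to run a local quantized model. The model path is a placeholder; point it at whichever quant you downloaded from TheBloke:

```python
# Minimal local-inference sketch with the llama-cpp-python bindings
# (pip install llama-cpp-python). The model path is a placeholder:
# use any quantized model file downloaded from TheBloke's HF page.
from llama_cpp import Llama

llm = Llama(model_path="./models/vicuna-7b-v1.5.q4_0.bin", n_ctx=2048)

out = llm(
    "USER: Why run an LLM locally? ASSISTANT:",
    max_tokens=128,
    stop=["USER:"],  # stop before the model invents the next user turn
)
print(out["choices"][0]["text"])
```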