this post was submitted on 02 Aug 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


I've been using airoboros-l2-70b for writing fiction, and while overall I'd describe the results as excellent and better than any llama1 model I've used, it doesn't seem to be living up to the promise of 4k token sequence length.

Around 2,500 tokens, output quality degrades rapidly: the model either starts repeating previous text verbatim, or becomes incoherent (grammar, punctuation and capitalization disappear, and it devolves into a salad of vaguely related words).

Any other experiences with llama2 and long context? Does the base model work better? Are other fine tunes behaving similarly? I'll try myself eventually, but the 70b models are chunky downloads, and experimentation takes a while at 1 t/s.

(I'm using GGML Q4_K_M on kobold.cpp, with rope scaling off like you're supposed to do with llama2)
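
The "repeats previous text verbatim" failure mode is easy to flag automatically if you want to screen generations without reading every one. Here's a rough sketch (plain Python, nothing model-specific; the n-gram size and what counts as "high" overlap are guesses, not anything from the thread):

```python
def repetition_score(text, n=8):
    """Fraction of word n-grams in the final quarter of the text that
    already appeared earlier in the text. Values near 1.0 suggest the
    model has started looping on its own output."""
    words = text.split()
    if len(words) < 4 * n:
        return 0.0  # too short to judge
    cut = 3 * len(words) // 4
    earlier = {tuple(words[i:i + n]) for i in range(cut - n + 1)}
    tail = [tuple(words[i:i + n]) for i in range(cut, len(words) - n + 1)]
    if not tail:
        return 0.0
    return sum(g in earlier for g in tail) / len(tail)
```

This only catches the verbatim-loop case, not the word-salad case; for that you'd need something like the perplexity measurement mentioned further down the thread.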

[–] Sims@lemmy.ml 3 points 1 year ago (1 children)

No experience, but just adding that long-context models have a tendency to 'forget' what's in the middle of the text. Worth noting if you work on long texts, I assume. I can't remember the paper, though. There are so many..

[–] flamdragparadiddle@sh.itjust.works 4 points 1 year ago (2 children)

Lost in the middle: https://arxiv.org/abs/2307.03172

Happens for all models, not just Llama and it is really frustrating to deal with.
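
The paper's measurement can be sketched roughly like this: plant a key fact at a chosen depth inside filler context, ask the model to retrieve it, and sweep the depth from start to end. Everything below (the needle wording, the question) is illustrative, not taken from the paper:

```python
def build_probe(filler_sentences, needle, depth):
    """Insert `needle` at relative `depth` (0.0 = start of context,
    1.0 = end), then append a retrieval question. Scoring is done
    outside: run the model on each prompt and check whether its
    answer contains the planted fact."""
    pos = int(depth * len(filler_sentences))
    ctx = filler_sentences[:pos] + [needle] + filler_sentences[pos:]
    return " ".join(ctx) + "\n\nQuestion: what was the magic number?"

# Sweep the needle through the context in 10% steps.
depths = [i / 10 for i in range(11)]
```

Plotting retrieval accuracy against depth is what produces the U-shaped "lost in the middle" curve: good at both ends, poor in the middle.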

[–] h3ndrik@feddit.de 4 points 1 year ago (2 children)

But is that a bug or a feature? I think it's plausible that relevant information is most likely either at the beginning of a document or in the previous few lines, so that's where attention should be focused.

Like when you get an assignment: the important instructions are at the beginning, not somewhere in the middle. And when writing a document or a book, the most important thing is that your current sentence fits in with its paragraph. At that point you don't worry about remembering exactly what the hobbits did back in the Shire.

I remember reading some criticism of that paper, but I can't comment on the technical aspects.

[–] noneabove1182@sh.itjust.works 3 points 1 year ago

You raise an interesting point, though: most training examples likely follow exactly the pattern you suggest. There would have to be a large amount of training data specifically exercising middle-of-context content, and there probably just isn't enough of it in the dataset.

[–] flamdragparadiddle@sh.itjust.works 2 points 1 year ago (1 children)

In my application (summarising excerpts from several papers) it is a bug. I had assumed the context would be given equal weight throughout, but the distribution of information in the generated summaries suggests it is following the "lost in the middle" shape. This is most evident when the early chunks of text say something contradicted by the middle. I'd expect the models to at least mention the contradiction, but it hasn't come up in any summary I've looked at.

I can see what you mean: when generating text you need to pay the most attention to what you just wrote, but you also don't want to claim the hobbits started out in Mordor. I have no idea how to mitigate it, other than making the context short enough that all of it is 'remembered'.

If you remember where you read some criticism, I'd be very grateful for a link. That paper is doing a lot of heavy lifting in how I understand what I'm seeing, so it would be good to know where the holes in it are.
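
One blunt version of the "keep the context short enough" mitigation is map-reduce summarization: split the source into overlapping chunks that each fit well inside the reliable window, summarize each, then summarize the summaries. A minimal sketch of the splitter (the word-based budget is a crude stand-in for real token counting, and the numbers are guesses):

```python
def chunk(words, max_len=1500, overlap=200):
    """Split a list of words into overlapping chunks of at most
    `max_len` words. The overlap keeps facts that straddle a chunk
    boundary visible in at least one chunk."""
    step = max_len - overlap
    return [words[i:i + max_len]
            for i in range(0, max(1, len(words) - overlap), step)]
```

This doesn't fix the U-shaped attention inside each chunk, but it keeps every fact away from the worst part of the curve in at least one pass.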

[–] h3ndrik@feddit.de 2 points 1 year ago* (last edited 1 year ago) (1 children)

Sorry, didn't find it. If I remember correctly, the criticism was either that some of the tested models had foundation models trained to fewer (2048?) tokens, or that the measurement/benchmark was too 'synthetic' and not meaningful for real-world scenarios, or something along those lines.

I read this: https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/ (And maybe also related to this topic: https://arize.com/blog/lost-in-the-middle-how-language-models-use-long-contexts-paper-reading/ and https://github.com/THUDM/LongBench )

Also: I've played around a bit with LLaMA, and I haven't had good results with summarizing things whatsoever. Maybe it's not the context length but the wrong model for the task? Aren't there other language models out there specifically suited to summarization? LLaMA is kind of a generalist and maybe just not exceptionally good at this specific task.

https://huggingface.co/learn/nlp-course/chapter7/5?fw=tf#models-for-text-summarization and https://www.width.ai/post/bart-text-summarization

Regarding the original question: I'm not sure whether KoboldCPP handles the newer 4k context length correctly. For me it says `Using automatic RoPE scaling (scale:1.000, base:32000.0)`. But is that the correct base value? That's the same as if I were using a LLaMA 1 model with an artificially increased context length.

[–] actuallyacat@sh.itjust.works 3 points 1 year ago* (last edited 1 year ago)

You are supposed to manually set scale to 1.0 and base to 10000 when using llama 2 with 4096 context. The automatic scaling assumes the model was trained for 2048. Though as I say in the OP, that still doesn't work, at least with this particular fine tune.
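
For context on why the base matters: in rotary position embeddings, each pair of head dimensions rotates at a frequency derived from the base, theta_i = base^(-2i/d). A minimal sketch of the standard computation (the `head_dim` of 128 matches Llama's heads, but is just illustrative here):

```python
def rope_inv_freq(head_dim=128, base=10000.0):
    """Per-pair inverse frequencies for rotary position embeddings:
    theta_i = base^(-2i/d). Llama 1 and Llama 2 were both trained
    with base 10000, so that's what a native 4096 context expects."""
    return [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]
```

Raising the base (e.g. to 32000) slows every rotation, which is the NTK-style trick for stretching context past what a model was trained for; at Llama 2's native 4096 it's unnecessary and evidently harmful.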

[–] Sims@lemmy.ml 1 points 1 year ago

I was unaware that the smaller-context models exhibited the same effect. It does seem logical that broadly important information and conclusions are naturally put at the ends of a text by us. I haven't read the paper yet, but I wonder whether the training set (our communication) also contains more information at the ends, so the effect isn't caused by the algorithm but by the data. I'll give the paper a read, thanks.

[–] h3ndrik@feddit.de 2 points 1 year ago* (last edited 1 year ago)

I've read other people complaining, too. Maybe try the base model; I'm not sure whether it's the fine-tune or Llama 2 itself that's at fault.

There are ways to measure that: perplexity across the context, and whatever people used to verify that the methods that pushed the first LLaMA's context past 2048 were actually working. But I didn't find such measurements for Llama 2, at least with a quick Google search.

Edit: People also mentioned that Llama 2 uses a different attention mechanism (grouped-query attention) in the 70B version, so this might be specific to 70B. Make sure to use the most recent version of KoboldCPP (or whatever you use) and to configure the scaling correctly. At 4096 it shouldn't need any context scaling, as far as I understand.
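
The perplexity-across-the-context idea is straightforward to sketch. Assuming your runner can emit per-token log-probabilities (an assumption; not every backend exposes them), something like this shows whether quality falls off at later positions:

```python
import math

def perplexity_by_position(logprobs, window=256):
    """Given per-token natural-log probabilities from one long model
    run, compute perplexity over consecutive windows. A curve that
    climbs sharply toward the end of the context would match the
    degradation described in the OP."""
    out = []
    for start in range(0, len(logprobs) - window + 1, window):
        win = logprobs[start:start + window]
        out.append(math.exp(-sum(win) / len(win)))
    return out
```

This is how you'd distinguish "the fine-tune degrades past 2500 tokens" from "my sampler settings are off": the former shows up as position-dependent perplexity on held-out text, the latter doesn't.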

[–] creolestudios@sh.itjust.works 1 points 6 months ago

Yes, the 4k context length of Llama 2 is indeed real. Llama 2 is a language model developed by Meta AI, and its ability to understand and generate text over such a long context is one of its notable features.