darkeox

joined 1 year ago
[–] darkeox@kbin.social 2 points 1 year ago (2 children)

Ah thank you for the trove of information. What would be the best general knowledge model according to you?

[–] darkeox@kbin.social 1 points 1 year ago

This. It's not easy or trivial, but as a long-term strategy they should already be planning to invest effort into consolidating something like Godot or another FOSS engine. They should play it like calming down an abuser you can't escape yet, while planning its demise for when the time comes.

[–] darkeox@kbin.social 1 points 1 year ago (1 children)

@nosnahc those are two different comments; I imagine both methods are possible. As for the legality, I admittedly hadn't thought about it. Should I remove the submission?

[–] darkeox@kbin.social 1 points 1 year ago (2 children)

@Camus

Hmm? Should I delete it, then?

[–] darkeox@kbin.social 2 points 1 year ago (4 children)

Don't be sorry, you're being so helpful, thank you a lot.

I finally replicated your config:

localhost/koboldcpp:v1.43 --port 80 --threads 4 --contextsize 8192 --useclblas 0 0 --smartcontext --ropeconfig 1.0 32000 --stream "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"
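For context, those arguments look like they are appended to a container run against the `localhost/koboldcpp:v1.43` image. A minimal sketch of the full invocation, assuming Podman/Docker and that the models directory is mounted at `/app/models` (the host port and volume mount are assumptions, not from the original):

```shell
# Hypothetical container invocation; host port, volume mount, and runtime
# (podman vs docker) are assumptions. The image tag and koboldcpp flags
# are the ones quoted above.
podman run --rm -p 8080:80 \
  -v ./models:/app/models \
  localhost/koboldcpp:v1.43 \
  --port 80 --threads 4 --contextsize 8192 \
  --useclblas 0 0 --smartcontext \
  --ropeconfig 1.0 32000 --stream \
  "/app/models/mythomax-l2-kimiko-v2-13b.Q5_K_M.gguf"
```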

And had satisfying results! The performance of LLaMA2 really is nice to have here as well.

[–] darkeox@kbin.social 2 points 1 year ago (6 children)

Thanks a lot for your input. It's a lot to take in, but very detailed, which is what I need.

I run Koboldcpp in a container.

What I ended up doing, which was semi-working:

  • --model "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin" --port 80 --stream --unbantokens --threads 8 --contextsize 4096 --useclblas 0 0

In the Koboldcpp UI, I set max response tokens to 512, switched to an Instruction/Response mode, and kept prompting with "continue the writing", using the MythoMax model.

But I'll be re-checking your way of doing it, because the SuperCOT model's story writing seemed less formulaic and higher quality.

[–] darkeox@kbin.social 2 points 1 year ago (2 children)

MythoMax looks nice, but I'm using it in story mode and it seems to have trouble progressing once it has reached the max token count; it appears stuck:

Generating (1 / 512 tokens)
(EOS token triggered!)
Time Taken - Processing:4.8s (9ms/T), Generation:0.0s (1ms/T), Total:4.8s (0.2T/s)
Output:

And then it stops when I try to prompt it to continue the story.

[–] darkeox@kbin.social 1 points 1 year ago

I'll try that model. However, your option doesn't work for me:

koboldcpp.py: error: argument model_param: not allowed with argument --model
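That error indicates the model path was supplied twice: once as the positional `model_param` and once via `--model`, which koboldcpp rejects. Passing the path only once should resolve it. A sketch, assuming `koboldcpp.py` is run directly and reusing the flags and model path quoted earlier in this thread:

```shell
# koboldcpp refuses a positional model path combined with --model;
# supply the path only once, e.g. positionally at the end:
python koboldcpp.py --port 80 --threads 8 --contextsize 4096 \
  --useclblas 0 0 --stream \
  "/app/models/mythomax-l2-13b.ggmlv3.q5_0.bin"
```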

[–] darkeox@kbin.social 14 points 1 year ago

Can confirm it's the same on Proton / Linux. This game keeps being a joke on the technical side.

[–] darkeox@kbin.social 6 points 1 year ago

How can an AMD-sponsored game, which literally runs better on every AMD GPU than on its NVIDIA counterpart and which ships no tech that could disadvantage AMD GPUs, be less QA-ed on AMD GPUs because of market share?

This game IS better optimized for AMD. It has FSR2 enabled by default on all graphics presets. That particular take especially doesn't hold for this game.

[–] darkeox@kbin.social 6 points 1 year ago

Let's not kid ourselves. LTT comes out on top because their way of operating reflects the community: as long as we get our daily shot of tech/geek content, we ignore the rest.

Not to mention the significant number of people in the community who are always eager to defend a "bro" against those "woke bitches".
