vluz

joined 1 year ago
[–] vluz@kbin.social 5 points 6 months ago

Messing around with the system python/pip and newly installed versions till everything was broken, and only then looking at the documentation.
This was way back in the '00s and I'm still ashamed of how fast and how completely I messed it up.

[–] vluz@kbin.social 2 points 6 months ago (1 children)

Just figured out there are 10 places called Lisbon dotted around the US, according to the search.

[–] vluz@kbin.social 3 points 8 months ago

Got one more for you: https://gossip.ink/
I use it via a docker/podman container I've made for it: https://hub.docker.com/repository/docker/vluz/node-umi-gossip-run/general

[–] vluz@kbin.social 3 points 9 months ago (4 children)

I got cancelled too and chose Hetzner instead. Will not do business with a company that can't get their filters working decently.

[–] vluz@kbin.social 7 points 11 months ago (1 children)

Lovely! I'll go read the code as soon as I have some coffee.

[–] vluz@kbin.social 3 points 1 year ago

I do SDXL generation in 4GB, at the extreme expense of speed, using a number of memory optimizations.
I've done this kind of stuff since SD 1.4, for the fun of it. I like to see how low I can push VRAM use.

SDXL takes around 3 to 4 minutes per generation including refiner but it works within constraints.
Graphics cards used are hilariously bad for the task, a 1050ti with 4GB and a 1060 with 3GB vram.

I have an implementation running on the 3GB card, inside a podman container, with no RAM offloading, 1 vCPU, and 4GB of RAM.
The graphical UI (Streamlit) runs on a laptop outside the server to save resources.

I'm working on an example implementation of SDXL as we speak, and also on SDXL generation on mobile.
That's the reason I've looked into this news; SSD-1B might be a good candidate for my dumb experiments.
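The main trick behind running big models in tiny VRAM is sequential CPU offload: only the submodule that is currently computing sits on the GPU, while everything else waits in system RAM. Here's a toy Python sketch of that idea (the module names and sizes are made up for illustration, not real SDXL numbers):

```python
# Toy sketch of sequential CPU offload: only one submodule is "on the GPU"
# at a time, so peak VRAM is bounded by the largest module, not the sum.
# Sizes and names are illustrative placeholders, not real SDXL figures.

class Module:
    def __init__(self, name, size_mb):
        self.name = name
        self.size_mb = size_mb
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self

def run_pipeline(modules):
    """Run each module in order, offloading it again before the next loads."""
    peak_vram = 0
    for m in modules:
        m.to("cuda")  # load just this module onto the GPU
        vram = sum(x.size_mb for x in modules if x.device == "cuda")
        peak_vram = max(peak_vram, vram)
        # ... the forward pass for this module would happen here ...
        m.to("cpu")   # offload before the next module loads
    return peak_vram

modules = [Module("text_encoder", 1400), Module("unet", 5100), Module("vae", 350)]
print(run_pipeline(modules))  # → 5100, the largest single module
```

The cost is exactly what I mentioned above: every step pays PCIe transfer time, which is why generations take minutes instead of seconds.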

[–] vluz@kbin.social 3 points 1 year ago (1 children)

Oh my Gwyn, this comment section is just amazing.

[–] vluz@kbin.social 6 points 1 year ago (1 children)

Goddammit! Don't tell that one, I use it to impress random people at parties.

[–] vluz@kbin.social 12 points 1 year ago

HateLLM will be a smash. /s

[–] vluz@kbin.social -2 points 1 year ago
[–] vluz@kbin.social 2 points 1 year ago

That's wonderful to know! Thank you again.
I'll follow your instructions; this implementation is exactly what I was looking for.


Hi,

This is not exactly my area and I'm lost in a sea of solutions; I need help.
There are so many out there, and I don't know which of them are still maintained, whether they offer a full solution, how long it takes to spin up an instance, etc.

Problem is simple to describe.
I want to set up access to GPU instances so the project devs can run any Python code they've built.
The hardware consists of several servers with GPUs that support vGPU, NVIDIA's GPU virtualization solution.

I'm looking for something similar to https://www.runpod.io/

What open source software can be used to spawn the client machines from the existing hardware pool?

I'm looking into Kubernetes for automation and MAAS from Canonical for the rest. Am I missing something important?
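For the Kubernetes side, my current understanding is that GPU scheduling works through the NVIDIA device plugin, which exposes GPUs (or vGPU slices) as a schedulable resource. A minimal pod spec would look something like this; the pod name and container image are just placeholders, and I'm assuming the device plugin is already installed on the cluster:

```yaml
# Hypothetical sketch: a pod requesting one GPU via the NVIDIA device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-dev-box            # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: python-env
      image: nvcr.io/nvidia/pytorch:24.01-py3   # example image, any CUDA image works
      command: ["sleep", "infinity"]            # keep it alive for interactive use
      resources:
        limits:
          nvidia.com/gpu: 1    # resource name exposed by the NVIDIA device plugin
```

If that's roughly the right shape, the open question is what sits on top of this to hand instances out to devs.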

Any help or insight would be appreciated.
