maxwellfire

joined 1 year ago
[–] maxwellfire@lemmy.world 2 points 15 hours ago

Does it resolve correctly from the laptop or the server? What about resolvectl query server.local on the laptop?
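To expand on that, a few commands worth comparing on the laptop (a sketch; output depends on your setup, and server.local is the hostname from the post):

```
resolvectl query server.local   # what systemd-resolved answers (may use mDNS for .local)
resolvectl status               # which DNS servers and protocols are active per link
getent hosts server.local       # what the rest of the system sees via NSS
```

If resolvectl answers but getent doesn't (or vice versa), that points at the nsswitch/mDNS layer rather than the DNS server itself.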

[–] maxwellfire@lemmy.world 6 points 1 day ago (2 children)

Isn't .local an mDNS auto-configured domain? Usually I think you're supposed to choose a different domain for your local DNS zone. But that's probably not the source of the problem?
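For context: RFC 6762 reserves .local for mDNS, and RFC 8375 designates home.arpa for home networks. If you run dnsmasq for local DNS, a minimal fragment might look like this (hostname and address are hypothetical examples):

```
# /etc/dnsmasq.conf — use home.arpa instead of .local
local=/home.arpa/                        # answer this zone locally, never forward upstream
domain=home.arpa                         # hand the domain out to DHCP clients
address=/server.home.arpa/192.168.1.10   # example static record
```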

[–] maxwellfire@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

You definitely still use a firewall, but there's no need for NAT in almost all cases with IPv6. And even with a firewall, p2p becomes easier, even if you still have to do firewall hole punching.
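To illustrate the difference: with IPv6 there's no port-forward/NAT mapping to maintain, just a pinhole in the forward chain. A hedged nftables sketch (prefix, address, and port are made up):

```
# /etc/nftables.conf fragment: stateful IPv6 firewall, no NAT
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept            # return traffic
        ip6 daddr 2001:db8::10 tcp dport 8080 accept   # pinhole to one internal host
    }
}
```

The host keeps its globally routable address end to end, which is what makes p2p simpler.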

[–] maxwellfire@lemmy.world 2 points 2 weeks ago

I've set up Okular signing and it worked, but I believe it was with an S/MIME certificate tied to my email (and not PGP keys). If you want, I can try to figure out exactly what I did to make it work.

Briefly off the top of my head, I believe it was

  1. Getting an S/MIME certificate for my email from an authority that provides them. There's one Italian company that will do this for any email for free.
  2. Converting the certificate to some other format
  3. Importing the certificate into Thunderbird's (or maybe it was Firefox's) certificate store (and, as a side quest, setting up Thunderbird to sign email with that certificate)
  4. Telling Okular to use the Thunderbird/Firefox certificate store as the place to find certificates

I can't remember if there was an easy way to do this with PGP keys.
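The conversion in step 2 was probably an openssl pkcs12 round-trip. A sketch with a throwaway self-signed cert standing in for the real S/MIME cert (all filenames hypothetical):

```shell
# Stand-in cert + key (the real pair comes from the certificate authority):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=me@example.com" -keyout mykey.pem -out mycert.pem

# Bundle cert + key into PKCS#12, the format Thunderbird/Firefox import:
openssl pkcs12 -export -in mycert.pem -inkey mykey.pem \
    -out mycert.p12 -passout pass:changeit
```

Going the other way (CA gives you a .p12, you want PEM) is `openssl pkcs12 -in mycert.p12 -out mycert.pem -nodes`.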

[–] maxwellfire@lemmy.world 3 points 3 weeks ago

From looking at the GitHub repo, I think you don't need or want to host this publicly. It doesn't automatically fetch and store your information. It's more a tool for visualizing and cross-referencing your takeout/exported data from a variety of tech platforms. It's just developed as a web app for ease of UI, cross-platform support, and local hosting.

[–] maxwellfire@lemmy.world 2 points 1 month ago (1 children)

Borg's append-only mode seems like the way to do this easily.
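A sketch of the usual setup, assuming backups go over SSH: enforce append-only on the server side with a forced command, so a compromised client can't prune or delete old archives (key, paths, and repo name are placeholders):

```
# ~/.ssh/authorized_keys on the backup server, one line:
command="borg serve --append-only --restrict-to-repository /backups/laptop",restrict ssh-ed25519 AAAA... laptop-backup
```

Pruning then has to happen in a separate trusted session on the server itself.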

[–] maxwellfire@lemmy.world 11 points 1 month ago

I feel like this really depends on what hardware you have access to. What are you interested in doing? How long are you willing to wait for it to generate, and how good do you want it to be?

You can pull off like 0.5 words per second with one of the Mistral models on a CPU with 32GB of RAM. The Stable Diffusion image models work okay with like 8-16GB of VRAM.

[–] maxwellfire@lemmy.world 7 points 1 month ago

I'd be surprised if it was significantly less. A comparable 70-billion-parameter model from Llama requires about 120GB to store. Supposedly the largest current ChatGPT model goes up to 170 billion parameters, which would take a couple hundred GB to store. There are ways to trade off some accuracy in order to save a bunch of space, but you're not going to get it under tens of GB.

These models really are going through that many GB of parameters once for every word of output. GPUs and tensor processors are crazy fast. For comparison, think about how much data a GPU generates to drive a 4k60 display: it's like 1GB per second. And the memory bandwidth recommended to generate that image is like 400GB per second. Crazy fast.
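The back-of-envelope math behind this (rough numbers, fp16 weights assumed, and ignoring that a 140GB model won't fit on one card):

```shell
awk 'BEGIN {
  size_gb = 70e9 * 2 / 1e9   # 70B params at 2 bytes each -> ~140 GB of weights
  bw_gbps = 400              # ballpark high-end GPU memory bandwidth, GB/s
  # every token has to stream the whole model through memory once:
  printf "model: %d GB, ~%.1f tokens/s\n", size_gb, bw_gbps / size_gb
}'
```

So memory bandwidth, not compute, is usually what caps tokens per second.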

[–] maxwellfire@lemmy.world 18 points 1 month ago (5 children)

ChatGPT is also probably around 50-100GB at most.

[–] maxwellfire@lemmy.world 3 points 1 month ago

Second this router! It had the fastest CPU and best antennas vs. price when I last looked. I run ZeroTier as a VPN on it and it works great. Plenty of RAM and flash for packages too.

[–] maxwellfire@lemmy.world 1 points 2 months ago

I think pacreport --unowned-files might be able to help with that too. It shows you files that aren't part of any installed package. It probably only covers system files though, nothing in /home.
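For reference, pacreport comes from the pacutils package; a couple of related cleanup checks (illustrative, run on an Arch system):

```
pacreport --unowned-files   # files on disk that no installed package owns
pacman -Qdt                 # packages installed as deps that nothing requires anymore
```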


We were in upstate NY, and got extremely lucky with a hole in the clouds right around the sun at totality.

The red at the bottom was unexpected and very cool to see. It's a solar prominence
