stardreamer

joined 1 year ago

"Would anyone at the table like to carve the rump?"

Hmm... I need to test this out then. I have 200+ entries across multiple folders, but I'm not seeing much of a slowdown. Then again, most of my hardware is pretty good (except for one or two devices).

[–] stardreamer@lemmy.blahaj.zone 9 points 1 year ago* (last edited 1 year ago) (2 children)

It doesn't matter how many passwords you're storing inside. What matters is the number of key-derivation cycles that have to be performed to unlock the vault. More cycles = more time.

You can have an empty vault and it will still be slow to decrypt with a high KDF iteration count or an expensive algorithm.

You can think of it as an old-fashioned safe with a hand crank. You put in the key and turn the crank. It doesn't matter whether the safe is empty or not: as long as you need to turn the crank 1000 times to open it, it WILL be slower than a safe that only needs 10 turns. Especially so if you have a 10-year-old (less powerful) device turning the crank.
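If you want to see this for yourself, here's a quick Python sketch using hashlib's PBKDF2 (Bitwarden's default KDF is PBKDF2-SHA256 salted with your account email; the password, salt, and counts below are just stand-ins):

```python
import hashlib
import time

password = b"correct horse battery staple"  # stand-in master password
salt = b"user@example.com"                  # stand-in salt

# Note the vault never appears here: all the unlock time is spent
# deriving the key, so an empty vault is exactly as slow as a full one.
for iterations in (10_000, 600_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    print(f"{iterations:>7} iterations: {time.perf_counter() - start:.2f}s")
```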

[–] stardreamer@lemmy.blahaj.zone 5 points 1 year ago* (last edited 1 year ago) (5 children)

How many KDF iterations did you set your vault to? I have mine at 600,000, so it definitely takes a moment (~3 sec) to decrypt on older devices.

The decryption being compute-heavy is by design. You only need to decrypt once to unlock your vault, but someone brute-forcing it would need to decrypt a billion+ times. Increasing the compute needed for decryption makes it more expensive to brute-force your master password.

In fact, LastPass made the mistake of setting their default iteration count to 1,000 before they got breached, and they got a ton of flak for it.
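Back-of-the-envelope, with made-up (but plausible) numbers just to show the linear scaling; this is not a benchmark of any real cracking rig:

```python
SHA256_PER_SEC = 10e9  # hypothetical attacker throughput, hashes/sec
GUESSES = 1e9          # candidate master passwords to try

# Each extra iteration multiplies the attacker's total work,
# while you only pay for it once per unlock.
for iterations in (1_000, 600_000):
    seconds = GUESSES * iterations / SHA256_PER_SEC
    print(f"{iterations:>7} iterations: ~{seconds:,.0f}s to test {GUESSES:.0e} passwords")
```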

[–] stardreamer@lemmy.blahaj.zone 2 points 1 year ago* (last edited 1 year ago)

This kinda sounds like a TCP retransmission issue. Do you have a server available somewhere? Can you run iperf3 in both directions and check the retransmission rate?

You may also want to run the test with both CUBIC and BBR congestion control, since that can help distinguish shallow buffers from corrupted packets.
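Something like this sketch (it wraps iperf3 from Python; assumes iperf3 on both ends, a server already running via `iperf3 -s`, a Linux client with both congestion control modules available, and a placeholder server address):

```python
import json
import subprocess

SERVER = "192.0.2.10"  # placeholder: your iperf3 server

for cc in ("cubic", "bbr"):
    for reverse in (False, True):
        cmd = ["iperf3", "-c", SERVER, "-t", "10", "-J", "-C", cc]
        if reverse:
            cmd.append("-R")  # reverse mode: server sends to client
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        sent = json.loads(out.stdout)["end"]["sum_sent"]
        print(f"{cc:5s} {'down' if reverse else 'up':4s}: "
              f"{sent['bits_per_second'] / 1e6:8.1f} Mbit/s, "
              f"{sent.get('retransmits', 'n/a')} retransmits")
```

If CUBIC's throughput collapses while BBR holds up, loss (shallow buffers or corruption) is the likely culprit rather than raw bandwidth.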

Udon straight outta the pot while I try to slurp it down?

I'm a slow eater, okay?

115C is a 600W GPU's throttle temp. I would love to see an iPhone pull off 600W with a battery.

Is there a good/easy way to defrag a btrfs filesystem after 3-4 years of continuous use? At this point I can't tell if my SUSE install was slow all those years ago or if it's just been getting worse over time.

[–] stardreamer@lemmy.blahaj.zone 7 points 1 year ago (1 children)

MultiMC devs refuse to let anyone else compile or provide packaging scripts for their application. Their own Linux package installs into /home and can't be cleanly uninstalled. They also deliberately broke the compile process by removing key files from their git repo. When confronted about it, they threatened to sue the AUR maintainer for trademark infringement on their Discord instead.

[–] stardreamer@lemmy.blahaj.zone 1 points 1 year ago (1 children)

My only question is why are individual authors doing the suing and not the publishers?

For every mega-author like GRRM or Sanderson there are tens of thousands of authors who cannot afford to do anything about their works being stolen by LLMs. With how big a cut publishing takes, it would make sense for publishers to negotiate on behalf of all their authors. Instead, the big four in the US seem to be chasing after non-issues like limiting library and Internet Archive access, while leaving the real issues with AI hung out to dry...

[–] stardreamer@lemmy.blahaj.zone 7 points 1 year ago* (last edited 1 year ago) (1 children)

Isn't the whole point of these things the "bloated" (CI/CD, issue tracker, merge requests, mirroring, etc.) part? Otherwise we'd all be using bare git repos over ssh (which works great, btw!)

It's like complaining about IDE bloat instead of just using a text editor. Or complaining that there are too many knives in a knife set instead of just buying the chef's knife.

That's just a ThinkPad. If they keep making them smaller, eventually it will fit in your pocket.
