Decipher0771

joined 1 year ago
[–] Decipher0771@lemmy.ca 18 points 1 year ago (2 children)

I love Pis, but I hate the micro HDMI connectors

[–] Decipher0771@lemmy.ca 3 points 1 year ago

If you’re forwarding between haproxy instances, use proxy-protocol instead of forwardfor header forwarding.
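To sketch what that looks like (instance names and addresses here are placeholders, not from the original comment):

```
# First-tier haproxy: speak PROXY protocol to the next haproxy hop
backend tier2_haproxy
    server hap2 10.0.0.2:443 send-proxy-v2

# Second-tier haproxy: accept PROXY protocol on its bind line
frontend fe_in
    bind :443 accept-proxy
    default_backend webservers
```

The PROXY protocol carries the original client address at the connection level, so it survives hops that never parse HTTP, unlike an X-Forwarded-For header added by `option forwardfor`.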

[–] Decipher0771@lemmy.ca 3 points 1 year ago

To waste everyone’s time and make everyone look in the wrong direction while they do something terrible. To give their people something to point to when looking for examples of “nothingburgers”.

[–] Decipher0771@lemmy.ca 2 points 1 year ago

Indeed. It really was the end of an era when they went to shit.

[–] Decipher0771@lemmy.ca 2 points 1 year ago* (last edited 1 year ago) (3 children)

DOS has always(?) had chkdsk, but NDD had a knack for recovering data from minor corruption far better than chkdsk could. Scandisk (the DOS 6 successor to chkdsk) was just a prettier face; NDD was still better.

Between NDD, SpinRite, and the undelete tools whose name I can’t remember, I saved a lot of homework assignments.

[–] Decipher0771@lemmy.ca 3 points 1 year ago (6 children)

Different tools. Speed Disk was a disk defragmenter; DriveSpace was whole-disk compression. The Norton tool you’d have used a lot alongside DriveSpace was Norton Disk Doctor.

[–] Decipher0771@lemmy.ca 4 points 1 year ago

Stacker came first. Then MS ripped off Stacker and made DoubleSpace, got sued, changed the compression algorithm, and renamed it DriveSpace.

You couldn’t use DoubleSpace or Stacker with Windows 3.x; there was no 32-bit driver, so disk access was horrendously slow. Windows 95 was needed to use DriveSpace with full driver support, but it was still slow, and by that time hard drives had somewhat caught up with the growing size of the OS and applications, so live disk compression lost popularity, particularly the way DriveSpace did it. Storing your entire drive as a single giant compressed volume file sitting on FAT was a terrible idea and prone to corruption.

When NTFS came around and introduced transparent per-file compression, that pretty much ended DriveSpace-style compression. All modern filesystems now include some kind of compression: NTFS, APFS, Btrfs, ZFS. Even HFS+ had some ability to compress, similar to APFS, but it wasn’t very well known.
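For the curious, this is roughly how you’d turn on transparent compression on a few of these today (pool, device, and path names are made-up examples):

```
# ZFS: per-dataset compression
zfs set compression=lz4 tank/data

# Btrfs: compression as a mount option
mount -o compress=zstd /dev/sdb1 /mnt/data

# NTFS on Windows: compress an existing directory tree
compact /c /s:C:\Data
```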

[–] Decipher0771@lemmy.ca 3 points 1 year ago

I’m sure you’ve heard plenty through the forums, but TrueNAS virtualized is perfectly fine so long as you’re passing through an HBA directly. It doesn’t affect reliability any, but it doesn’t add any features either.

“Can I virtualize TrueNAS?” is probably the second most popular question after “do I really need ECC RAM?”

[–] Decipher0771@lemmy.ca 3 points 1 year ago (1 children)

The movie was disappointing garbage.

The book was alright.

[–] Decipher0771@lemmy.ca 46 points 1 year ago

Sure. Whether they’re effective and actually able to execute is another question.

A simple way might be to put an actual executable in the file instead, so when a user double-clicks to open it, it runs instead. Or there’s stuff you can hide in metadata that could exploit particular players, or even some OS preview systems, and get execution that way.

But… really pretty unlikely. Definitely possible, but you’d have to go through a lot of effort to get hit by something.

[–] Decipher0771@lemmy.ca 1 points 1 year ago

I remember travelling and going to Internet cafes in each city to upload pictures back to my server at home. They were like a modern phone booth for a short period.

[–] Decipher0771@lemmy.ca 6 points 1 year ago* (last edited 1 year ago) (1 children)

Depends on your system. Desktops have different requirements than servers.

On both, at minimum, I'd keep /home and /var/log separate. Those usually see the most writes and are the least controlled, and as long as they're separate partitions they can fill up accidentally and the rest of your system should still remain functional. /tmp and /var/tmp should usually be mounted separately too, for similar reasons.

/boot I usually keep separate because bootloaders don't always understand every weird filesystem you might use elsewhere. It's also the one unencrypted partition you need to boot off of.

On a server, /opt and /srv would usually be separate, often with separate volumes for each directory within those as well, depending on how you want to isolate each application/data store location. You could just use quotas, but mounting separately also lets you specify different flags, e.g. noexec and nosuid for volumes that should only ever contain data.
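As a rough illustration, hypothetical /etc/fstab entries with those kinds of flags (device names and sizes are made up, not from any real system):

```
# /etc/fstab -- example data-only mounts, hardened with noexec/nosuid/nodev
/dev/vg0/srv_app1  /srv/app1  ext4   defaults,noexec,nosuid,nodev         0 2
/dev/vg0/var_log   /var/log   ext4   defaults,noexec,nosuid,nodev         0 2
tmpfs              /tmp       tmpfs  defaults,noexec,nosuid,nodev,size=2G 0 0
```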

/var/lib/docker and other stuff in /var/lib I usually like to keep on separate mounts too, e.g. put /var/lib/mysql or other databases on a separate faster disk, maybe use a different filesystem, and again different mount options. In the distant past, you'd mount /var/spool on a filesystem with more inodes than usual.

Highly secure systems usually require /var/log/audit to be separate, with enough space guaranteed that it never runs out and locks the system out due to an inability to write audit logs.

Bottom line is it's different depending on your requirements, but splitting unnecessarily is a good way to waste space and nothing else. Separate only if you need a different type of device, different mount options, different size guarantees, etc. Don't do it for no reason.
