this post was submitted on 24 Aug 2024
380 points (98.5% liked)


There's been some Friday night kernel drama on the Linux kernel mailing list... Linus Torvalds has expressed regret over merging the Bcachefs file-system, and an ensuing back-and-forth with the file-system's maintainer followed.

[–] pimeys@lemmy.nauk.io 53 points 2 months ago* (last edited 2 months ago) (5 children)

For me the reason was that I wanted encryption, RAID1, and compression with a mainlined filesystem on my workstation. Btrfs doesn't have encryption, so you need to do it with LUKS on an mdadm RAID and build btrfs on top of that. LUKS on an mdadm RAID is known to be slow, and in general not a great idea.
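
That stack looks roughly like this (a sketch; the device names are stand-ins and everything needs root):

```python
import subprocess

def run(cmd):
    # Run a command and raise if it fails.
    subprocess.run(cmd, check=True)

# 1. mdadm RAID1 out of two partitions (stand-in names).
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sda1", "/dev/sdb1"])

# 2. LUKS on top of the md array.
run(["cryptsetup", "luksFormat", "/dev/md0"])
run(["cryptsetup", "open", "/dev/md0", "cryptroot"])

# 3. btrfs on the decrypted mapping, compression enabled at mount time.
run(["mkfs.btrfs", "/dev/mapper/cryptroot"])
run(["mount", "-o", "compress=zstd", "/dev/mapper/cryptroot", "/mnt"])
```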

ZFS has RAID levels, encryption, and compression, but doesn't have fsck. So you'd better have a UPS on your workstation for power outages: if you don't unmount a ZFS volume cleanly, there's a risk of data loss. ZFS also has a license (CDDL) that's incompatible with the GPL, so you will never get it in the mainline Linux kernel. And if you install the module separately, you can't update to the latest kernel until ZFS supports it.

Bcachefs has all of this. And it's supposed to be faster than ZFS and btrfs. In a few years it could really become the golden Linux filesystem recommended for everybody. I sure hope Kent gets some more help and stops picking fights with Linus before that.

[–] calamityjanitor@lemmy.world 27 points 2 months ago

ZFS doesn't have fsck because it already does the equivalent during imports, reads, and scrubs. Since it's CoW and transaction-based, it can roll back to a good state after power loss. So not only does it automatically check and fix things, it's also less likely to have a problem from power loss in the first place. I've used it on a home NAS for 10 years and survived many power outages without a UPS. Of course things can still go terribly wrong and you can end up with an unrecoverable dataset, and a UPS isn't a bad idea for any computer if you want reliability.
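
For reference, the check-and-repair cycle is just this (a sketch; "tank" is a stand-in pool name):

```python
import subprocess

POOL = "tank"  # stand-in pool name

# Start a scrub: ZFS walks every block, verifies checksums, and
# repairs from a redundant copy where it can.
subprocess.run(["zpool", "scrub", POOL], check=True)

# Progress and any repaired or unrecoverable errors show up here.
status = subprocess.run(["zpool", "status", POOL],
                        capture_output=True, text=True, check=True)
print(status.stdout)
```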

Totally agree about mainline kernel inclusion, just makes everything easier and ZFS will always be a weird add-on in Linux.

[–] zarenki@lemmy.ml 14 points 2 months ago

Btrfs doesn't have encryption, so you need to do it with LUKS on an mdadm RAID and build btrfs on top of that. LUKS on an mdadm RAID is known to be slow, and in general not a great idea.

Why involve mdadm? You can use one btrfs filesystem on a pair of LUKS volumes with btrfs's "raid1" (or dup) profile. Both volumes can decrypt with the same key.
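
Something like this (a sketch; device names and the keyfile path are stand-ins):

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

KEY = "/root/disk.key"  # shared keyfile, stand-in path

# Two independent LUKS volumes that unlock with the same key.
for dev, name in [("/dev/sda1", "crypt0"), ("/dev/sdb1", "crypt1")]:
    run(["cryptsetup", "luksFormat", dev, "--key-file", KEY])
    run(["cryptsetup", "open", dev, name, "--key-file", KEY])

# One btrfs filesystem spanning both mappings, raid1 for data and metadata.
run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1",
     "/dev/mapper/crypt0", "/dev/mapper/crypt1"])
```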

[–] xantoxis@lemmy.world 8 points 2 months ago (1 children)

Bcachefs has all of this. And it's supposed to be faster than ZFS and btrfs. In a few years it could really become the golden Linux filesystem recommended for everybody

ngl, the number of mainline Linux filesystems I've heard this about: ext2, ext3, btrfs, reiserfs, ...

tbh I don't even know why I should care. I understand all the features you mentioned and why they would be good, but I don't have them today, and I'm fine. Any problem extant in the current filesystems is a problem I've already solved, or I wouldn't be using Linux. Maybe someday the filesystem will make new installations 10% better, but rn I don't care.

[–] bastion@feddit.nl 7 points 2 months ago

It's a filesystem that supports all of these features (and in combination):

  • snapshotting
  • error correction
  • per-file or per-directory "transparently compress this"
  • per-file or per-directory "transparently back this up"

If that is meaningless to you, that's fine, but it sure as hell looks good to me. You can just stick with ext3 - it's rock solid.

[–] possiblylinux127@lemmy.zip 1 points 2 months ago* (last edited 2 months ago) (1 children)

ZFS doesn't have a Linux fsck, as it is its own thing. It instead has ZFS scrubbing, which fixes corruption. Just make sure you have at least RAID1, as without a duplicate copy ZFS has no way of fixing corruption and will just scream at you.

If you just need to get data off, you can disable error checking. Just use it at your own risk.

[–] pimeys@lemmy.nauk.io 1 points 2 months ago (1 children)

But scrub is not fsck. It just walks the checksums and corrects data where needed. That's why you need ECC RAM, so the checksums are always correct. If you hit any other issue with the fs, like a power loss while syncing a raidz2, there's a chance of an error that scrub cannot fix. Fsck does many other things to fix a filesystem...

So basically a typical ZFS installation comes with a UPS, and I would avoid using it on my laptop just because it kind of needs ECC RAM and you should always unmount it cleanly.

This is the spot where bcachefs comes into play. It will implement everything we love about ZFS, but also be feasible for mobile devices. And its fsck is pretty good already; it even gets online checks in 6.11.

Don't get me wrong, my NAS has and will have ZFS because it just works and I don't usually need to touch it. The NAS sits next to a UPS...

[–] possiblylinux127@lemmy.zip 1 points 2 months ago (1 children)

I have never had an issue with ZFS as long as there is a redundant copy. Bad RAM might cause an issue, but that's never happened to me. I did have a bad motherboard that corrupted data on write; ZFS threw its hands up, but there wasn't any unfixable corruption.

[–] pimeys@lemmy.nauk.io 1 points 2 months ago

Me neither, but the risk is there and well documented.

The point was, ZFS is not great as your normal laptop/workstation filesystem. It kind of requires a certain setup, can be slow in certain kinds of workflows, expects disks of the same size, and is never available immediately for the latest kernel version. Nowadays you actually can add more disks to a pool, but for a very long time you needed to build a new one. Adding a larger disk to a pool will still not resize it, until all the disks are replaced.
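
Growing a pool by replacement looks roughly like this (a sketch; the pool and disk names are stand-ins):

```python
import subprocess

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

# Let vdevs grow automatically once every member disk is big enough.
zpool("set", "autoexpand=on", "tank")

# Swap each small disk for a bigger one; the extra capacity only
# shows up after the last member has been replaced and resilvered.
for old, new in [("sda", "sdc"), ("sdb", "sdd")]:
    zpool("replace", "tank", old, new)
    # wait for the resilver to finish (watch zpool status) before the next
```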

It shines with steady and stable RAID arrays, which are designed at a certain size and never touched after they are built. I would never use it on my workstation, and this is the point where bcachefs gets interesting.

[–] LemmyHead@lemmy.ml -5 points 2 months ago (2 children)

Encryption and compression don't play well together, though. You should consider that when storing sensitive files. That's why it's recommended to leave compression off in HTTPS: it weakens the encryption strength.

[–] nous@programming.dev 5 points 2 months ago (1 children)

How does that work? Encryption should not care at all about the data that is being encrypted. It's all just bytes at the end of the day; it shouldn't matter whether they are compressed or not.

[–] ThanksForAllTheFish@sh.itjust.works 4 points 2 months ago (1 children)

Disabling compression in HTTPS is advised to prevent specific attacks, but this is not about compression weakening encryption directly. Instead, it’s about preventing scenarios where compression could be exploited to compromise security. The compression attack is used to leak information about the content of the encrypted data, and is specific to HTTP, probably because HTTP has a fixed or guessable structure.

[–] nous@programming.dev 4 points 2 months ago* (last edited 2 months ago)

Looks to be an exploit that's only possible because compression changes the length of the response, and because data can be injected into the request and reflected in the response. So an attacker can guess the secret byte by byte by observing a shorter response from the server.
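
You can see the core of the trick with plain zlib (a toy sketch; the secret and the reflected parameter are made up):

```python
import zlib

SECRET = b"sessionid=s3cr3tvalu3"  # hypothetical secret in the response

def response_length(injected: bytes) -> int:
    # The "server": compresses attacker-controlled input together with
    # a secret, like a page reflecting a query parameter next to a cookie.
    return len(zlib.compress(injected + SECRET))

# A guess matching a prefix of the secret compresses better, because
# DEFLATE turns the repeat into a back-reference instead of literals.
# Repeating this per candidate byte recovers the secret byte by byte.
print(response_length(b"sessionid=s3cr3t"))  # smaller
print(response_length(b"sessionid=qwerty"))  # larger
```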

That seems like something not feasible to do to a storage device or anything that is encrypted at rest as it requires a server actively encrypting data the attacker has given it.

We should be careful of seeing a problem in one very specific place and then trying to apply the same logic to everything broadly.

[–] pimeys@lemmy.nauk.io 4 points 2 months ago (2 children)

It's only in TLS that you have to disable compression, not in HTTP.

https://security.stackexchange.com/questions/19911/crime-how-to-beat-the-beast-successor/19914#19914

Could you explain how a CRIME attack can be done to a disk?

[–] nous@programming.dev 4 points 2 months ago

There is also BREACH, which targets gzip/deflate compression over HTTP as well. But I also don't see how that affects disk encryption.

[–] LemmyHead@lemmy.ml 2 points 2 months ago

I can't explain it, perhaps due to my limited knowledge of the subject. I understood years ago, when I first heard about it, that compression was a weakening factor for encryption. Always good to do your own research in the end 🙃