I'm building an unRAID server with 3× 14TB HDDs as my main array and a 2TB NVMe drive as my cache pool; all four are set to ZFS. Are you saying I should swap my NVMe to BTRFS to reduce wear and tear?
ZFS will do the same thing as BTRFS with regard to wear and tear, so swapping to BTRFS is almost certainly not the answer in your case. Since you're using it as a cache pool I'm not quite sure what the performance implications of these features are, but the two main things related to wear and tear are:
Transparent compression: This basically means that when you write a file to the drive, ZFS will first compress it using your CPU, then write the compressed version to the drive instead. The compressed file is smaller than the original, so it takes up less space on the drive. Because it takes up less space, fewer bits get written, which in turn limits the wear and tear. When you read the file again, it gets decompressed back to its normal state, which also limits the number of bits being read from the drive. The standard compression algorithm for this task is usually ZSTD, which compresses really well while still being very fast. One of the most interesting parts about ZSTD is that no matter how much effort you spend compressing, decompression speed stays the same. I tend to refer to this benchmark as an approximation for ZSTD effort levels with regard to transfer speeds - the benchmark was done on BTRFS, but the results translate when enabling this feature on ZFS. Personally I usually run ZSTD level 1 as a default, though level 2 is also enticing. If you're not picking 1 or 2, you probably have a strong use case for going somewhere closer to ~7. This is probably not enabled by default, so just check whether it's enabled for you and turn it on if you want (there's a quick command-line sketch for checking/enabling it below). Keep in mind that it will decrease transfer speed if the drive itself is not the bottleneck - HDDs love this feature because compressing/decompressing with your CPU is much faster than reading/writing to spinning rust. Also keep in mind that ZSTD is not magic, and it's not going to do much for already-compressed files - photos, music, video, archives, etc.
Copy on write: Among other things, this mostly means that when you copy a file to the same drive, the copy is instant - new logical file markers just get created that point at the existing physical blocks. We don't need to read the full file, and we don't need to re-write the full file. In your use case of a cache pool I don't think this will affect much, since any duplicate hits are probably just getting redirected to the previous copy anyway? This should be the default behavior of ZFS tmk, so I don't think you need to check that it's enabled or anything.
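If you want to poke at this from the command line rather than the unRAID GUI, here's a minimal sketch. The pool/dataset name `tank/cache` is made up - substitute whatever your pool is actually called:

```
# Check whether compression is currently enabled on the dataset
zfs get compression tank/cache

# Enable zstd at level 1 (zstd-2 ... zstd-19 are the higher effort levels)
zfs set compression=zstd-1 tank/cache

# Later, see how much space compression is actually saving
zfs get compressratio tank/cache
```

Note that changing the compression property only affects blocks written after the change; existing data stays as it was. Copy-on-write needs no toggle at all - it's simply how ZFS writes everything.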
i like level 3 because you enable it by putting :3 in the mount options :3
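(For anyone who hasn't seen it: that's the BTRFS way of doing it - the compression level rides along in the mount option. A hypothetical /etc/fstab line, with a made-up device and mountpoint:)

```
# BTRFS mounted with zstd at level 3 - swap in your own device and mountpoint
/dev/nvme0n1p1  /mnt/cache  btrfs  compress=zstd:3,noatime  0  0
```

(On ZFS there's no equivalent mount option - compression is a dataset property, as sketched above.)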
Apologies, this is the only correct answer from now on. Thank you for helping me to see the light.
Why not lz4?
LZ4 is faster, but ZSTD compresses better. They're both viable on ZFS, and which one fits depends on your use case. Notably, BTRFS does not offer LZ4, and on BTRFS I would really only recommend ZSTD, as LZO is just not worth it.
Edit: Found some benchmarks you can look through: https://docs.google.com/spreadsheets/d/1TvCAIDzFsjuLuea7124q-1UtMd0C9amTgnXm2yPtiUQ/edit#gid=0
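If you'd rather measure on your own hardware than trust someone else's spreadsheet, both CLI tools have a built-in benchmark mode. Rough sketch below - the sample file path is made up, and ideally you'd point it at a file that resembles your real workload:

```
# zstd: benchmark compression levels 1 through 7 on a sample file
zstd -b1 -e7 /mnt/cache/sample.bin

# lz4: benchmark its default level on the same file for comparison
lz4 -b1 /mnt/cache/sample.bin
```

These numbers measure the compressor in isolation (no filesystem overhead), so treat them as an upper bound rather than what you'll see end to end.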
Amazing, thank you. It's awesome that zstd has caught up in encode/decode speed - there was a solid 5-10 years where nothing came close to lz4 on both fronts.