For my JBOD array, I use ext4 on GPT partitions. Fast, efficient, mature.
For anything else I use ext4 on LVM thin pools.
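If anyone wants to reproduce a setup like that, here's a rough sketch of ext4 on an LVM thin pool (device, volume group name, and sizes are made-up examples, not necessarily what's in use above):

    pvcreate /dev/sdb
    vgcreate vg_data /dev/sdb
    lvcreate --type thin-pool -L 400G -n tpool vg_data       # the thin pool itself
    lvcreate -V 100G --thinpool vg_data/tpool -n vm_disk     # a thin volume; space is only allocated as it's written
    mkfs.ext4 /dev/vg_data/vm_disk

Thin volumes can overcommit the pool, so it's worth keeping an eye on pool usage with lvs.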
If it didn't give you problems, go for it. I've run it for years and never had issues either.
Not Proxmox-specific, but I've been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it's bugged out were due to bad hardware, and having the filesystem shout at me to make me aware of that was fantastic.
The only place I don't use btrfs is my NAS data drives (since I want raidz2, and btrfs raid5 is hella shady), but the NAS rootfs is btrfs.
Meh. I run Proxmox and other boot drives on ext4, data drives on xfs. I don't have any need for btrfs's extra features. Shrinking would be nice (xfs can't shrink), so maybe someday I'll use ext4 for data too.
I started with zfs instead of RAID, but I found I spent way too much time trying to manage RAM and tuning it, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.
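For reference, the "configure once and be done" route usually looks something like this with mdadm (device names are placeholders, and the ext4 on top is just an example, not necessarily what anyone here used):

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]   # destroys existing data on those disks
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf                     # persist the array across reboots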
You can benchmark them if you care about performance. You can find plenty of discussion by googling "ext vs xfs vs btrfs" or whichever ones you're considering. They haven't changed that much in the past few years.
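If you'd rather measure than google, fio is the usual tool; a mixed random-I/O run looks roughly like this (the path, size, and job parameters are placeholders to adjust):

    # 70/30 random read/write at 4k, direct I/O, 60 seconds
    fio --name=randrw --filename=/mnt/test/fio.dat --size=4G \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio \
        --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting

Run the same job on each filesystem you're considering and compare the IOPS and latency lines.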
but I found I spent way too much time trying to manage RAM and tuning it,
I spent none, and it works fine. What was your issue?
I have four 6 TB data drives and 32 GB of RAM. When I set them up with zfs, it claimed quite a few GB of RAM for its cache. I tried allocating part of the other NVMe drive as cache (L2ARC) and tried to reduce RAM usage to reasonable levels, but like I said, I found I was spending a lot of time fiddling instead of just configuring RAID and having it running just fine in much less time.
You can ignore the RAM usage; it's just cache (the ARC). It uses up to half your RAM by default, but if other things need the memory, zfs will release it to them.
That might be what was supposed to happen, but when I started up the VMs I saw memory contention.
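If the ARC really is crowding out VMs, it can be capped; a minimal sketch for Debian/Proxmox, with 8 GiB as a purely example value:

    # runtime cap (takes effect without a reboot)
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

    # persistent cap: set the module option and rebuild the initramfs
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u

    # check the current limit
    grep c_max /proc/spl/kstat/zfs/arcstats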
Proxmox only supports btrfs or ZFS for raid
Or at least that's what I thought
Using it here. Love the flexibility and features.
I run it now because I wanted to try it. I haven't had any issues. A friend recommended it as a stable option.
I've been using btrfs in raid1 for a few years now with no major issues.
It's a bit annoying that a system with a degraded raid doesn't boot up without manual intervention though (you have to mount it degraded yourself, see the sketch below).
Also, not sure why, but I recently broke a system installation on btrfs by taking the drive out and accessing it (and writing to it) from another PC via a USB adapter. But I guess that's not a common scenario.
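On the degraded-boot point, a rough sketch of what the manual intervention looks like (device names are placeholders):

    # two-disk raid1 for both data and metadata
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/data

    # with one disk missing the normal mount fails; mount degraded to get at the data
    mount -o degraded /dev/sdb /mnt/data

    # after swapping in a new disk, rebuild onto it (2 = devid of the missing disk)
    btrfs replace start 2 /dev/sdd /mnt/data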
The btrfs raid subsystem hasn't been fixed and is still buggy, and does weird shit on scrubs. But fill your boots, it's your data.
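Whichever way you lean on that, scrubs and the error counters are the cheap way to see whether btrfs is actually finding (or fixing) anything (mount point is a placeholder):

    btrfs scrub start -B /mnt/data     # -B runs in the foreground and prints a summary at the end
    btrfs scrub status /mnt/data       # progress and totals for a background scrub
    btrfs device stats /mnt/data       # per-device read/write/corruption counters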
Used it in a development environment. I didn't need the snapshot feature, and it didn't have a straightforward swap setup, which led to performance issues because of frequent writes to swap.
Not a big issue but annoyed me a bit.
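For what it's worth, a swapfile on btrfs does work on kernel 5.0+; it just has to be a NOCOW file on a subvolume that never gets snapshotted. A rough sketch (size is an example):

    truncate -s 0 /swapfile
    chattr +C /swapfile                              # disable copy-on-write before writing any data
    dd if=/dev/zero of=/swapfile bs=1M count=8192    # 8 GiB
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile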