Hello there Selfhosted community!
This is an announcement that I've finished a project I've been working on: a script for installing Ubuntu 24.04 on a ZFS RAID 10. I'd like to describe why I chose to develop it and how I'd like for other people to have access to it as well. Let's start with the hardware.
Now, I am using an old host. Mine in particular was originally a BCDR device built around a ZFS raidz implementation. Since it was designed for ZFS, it doesn't even have a RAID card, only an HBA, so ZFS is a good way to get redundancy. Even though this was a backup appliance, it did not have root on ZFS. Instead, it had a separate hard drive for the operating system and three individual disks for the zpool. That was not my goal.
So I did a little research and testing. I looked at two particular guides (Debian/Ubuntu). I performed those steps dozens of times because I kept messing up the little things. To eliminate the human error (that's me), I decided to just go ahead and script the whole thing.
The GitHub repository I linked contains all the code needed to set up a generic ubuntu-server host using a ZFS RAID 10.
Instructions for starting the script are easy. Boot up a live CD (https://ubuntu.com/download/server), hit CTRL+ALT+F2 to drop into a shell, and run the following command:
bash <(wget -qO- https://raw.githubusercontent.com/Reddimes/ubuntu-zfsraid10/refs/heads/main/tools/install.sh)
This command clones the repository, changes directory into it, and runs the entry point (sudo ./init.sh). Hopefully, this should be easy to customize to meet your needs.
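If you'd rather not pipe a script straight into bash, the one-liner is roughly equivalent to doing this by hand (a sketch of what it's described as doing, not the script verbatim):

git clone https://github.com/Reddimes/ubuntu-zfsraid10.git
cd ubuntu-zfsraid10
sudo ./init.sh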
More engineering details are on GitHub.
Compared to Z2? Not according to the link they just provided.
You're right, I must've still been half asleep or something, because I swear when I read that earlier the read speeds were flipped (so the RAID 10 read speed as belonging to RAID-Z2 and vice versa)... my bad
👍
Well... I have to admit my own mistake as well. I assumed it would have faster read and write speeds based on my RAID knowledge and didn't actually look it up until I was questioned about it. So I appreciate being kept honest.
While we have agreed on the read/write benefits of a ZFS RAID 10, there are a few disadvantages to a setup such as this. For one, I do not have the same level of redundancy. A raidz2 can lose any two full hard drives. A ZFS RAID 10 is only guaranteed to survive one failure, and at most two: as long as both disks of the same mirror aren't gone, I can lose two. So overall, this setup is less redundant than raidz2.
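For anyone unfamiliar with the layout, a ZFS RAID 10 is just a pool striped across mirror vdevs. A bare-bones sketch of creating one by hand looks something like this (the by-id names are placeholders, and this leaves out all the options a real root-on-ZFS setup needs):

zpool create tank \
    mirror /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
    mirror /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3

Losing one disk from each mirror is survivable; losing both disks of the same mirror kills the pool, which is why the guarantee is only one drive.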
Another drawback is that, for some reason, Ubuntu 24.04 does not recognize the SCSI drives except on the live CD. Perhaps someone can help me with this so I can provide everyone with a better solution. The same disks that were visible on the live CD are not visible once the system is installed. It still technically works, but
zpool status rpool
will show that it is using sdb3 instead of the SCSI hard drives. Technically this is fine; my hard drives are SATA anyway, so I just switched to the SATA drives. But if I could ensure that others don't face this issue, it would make the ZFS installation more reliable for them.

Here is the exact issue I'm having. I've included screenshots of the command I use to list hard drives on the live CD versus the same command run on the installed Ubuntu 24.04. I don't know what is causing this, so perhaps this is a time where someone else can assist. The benefit of using /dev/disk/by-id/ is that you can be more specific about the device, so you can be sure the pool is attached to the proper disk no matter what state your environment is in. That is something you need to do to have a stable ZFS install, but if I can't do it with SCSI disks, then that advantage is limited.
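For reference, the comparison in the screenshots is roughly this, run on both the live CD and the installed system:

ls -l /dev/disk/by-id/
zpool status rpool

And if anyone wants to experiment, one workaround I've seen suggested (a sketch only, not something the script does) is to re-import the pool by id from the live environment, since the root pool can't be exported while it's in use:

sudo zpool export rpool
sudo zpool import -d /dev/disk/by-id -R /mnt rpool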
Windows Terminal for the win, btw.
Live CD:
Ubuntu 24.04 Installed: