this post was submitted on 15 Jul 2024
66 points (98.5% liked)

Selfhosted


Goal:

  • 16TB mirrored on 2 drives (raid 1)
  • Hardware raid?
  • Immich, Jellyfin and Nextcloud. (All docker)
  • N100, 8+ GB RAM
  • 500GB SSD boot drive
  • 4 HDD bays, start with using 2

Questions:

  • Which OS?
    • My thought was to use hardware RAID: just set that up for the 2 HDDs, then boot off an SSD with Debian (very familiar; I use it for my current server, which has 30+ Docker containers). Basically I like and am good at Docker, so I'd like to stick to Debian + Docker. But if hardware RAID isn't the best option for HDDs nowadays, I'll learn the better thing.
  • Which drives? Renewed or refurbished drives are half the cost, so should I buy extra used ones and just be ready to swap when they fail?
  • Which motherboard?
  • Which case?
top 41 comments
[–] monkeyman512@lemmy.world 29 points 4 months ago (4 children)

You don't want hardware raid. Some options you can research:

  • Mdadm - Linux software raid
  • ZFS - Combo raid and filesystem
  • Btrfs - A filesystem that can also do raid things

Some OS options to consider:

  • Debian - good if you want to learn to do everything yourself
  • TrueNAS Scale - Commercial NAS OS. A bit of work to get started, but very stable once going.
  • Unraid - Enthusiast-focused NAS OS. Not as stable as TrueNAS, but easier to get started, with a lot of community support.

There are probably other software/OSes to consider, but those are the ones I have experience with. I personally use ZFS on TrueNAS, with a lot of help from this YouTube channel: https://youtube.com/@lawrencesystems?si=O1Z4BuEjogjdsslF
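
For a taste of the ZFS route, a basic two-disk mirror is only a couple of commands. A minimal sketch, assuming the disks show up as /dev/sda and /dev/sdb (use your real devices, ideally /dev/disk/by-id/ paths):

    # create a mirrored pool named "tank" from two whole disks
    zpool create tank mirror /dev/sda /dev/sdb
    # check pool health and layout
    zpool status tank
    # filesystems ("datasets") are then cheap to create
    zfs create tank/media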

[–] tburkhol@lemmy.world 18 points 4 months ago (1 children)

Ditto on avoiding hardware RAID. Adding a hardware controller just inserts a potentially catastrophic point of failure. With software RAID and RAID-likes, you can probably recover/rebuild, and it's not like the overhead is the big burden it was back in the 90s.

[–] JustARegularNerd@aussie.zone 3 points 4 months ago (1 children)

I got a server from e-waste because its RAID card had failed, and since it had SAS drives they couldn't even pull the data from it with anything else. It was the domain controller and NAS, so as you can imagine, very disruptive to the business. As they should, they had an offsite backup of the system, so we just restored onto a gaming PC as a temporary solution until we moved them to M365 instead.

I just use software RAID on it now and so far so good for about 180 days.

[–] lud@lemm.ee 1 points 4 months ago (1 children)

It was the domain controller

Bruh are you telling me they only had a single DC?

You need a minimum of two.

Also, putting general storage on a DC is a really bad idea. The VM or machine running AD DS should run exclusively AD DS (and required services like DNS).

[–] JustARegularNerd@aussie.zone 1 points 4 months ago* (last edited 4 months ago)

This all happened two weeks before I started, so I don't know the exact details. If it was set up the way I think it was, the DC was in its own VM and a separate VM would've been used as a NAS. Of course, being hardware RAID, the whole host server went down when that card failed.

They probably didn't have a second DC set up due to the DEFCON 1 levels of "We can't work!"

They were ultimately planning on going to the cloud anyway, from what I heard, and that catastrophe just accelerated the plan.

[–] unrushed233@lemmings.world 7 points 4 months ago

Just want to mention that TrueNAS is FOSS and unRAID is not. And I wouldn't necessarily say that unRAID is much easier.

[–] ShortN0te@lemmy.ml 3 points 4 months ago (1 children)
  • TrueNAS Scale - Commercial NAS OS. A bit of work to get started, but very stable once going.
  • Unraid - Enthusiast-focused NAS OS. Not as stable as TrueNAS, but easier to get started, with a lot of community support.

Since OP wants to use Docker, I would not recommend either. TrueNAS Scale does not support it usefully, and the implementation in Unraid is also weird. Also, the main benefit of Unraid is the mixing of drives, and OP wants RAID.

[–] Krill@feddit.uk 1 points 4 months ago (1 children)

TrueNAS Scale will get Docker in the next release in August, along with the ability to expand vdevs.

[–] ShortN0te@lemmy.ml 1 points 4 months ago (1 children)

I am aware of vdev expansion since I am following it closely, but I hadn't heard about Docker support. Thanks for the news, I will read into it. It would actually be a game changer for a project I am planning.

[–] Krill@feddit.uk 1 points 4 months ago* (last edited 4 months ago)

https://forums.truenas.com/t/the-future-of-electric-eel-and-apps/5409

I'm not the best person to explain the how or why, but they are looking at Q3 for beta and Q4 for the main release.

I'm running Immich, Nextcloud and Jellyfin on TNS and it's fine. Nextcloud takes a bit of work though.

[–] TheHolm@aussie.zone 1 points 4 months ago

I would add LVM to the list of software RAID options, and remove Btrfs as poorly engineered.

[–] OneCardboardBox@lemmy.sdf.org 9 points 4 months ago (2 children)

I'd recommend Btrfs in RAID1 over hardware or mdadm RAID. You get filesystem snapshotting as a feature, which is nice to have before running a system update.
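
A minimal sketch of that setup (device names and mountpoint are hypothetical):

    # make a two-device Btrfs filesystem with RAID1 for both data and metadata
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/pool
    # take a read-only snapshot before an update (assuming /mnt/pool/data is a subvolume)
    btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-pre-update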

For disk drives, I'd recommend new if you can afford them. You should also look into shucking: buying an external drive and removing (shucking) the HDD inside. You can get enterprise-grade disks for cheaper than buying the same disk on its own. The website https://shucks.top tracks the prices of various disk drives, letting you know when there are good deals.

[–] Dust0741@lemmy.world 2 points 4 months ago (1 children)

How does replacing an HDD work on Btrfs? Like, if one fails and I'm using Debian, how do I rebuild the RAID 1?

Or should I use an actual RAID OS?

[–] OneCardboardBox@lemmy.sdf.org 3 points 4 months ago

Assuming that the disk is of identical (or greater) capacity to the one being replaced, you can run btrfs replace.

https://wiki.tnonline.net/w/Btrfs/Replacing_a_disk#Replacing_with_equal_sized_or_a_larger_disk
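
In practice it's something like this, with hypothetical device names:

    # replace failed /dev/sdb with new /dev/sdd in the filesystem mounted at /mnt/pool
    btrfs replace start /dev/sdb /dev/sdd /mnt/pool
    # watch the rebuild progress
    btrfs replace status /mnt/pool
    # if the old disk is dead/missing, use its devid (from 'btrfs filesystem show') instead of the path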

[–] tired_n_bored@lemmy.world 1 points 4 months ago

I second this. I use Btrfs over ZFS for its reduced footprint, and it has always been very reliable. With a couple of commands I replaced a disk, and a monthly btrfs scrub makes me sleep peacefully (relatively).
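
For the monthly scrub, a simple cron entry is enough; a sketch, assuming the filesystem is mounted at /mnt/pool:

    # /etc/cron.d/btrfs-scrub - verify checksums on the 1st of each month at 03:00
    # -B keeps it in the foreground so cron sees the exit status
    0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/pool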

[–] ShortN0te@lemmy.ml 7 points 4 months ago (1 children)

Then just go with Debian + Docker. As RAID software I would recommend ZFS: it's a filesystem that does both, and it also checks integrity at the file level (and lots more).

I personally would only buy new drives. No matter the brand, just get the best TB/€ you can.

For the motherboard, basically every chipset gives you 4 SATA ports. You could consider picking one that supports unbuffered ECC memory, but that is not a must. If you want to hardware-transcode in Jellyfin, then Intel is probably your best bet, since the iGPU with Quick Sync is pretty good and well supported; otherwise I would go AMD.

For 4 drives you can use most ATX cases; no specific recommendations here.
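
If you go the Debian + ZFS route, installation is roughly this (needs the contrib repo enabled; package names per the Debian wiki):

    # enable "contrib" in /etc/apt/sources.list first
    apt update
    # DKMS builds the ZFS module against your kernel headers
    apt install linux-headers-amd64 zfs-dkms zfsutils-linux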

[–] AbidanYre@lemmy.world 6 points 4 months ago

Just make sure the drives are CMR, not SMR.

[–] eleitl@lemm.ee 7 points 4 months ago (1 children)

No hardware RAID. Use ZFS if you can. Mirror the boot SSD. I would use a stripe over mirrors with 4 HDDs; two drives are not enough redundancy. Use enterprise or nearline drives if you can. Debian is great, and you can install Proxmox on top of it, but from the sound of it plain Debian would work for you.
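
A sketch of that 4-disk stripe-over-mirrors layout, with hypothetical device IDs:

    # two mirror vdevs striped together = RAID10-style pool
    zpool create tank \
        mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4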

[–] Dust0741@lemmy.world 2 points 4 months ago (1 children)

For Debian, how does the drive restore/rebuild process work?

[–] eleitl@lemm.ee 4 points 4 months ago (1 children)

https://wiki.debian.org/ZFS has nice docs. I would practice in a nonproduction environment first if you're unfamiliar with ZFS.
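
One cheap way to practice: ZFS will happily build pools out of plain files, so you can rehearse a disk failure and replacement without spare hardware. A sketch:

    # create three sparse 1 GiB files to act as fake disks
    truncate -s 1G /tmp/disk1 /tmp/disk2 /tmp/disk3
    # build a mirror from two of them
    zpool create testpool mirror /tmp/disk1 /tmp/disk2
    # simulate a failure, swap in the spare, and watch it resilver
    zpool offline testpool /tmp/disk1
    zpool replace testpool /tmp/disk1 /tmp/disk3
    zpool status testpool
    # clean up when done
    zpool destroy testpool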

[–] Dust0741@lemmy.world 3 points 4 months ago

Awesome thanks. My current system would become my test environment.

[–] monkeyman512@lemmy.world 5 points 4 months ago

For HDDs, the best way is to think of them like shoes or tires: they will eventually fail, but they may also fail prematurely. I always recommend having a spare drive ready.

[–] Decronym@lemmy.decronym.xyz 4 points 4 months ago* (last edited 4 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

DNS - Domain Name Service/System
LVM - (Linux) Logical Volume Manager for filesystem mapping
LXC - Linux Containers
NAS - Network-Attached Storage
NUC - Next Unit of Computing brand of Intel small computers
PCIe - Peripheral Component Interconnect Express
PSU - Power Supply Unit
Plex - Brand of media server package
RAID - Redundant Array of Independent Disks for mass storage
SATA - Serial AT Attachment interface for mass storage
SSD - Solid State Drive mass storage
VPN - Virtual Private Network
VPS - Virtual Private Server (opposed to shared hosting)
ZFS - Solaris/Linux filesystem focusing on data integrity

[Thread #872 for this sub, first seen 15th Jul 2024, 16:35]

[–] ipkpjersi@lemmy.ml 4 points 4 months ago

Don't use hardware RAID; use a nice software RAID like ZFS. 2 HDDs and an OS SSD would be a great use case for ZFS.

[–] JustEnoughDucks@feddit.nl 4 points 4 months ago (2 children)

If you want to build it yourself, you have to decide on size.

Are you trying to keep it as small as possible?

Do you want a dedicated GPU for multiple Jellyfin streams? (Definitely get the Intel Arc A380: cheap and an encoding beast.)

If you don't want to start a rack and don't want to go with a prebuilt NUC, there are 2 PC cases I would recommend.

Node 304 and Node 804.

Node 304 is mini-ITX (1 PCIe slot, 1 M.2 slot for boot OS, 4 HDDs, SFX-L PSU, and great cooling)

Node 804 is micro-ATX (2 PCIe slots, 2 M.2 slots, 8-10 HDDs, ATX PSU, and 2 chambers for the HDDs to stay cool)

Why do you want an N100? Is electricity very expensive where you are, so that idle power is a big factor? Desktop CPUs are more powerful, can idle down to 10W or so without a GPU, and can take way more RAM.

Tl;dr: go with a prebuilt NUC, or go with a desktop CPU for a custom build.

[–] thermal_shock@lemmy.world 2 points 4 months ago

I just rebuilt my TrueNAS in a Node 804 and LOVE it. So much hard drive space. I wanted to get the 304 for my personal backup server, but got a Thermaltake Core V1 instead. Looks uglier, but works well too.

[–] BennyInc@feddit.org 1 points 4 months ago

Any recommendations for rack setups? I have a (small) rack I could use.

[–] Corgana@startrek.website 3 points 4 months ago

Nobody's mentioned Homarr or CasaOS, but if you want an out-of-the-box "just works" yet still open-source experience, they're the best bet.

[–] cosmicrose@lemmy.world 3 points 4 months ago (2 children)

I’ve had a great experience with the TrueNAS Mini-X system I bought. ZFS has great raid options, and TrueNAS makes managing a system really easy. You can get a box built & configured by them, with 16 GB ECC RAM and five (empty) drive bays, for about $1150 at the most affordable end. https://www.truenas.com/truenas-mini/

One thing to be careful about: you can’t add drives to a ZFS vdev once it’s been created, but you can add new vdevs to an existing pool. So, you can start with two mirrored drives, then add another two mirrored drives to that pool later.

(A vdev is a sub-unit of a ZFS storage pool; you choose a RAID topology for each vdev and then compose those vdevs into a storage pool.)
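
In command form, that later expansion looks something like this (pool and device names hypothetical):

    # add a second two-disk mirror vdev to the existing pool "tank";
    # data is striped across both mirrors from then on
    zpool add tank mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4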

[–] Saik0Shinigami@lemmy.saik0.com 3 points 4 months ago

If it's a raidz, you can.

[–] ShortN0te@lemmy.ml 3 points 4 months ago

ZFS vdev expansion is a thing that will probably be added in the next ZFS release.

Of course, it is not released yet, so I would not recommend designing a system around it for the near future.

[–] slazer2au@lemmy.world 2 points 4 months ago

Does it need to be 4 bay?

The Aoostar is only a 2-bay though.

They have an AMD variant if you want to go down the Proxmox route with LXC, or Docker in a VM.

[–] Bishma@discuss.tchncs.de 2 points 4 months ago* (last edited 4 months ago)

The majority of our household stuff is on a Synology DS920+ (x86). I installed Docker and Portainer on it and then run most of my local services (Immich, Invidious, Alexandrite (the Lemmy frontend), Miniflux, Dokuwiki, and Heimdall) using the Portainer UI.

I'm still running Plex as a manually installed Syno package, because I haven't taken the time to figure out hardware transcoding for other setups.

The 920 also manages cameras (via Surveillance Station), handles all offsite backups (we all back up workstations to the 920 and it backs up online), handles private DNS and the reverse proxy for Docker, and hosts my personal VPN. I'm currently in the process of swapping the 4+ year old drives for new ones that will up my capacity (using SHR) from 12TB to 30 (with redundancy).

[–] Evil_Shrubbery@lemm.ee 2 points 4 months ago* (last edited 4 months ago)

Proxmox.

But also, in case your only backup plan for that data is RAID 1: in such cases I prefer to have only one HDD in the machine and use the other one as a backup in a separate machine, preferably in another location. I find that the (lower) probability of losing all the data (fire, flood, burglary, weirdly specific accidents, etc.) outweighs missing the last 12h (or whatever) of the latest data.

And of course you can select what to back up / rsync, or not.

E.g. with Immich, after returning to operation the app will just resync any pics missing since the last backup.
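
As a sketch, the nightly sync can be a one-liner over SSH (host and paths hypothetical):

    # push new/changed files to the backup box; --delete mirrors removals too
    rsync -aH --delete /srv/data/ backup-box:/srv/backup/data/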

Also, with two systems you don't have to care that much about drive quality. I'm now buying Exos 22+TB because why not. But when I needed quiet drives I bought Red Plus (not the regular Red ones, nor the Pro ones); they are even quieter than Exos, but smol.

[–] possiblylinux127@lemmy.zip 1 points 4 months ago* (last edited 4 months ago) (2 children)

If you can, do at least three nodes with high availability. It is more expensive and trickier to set up, but in the long run it is worth it when hosting for others. You can literally unplug a system and it will fail over.

It is overkill, but you can use Proxmox with a Docker swarm.

Again, way overkill, but future-proof and reliable.
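
If you do try the swarm route, bootstrapping it is only a few commands; a sketch with hypothetical addresses:

    # on the first node
    docker swarm init --advertise-addr 192.168.1.10
    # it prints a join token; run this on each additional node
    docker swarm join --token <printed-token> 192.168.1.10:2377
    # a service with 3 replicas then gets rescheduled if a node dies
    docker service create --replicas 3 --name web nginx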

[–] Emotet 5 points 4 months ago* (last edited 4 months ago)

While this is a great approach for any business hosting mission-critical or user-facing resources, it is WAY overkill for a basic selfhosted setup involving family and friends.

For this to make sense, you need access to 3 different physical locations with their own ISPs, or to rent 3 different VPS.

Assuming one would use only 1 data drive + an equal parity drive, we're now talking about 6 drives with the total usable capacity of one. If one instead uses fewer drives and links the nodes to one or two (remote) data drives, I/O and latency become an issue, and you've effectively introduced more points of failure than before.

Not even talking about the massive increase in initial and running costs, as well as the administrative headaches, this isn't worth it for basically anyone.

[–] Evil_Shrubbery@lemm.ee -1 points 4 months ago* (last edited 4 months ago) (1 children)

I think this is the way, and not overkill at all!

It's super easy to join Proxmox nodes into a cluster, and you make your inevitable admin job easier. Not to mention backups, first testing & setting up a VM on your own server before copying it to theirs, etc.

[–] lud@lemm.ee 1 points 4 months ago (1 children)

You need a minimum of three Ceph nodes, but really four if you want it to work better. And Ceph isn't designed with clusters that small in mind anyway; 7 nodes would be more reasonable.

While clustering Proxmox using Ceph is cool as fuck, it's not easy or cheap to accomplish at home.

[–] Evil_Shrubbery@lemm.ee 1 points 4 months ago* (last edited 4 months ago) (1 children)

No, I didn't mean with Ceph. No quorum needed either; just add nodes to the group.
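
For reference, joining nodes into a plain Proxmox cluster is short; a sketch with hypothetical IPs:

    # on the first node
    pvecm create mycluster
    # on each node you want to add (point it at an existing member)
    pvecm add 192.168.1.10
    # check membership
    pvecm status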

[–] lud@lemm.ee 2 points 4 months ago (1 children)

Ah, so you mean completely independent hosts.

[–] Evil_Shrubbery@lemm.ee 1 points 4 months ago

Yes, and it's still easier to manage and transfer stuff, Proxmox Backup Server integrates nicely, etc.