I'm a lazy piece of shit and containers give me cancer, so I just keep iptables aggressive and spin up whatever on an Ubuntu box that gets upgrades when I feel like wasting a weekend in my underwear.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
An honest soul
I get paid to do shit with rigor; I don't have the time, energy, or help to make something classy for funsies. I'm also kind of a grumpy old man: while I'll praise and embrace Python's addition of f-strings, which make life better in myriad ways, I eschew the worse laziness of the put-everything-in-containers attitude we see around deployment.
Maybe a day shall come when containers are truly less of a headache than just thinking shit through the first time, and I'll begrudgingly adapt and grow, but that day ain't today.
I use debian VMs and create rootless podman containers for everything. Here's my collection so far.
I'm currently in the process of learning how to combine this with ansible... that would save me some time when migrating servers/instances.
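Something like this is roughly where I'm headed with it, using the containers.podman collection; the host group, the unprivileged "svc" user, and the image are just placeholders, not my actual setup:

```yaml
# Hypothetical playbook: run one rootless Podman container as an unprivileged user.
- hosts: podman_hosts
  become: true
  become_user: svc              # the non-root user that owns the containers (assumption)
  tasks:
    - name: Run an example web container rootless
      containers.podman.podman_container:
        name: example-web
        image: docker.io/library/nginx:stable
        state: started
        restart_policy: always
        ports:
          - "8080:80"           # unprivileged host port, so no root needed
```

You'll probably also need loginctl enable-linger for that user so the rootless containers keep running after logout.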
Thanks for sharing. There’s some great stuff in the repo.
Proxmox, then create an LXC for everything (mostly debian and a bit of alpine), no automation, full yolo, if it breaks I have backups (problems are for future me eh)
This.
Proxmox and then LXCs for anything I need.
and yes - I cheat a bit, I use the excellent Proxmox scripts - https://tteck.github.io/Proxmox/ because I'm lazy like that haha
Mostly the same. Proxmox with several LXC, two of which are running docker. One for my multimedia, the other for my game servers.
I used to do the same, but nowadays I just run everything in docker, within a single lxc container on proxmox. Having to set up mono or similar every time I wanted to set up a game server or even jellyfin was annoying.
After many years of tinkering, I finally gave in and converted my whole stack over to UnRAID a few years ago. You know what? It's awesome, and I wish I had done it sooner. It automates so many of the more tedious aspects of home server management. I work in IT, so for me it's less about scratching the itch and more about having competent hosting of services I consider mission-critical. UnRAID lets me do that easily and effectively.
Most of my fun stuff is controlled through Docker and VMs via UnRAID, and I have a secondary external Linux server which handles some tasks I don't want to saddle UnRAID with (PFSense, Adblocking, etc). The UnRAID server itself has 128GB RAM and dual XEON CPUs, so plenty of go for my home projects. I'm at 12TB right now but I was just on Amazon eyeing some 8TB drives...
Debian and docker compose
Synology with docker-compose stack
Debian + nginx + docker (compose).
That's usually enough for me. I have all my docker compose files in their respective app directories under home, like ~/red-discordbot/docker-compose.yml.
The only headache I've dealt with is permissions: because I have to run docker as root, it makes a mess of permissions in the home directories. I started trying rootless docker recently and it's been great so far.
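Roughly, each of those per-app files looks something like this (the image tag, env var, and paths here are illustrative placeholders, not my real file):

```yaml
# ~/red-discordbot/docker-compose.yml - one app per directory, one compose file per app.
services:
  red-discordbot:
    image: phasecorex/red-discordbot:full   # placeholder tag; check the image's own docs
    restart: unless-stopped
    volumes:
      - ./data:/data                        # app state lives next to the compose file
    environment:
      - TOKEN=changeme                      # placeholder; keep real secrets out of the repo
```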
edit: I also use rclone for backups.
I run unraid on my server box with a few 8TB HDDs and an NVMe for cache. From there it is really easy to spin up Docker containers or stacks using compose, as well as VMs using your iso of choice.
For automation, I use Ansible for one-click machine setup; it is great for any cloud provider work too.
I have a git repository with all my compose files sorted neatly into directories, i.e. my "stack". Portainer allows adding stacks using a repository, so it's essentially one click deployment once the compose file is on a remote git server.
I've set up some godforsaken combination of docker, podman, nerdctl and bare metal at work for stuff I needed since they hired me. Every day I'm in constant dread something I made will go down, because I don't have enough time to figure out how I was supposed to do it right T.T
I just have a pi 4 running OpenMediaVault with docker and portainer. 😅
I use NixOS on almost all my servers, with declarative configuration. I can also install my config in one command with NixOS-Anywhere
It allows me to improve my setup bit by bit without having to keep track of what I did on specific machines
About two years ago my setup had gotten out of control, as it will. Closet full of crap all running vms, all poorly managed by chef. Different linux flavors everywhere.
Now it's one big physical ubuntu box. Everything gets its own ubuntu VM. These days if I can't do it in shell scripts and xml I'm annoyed. Anything fancier than that, I'd better be getting paid. I document in markdown as I go and rsync the important stuff from each VM to an external drive every night. Something goes wrong, I just burn the VM, copy-paste it back together in a new one from the mkdocs site, then get on with my day.
Right now, I just flash ubuntu server to whatever computer it is, ssh in and yolo lmao. No containers, no managers, just me, my servers, and a vpn, raw dogging the internet lmao. The box is running a nas, jellyfin, lemmy, and a print server; the laptop a minecraft server; and the pi is running a pihole and a website that controls gpio that controls the lights. In the pictured setup I don't have access to the apartment complex's router, so I vpn through an openvpn server I set up on a digitalocean server.
i didnt even know what a container was until i setup the lemmy server, which i just used ansible for.
i still dont really know what ansible is.
I use Unraid and its docker and VM integration; it works great for me as a home user with mixed drives. Most of the dockers I want already have unraid templates, so they need less configuration. It does everything I want, and the templates plus the mixed drive support made things a bit easier for me.
I use the following procedure with ansible.
- Set up the server with the things I need for k3s to run
- Set up k3s
- Bootstrap and create all my services on k3s via ArgoCD
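The ArgoCD bootstrap is basically one Application manifest pointing at the Git repo that holds everything else (an app-of-apps). A sketch, with placeholder repo URL, path, and names:

```yaml
# App-of-apps bootstrap: Argo CD watches one repo path and creates everything under it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab-apps                                  # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/me/homelab.git   # placeholder repo
    path: apps
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```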
A bunch of old laptops running Ubuntu Server and docker-compose. Laptops are great; built in screen, keyboard, and UPS (battery), and more than capable of handling the kind of light workloads I run.
I use a heterogeneous environment with some things hosted in various cloud providers and others locally. Oftentimes I can find the package I need - but if I can't, I usually go for Docker and docker-compose. This is often the case on Oracle Linux on OCI - where docker just makes things so much easier.
For my static stuff I just use Cloudflare Pages and forget about it.
On my homelab it is Arch Linux with my own set of scripts. I used to do VFIO gaming a lot (less now), so I had the host only be a hypervisor and used a separate Arch VM to host everything in a docker-compose stack. The VM makes my server operations a lot more tidy.
My RPI is using dietpi and is natively running the pihole software and a couple other things.
I know some folks swear by UnRaid and Proxmox, but I've always found those platforms limited me vs building things my way. Also borking my own system unintentionally on occasion is a thrilling opportunity to learn!
raspberry pi, arch linux, docker-compose. I really need to look up ansible
If doing a fresh external server, I'd go for debian as the base (don't need to update it too often + stable).
For apps it's mostly docker-compose to set up portainer/nginx-proxy, then from there I just manage the rest from the portainer/nginx-proxy web-ui. I only log on to the server for the occasional docker updates / pruning for space.
I see a lot of guys going the full kubernetes route and it's something I'm hoping to get into at some point but it seems like a lot to unpack for now.
Debian netinst via PXE, SSH/YOLO, docker + compose (formerly swarm), scripts from my own library.
I do the same except I boot a usb installer instead of PXE.
I can never find a USB drive when I need one, thus my PXE server was born. lol
I have a single desktop running Proxmox with a TrueNAS VM for handling my data and a Debian VM for my Docker containers which accesses the NAS data through NFS.
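If anyone copies this layout, the Docker side can also mount the NAS export directly as a named volume over NFS; a sketch with a made-up address and export path:

```yaml
# Named volume backed by an NFS export from the NAS VM (address and path are placeholders).
services:
  app:
    image: docker.io/library/alpine:3.19   # stand-in service
    command: ["sleep", "infinity"]
    volumes:
      - nas-data:/data

volumes:
  nas-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,nfsvers=4,rw"
      device: ":/mnt/tank/appdata"
```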
For personal Linux servers, I tend to run Debian or Ubuntu, with a pretty simple "base" setup that I just run through manually in my head.
- Setup my personal account.
- Upload my SSH keys.
- Configure the hostname (usually named after something in Star Trek 🖖).
- Configure the /etc/hosts file.
- Make sure it is fully patched.
- Setup ZeroTier.
- Setup Telegraf to ship some metrics.
- Reboot.
I don't automate any of this because I don't see a whole lot of point in doing it.
Super interesting to me that you swap between Debian and Ubuntu. Is there any rhyme or reason to why you use one over the other?
I set up my bare metal boxes and VMs with ansible. Then I use ansible to provision docker containers on those.
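Roughly this shape, if anyone wants a starting point; the host group, image, and port are placeholders rather than my real playbooks:

```yaml
# Layer 1: provision the host; layer 2: run containers on it (community.docker collection).
- hosts: docker_hosts
  become: true
  tasks:
    - name: Ensure Docker is installed
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Run an example container
      community.docker.docker_container:
        name: whoami
        image: traefik/whoami:latest
        state: started
        restart_policy: unless-stopped
        published_ports:
          - "8081:80"
```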
Proxmox and shell scripts. I have everything automated from base install to updates.
All the VMs are Debian, which install with a custom seed file. Each VM has a config script that will completely set up all users, iptables, software, mounts, etc. SSL certs are updated on one machine with acme.sh and then pushed out as necessary.
One of these days I’ll get into docker but half the fun is making it all work. I need some time to properly set it up and learn how to configure it securely.
I have a base Debian template with a few tweaks I like for all my machines. I'm debating setting up something like terraform, but I just don't spin up VMs frequently enough to want to do that. I do have a few Ansible playbooks I run on a fresh server to really get it to where I want though.
Fedora-server with Podman and Quadlet on btrfs drives. Although I must admit I often use rootful mode in Podman as it works better with containers made for Docker. Ah, and you might want to turn off SELinux in the beginning as it can get frustrating fast.
For a while I tried to run k8s (k3s mostly), then I did run nomad for a while. Now I am just running docker compose on Ubuntu (still have one box running Proxmox, but that will be decommissioned eventually, and mostly just runs one VM running Ubuntu).
I am building a few things to solve specific problems I have with this:
- Some basic ansible scripts to set up ssh, users, basic packages, etc
- Docker label-based service discovery/announcement that traefik can consume (currently working! see the label sketch after this list).
- Deployment: getting the compose files, config files, and docker images to the right machine and getting them running. (in progress)
- At some point I will probably get around to automating deployment of the rest of the above via Ansible when it is more stable.
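For comparison with the label-based discovery bullet above: the off-the-shelf version of that idea is Traefik's own Docker provider, which builds routes from container labels like these (the hostname and service name here are placeholders):

```yaml
# Compose service announced to Traefik purely through labels (placeholder host rule).
services:
  whoami:
    image: traefik/whoami:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.lan`)"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```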
Most of my server hardware is oriented toward having a bunch of disks plugged into them (I am 100% guilty of being a data hoarder), and I am running gluster to glue that all together, so that is something I install onto the servers to share their physical disks and/or mount the logical disks.
Xen on Gentoo with Gentoo VMs. I've scripted the provisioning in bash, it's fairly straightforward - create an lvm volume, extract the latest root, tell xen which kernel to boot.
Ideally would like to netboot a readonly root off nfs and apply config from some source. Probably bash :D
Some things like opnsense are much more handcrafted because they're a kind of unicorn compared to the rest of the stuff.
I use SSH to manage docker compose. I'm just using a raspberry pi right now so I don't have room for much more than Syncthing and Dokuwiki.
I'm all in on docker-compose + rootless podman. Definitely not issue-free, but I've got the hang of the kinds of issues it presents at this point. They're mostly around SELinux and networking, though generally the networking only gets problematic on exotic compose setups - jitsi was a huge pain for me.
Raw server with SSH and an immutable OS too. I'm using fedora IOT for my homeserver, and apart from some initial issues with GPU drivers because of layering issues (now working) that's been basically flawless.
I was on OpenSuse MicroOS, but I had huge problems with BTRFS and decided to give it up in favour of EXT4 + XFS. That necessitated moving distro, because MicroOS uses BTRFS snapshots as the basis for its auto-updating/green/blue system. Fedora IOT uses rpm-ostree instead, and works on any filesystem.
Usually Debian as the base, then ansible to set up openssh for access. For the longest time I just ran docker-compose straight on bare metal; these days though, I prefer k3s.
I use Proxmox, then stare at the dashboard realizing I have no practical use for a home lab.
Up until now I've been using docker and mostly manually configuring by dumping docker compose files in /opt/whatever and calling it a day. Portainer is running, but I mainly use it for monitoring and occasional admin tasks. Yesterday though, I spun up machine number 3 and I'm strongly considering setting up something better for provisioning/config. After it's all set up right, it's never been a big problem, but there are a couple of bits of initial setup that are a bit of a pain (mostly hooking up wireguard, which I use as a tunnel for remote admin and off-site reverse proxying).
Salt is probably the strongest contender for me, though that's just because I've got a bit of experience with it.
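If it helps anyone weighing the same move, the compose-in-/opt pattern maps onto Salt pretty directly; a rough state sketch with made-up stack names and paths:

```yaml
# Push a compose file into /opt/<stack> from the Salt fileserver (names/paths are placeholders).
whatever-stack-dir:
  file.directory:
    - name: /opt/whatever

whatever-compose-file:
  file.managed:
    - name: /opt/whatever/docker-compose.yml
    - source: salt://stacks/whatever/docker-compose.yml
    - require:
      - file: whatever-stack-dir
```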
Generally, it's Proxmox, Debian, then whatever is needed for what I'm spinning up. Usually Docker Compose.
Lately I've been playing some with Ansible, but its use is far from common for me right now.