this post was submitted on 13 Dec 2023
234 points (98.0% liked)

Selfhosted


I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home even though I have a decent understanding of how it works - after I stopped being a sysadmin in the mid '10s I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I'm thinking it's no longer a fad and I should invest some time getting comfortable with it?

top 50 comments
[–] originalucifer@moist.catsweat.com 68 points 10 months ago (2 children)

Dude, I'm kinda you. I just jumped into Docker over the summer... feel stupid for not doing it sooner. There is just so much pre-created content, tutorials, you name it. It's very mature.

I spent a weekend containerizing all my home services... totally worth it, and easy as pi[hole] in a container!

[–] GreatBlueHeron@lemmy.ca 25 points 10 months ago* (last edited 10 months ago) (2 children)

Well, that wasn't a huge investment :-) I'm in..

I understand I've got LOTS to learn. I think I'll start by using Docker to install something new that I've been looking at, and get comfortable with something my users (family..) are not yet relying on.

[–] infeeeee@lemm.ee 26 points 10 months ago (7 children)

Forget docker run; docker compose up -d is the command you need on a server. Get familiar with a UI too, it makes your life much easier at the beginning: Portainer or Yacht in the browser, lazydocker in the terminal.
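A minimal sketch of what that workflow looks like in practice (directory, image and port are just examples, swap in whatever you actually want to run):

```
mkdir -p ~/docker/web && cd ~/docker/web

# describe the service once, in a docker-compose.yml
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine                    # example image
    ports:
      - "8080:80"                          # host:container
    volumes:
      - ./html:/usr/share/nginx/html:ro    # persistent data lives on the host
    restart: unless-stopped
EOF

docker compose up -d        # start it in the background
docker compose logs -f      # watch what it's doing
docker compose pull && docker compose up -d   # update it later
docker compose down         # stop and remove the container; ./html stays put
```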

[–] ChapulinColorado@lemmy.world 22 points 10 months ago* (last edited 10 months ago) (1 children)

I would suggest docker compose before a UI to someone that likes to work via the command line.

Many popular Docker projects also provide the compose-format equivalent of their docker run commands, so the learning curve is not as steep as it used to be for picking up docker or docker compose.
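As an example of how directly the two map onto each other (image, port and paths here are only illustrative):

```
# the docker run form you often find in a README...
docker run -d --name redis \
  -p 6379:6379 \
  -v "$PWD/redis-data":/data \
  --restart unless-stopped \
  redis:7

# ...and the same thing as a docker-compose.yml service
cat > docker-compose.yml <<'EOF'
services:
  redis:
    image: redis:7
    container_name: redis
    ports:
      - "6379:6379"
    volumes:
      - ./redis-data:/data
    restart: unless-stopped
EOF
```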

[–] damnthefilibuster@lemmy.world 6 points 10 months ago

Second this. Portainer + docker compose is so good that now I go out of my way to composerize everything so I don’t have to run docker containers from the cli.

[–] themurphy@lemmy.world 8 points 10 months ago (5 children)

As a guy who was you before this summer:

Can you explain why you think it's better now that you have 'containerized' all your services? What advantages are there that I can't seem to figure out?

Please teach me Mr. OriginalLucifer from the land of MoistCatSweat.Com

[–] BeefPiano@lemmy.world 23 points 10 months ago

No more dependency hell from one package needing libsomething.so 5.3.1 while another service can only run with libsomething.so 4.2.0.

That, and knowing that when I remove a container, it's not leaving a bunch of cruft behind.

[–] constantokra@lemmy.one 12 points 10 months ago

You can also back up your compose file and data directories, pull the backup from another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when dedipath went under. Pulled my latest backup to a new server at virmach, and I was up and running as soon as the DNS propagated.
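Roughly, the whole migration can be as simple as this (the "nextcloud" directory name and hosts are just an example):

```
# old machine: stop the stack, then archive the compose file + bind-mounted data
cd ~/docker/nextcloud && docker compose stop
cd ~/docker && tar czf nextcloud-backup.tar.gz nextcloud
scp nextcloud-backup.tar.gz newserver:~/docker/

# new machine (same or compatible CPU architecture): unpack and start
cd ~/docker && tar xzf nextcloud-backup.tar.gz
cd nextcloud && docker compose up -d
```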

[–] iso@lemy.lol 36 points 10 months ago (3 children)

It just makes things easier and cleaner. When you remove a container, you know there are no leftovers except the mounted volumes. I like it.

[–] AbidanYre@lemmy.world 15 points 10 months ago

It's also way easier if you need to migrate to another machine for any reason.

[–] ck_@discuss.tchncs.de 28 points 10 months ago* (last edited 10 months ago) (1 children)

The main downside of Docker images is that app developers don't tend to pay a lot of attention to the images they produce beyond shipping their app. While software installed via your distribution benefits from meticulous scrutiny by security teams making sure issues are fixed in a timely fashion, those fixes rarely trickle down the chain of images that your container ultimately depends on. And while your distribution's package manager can set up a cron job to install fixes from the security channel automatically, with Docker you are back to keeping track of this yourself, hoping that the app developer takes it seriously enough to supply new images in a timely fashion. This multiplies by the number of images, so you are always only as secure as the least well-maintained image.

Most images, including "latest" ones, are piss-poor quality from a security standpoint. Because of that, professionals tend not to grab "off the shelf" images from random sources on the internet. If they do, they pay extra attention to ensure that those containers run in sufficiently isolated environments.

Self hosting communities do not often pay attention to this. You'll have to decide for yourself how relevant this is for you.

[–] buedi@feddit.de 22 points 10 months ago (3 children)

I would absolutely look into it. Many years ago when Docker emerged, I did not understand it and called it "Hipster shit". But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working and they had no idea how to fix it.

Years passed and containers stayed, so I started to have a closer look, tried to understand it - what you can do with it and what you can't. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into a container to try to fix something.

Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me was: every application I run dockerized is predictable and isolated from the others (from the binary side; the network side is another story). The issues I had earlier, when running everything directly on the box in Linux, were things like one application needing PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depending on a specific library, where after updating it one app works and the other doesn't anymore because it would need an update too. Running an apt upgrade was always a very exciting moment... and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.
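For example, the PHP situation becomes as simple as pinning a different image per app (images, paths and ports here are only illustrative):

```
cat > docker-compose.yml <<'EOF'
services:
  legacy-app:
    image: php:7.4-apache        # the old app keeps its old runtime
    volumes:
      - ./legacy:/var/www/html
    ports:
      - "8081:80"
  new-app:
    image: php:8.2-apache        # the new app gets a current one
    volumes:
      - ./new:/var/www/html
    ports:
      - "8082:80"
EOF
docker compose up -d             # both run side by side, no shared dependencies
```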

Another big plus is the backups. I back up every docker-compose file plus its data with Kopia. Since barely anything is installed directly on the Linux host, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again after a hardware failure.

I really started to love Docker, especially in my Homelab.

Oh, and you would think everything being containerized means big resource usage? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.

[–] outcide@lemmy.world 20 points 10 months ago (2 children)

Another old school sysadmin that “retired” in the early 2010s.

Yes, use docker-compose. It’s utterly worth it.

I was intensely irritated at first that all of my old troubleshooting tools were harder to use, and I just generally didn't trust it for ages, but after 5 years I wouldn't be without it.

[–] 1984@lemmy.today 19 points 10 months ago* (last edited 10 months ago)

Docker is amazing, you are late to the party :)

It's not a fad, it's old tech now.

[–] ShittyBeatlesFCPres@lemmy.world 18 points 10 months ago (13 children)

I’m gonna play devil’s advocate here.

You should play around with it. But I’ve been a Linux server admin for a long time and — this might be unpopular — I think Docker is unimportant for your situation. I use Docker daily at work and I love it. But I didn’t bother with it for my home server. I’ll never need to scale it or deploy anything repeatedly or where I need 100% uptime.

At home, I tend to try out new things and my old docker-compose files are just not that valuable. Docker is amazing at work where I have different use cases but it mostly just adds needless complexity on a home server.

[–] GreatBlueHeron@lemmy.ca 8 points 10 months ago* (last edited 10 months ago) (2 children)

That's exactly how I feel about it. Except (as noted in my post..) the software availability issue. More and more stuff I want is "docker first" and I really have to go out of my way to install and maintain non docker versions. Case in point - I'm trying to evaluate Immich so I can move off Google photos. It looks really nice, but it seems to be effectively "docker only."

[–] greybeard@lemmy.one 12 points 10 months ago

The advantage of Docker, as I see it for home labs, is keeping things tidy, ensuring compatibility, and making setup configs, app configs, and app data easy to manage and back up. It is all very predictable and manageable. I can move my docker compose and data from one host to another in literal seconds. I can, likewise, spin up and down test environments in seconds too. Obviously the whole scaling thing that people love containers for is pointless in a homelab, but many of the things that make it scalable also make it easy to manage.

[–] Tsubodai@programming.dev 7 points 10 months ago

I'm probably the opposite of you! I started using Docker at home after messing up my Raspberry Pi a few too many times trying stuff out and not really knowing what the hell I was doing. I've since moved to a proper NAS, with (for me, at least) plenty of RAM.

Love the ability to try out a new service, which is kind of self-documenting (especially if I write comments in the docker-compose file). And just get rid of it without leaving any trace if it's not for me.

Added portainer to be able to check on things from my phone browser, grafana for some pretty metrics and graphs, etc etc etc.

And now at work, it's becoming really, really useful, and I'm the only person in my (small, scientific research) team who uses containers regularly. While others are struggling to keep their fragile python environments working, I can try out new libraries, take my env to the on-prem HPC or the external cloud, and I don't lose any time at all. Even "deployed" some little utility scripts for folks who don't realise that they're actually pulling my image from the internal registry when they run it. A much, much easier way of getting a little time-saving script into the hands of people who are forced to use Linux but don't have a clue how to use it.

[–] Swarfega@lemm.ee 14 points 10 months ago* (last edited 10 months ago)

I'm a VMware and Windows admin in my work life. I don't have extensive knowledge of Linux but I have been running Raspberry Pis at home. I can't remember why but I started to migrate away from installed applications to docker. It simplifies the process should I need to reload the OS or even migrate to a new Pi. I use a single docker-compose file that I just need to copy to the new Pi and then run to get my apps back up and running.

linuxserver.io make some good images and have example configs for docker-compose

If you want to have a play just install something basic, like Pihole.

[–] olafurp@lemmy.world 14 points 10 months ago

I started using docker myself for stuff at home and I really liked it. You can create a setup that's easy to reproduce or just download.

Easy to manage via the Docker CLI, a one-liner to make it run on startup unless stopped, and tons of stuff made for Docker becomes available. For non-Docker things you can always log in to the container.

Tasks such as running, updating, stopping, listing active servers, finding out what ports are being used and automation are all easy imo.
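For instance, the day-to-day boils down to a handful of commands (service and container names below are placeholders):

```
docker ps                                      # what's running, and on which ports
docker compose ps                              # same, scoped to the current project
docker compose logs -f someservice             # tail one service's logs
docker compose pull && docker compose up -d    # update everything to newer images
docker compose stop                            # stop without deleting anything
docker exec -it somecontainer sh               # "log in" to a running container
docker ps --format '{{.Names}}\t{{.Ports}}'    # quick port overview
```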

You probably have something else you use for some/all of these tasks but docker makes all this available to non-sysadmin people and even has GUI for people who like clicking their mouse.

I think next time you find something that provides a docker compose file you should try it. :)

[–] kanzalibrary@lemmy.ml 13 points 10 months ago (1 children)

Why not jump directly to Podman if you want a more resilient system from the beginning? Just my opinion.

[–] krash@lemmy.ml 13 points 10 months ago (6 children)

Welcome to the party 😀

If you want a good video tutorial that explains the inner workings of Docker so you understand what's going on beneath the surface (without drowning in the details), let me know and I'll paste it tomorrow. Writing from bed atm 😴

[–] 2xsaiko@discuss.tchncs.de 13 points 10 months ago

No. (Of course, if you want to use it, use it.) I used it for everything on my server starting out because that's what everyone was pushing. Did the whole thing, used images from docker hub, used/modified dockerfiles, wrote my own, used first Portainer and then docker-compose to tie everything together. That was until around 3 years ago when I ditched it and installed everything normally, I think after a series of weird internal network problems. Honestly the only positive thing I can say about it is that it means you don't have to manually allocate ports for those services that can't listen on unix sockets which always feels a bit yucky.

  1. A lot of images come from some random guy you have to trust to keep them updated with security patches. Guess what, a lot don't.
  2. Want to change a dockerfile and rebuild it? If it's old and uses something like "ubuntu:latest" as a base and downloads similar "latest" binaries from somewhere, good luck getting it to build or work because "ubuntu:latest" certainly isn't the same as it was 3 years ago.
  3. Very Linux- and x86_64-centric. Linux is of course not really a problem (unless on Mac/Windows developer machines, where docker runs a Linux VM in the background, even if the actual software you're working on is cross-platform. Lmao.) but I've had people complain that Oracle Free Tier aarch64 VMs, which are actually pretty great for a free VPS, won't run a lot of their docker containers because people only publish x86_64 builds (or worse, write dockerfiles that only work on x86_64 because they download binaries).
  4. If you're using it for the isolation, most if not all of its security/isolation features can be used in systemd services. Run systemd-analyze security UNIT.

I could probably list more. Unless you really need to do something like dynamically spin up services with something like Kubernetes, which is probably way beyond what you need if you're hosting a few services, I don't think it's something you need.
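On point 4, a rough sketch of what that looks like (the unit name and paths are placeholders, adjust to taste):

```
# score an existing unit; lower "exposure" is better
systemd-analyze security myservice.service

# then add sandboxing via a drop-in
sudo systemctl edit myservice.service
# and in the editor, something like:
#   [Service]
#   NoNewPrivileges=yes
#   ProtectSystem=strict
#   ProtectHome=yes
#   PrivateTmp=yes
#   ReadWritePaths=/var/lib/myservice
sudo systemctl restart myservice.service
```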

If I can recommend something else to look at instead, it would be NixOS. I originally got into it because of the declarative system configuration, but it does everything people here would usually use Docker for and more (I've seen it described as "docker + ansible on steroids"). It uses a more typical central package repository, so you do get security updates for everything you have installed, and your entire system as a whole is reproducible from a set of config files (you can still build Nix packages from the 2013 version of the repository I think, though they won't necessarily run on modern kernels because of kernel ABI changes since then). However, be warned, you need to learn the Nix language and NixOS configuration, which has quite a learning curve tbh. But on the other hand, setting up a lot of services is as easy as adding one line to the configuration to enable the service.

[–] lemmyvore@feddit.nl 13 points 10 months ago* (last edited 10 months ago) (5 children)

Hi, also used to be a sysadmin and I like things that are simple and work. I like Docker.

Besides what you already noticed (that most software can be found packaged for Docker) here are some other advantages:

  • It's much lighter on resources and more efficient than virtual machines.
  • It provides a way to automate installs (docker compose) that's (much) easier to get started with than things like Ansible.
  • It provides a clear separation between configuration, runtime, and persistent data and forces you to get organized.
  • You can group related services.
  • You can control interdependencies, privileges, shared access to resources etc.
  • You can define simple or complex virtual networking topologies between containers as you like.
  • It adds extra security (for whatever that's worth to you).

A brief description of my own setup, for ideas, feel free to ask questions:

  • Router running OpenWRT + server in a regular PC.
  • Server is 32 GB of RAM (bit overkill for now, Black Friday upgrade, ran with 4 GB for years), Intel CPU with embedded GPU, OS on M.2 SSD, 8 HDD bays in Linux software RAID (MD).
  • OS is Debian stable barebones, only Docker, SSH and NFS are installed on the host directly. Tip: use whatever Linux distro you know and like best.
  • Docker is installed from their own repository, not from Debian's.
  • Everything else runs from docker containers, including things like CUPS or Samba.
  • I define all containers with compose, and map all persistent data to host storage. This way if I lose a container or even the whole OS I just re-provision from compose definitions and pick up right where I left off. In fact destroying and recreating containers cleanly is common practice with docker.

Learning Docker and compose is not very hard, especially if you've done this kind of work on the job.

If you have specific requirements eg. storage, exposing services over internet etc. please ask.

Note: don't start with Podman or rootless Docker, start with regular Docker. It will be 10x easier. You can transition to the others later if you want.

[–] BCsven@lemmy.ca 12 points 10 months ago (2 children)

Docker is great. I learned it from setting up an OpenMediaVault server that had a built-in Docker extension, so now I have lots of services running off that one server. Also, Portainer can be very handy for working with containers - basically a GUI for the command-line stuff or compose files you'd normally use with the Docker CLI.

[–] gornius@lemmy.world 12 points 10 months ago

Learn it first.

I almost exclusively use it with my own Dockerfiles, which gives me the same flexibility I would have by just using a VM, with all the benefits of being containerized and reproducible. The exceptions are images of utility stuff, like databases, reverse proxy (I use caddy btw) etc.

Without docker, hosting everything was a mess. After a month I would forget about important things I did, and if I had to do that again, I would need to basically relearn what I found out then.

If you write a Dockerfile, every configuration you make is reflected either by a shell command or by files added from the project directory to the image. You can just look at the Dockerfile and see every change made to the base Debian image.

Additionally with docker-compose you can use multiple containers per project with proper networking and DNS resolution between containers by their service names. Quite useful if your project sets up a few different services that communicate with each other.
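For example, a database becomes reachable from the app container simply by its service name (images and values below are placeholders for the idea):

```
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example          # use a real secret in practice
    volumes:
      - ./pgdata:/var/lib/postgresql/data
  app:
    image: nginx:alpine                   # stand-in for your own app image
    depends_on:
      - db
    # inside this container, the database is reachable at host "db", port 5432
EOF
docker compose up -d
```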

Thanks to that it's trivial to host multiple projects using for example different PHP versions for each of them.

And I haven't even mentioned yet the best thing about docker - if you're a developer, you can be sure that the app will run exactly the same on your machine and on the server. You can have development versions of images that extend the production image by using Dockerfile stages. You can develop a dev version with full debug/tooling support and then use a clean prod image on the server.

[–] DeltaTangoLima@reddrefuge.com 11 points 10 months ago

Similar story to yours. I was an HP-UX and BSD admin; at some point in the '00s, I stopped self-hosting. Felt too much like the work I was paid to do in the office.

But then I decided to give it a go in the mid-10s, mainly because I was uneasy about my dependence on cloud services.

The biggest advantage of Docker for me is the easy spin-up/tear-down capability. I can rapidly prototype new services without worrying about all the cruft left behind by badly written software packages on the host machine.

[–] AbouBenAdhem@lemmy.world 10 points 10 months ago* (last edited 10 months ago)

As a casual self-hoster for twenty years, I ran into a consistent pattern: I would install things to try them out and they’d work great at first; but after installing/uninstalling other services, updating libraries, etc, the conflicts would accumulate until I’d eventually give up and re-install the whole system from scratch. And by then I’d have lost track of how I installed things the first time, and have to reconfigure everything by trial and error.

Docker has eliminated that cycle—and once you learn the basics of Docker, most software is easier to install as a container than it is on a bare system. And Docker makes it more consistent to keep track of which ports, local directories, and other local resources each service is using, and of what steps are needed to install or reinstall.

[–] 520@kbin.social 10 points 10 months ago* (last edited 10 months ago)

It's very, very useful.

For one thing, it's a ridiculously easy way to get cross-distro support working for whatever it is you're doing, no matter the distro-specific dependency hell you have to crawl through in order to get it set up.

For another, rather related reason, it's an easy way to build for specific distros and distro versions, especially in an automated fashion. Don't have to fuck around with dual booting or VMs, just use a Docker command to fire up the needed image and do what you gotta do.

Cleanup is also ridiculously easy too. Complete uninstallation of a service running in Docker simply involves removal of the image and any containers attached to it.

A couple of security rules you should bear in mind:

  1. expose only what you need to. If what you're doing doesn't need a network port, don't provide one. The same is true for files on your host OS, RAM, CPU allocation, etc.
  2. never use privileged mode. Ever. If you need privileged mode, you are doing something wrong. Privileged mode exposes everything and leaves your machine ripe for being compromised, as root if you are using Docker.
  3. consider podman over docker. The former does not run as root.
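Rules 1 and 2 in compose terms look something like this (the image name and port are placeholders):

```
cat > docker-compose.yml <<'EOF'
services:
  someapp:
    image: someapp:1.2.3                # placeholder image
    ports:
      - "127.0.0.1:8080:8080"           # expose on localhost only, not 0.0.0.0
    read_only: true                     # read-only root filesystem
    cap_drop:
      - ALL                             # drop every capability by default
    security_opt:
      - no-new-privileges:true
    # note: no "privileged: true" anywhere
EOF
```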
[–] possiblylinux127@lemmy.zip 9 points 10 months ago

Yes, you should. I would look into docker compose as it makes deployments very easy.

[–] rsolva@lemmy.world 9 points 10 months ago (6 children)

Yes! Well, kinda. You can skip Docker and go straight to Podman, which is an open source and more integrated solution. I configure my containers as systemd services (as quadlets).
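A quadlet is just a small unit file that Podman turns into a systemd service, something like this (image and names are only an example; needs a reasonably recent Podman):

```
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/whoami.container <<'EOF'
[Unit]
Description=Example container managed as a systemd service

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user start whoami.service    # service name comes from the file name
```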

[–] zaphod@lemmy.ca 9 points 10 months ago* (last edited 10 months ago) (1 children)

My vote: not if you can avoid it.

For casual home admins docker containers are mysterious black boxes that are difficult to configure and even worse to inspect and debug.

I prefer lightweight VMs hosting one or more services on an OS I understand and control (in my case Debian stable), and only use docker images as a way to quickly try out something new before committing time to deploying it properly.

[–] BCsven@lemmy.ca 6 points 10 months ago (1 children)

I found they were easier to configure. Somebody has a YAML file, or you set up ports etc. via Portainer, and you can always bash into a container to lurk inside the black box.

[–] Netrunner@programming.dev 9 points 10 months ago

You should try it.

[–] elscallr@lemmy.world 9 points 10 months ago (1 children)

Yes. Containers are awesome in that they let you use an application inside a sandbox, but beyond that you can deploy it anywhere.

If you're in the sysadmin world you should not only embrace Docker but I'd recommend learning k8s, too, if you still enjoy those things.

[–] Undearius@lemmy.ca 9 points 10 months ago (2 children)

If you decide to use docker-compose.yml files, which I do recommend, then I'd also highly recommend this script for updating the docker containers.

It checks each container for updates and then lets you select the containers you would like to update. I just keep it in the main directory with all the other docker container directories.

https://github.com/mag37/dockcheck/blob/main/dockcheck.sh

[–] Tsubodai@programming.dev 9 points 10 months ago (1 children)

Why not just run a watchtower container? Combined with a diun one to send gotify messages to my phone if you're into that. (I am!)

[–] refreeze@lemmy.world 9 points 10 months ago

Also consider Nix/NixOS, I have used Docker, Kubernetes, LXC and prefer Nix the most. Especially for home use not requiring any scaling.

[–] azdle@news.idlestate.org 8 points 10 months ago* (last edited 10 months ago) (2 children)

IMO, yes. Docker (or at least OCI containers) isn't going anywhere. Though one big warning to start with: as a sysadmin, you're going to be absolutely aghast at the security practices that most Docker tutorials suggest. Just know that it's really not that hard to do things right (for the most part[^0]).

I personally suggest using rootless podman with docker-compose via the podman-system-service.

Podman re-implements the docker cli using the system namespacing (etc.) features directly instead of through a daemon that runs as root. (You can run the docker daemon rootless, but it clearly wasn't designed for it and it just creates way more headaches.) The Podman System Service re-implements the docker daemon's UDS API which allows real Docker Compose to run without the docker-daemon.
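In practice that setup is only a few commands (assuming Podman and a compose CLI are already installed):

```
# per-user socket instead of a root daemon
systemctl --user enable --now podman.socket

# point compose (and anything else that speaks the Docker API) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

docker-compose up -d    # compose now talks to Podman instead of dockerd
```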

[^0]: If anyone can tell me how to set SELinux labels such that both a container and a samba server can have access, I could fix my last remaining major headache.

[–] ___@lemm.ee 8 points 10 months ago* (last edited 10 months ago)

LXC with cloud-init and ansible on proxmox gives you docker features without the docker headache.

[–] johntash@eviltoast.org 8 points 10 months ago (2 children)

Are you familiar with lxc or chroots or bsd jails by any chance? If you are, you probably won't find docker that much different to use other than a bigger selection of premade images.

It is kind of sad that some projects are trending towards docker first, but I think learning how to make packages for package managers is also becoming less popular :(

[–] MigratingtoLemmy@lemmy.world 8 points 10 months ago

Docker is a QoL improvement over plain VMs/LXCs if you want ready-to-go content/FOSS applications bundled into a system.

I would personally use Podman since Docker uses root by default, and Podman doesn't (there's options for both, just FYI), and Ansible/Terraform have made IaC a breeze (ah, the good days of orchestration), but I will never use Docker because of the company behind them and because of convoluted Docker networking that I can't be arsed to learn. Other than that, have fun! This is just my opinion anyway

[–] quackers@lemmy.blahaj.zone 6 points 10 months ago

It's quite easy to use once you get the hang of it. In most situations it's the preferred option, because you can just have your Docker container and choose where the relevant files live, allowing you to properly isolate your applications. Or on single-purpose servers, it makes deployment of applications and maintaining dependencies significantly easier.
At the very least, it's a great tool to add to your toolbox to use as needed.

[–] hottari@lemmy.ml 6 points 10 months ago

I am running all my software services with docker. It's stupid simple to manage and I have all of my running services in one paradigm.

[–] onlinepersona@programming.dev 6 points 10 months ago (34 children)

Why wouldn't you want to use containers? I'm curious. What do you use now? Ansible? Puppet? Chef?
