
Upgrading a self-hosted server (3)

A short intro to Docker

Docker is a lot less complicated than it was made out to be.

Docker is a way of taking a service (something like Plex) and making it work in a sort of "slice" cut out of the real machine's resources (CPU, RAM and disk space). These slices are called containers.

There are several benefits:

  • If someone breaks into one of your services, they only reach that one container, not the real machine and not any of the other containers.
  • It's very easy to restore a container after a machine reinstall, using "magical" recipe files called Docker Compose YAMLs. If the main OS blows up you just need to reinstall stock Debian stable and Docker, then use the magical recipes to recreate everything.
  • Containers share common image layers, so if you have 10 containers built on the same base image, the shared files are stored only once, not 10 times.
  • You can try out any server software without worrying you'll mess up your host machine. Or you can run a second configuration of the same service in a second container without worrying you'll mess up the first one.

Basic Docker commands

  • docker-compose up -d: run this in the same dir as a magical yaml file to create and start a container for the first time (on newer Docker installs the command is docker compose, without the dash).
  • docker stop cups, docker start cups, docker restart cups will stop/start/restart the cups container.
  • docker container list shows all containers you've created.
  • docker rm cups removes the cups container (if it's not running).
  • docker image list shows the software images that the containers are using.
  • docker rmi olbat/cupsd will remove the olbat/cupsd image, but only after any container based on it (like cups) has been stopped and removed.
  • docker exec cups ls /etc/cups will execute a command inside the container. You can run /bin/sh or /bin/bash this way (add -it for an interactive session, eg. docker exec -it cups /bin/sh) to explore inside the container.
  • ctop is a nice tool that will show you all containers and let you start/stop/restart them.
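
To tie these together, here's what a typical session might look like, using the cups container and olbat/cupsd image from the examples above (the dir layout is explained later in this post):

cd /mnt/array/docker/cups    # go where the compose yaml lives
docker-compose up -d         # create and start the container
docker container list        # confirm it's running
docker exec -it cups /bin/sh # poke around inside it
docker stop cups             # tear everything down:
docker rm cups               # stop and remove the container,
docker rmi olbat/cupsd       # then remove the image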

Preparing for using Docker

There are a couple of things you need to add to a fresh Debian install in order to use Docker:

  • docker, obviously; the package on Debian is called docker.io.
  • ctop is a nice CLI tool that shows your containers and lets you do stuff with them (stop, start, restart, enter shell, view logs etc.) It's basically a simple CLI version of Portainer (which I never bothered installing after all).

The following tools are indirectly related to services inside docker containers:

  • vainfo will verify that GPU-accelerated video encoding/decoding is working for AMD and Intel GPUs. This will be useful for many media streaming containers. See the Arch wiki for more.
  • avahi (which is avahi-daemon on Debian) and avahi-dnsconfd will help autodiscover some services on LAN between Linux machines. Only applicable if you have more than one Linux machine on your LAN, of course, and it's only relevant to some services (eg. CUPS).
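
On a fresh Debian install, everything above except ctop can be pulled in with apt (ctop isn't packaged in Debian; grab the binary from its GitHub releases page). A sketch:

sudo apt install docker.io vainfo avahi-daemon avahi-dnsconfd
# optional: let your regular user run docker without sudo (re-login afterwards)
sudo usermod -aG docker $USER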

Some tips about Docker

Should you use Docker or Podman? If you're a beginner just use Docker. It's a lot simpler. Yes, there are good reasons to use Podman but you don't need the extra headache when you're starting out. You'll be able to transition to Podman more easily later.

Use restart: "always" in your compose yamls and save yourself unnecessary trouble. Some people try to micromanage their containers and end up writing systemd units for each of them, and so on and so forth. With this restart policy your containers will stay stopped if stopped manually, but will start each time the docker daemon [re]starts, which most likely means at boot, which is probably all you want right now.
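
For reference, these are the restart policies Docker supports and how they differ (note that "no" has to be quoted, because a bare no is a YAML boolean):

restart: "no"            # never restart automatically (the default)
restart: always          # restart on crash, and start whenever the daemon (re)starts
restart: unless-stopped  # like always, but a manually stopped container stays stopped across daemon restarts
restart: on-failure      # only restart if the container exits with a non-zero code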

The one docker issue that will give you the most trouble is mapping users from the real machine to the container machine and back. You want the service in the container to run as a certain user, and maybe you want to give it access to some files or devices on the real machine too. But some docker images were made by people who apparently don't understand how Linux permissions work. A good docker image will let you specify what users and groups it needs to work with (emby/embyserver is a very good example). A bad image will make up some UID and GID that's completely unrelated to anything on your machine and give you no way to change them; for such images you can try to force them to run as root (UID and GID 0), but that negates some of the benefits a container was supposed to give you. So when looking for images, check whether they have a description and whether it mentions UID and GID mapping. Otherwise you will probably have a bad time.
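
To see which UID and GID your own user has, so you know what values to hand to an image, run id; the output below is just an example, yours will differ:

$ id
uid=1000(me) gid=100(users) groups=100(users),44(video),105(render)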

How do you find (good) docker images? On hub.docker.com, just search for what you need (eg. "cups"). I suggest sorting images by most recently updated, and having a look at how many downloads and stars they have, too. It's also very helpful if they have a description and instructions.

One last thing before we get to the good stuff. I made a dir on my RAID array, /mnt/array/docker, where I make one subdir for each service (eg. /mnt/array/docker/cups) to hold the magical yaml (compose.yaml) for that service; sometimes I also map config files out of the container there, so they persist even if the container is deleted. I also use git in those dirs to keep track of changes to the yaml files.
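
Setting up a new service under that layout looks roughly like this (paths are mine, adjust to taste):

mkdir -p /mnt/array/docker/cups
cd /mnt/array/docker/cups
nano compose.yaml                       # paste the "magical" recipe here
git init && git add compose.yaml && git commit -m "add cups recipe"
docker-compose up -d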

Using Emby in a Docker container

Emby is a media server that you use to index movies and series and watch them remotely on a TV, phone or tablet. Some other popular alternatives are Plex, Jellyfin and Serviio. See this comparison chart.

Here's the docker-compose.yaml, explanations below:

version: "2.3"
services:
  emby:
    image: emby/embyserver
    container_name: emby
    #runtime: nvidia # for NVIDIA GPUs
    #network_mode: host # if you need DLNA or Wake-on-Lan
    environment:
      - UID=1000 # The UID to run emby as
      - GID=100 # The GID to run emby as
      - GIDLIST=100,44,105 # extra groups for /dev/dri/* devices
    volumes:
      - "./data:/config" # emby data dir (note the dot at the start)
      - "/mnt/nas/array/multimedia:/mnt/nas/array/multimedia"
    ports:
      - "8096:8096/tcp" # HTTP port
      - "8920:8920/tcp" # HTTPS port
    devices:
      - "/dev/dri:/dev/dri" # VAAPI/NVDEC/NVENC render nodes
    restart: always

  • version is the compose file format version; recent versions of Docker Compose ignore it, so don't worry about it.
  • services and emby defines the service for this container.
  • image indicates what image to download from the docker hub.
  • container_name will name your container, normally you'd want this to match your service (and for me the dir I put this in).
  • runtime is only relevant if you have an Nvidia GPU, for accelerated transcoding. Mine is Intel so... More details on the emby image description.
  • network_mode: host will expose the container networking directly to the host machine. In this case you don't need to manually map the ports anymore. As it says, this is only needed for some special stuff like DLNA or WoL (and not even then, I achieve DLNA for example with BubbleUPnP Server without resorting to host mode).
  • environment does what I mentioned before. This is a very nicely behaved and well written docker image that not only lets you map the primary UID and GID but also accepts a list of extra GIDs, because it knows we need to access /dev devices that are owned by system groups like video and render. Very nice.
  • volumes maps dirs or files from the local real machine to the container. The config dir holds everything about Emby (settings, cache, data) so I map it outside of the container to keep it. When I installed this container I pointed it to the location of my old Emby stuff from the previous install and It Just Worked.
  • devices similarly maps device files.
  • ports maps the network ports that the app is listening on.
  • restart: remember what I said about this above.
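
With the yaml saved (in /mnt/array/docker/emby under my layout), bringing Emby up and checking on it looks like this:

cd /mnt/array/docker/emby
docker-compose up -d
docker logs -f emby        # follow the startup log, Ctrl+C to stop
# then open http://<server-ip>:8096 in a browser for the first-run setup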

Using Deluge in a Docker container

Let's look at another nicely made Docker image. Deluge is a BitTorrent client; what we put in the Docker container is actually just the server part. The server deals with the uploads/downloads but needs a UI app to manage it. The UI apps can be installed on your phone for example (I like Transdroid), but the Deluge server also includes a web interface on port 8112.

version: "2.1"
services:
  deluge:
    image: lscr.io/linuxserver/deluge:latest
    container_name: deluge
    environment:
      - PUID=1000
      - PGID=1000
      - DELUGE_LOGLEVEL=error
    volumes:
      - "./config:/config" # mind the dot at the start
      - "/mnt/nas/array/deluge:/downloads"
    ports:
      - "8112:8112/tcp" # web UI
      - "60000:60000/tcp" # BT transfers
      - "60000:60000/udp" # BT transfers
      - "58846:58846/tcp" # daemon remote control (for Transdroid)
    restart: always

Most of this is covered above with Emby so I won't repeat everything, just the important distinctions:

  • Notice how environment lets you choose what UID and GID to work as.
  • I use volumes to map out the dir with the actual downloads, as well as map all the Deluge config dir locally so I can save it across container resets/reinstalls.
  • The ports need to be defined in the deluge config (which you can do via the web UI or edit the config directly) before you map them here. IIRC these are the defaults but please check.
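
One hedged way to double-check those ports, assuming the image keeps Deluge's main config at /config/core.conf as the volume mapping suggests:

# look for the listen_ports and daemon_port settings in the deluge config
docker exec deluge grep -A2 -E "listen_ports|daemon_port" /config/core.conf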

Using Navidrome in a Docker container

Navidrome is a music indexer and streaming server (sort of like your own Spotify). It follows the Subsonic spec, so any client app that works with Subsonic will work with Navidrome (I like Substreamer). It also includes a web UI.

version: "3"
services:
  navidrome:
    image: deluan/navidrome:latest
    container_name: navidrome
    environment:
      ND_SCANSCHEDULE: 1h
      ND_LOGLEVEL: info
      ND_BASEURL: ""
      ND_PORT: 4533
      ND_DATAFOLDER: /data
      ND_MUSICFOLDER: /music
    volumes:
      - "./data:/data"
      - "/mnt/nas/array/music:/music:ro"
    ports:
      - "4533:4533/tcp"
    restart: "always"

Again, mostly self-explanatory:

  • Environment settings are nice but this image stopped short of allowing UID customization and just said fuck it and ran as root by default. Nothing to do here, other than go look for a nicer image (or try overriding it with the compose user: directive, which works with some images).
  • I mapped the data dir locally so I preserve it between resets, and the music is shared read-only.
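
After docker-compose up -d, a quick sanity check that the server is alive (the initial library scan also shows up in the logs):

curl -I http://localhost:4533   # the web UI should answer on the mapped port
docker logs -f navidrome        # watch the music scan progress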

Using BubbleUPnPServer in a Docker container

This server can do some interesting things. Its bread and butter is DLNA. It has a companion Android app called, you guessed it, BubbleUPnP, which acts as a DLNA controller. The server part here can do local transcoding so the Android phone doesn't have to (subject to some caveats, for example the phone, the DLNA source and the Bubble server need to be on the same LAN; and it can only transcode one stream at a time).

It can also identify media providers (like Emby or Plex) and media renderers (like a Chromecast or Home Mini speaker) on the LAN and DLNA-enable them, so they appear in the Bubble app as well as on other DLNA-aware devices.

version: "3.3"
services:
  bubbleupnpserver:
    image: bubblesoftapps/bubbleupnpserver-openj9-leap
    container_name: bubbleupnpserver
    network_mode: "host"
    user: "0:0"
    devices:
      - "/dev/dri:/dev/dri:rw"
    volumes:
      - "./data/configuration.xml:/opt/bubbleupnpserver/configuration.xml:rw"
    restart: "always"

  • network_mode is "host" because this server needs to interact with lots of things on the LAN automagically.
  • user forces the server to run as root. The image uses a completely made up UID and GID, there's no way to customize them, and it needs access to the /dev/dri devices (restricted to the video and render groups) for GPU-accelerated transcoding. So using root is the only solution here (short of looking for a nicer image).
  • I map the configuration file outside the container so it's saved across reset/reinstalls.

Using Samba in a Docker container

Normally I'd install samba on the host machine but Debian wanted me to install like 30 packages for it so I think that's a valid reason to use a container.

version: "2.3"
services:
  samba:
    image: twistify/anonymous-samba
    container_name: samba
    volumes:
      - "./etc/samba:/etc/samba:ro"
      - "/mnt/nas/array:/mnt/nas/array"
    ports:
      - "445:445/tcp" # SMB
      - "139:139/tcp" # NetBIOS
    restart: "always"

Normally this should be a simple enough setup, and it is simple as far as docker is concerned. Map some ports, map the config files, map the array so you can give out shares, done. But the image doesn't offer UID customization and just runs as root.

For reference I give the /etc/samba/smb.conf here too, because I know it's tricky. This one only offers anonymous read-only shares, which mostly worked out of the box (which is why I stuck with this image in spite of the root thing).

[global]
   workgroup = WORKGROUP
   log file = /dev/stdout
   security = user
   map to guest = Bad User
   log level = 2
   browseable = yes
   read only = yes
   create mask = 666
   directory mask = 555
   guest ok = yes
   guest only = yes
   force user = root

[iso]
path=/mnt/nas/array/multimedia/iso

You can add your own shares aside from [iso], as long as the paths are mapped in the yaml.
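
For example, a second read-only share could look like this (hypothetical dir; anything under the mapped /mnt/nas/array works):

[movies]
path=/mnt/nas/array/multimedia/movies

From another machine you can verify the shares are visible with smbclient -L //<server-ip> -N (-N skips the password, matching the anonymous setup).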

Notice the ugly use of root in the Samba config too.

Using CUPS inside a Docker container

CUPS is a printer server, which I need because my printer is connected via USB to the server and I want to be able to print from my desktop machine.

version: "2.3"
services:
  cups:
    image: aguslr/cups:latest
    container_name: cups
    privileged: true
    environment:
      - CUPS_USER=admin
      - CUPS_PASS=admin
    volumes:
      - "/dev/bus/usb:/dev/bus/usb" # keep this under volumes, not devices
      - "/run/dbus:/run/dbus"
    ports:
      - "631:631/tcp" # CUPS
    restart: "always"

The docker setup is not terribly complicated if you overlook things like /dev/bus/usb needing to be a volume mapping not a device mapping, or the privileged mode.

CUPS is complicated because it's a complex beast, so I'll try to add some pointers below (mostly unrelated to docker):

  • You can use lpstat -p inside the container to check if CUPS knows about your printer, and /usr/lib/cups/backend/usb to check if it sees the USB printer in particular (see the example commands after this list).
  • You need CUPS on both the server and the desktop machine you want to print from, and you need to add the printer on both of them. The CUPS interface will be at :631 on both. For printer management on the server the user+pass is admin:admin, as you can see above; on the desktop machine God only knows (typically "root" and its password, or the password of the main user).
  • The server CUPS will probably detect the USB printer and have drivers for it; this image did for mine (after I figured out the USB bus snafu). You need to mark the printer as shared!
  • ...but in order for the desktop machine to detect the printer you need to do one more thing: install Avahi daemon and dnsconfd packages on both machines because that's the stuff that actually makes it easy for the desktop machine to autodetect the remote printer.
  • ...and don't rely on the drivers from the server, the desktop machine needs its own drivers, which it may or may not have. For my printer (Brother HL-2030) I had to install an AUR package on desktop – and then the driver showed up when setting up the printer in the desktop CUPS.
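
Putting the container-side checks together (cups being the container name from the yaml above):

# does CUPS inside the container know about the printer?
docker exec cups lpstat -p
# does it see the USB printer specifically?
docker exec cups /usr/lib/cups/backend/usb
# on the desktop machine: CUPS plus the autodiscovery bits
sudo apt install cups avahi-daemon avahi-dnsconfd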

See you next time with more docker recipes! As usual any and all comments and suggestions are welcome, including "omg you're so dumb, that thing could be done easier like this".

top 1 comments
macallik@kbin.social 2 points 1 year ago

Thanks for sharing, I didn't know about ctop.

I installed Debian on a laptop to use as a server, and planned to use it for Firefly III among other things, but got nervous hearing how tough it was/is to lock down a server.