this post was submitted on 10 Jan 2024
79 points (86.9% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


Hi! Question in the title.

I get that it's super easy to set up. But is it really worthwhile to have something that:

  • runs everything as root (not many well-built images with proper user management, it seems; see the sketch after this list)
  • you can't really know what's in the images: you have to trust whoever built them
  • lots of mess in the system (mounts, fake networks, rules...)
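
For example, on the root point, the best I can usually do is force things at runtime, roughly like this (image name and paths are placeholders, and whether it even works depends on how the image was built):

    # create an isolated network and run the container as an unprivileged user
    docker network create someapp-net
    docker run -d \
      --name someapp \
      --user 1000:1000 \
      --network someapp-net \
      -v /srv/someapp/data:/data \
      someimage:latest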

I always host on bare metal when I can, but sometimes (Immich, I'm looking at you!) it seems almost impossible.

I get Docker in a work environment, but for self-hosting? Is it really worthwhile? I would like to hear your opinions, fellow hosters.

(page 2) 34 comments
[–] vzq@lemmy.blahaj.zone 2 points 9 months ago* (last edited 9 months ago) (1 children)

How is this meaningfully different from using Deb packages? Or from building from source without inspecting the build commands? Or even just building from source without auditing the source?

In the end, Dockerfiles are just instructions for running software to set up other software. Just like every other shell script or config file in existence since the mid-seventies.
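
To make that concrete: the build steps and runtime config baked into an image can be read back, much like skimming a shell script before running it. A couple of standard Docker commands (the image name is just an example):

    # list the instructions (RUN/COPY/CMD/...) that produced each layer of the image
    docker history --no-trunc nginx:latest
    # dump the runtime config: user, env, entrypoint, exposed ports, volumes
    docker image inspect nginx:latest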

[–] scrubbles@poptalk.scrubbles.tech 1 points 9 months ago (5 children)

Your first sentence proves that it's different. The developer needs to know it's going to be a Deb package. What about RPM? What if it's going to run on a Mac? Windows? That means they'd have to change how they develop to account for all of these different platforms. Oh, you run Windows? Well, Windows doesn't ship OpenSSL, so we need to do this vs. that.

I'd recommend reading up on Docker and containerization. It is not a script for setting up software. If that's what you think it is, then you really don't understand containerization, and I'd recommend doing some learning on it. Like it or not, it's here, and if you're doing any dev/ops work professionally you will be left behind for not understanding it.
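
That is the whole point of the image format: you build one artifact and that exact same artifact runs on any host with a container runtime, instead of maintaining separate .deb/.rpm/Windows packaging. A rough sketch (registry, image name, and port are made up):

    # on the build machine
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # on any other box: Debian, Fedora, a NAS, a laptop running Docker Desktop
    docker run -d -p 8080:8080 registry.example.com/myapp:1.0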

[–] TCB13@lemmy.world 1 points 9 months ago (3 children)

Why docker?

It's all about companies re-creating and reconfiguring the way people develop software so everyone ends up hostage to their platforms. We see this in everything now; Docker/Docker Hub/Kubernetes and GitHub Actions were the first signs of this cancer.

We now have a generation of developers that doesn't understand the basics of their tech stack: networking, DNS, how to deploy a simple thing onto a server that doesn't use Docker or isn't some 3rd-party cloud deploy-from-GitHub service.

oh but the underlying technologies aren’t proprietary

True, but this Docker hype invariably leads people down a path that ends up requiring some proprietary solution or dependency somewhere, one that is only needed because the “new” technology alone doesn't deliver as older ones did. In this particular case it's Docker Hub / Kubernetes BS and all the cloud garbage around it.

oh but there are alternatives like podman

It doesn't really matter that there are truly open-source, open ecosystems of containerization technologies, because in the end people and companies will pick the proprietary/closed option just because “it's easier to use” or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/rkt/Podman, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.

lots of mess in the system (mounts, fake networks, rules…)

Yes, a total mess of devices that's hard to audit, constant RAM waste, and worst of all it isn't as easy to change a Docker image / develop things as it used to be.
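
If you want to see just how much extra state it scatters around a host, these standard Docker/Linux commands list it:

    docker network ls            # the extra bridges and user-defined networks
    docker volume ls             # anonymous volumes piling up
    docker system df             # disk eaten by images, containers and build cache
    ip addr show                 # docker0 plus one veth* pair per running container
    sudo iptables -t nat -L -n   # the NAT rules Docker injects behind your back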

[–] SpeakinTelnet@sh.itjust.works 1 points 9 months ago

I'll say this as someone who stopped using Docker and went back to deploying from source in LXC containers: Docker is a great tool for the majority of people, and that is exactly what it aims to be, easily reusable in as many different setups as possible.

On the flip side, yes, it may happen that you don't benefit from Docker for one reason or another. I don't: in my case Docker only adds another layer over my already containerized setup, and many of the services I deploy are already built from source in a CI/CD workflow and deployed through Ansible.

I do have other issues with Docker, but those are usually less with the tool and more with how some projects use Docker as a means to replace proper deployment documentation.

[–] Decronym@lemmy.decronym.xyz 1 points 9 months ago* (last edited 9 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters   More Letters
DNS             Domain Name Service/System
Git             Popular version control system, primarily for code
HTTP            Hypertext Transfer Protocol, the Web
LXC             Linux Containers
NAS             Network-Attached Storage
NAT             Network Address Translation
VPN             Virtual Private Network
k8s             Kubernetes container management package
nginx           Popular HTTP server

8 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

[Thread #423 for this sub, first seen 10th Jan 2024, 18:25] [FAQ] [Full list] [Contact] [Source code]

[–] ericjmorey@programming.dev 1 points 9 months ago

What makes it make sense in a work environment?

[–] corsicanguppy@lemmy.ca 1 points 9 months ago

It looks great on a resume, even if there's a risk you'll land a job involving it.

[–] corroded@lemmy.world -1 points 9 months ago (1 children)

My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I'd have to install my Docker containers inside that host, then forward any ports or resources between the hypervisor, Docker host, and Docker container.
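
Roughly, that chain looks like this (addresses, ports and the image are made up for illustration):

    # inside the Docker-host VM: publish the container port onto the VM
    docker run -d --name someapp -p 8080:8080 someimage:latest

    # on the Proxmox host, if the VM isn't bridged straight onto the LAN,
    # forward the port from the hypervisor to the VM
    iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.100.50:8080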

I just don't get the use-case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.
