this post was submitted on 11 Mar 2024
213 points (90.5% liked)

Selfhosted


I use nftables to manage my firewall rules, and I typically write the rules myself. Recently I happened to dump the ruleset and, much to my surprise, my config was gone: it had been replaced with an enormous number of extremely cryptic firewall rules. After a quick examination of the rules, I found that it was Docker that had modified them. And after some brief research, I found a number of open issues, just like this one, from people complaining about this behaviour. I think it's an enormous security risk to have Docker silently do this by default.
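For anyone who wants to check their own box, this is roughly how to spot it (a command sketch; the exact table and chain names vary across Docker and nftables versions):

```shell
# List the live ruleset and pick out Docker-created chains.
# Docker (via iptables-nft) typically creates chains named DOCKER,
# DOCKER-USER and DOCKER-ISOLATION-STAGE-1/2 in the filter and nat tables.
nft list ruleset | grep -i docker

# Validate what your own config file would load, without applying it:
nft --check --file /etc/nftables.conf
```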

I have heard that Podman doesn't suffer from this issue, as it is daemonless. If that is true, I will certainly be switching from Docker to Podman.

[–] Molecular0079@lemmy.world 67 points 8 months ago (2 children)

If you use firewalld, both docker and podman apply rules in a special zone separate from your main one.
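You can see the split for yourself (a sketch; the dedicated zone appeared around Docker 20.10 with a reasonably recent firewalld, so older versions may differ):

```shell
# Docker puts its bridge interfaces in their own "docker" zone,
# separate from the zone guarding your uplink interface:
firewall-cmd --get-active-zones
firewall-cmd --zone=docker --list-all
```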

That being said, podman is great. Podman in rootful mode, along with podman-docker and docker-compose, is basically a drop-in replacement for Docker.

[–] Link@rentadrunk.org 12 points 8 months ago (2 children)

Is it? Last time I tried, none of my docker compose files would start correctly with podman-compose.

[–] Molecular0079@lemmy.world 17 points 8 months ago (1 children)

podman-compose is different from docker-compose. It runs your containers in rootless mode. This may break certain containers if configured incorrectly. This is why I suggested podman-docker, which allows podman to emulate docker, and the native docker-compose tool. Then you use sudo docker-compose to run your compose files in rootful mode.
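Roughly, that setup looks like this (a sketch; the package names are Fedora-style and may differ on your distro):

```shell
# Podman plus the docker CLI shim and the standalone compose tool
sudo dnf install podman podman-docker docker-compose

# Enable podman's docker-compatible API socket for the root user;
# podman-docker also symlinks /var/run/docker.sock to it
sudo systemctl enable --now podman.socket

# Rootful podman now answers on the docker socket, so plain
# compose files work unchanged:
sudo docker-compose up -d
```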

[–] warmaster@lemmy.world 2 points 8 months ago (1 children)

How is rootful Podman better than Docker? I was mostly attracted by the rootless path, but the breakage deterred me. Would you be so kind as to explain?

[–] Molecular0079@lemmy.world 1 points 8 months ago (1 children)

It isn't that much better. I use it as a drop-in Docker replacement. It's better integrated with things like Cockpit, though, and the idea is that it's easier to eventually migrate to rootless if you're already in the Podman ecosystem.

[–] warmaster@lemmy.world 1 points 8 months ago (1 children)

OK, that sounds interesting. I've found Cockpit easier to use than Proxmox. I'm new to virtualization and I don't want to do nesting; I fear it would complicate things when I need to do GPU passthrough.

How is Podman integrated into Cockpit?

Also, I had so much trouble trying to bridge my Home Assistant VM to my LAN. Are there any tutorials on how to do this from Cockpit?

[–] Molecular0079@lemmy.world 1 points 8 months ago (1 children)

Your containers show up in Cockpit under the "Podman containers" section and you can view logs, type commands into their consoles, etc. You can even start up containers, manage images, etc.

Are there any tutorials on how to do this from Cockpit?

I have not done this personally, but I would assume you need to create a bridge device in Network Manager or via Cockpit and then tell your VM to use that. Keep in mind, bridge devices only work over Ethernet.

[–] warmaster@lemmy.world 1 points 8 months ago (1 children)

bridge devices only work over Ethernet

Yes, I want to reach my HA VM from my LAN connected devices.

[–] Molecular0079@lemmy.world 1 points 8 months ago

Cockpit definitely has the ability to create bridge devices. I haven't found a tutorial specifically for cockpit, but you can follow something like this and apply the same principles to the "Add Bridge" dialog in Cockpit's network settings.
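I haven't tested this exact sequence, but since Cockpit drives NetworkManager underneath, the nmcli equivalent is roughly this (eno1 is a placeholder for your actual NIC name):

```shell
# Create the bridge and attach the physical interface to it
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname eno1 master br0

# Let the bridge take over the host's IP configuration, then activate it
nmcli connection modify br0 ipv4.method auto
nmcli connection up br0

# Finally, point the Home Assistant VM's network interface at br0
# in the VM's settings (Cockpit or virt-manager)
```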

[–] dandroid@sh.itjust.works 2 points 8 months ago (1 children)

I'm a podman user, but what's the point of using podman if you are going to use a daemon and run it as root? I like podman so I can specifically avoid those things.

[–] Molecular0079@lemmy.world 3 points 8 months ago

I am using it as a migration tool, tbh. I am trying to get to rootless, but some of the stuff I host just doesn't work well in rootless yet, so I use rootful for those containers. Meanwhile, I am using rootless for dev purposes or when testing out new services that I am unsure about.

Podman also has good integration into Cockpit, which is nice for monitoring purposes.

[–] zeluko@kbin.social 62 points 8 months ago (1 children)

Yeah, it needs those rules for e.g. port-forwarding into the containers.
But it doesn't really 'nuke' existing ones.

I have simply placed my rules at a higher priority than normal. That's very simple in nftables, and it's good not to have rules mixed between nftables and iptables in unexpected ways.
You should filter as early as possible anyway, to reduce resource usage on e.g. connection tracking.
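As a sketch, the early-filtering idea looks like this in nftables (priority -10 hooks before the default priority 0 that iptables-nft and Docker use; the ports are examples):

```shell
# /etc/nftables.conf (fragment)
table inet early_filter {
    chain input {
        type filter hook input priority -10; policy accept;
        ct state established,related accept
        iifname "lo" accept
        tcp dport { 22, 80, 443 } accept
        drop
    }
    chain forward {
        type filter hook forward priority -10; policy accept;
        # drop unwanted traffic headed for containers here, before
        # Docker's own forward chains ever see it
    }
}
```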

[–] Kalcifer@sh.itjust.works 3 points 8 months ago

But it doesnt really ‘nuke’ existing ones.

How come I don't see my previous rules when I dump the ruleset, then? I have my rules written in /etc/nftables.conf, and they were previously applied by running # nft -f /etc/nftables.conf. Now, when I dump the current ruleset with # nft list ruleset, those previous rules aren't there — all I see are Docker's rules.

[–] Auli@lemmy.ca 51 points 8 months ago (1 children)

It doesn't nuke your rules. It just adds to them.

[–] Kalcifer@sh.itjust.works 11 points 8 months ago (1 children)

How come I don't see my previous rules when I dump the ruleset, then? I have my rules written in /etc/nftables.conf, and they were previously applied by running # nft -f /etc/nftables.conf. Now, when I dump the current ruleset with # nft list ruleset, those previous rules aren't there — all I see are Docker's rules.

[–] gorgori@lemmy.world 0 points 8 months ago (1 children)

You can use a bridge network or the host network.

In a bridge network, it acts like a NAT host with its own firewall settings.

In host network mode, it will just open the ports it needs.

[–] Kalcifer@sh.itjust.works 1 points 8 months ago

I could be misunderstanding your comment, but you don't seem to have answered my question of why I don't see my rules anymore.

[–] JustEnoughDucks@feddit.nl 17 points 8 months ago (4 children)

This is standard, but often unwanted, behavior of docker.

Docker creates a bunch of chain rules but, IIRC, doesn't modify actual incoming rules (at least it doesn't for me); it just makes a chain rule for every internal Docker network item so that all of the services can contact each other.

Yes, it is a security risk, but if you don't have all ports forwarded, someone would still have to breach your internal network IIRC, so you would have many more problems than Docker.

I think from the dev's point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn't do this, they would get 1,000 issues opened per day from people saying containers don't work because they forgot to add a firewall rule for a new container.

An option to disable this behavior would be 100x better than the current situation, but what do I know lol

[–] justJanne@startrek.website 14 points 8 months ago (4 children)

That assumes you're on some VPS with a hardware firewall in front.

Often enough you're on a dedicated server that's directly exposed to the internet, with those iptables rules being the only thing standing between your services and the internet.

[–] moonpiedumplings@programming.dev 6 points 8 months ago (3 children)

Yes, it is a security risk, but if you don't have all ports forwarded, someone would still have to breach your internal network IIRC, so you would have many more problems than Docker.

I think from the dev's point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn't do this, they would get 1,000 issues opened per day from people saying containers don't work because they forgot to add a firewall rule for a new container.

My problem with this, is that when running a public facing server, this ends up with people exposing containers that really, really shouldn't be exposed.

Excerpt from another comment of mine:

It’s only docker where you have to deal with something like this:

services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped

Originally from here, edited for brevity.

Resulting in exposed services. Feel free to look at shodan or zoomeye, internet connected search engines, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.

[–] adam@doomscroll.n8e.dev 16 points 8 months ago (1 children)

But... You literally have ports rules in there. Rules that expose ports.

You don't get to grumble that Docker is doing something when you're telling it to do it.

Docker's manipulation of nftables is pretty well defined in their documentation. If you dig deep, everything is tagged and NAT'd through to the Docker internal networks.

As to the usage of the docker socket that is widely advised against unless you really know what you're doing.

[–] moonpiedumplings@programming.dev 6 points 8 months ago* (last edited 8 months ago) (1 children)

Docker's manipulation of nftables is pretty well defined in their documentation

Documentation people don't read. People expect that, like most other services, docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.

As to the usage of the docker socket that is widely advised against unless you really know what you’re doing.

Too bad people don't read that advice. They just deploy the webtop docker compose without understanding any of it. I like (hate?) linuxserver's webtop, because it's an example of two of the worst docker footguns in one place.

To include the rest of my comment that I linked to:

Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.

You originally stated:

I think from the dev's point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn't do this, they would get 1,000 issues opened per day from people saying containers don't work because they forgot to add a firewall rule for a new container.

And I'm trying to say that even if that was true, it would still be better than a footgun where people expose stuff that's not supposed to be exposed.

But that isn't the case for podman. A quick look through podman's GitHub issues shows it isn't inundated with newbies asking "how do I expose services?" because they assumed a firewall port needed to be opened. Instead, there are bug reports in the opposite direction, like this one, where services are being exposed despite the firewall being up.

(I don't have anything against you, I just really hate the way docker does things.)

[–] adam@doomscroll.n8e.dev 6 points 8 months ago* (last edited 8 months ago)

Documentation people don’t read

Too bad people don’t read that advice

Sure, I get it, this stuff should be accessible for all. Easy to use with sane defaults and all that. But at the end of the day, anyone wanting to use this stuff is exposing potential/actual vulnerabilities to the internet (via the OS, the software stack, the configuration, ... ad nauseam), and the management and ultimate responsibility for that falls on their shoulders.

If they're not doing the absolute minimum of R'ingTFM for something as complex as Docker then what else has been missed?

People expect that, like most other services, docker binds to ports/addresses behind the firewall

Unless you tell it otherwise, that's exactly what it does. If you don't bind ports, good luck accessing your NAT'd 172.17.0.x:3001 service from the internet. Podman has exactly the same functionality.

[–] null 3 points 8 months ago

My solution to this has been to not forward the ports on individual services at all. I put a reverse proxy in front of them, refer to them by container name in the reverse proxy settings, and make sure they're on the same docker network.
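Something like this, as a sketch (the image names are illustrative):

```yaml
# Only the proxy publishes a port; the backend has no "ports:" section
# at all and is reachable solely by container name on the shared network.
services:
  proxy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    networks: [web]
  app:
    image: ghcr.io/example/app:latest   # placeholder backend service
    networks: [web]
networks:
  web: {}
```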

[–] wreckedcarzz@lemmy.world 2 points 8 months ago* (last edited 8 months ago) (3 children)

So uh, I just spun up a VPS a couple of days ago with a few Docker containers and the usual security best practices. I used ufw to block everything and open only SSH and a couple of other ports, as I've been told that's all I need to do. Should I be panicking about my containers fucking with the firewall?

[–] moonpiedumplings@programming.dev 7 points 8 months ago (1 children)

Probably not an issue, but you should check. If the opened port is bound to something like 127.0.0.1:portnumber, then it's only bound to localhost and only that machine can access it. If no address is specified, then anyone who can reach the server can access that service.

An easy way to see running containers is docker ps, where you can look at forwarded ports.

Alternatively, you can use the nmap tool to scan your own server for exposed ports. nmap -A serverip does the slowest but most in-depth scan.
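The give-away is the address in front of each mapping in docker ps's PORTS column. A tiny illustrative helper (the mapping strings mirror the format docker ps prints):

```shell
# Returns success (0) if a docker port mapping is bound to all
# interfaces, i.e. reachable from outside the host.
is_public() {
    case "$1" in
        "0.0.0.0:"*|"[::]:"*|":::"*) return 0 ;;
        *) return 1 ;;
    esac
}

is_public "0.0.0.0:3000->3000/tcp" && echo "3000 is exposed"
is_public "127.0.0.1:8080->80/tcp" || echo "8080 is localhost-only"
```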

[–] wreckedcarzz@lemmy.world 1 points 8 months ago

Just waking up: I've been running Docker on my NAS for a few years now and was never made aware of this. The NAS ports appear safe, but the VPS was not, so I put 127.0.0.1 in front of the port number (so it's now 127.0.0.1:8080:80 or what have you), and that appears to resolve it. I have nginx running, so of course that's how I want to have a couple of things exposed, not everything via port.

My understanding was that port:port just allowed local redirection from container to host, and that you'd still need to do network firewall management yourself to allow stuff through; that appears to be the case on my home network, so I never had reason to question it. Thanks, I learned something today :)

Might do the same to my NAS containers, too, just to be safe. I'm using those containers as a testbed for the VPS containers, so I don't want to forget...

[–] adam@doomscroll.n8e.dev 4 points 8 months ago (1 children)

Docker will only have exposed container ports if you told it to.

If you used -p 8080:80 (cli) or - 8080:80 (docker-compose) then docker will have dutifully NAT'd those ports through your firewall. You can either not do either of those if it's a port you don't want exposed or as @moonpiedumplings@programming.dev says below you can ensure it's only mapped to localhost (or an otherwise non-public) IP.

[–] wreckedcarzz@lemmy.world 1 points 8 months ago

Thanks - more detailed reply below :)

[–] droolio@feddit.uk 3 points 8 months ago

Actually, ufw has its own separate issue you may need to deal with. (Or bind ports to localhost/127.0.0.1 as others have stated.)

[–] N0x0n@lemmy.ml 2 points 8 months ago (1 children)

Option to disable this behavior would be 100x better then current, but what do I know lol

Prevent docker from manipulating iptables

I don't know what it's actually doing; I'm just learning how to work with nftables, but I saved that link in case one day I want to manage the iptables rules myself :)
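For reference, the linked page comes down to a couple of daemon settings in /etc/docker/daemon.json (note that the docs warn this disables Docker's NAT entirely, so published ports stop working until you write the forwarding rules yourself):

```json
{
  "iptables": false,
  "ip6tables": false
}
```

Restart dockerd after changing it.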

[–] Auli@lemmy.ca 1 points 8 months ago (1 children)

Good luck. You're going to have to change the rules whenever the IP address of the container changes.

[–] N0x0n@lemmy.ml 0 points 8 months ago* (last edited 8 months ago)

If you are talking about the IP address then just add a static address, no? I do it anyway in my docker compose:

...
    networks:
      traefik.net:
        ipv4_address: 10.10.10.99

networks:
    traefik.net:
      name: traefik-net
      external: true

I'm not an expert, so maybe I'm wrong; if so, don't hesitate to correct me!

EDIT: If the IP address doesn't change, you don't need to change the routing and iptables/nftables rules, right?

[–] Kalcifer@sh.itjust.works 0 points 8 months ago

IIRC, doesn’t modify actual incoming rules (at least it doesn’t for me)

How come I don't see my previous rules when I dump the ruleset, then? I have my rules written in /etc/nftables.conf, and they were previously applied by running # nft -f /etc/nftables.conf. Now, when I dump the current ruleset with # nft list ruleset, those previous rules aren't there — all I see are Docker's rules.

[–] N0x0n@lemmy.ml 13 points 8 months ago

You can somehow change that behavior: Prevent docker from manipulating iptables

[–] BearOfaTime@lemm.ee 6 points 8 months ago (2 children)

Wow, thanks for the heads up.

Looks like it affects dockerd, but not docker desktop.

Any idea of the docker implementation in Proxmox or TrueNAS? (TrueNAS does containers if I remember right?)

[–] hperrin@lemmy.world 20 points 8 months ago (2 children)

Correct me if I’m wrong, but I don’t think Proxmox uses Docker. I’m pretty sure its containers are LXC containers.

[–] BearOfaTime@lemm.ee 1 points 8 months ago

Oh, yea, you're right. Thanks

[–] ErwinLottemann@feddit.de 1 points 8 months ago

Of course Docker Desktop is not affected, because it's a VM running Docker on your computer. The nftables rules inside that VM are still modified by Docker.

[–] onlinepersona@programming.dev 4 points 8 months ago

There's also rootless Docker. With that, there shouldn't be any more firewall shenanigans.
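Setup is roughly this, per Docker's rootless docs (a sketch; it needs newuidmap/newgidmap from the uidmap package):

```shell
# As your normal, unprivileged user:
dockerd-rootless-setuptool.sh install

# Point the CLI at the rootless daemon's socket:
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
docker run -d -p 8080:80 nginx

# Ports below 1024 need extra tweaks (e.g. net.ipv4.ip_unprivileged_port_start),
# and published ports go through a userspace proxy rather than nftables rules.
```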


[–] Decronym@lemmy.decronym.xyz 3 points 8 months ago* (last edited 6 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

HA: Home Assistant automation software; also High Availability
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
LXC: Linux Containers
NAT: Network Address Translation
VPS: Virtual Private Server (as opposed to shared hosting)
nginx: popular HTTP server

6 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.

[Thread #589 for this sub, first seen 11th Mar 2024, 10:15] [FAQ] [Full list] [Contact] [Source code]

[–] Shimitar@feddit.it 1 points 6 months ago

That's another good reason to use Podman: its rules live in nftables and are kept separate from your own.

[–] Dirk@lemmy.ml 0 points 8 months ago (2 children)

So it's better to put Docker in a VM so it can't do any harm to the host?

[–] BearOfaTime@lemm.ee 3 points 8 months ago
[–] taladar@sh.itjust.works 2 points 8 months ago

It is enough to put it into its own network namespace.