cross-posted from: https://programming.dev/post/39212874
I recently migrated my services from rootful docker to rootless podman quadlets. It went smoothly, since nothing I use actually needs to be rootful. Well, except for caddy. It needs to be able to attach to privileged ports 80 and 443.

My current way to bypass it is using HAProxy running as root and forwarding connections using the proxy protocol. (Tried to use firewalld, but that makes the client IP opaque to caddy.) But that adds an extra layer, which means extra latency. It’s perfectly usable, but I’d like to get rid of it, if possible.
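Roughly, the HAProxy front end looks like this (a simplified sketch from memory; the port caddy listens on is just an example):

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend https_in
    bind :443
    default_backend caddy_https

backend caddy_https
    # send-proxy-v2 forwards the original client IP via the PROXY protocol
    server caddy 127.0.0.1:8443 send-proxy-v2
```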
I’m willing to run caddy in rootful podman if needed. But from what I understand, that means I can’t have it in the same rootless network as my other containers. I really don’t wanna open most of my containers’ ports, so that’s not an option.

So, I’m asking whether any of these three things are possible:
- Use firewalld to forward ports to caddy without obscuring the client’s IP.
- Make rootful caddy share a network with other rootless containers.
- Assign privileged ports to caddy somehow, in rootless mode. (I know there’s a way to make all these ports unprivileged, but is it possible to only assign these 2 ports as unprivileged?)
Or maybe there’s a fourth way that I’m missing. I feel like this is a common enough setup that there must be a way to do it. Any pointers are appreciated, thanks.
You can use rootless caddy via systemd socket activation. Here’s a basic setup:
```
[Unit]
Description=rootless-caddy
Requires=rootless-caddy.socket
After=rootless-caddy.socket

[Service]
# a non root user here
User=El_Quentinator
ExecStart=podman run --name caddy --rm -v [...] docker.io/caddy:alpine

[Install]
WantedBy=default.target
```
```
[Socket]
BindIPv6Only=both

### sockets for the HTTP reverse proxy
# fd/3
ListenStream=[::]:443
# fdgram/4
ListenDatagram=[::]:443

[Install]
WantedBy=sockets.target
```
```
{$SITE_ADDRESS} {
    # tcp/443
    bind fd/3 {
        protocols h1 h2
    }
    # udp/443
    bind fdgram/4 {
        protocols h3
    }
    [...]
}
```
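Assuming the units are installed as system units (which is what the `User=` line implies), something like this brings the socket up, and systemd then starts the service on the first connection:

```
sudo systemctl daemon-reload
sudo systemctl enable --now rootless-caddy.socket
```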
And that’s it really.
You can find a few more examples over here: https://github.com/eriksjolund/podman-caddy-socket-activation
Systemd socket activation has a few more interesting advantages on top of unlocking binding to privileged ports:

- You can run the container with `--network none` while still being able to connect to it via the systemd socket, which is pretty neat for exposing some web app while completely cutting its own external access (see the sketch below).

The drawbacks are that the file descriptor binding is a bit awkward and not always supported (caddy/nginx/haproxy do support it, though), and that podman pods / kube do not support it (or at least not yet).
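For example, reusing the service above (just a sketch, keep whatever image and volumes you already use), inbound connections still arrive through the file descriptors inherited from rootless-caddy.socket even though the container has no network of its own:

```
[Service]
User=El_Quentinator
# --network none: the container gets no network connectivity of its own,
# yet it can still accept the socket-activated connections on fd/3 and fdgram/4
ExecStart=podman run --name caddy --rm --network none -v [...] docker.io/caddy:alpine
```

This only makes sense when the container doesn’t need outbound access itself, e.g. caddy serving static files rather than reverse proxying to other hosts.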
It seems that I’d still need to modify `net.ipv4.ip_unprivileged_port_start=80` in sysctl, which I don’t want to do. If I do it, the socket isn’t even strictly necessary.

TBH I haven’t played with passing caddy’s podman network to other containers; mine is a simple reverse proxy to other standalone containers, not directly connected via `podman run --network` (or a quadlet network). In my scenario I can at least confirm that `net.ipv4.ip_unprivileged_port_start` doesn’t need to be modified; the only annoyance is that I cannot use a systemd user service, even though the end process doesn’t run as root.

EDIT: Actually, looking at the examples a bit more closely, I think the primary difference with my setup is that the systemd socket in those examples is started with `systemd --user`, which thus requires the sysctl change, whereas I’m not using a systemd user service, relying instead on `User=some-non-root-user` to use rootless podman, but requiring root privileges to manage the systemd service.
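To spell out the two variants (unit paths are the usual defaults; the user name is just a placeholder):

```
# Variant 1: system-level units, managed as root, process runs unprivileged
#   /etc/systemd/system/rootless-caddy.socket    <- root's systemd binds :443, no sysctl change needed
#   /etc/systemd/system/rootless-caddy.service   <- User=some-non-root-user
sudo systemctl enable --now rootless-caddy.socket

# Variant 2: user-level units, no root needed to manage them
#   ~/.config/systemd/user/rootless-caddy.socket
#   ~/.config/systemd/user/rootless-caddy.service
# ...but the user's systemd can only bind :443 after lowering the threshold:
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf
sudo sysctl --system
systemctl --user enable --now rootless-caddy.socket
```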