
So Podman is an open source container engine like Docker—with "full"^1^ Docker compatibility. IMO Podman's main benefit over Docker is security. But how is it more secure? Keep reading...

Docker traditionally runs a daemon as the root user, and you need to mount that daemon's socket into various containers for them to work as intended (see: Traefik, Portainer, etc.). But if someone compromises such a container and thereby gains access to the Docker socket, it's game over for your host. That Docker socket is the keys to the root kingdom, so to speak.
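For the uninitiated, that socket mount looks something like this (a sketch from memory, so the image tag and ports are just illustrative):

# that one -v line hands the container full control of the Docker daemon,
# and the daemon runs as root on the host
docker run -d \
  --name portainer \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer-ce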

Podman doesn't have a daemon by default, although you can run a very minimal one for Docker compatibility. And perhaps more importantly, Podman can run entirely as a non-root user.^2^ Non-root means if someone compromises a container and somehow manages to break out of it, they don't get the keys to the kingdom. They only get access to your non-privileged Unix user. So like the keys to a little room that only contains the thing they already compromised.^2.5^ Pretty neat.
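If you've never seen it, the whole pitch fits in two commands (the image is just an example, and rootless Podman can't bind ports below 1024 by default, hence the 8080):

# as your regular, unprivileged user -- no sudo, no daemon
podman run -d --name web -p 8080:80 docker.io/library/nginx

# "root" inside the container is just your own user on the host
podman top web user huser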

Okay, now for the annoying parts of Podman. In order to achieve this rootless, daemonless nirvana, you have to give up the convenience of the Unix users in your containers being the same as the users on the host. (Or at least having the same UIDs.) That's because Podman typically^3^ runs as a non-root user, and most containers expect to run either as root or as some other specific user.

The "solution"^4^ is user re-mapping. Meaning that you can configure your non-root user that Podman is running as to map into the container as the root user! Or as UID 1234. Or really any mapping you can imagine. If that makes your head spin, wait until you actually try to configure it. It's actually not so bad on containers that expect to run as root. You just map your non-root user to the container UID 0 (root)... and Bob's your uncle. But it can get more complicated and annoying when you have to do more involved UID and GID mappings—and then play the resultant permissions whack-a-mole on the host because your volumes are no longer accessed from a container running as host-root....

Still, it's a pretty cool feeling the first time you run a "root" container in your completely unprivileged Unix user and everything just works. (After spending hours of swearing and Duck-Ducking to get it to that point.) At least, it was pretty cool for me. If it's not when you do it, then Podman may not be for you.

The other big annoying thing about Podman is that because there's no Big Bad Daemon managing everything, there are certain things you give up. Like containers actually starting on boot. You'd think that'd be a fundamental feature of a container engine in 2023, but you'd be wrong. Podman doesn't do that. Podman adheres to the "Unix philosophy." Meaning, briefly, if Podman doesn't feel like doing something, then it doesn't. It therefore expects you to use systemd for starting your containers on boot. Which is all well and good in theory, until you realize that means Podman wants you to manage your containers entirely with systemd. So... running each container with a systemd service, using those services to stop/start/manage your containers, etc.
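For the record, that route looks roughly like this ("mycontainer" is a placeholder, and newer Podman releases would rather you use Quadlet, which comes up below):

# ask Podman to write a unit file for an existing container
podman generate systemd --new --files --name mycontainer

# install and enable it as a user service
mkdir -p ~/.config/systemd/user
mv container-mycontainer.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service

# user services die at logout unless you enable "lingering", which is also
# what gets them going at boot
loginctl enable-linger "$USER"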

Which, if you ask me, is totally bananasland. I don't know about you, but I don't want to individually manage my containers with systemd. I want to use my good old trusty Docker Compose. The good news is you can use good old trusty Docker Compose with Podman! Just run a compatibility daemon (tiny and minimal and rootless… don't you worry) to present a Docker-like socket to Compose and boom everything works. Except your containers still don't actually start on boot. You still need systemd for that. But if you make systemd run Docker Compose, problem solved!
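Concretely, something like this (the socket path can differ per distro, so verify it on your box):

# expose the Docker-compatible API as a rootless user socket
systemctl --user enable --now podman.socket

# point Compose (and anything else that speaks Docker) at it
export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
docker compose up -d

# boot-time starts still need the lingering trick from the systemd bit above,
# plus a small user unit that runs "docker compose up -d"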

This isn't the "Podman Way" though, and any real Podman user will be happy to tell you that. The Podman Way is either the aforementioned systemd-running-the-show approach or something called Quadlet or even a Kubernetes compatibility feature. Briefly, about those: Quadlet is "just" a tighter integration between systemd and Podman so that you can declaratively define Podman containers and volumes directly in a sort of systemd service file. (Well, multiple.) It's like Podman and Docker Compose and systemd and Windows 3.1 INI files all had a bastard love child—and it's about as pretty as it sounds. IMO, you'd do well to stick with Docker Compose.
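For the morbidly curious, a minimal Quadlet setup looks something like this (Podman 4.4 or newer; the file name and image are just examples, so double-check the Quadlet docs before copying):

# drop a .container file where Quadlet looks for user units
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Container]
Image=docker.io/library/nginx
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

# Quadlet generates the real unit at daemon-reload time
systemctl --user daemon-reload
systemctl --user start web.service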

The Kubernetes compatibility feature lets you write Kubernetes-style configuration files and run them with Podman to start/manage your containers. It doesn't actually use a Kubernetes cluster; it lets you pretend you're running a big boy cluster because your command has the word "kube" in it, but in actuality you're just running your lowly Podman containers instead. It also has the feel of being a dev toy intended for local development rather than actual production use.^5^ For instance, there's no way to apply a change in-place without totally stopping and starting a container with two separate commands. What is this, 2003?
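A tiny example of what that looks like (image and ports are just illustrative, and older Podman spells the command podman play kube):

cat > web-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx
      ports:
        - containerPort: 80
          hostPort: 8080
EOF

podman kube play web-pod.yaml

# no in-place "apply": changing anything means tearing it down and replaying
podman kube down web-pod.yaml
podman kube play web-pod.yaml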

Lastly, there's Podman Compose. It's a third-party project (not produced by the Podman devs) that's intended to support Docker Compose configuration files while working more "natively" with Podman. My brief experience using it (with all due respect to the devs) is that it's total amateur hour and/or just not ready for prime time. Again, stick with Docker Compose, which works great with Podman.

Anyway, that's all I've got! Use Podman if you want. Don't use it if you don't want. I'm not the boss of you. But you said you wanted content on Lemmy, and now you've got content on Lemmy. This is all your fault!

^1^ Where "full" is defined as: Not actually full.

^2^ Newer versions of Docker also have some rootless capabilities. But they've still got that stinky ol' daemon.

^2.5^ It's maybe not quite this simple in practice, because you'll probably want to run multiple containers under the same Unix account unless you're really OCD about security and/or have a hatred of the convenience of container networking.

^3^ You can run Podman as root and have many of the same properties as root Docker, but then what's the point? One less daemon, I guess?

^4^ Where "solution" is defined as: Something that solves the problem while creating five new ones.

^5^ Spoiler: Red Hat's whole positioning with Podman is like they see it as a way for buttoned-up corporate devs to run containers locally for development while their "production" is running K8s or whatever. Personally, I don't care how they position it as long as Podman works well to run my self-hosting shit....

[–] clavismil@lemmy.world 3 points 1 year ago (3 children)

Awesome summary of how podman works.

I still haven't figured out some issues with rootless podman where I pass the PUID and PGID of "myuser" (1000) as environment variables following the linuxserver.io examples... but then I get files and folders owned by 100999:100999. If I chown files to "myuser", the service gets permission denied. I gave up and chown everything to 100999 as a workaround; it works but is a bit annoying... Maybe someone here knows what's going on?

[–] jacob@lemmy.dork.lol 3 points 1 year ago* (last edited 1 year ago) (1 children)

This is a consequence of user namespaces, which tripped me up until I read this article from Red Hat about running rootless containers as a non-root user. At that point I got that (with the default options) UID 0 in the container maps to my UID (i.e. 1000), but the other mappings were confusing.

The short version of the useful part (for me) of that article was podman unshare (man podman-unshare), which launches a shell in a user namespace, like when you start a container. You can run the following command to see how the UIDs are mapped inside of the namespace:

$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536

This is read (for this purpose; see man user_namespaces for a more detailed explanation) as "inside this namespace, the UIDs starting at column 1 map to the UIDs starting at column 2 in the caller's namespace, for a range of (column 3) IDs". There is also gid_map, which works the same way but for groups.

The snippet above is from my machine, so in a podman container, UID 0 maps to UID 1000 on the "host", which is me, and that mapping is "good" for only 1 UID. Then, starting with UID 1 in the container, UIDs map to host UIDs starting at 100000, and that mapping is good for 65536 UIDs. This is why, when you set the PUID and PGID environment variables, you see files on your filesystem owned by 100999:100999 - you can use the mapping to figure the math out: 100000+1000-1=100999.

Since podman unshare puts you in a shell that has the same (? terminology might not be totally right here) user namespace as your containers, you can use it for lots of stuff -- like in your comment you mentioned using chown to change the ownership to 100999:100999. Instead, you could have used podman unshare chown 1000:1000, which would have correctly set the permissions for your volume mount; on your filesystem outside the container, the ownership would still show as 100999:100999.
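For example (the ./config path is made up, and the 100999 assumes the default mapping shown above):

# chown a bind-mounted directory "as the containers see it"
podman unshare chown -R 1000:1000 ./config

# outside the namespace, the same files show the shifted IDs
ls -ln ./config
# -> ... 100999 100999 ...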

[–] SheeEttin@lemmy.world 2 points 1 year ago (1 children)

I still don't know what "unshare" means.

[–] witten@lemmy.world 1 points 1 year ago

It doesn't mean anything useful. Its name is an artifact of its Unix roots and should be ignored, IMO. For our purposes, it basically means "run this command in the same context as my containers run, so that I can see what my containers actually see."

[–] fox@lemmy.fakecake.org 3 points 1 year ago

As far as I remember, processes that run inside the container as root (0:0) end up under your own UID.

Everything else gets mapped to those weird UIDs.

[–] sudneo@lemmy.world 1 points 1 year ago

Not a podman user, so please take this with a whole bag of salt. That seems to me like a namespace issue. Does podman use user namespaces by default? Because if that's the case, it's normal that UIDs are remapped inside the container namespace, and 1000 inside it corresponds to something else (maybe 100999?) outside.

One way to check could be cat /proc/PROC_INSIDE_CONTAINER/status | grep -i uid or cat /proc/PROC_INSIDE_CONTAINER/uid_map.

This also seems somewhat relevant to your scenario: https://stackoverflow.com/questions/70770437/mapping-of-user-ids