this post was submitted on 03 Jul 2023
14 points (93.8% liked)

Selfhosted


Do you host all services just from your root account with Docker, or do you separate the services between user accounts with rootless Docker?

Do you use podman or docker?

It's easier to just host everything from root with normal Docker, but separating services into dedicated user accounts is probably a lot safer, at least as far as I know. Do you think it's worth going the extra step, or do you just trust Docker and your containers not to get exploited?

Last but not least, do you use an automatic update service for your host system and your containers?
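For concreteness, here is a rough sketch of what the per-user, rootless variant being asked about can look like, using rootless Podman as the example runtime. The user name and image are placeholders, not anything from this thread:

```bash
# One unprivileged Unix user per service, running the service rootless.
sudo useradd --create-home svc-nextcloud
sudo loginctl enable-linger svc-nextcloud   # keep its user services running without an open login session

# Then, as that user (e.g. via `sudo -iu svc-nextcloud`):
export XDG_RUNTIME_DIR=/run/user/$(id -u)   # rootless Podman expects this to be set
podman run -d --name nextcloud -p 8080:80 docker.io/library/nextcloud:28
```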

top 14 comments
[–] witten@lemmy.world 7 points 1 year ago (1 children)

I use rootless Podman, because security. A container breakout exploit will only impact that one Unix user. Plus no Docker daemon to worry about.

I don't separate services into separate users, although maybe I should. The main impediment with separation is that you give up the conveniences of container networking / container DNS and have to connect everything on the host instead. I don't know if that's even possible (conveniently) with a service like Traefik that's supposed to introspect running containers. Also, with separation by Unix user, there's not one convenient place to SSH in and run podman ps or docker ps to see all containers. Maybe not a big deal?
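One rough workaround for the "no single podman ps" problem is to loop over the service accounts from one admin shell. The `svc-` prefix here is just an assumed naming convention:

```bash
# List containers of every per-service user in one go.
for u in $(getent passwd | awk -F: '/^svc-/ {print $1}'); do
  echo "== $u =="
  sudo -u "$u" bash -c 'XDG_RUNTIME_DIR=/run/user/$(id -u) podman ps'
done
```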

Auto-update of containers: No, I don't, because updates sometimes break things and I want to be there in case something goes wrong. The one exception is that I auto-update the containers I develop myself, as the last implicit deployment step of a CI pipeline.
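That last CI step could be as small as a one-liner; the host, registry, image and unit names below are invented for illustration:

```bash
# Final CI deployment step: pull the freshly built image and restart the service.
ssh deploy@myhost.example \
  'podman pull registry.example.com/me/myapp:latest && systemctl --user restart myapp.service'
```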

[–] oranki@sopuli.xyz 2 points 1 year ago

+1 for rootless Podman. Kubernetes YAMLs to define pods, which are started/controlled by systemd. SELinux for added security.

Also +1 for not using auto updates. Using the latest tag has bitten me more times than I can count; now I only use it for testing new stuff. All the important services have at least the major version pinned as the tag.
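As an illustration of the "kube YAML with a pinned major tag, run by Podman" setup (the pod and image here are placeholders):

```bash
# Define a pod in Kubernetes YAML and run it with Podman.
mkdir -p ~/pods
cat > ~/pods/postgres.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: db
      image: docker.io/library/postgres:16   # major version pinned instead of :latest
      ports:
        - containerPort: 5432
EOF

podman kube play ~/pods/postgres.yaml   # a systemd/Quadlet .kube unit can start this at boot
```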

[–] Arcidias@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

I keep all my services in one docker-compose.yml and run it from a normal user account added to the docker group.
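A bare-bones version of that setup might look like the following; the services listed are placeholders, not this poster's actual stack:

```bash
# Non-root user in the docker group, one compose file for everything.
sudo usermod -aG docker "$USER"   # note: docker group membership is effectively root-equivalent

mkdir -p ~/stack
cat > ~/stack/docker-compose.yml <<'EOF'
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    restart: unless-stopped
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    restart: unless-stopped
EOF

docker compose -f ~/stack/docker-compose.yml up -d
```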

I am really conscious of what I expose to the internet though, since I already almost had a security incident.

I used to run SSH on a non-standard port on my machine with password authentication enabled.

Turns out I didn't know the sonarr/radarr containers came with default users, and a brute-force attack managed to log in to one of them (or something like that anyway; it's been a while). Fortunately they have a default shell of /sbin/nologin, so crisis averted there, but it was definitely a big lesson for me.

Years later, the current setup is only Plex, Tautulli, and Ombi open to the internet, and to reach everything else I use Tailscale. And of course, only key-based authentication.
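For reference, "only key-based authentication" boils down to a few sshd directives. Exact file layout and service name vary by distro, so treat this as a sketch rather than the poster's actual config:

```bash
# Disable password logins, allow keys only, and keep root password logins off.
sudo tee -a /etc/ssh/sshd_config >/dev/null <<'EOF'
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
EOF
sudo sshd -t && sudo systemctl reload ssh   # validate config, then reload (unit is "ssh" on Ubuntu)
```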

Oh, and for updates: I run apt upgrade once in a while on the box (Ubuntu Server 18.04 LTS), and for the containers I use Watchtower.

[–] supersheep@lemmy.world 4 points 1 year ago

Currently, I'm just using my root account with Docker and update everything manually. I have dockcheck-web installed to check whether any updates are available (https://github.com/Palleri/DCW). From the outside everything is only accessible via WireGuard, and connections have to go through a Caddy proxy in order to reach a container. Curious what other people's setups are.
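A guess at what the Caddy-in-front-of-containers piece looks like; the hostname, upstream port and the assumption that only the WireGuard side can reach it are purely illustrative:

```bash
# Minimal Caddyfile: one internal site name proxied to a container's published port.
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
photos.home.arpa {
    tls internal               # private name, so use Caddy's internal CA
    reverse_proxy 127.0.0.1:2342
}
EOF
sudo systemctl reload caddy
```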

[–] kuroshi@lemmy.ramble.moe 3 points 1 year ago

Kubernetes, but I'm getting a bit tired of dealing with it. I might try using microVMs for what I'm currently using Pods for, and hopefully make the whole system easier to maintain. The overhead of Kubernetes is a heck of a lot more than I anticipated; I had to set up a whole second machine for what I used to be able to do on a single one.

[–] poVoq@slrpnk.net 2 points 1 year ago

Podman managed through Quadlet container files and systemd. Rootless where easily possible, but that often requires a bit more work. Auto updates only when an update is unlikely to break things.
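A minimal Quadlet ".container" file of the kind described here, for a rootless user session; the name and image are placeholders:

```bash
# Quadlet turns this unit-like file into a generated systemd service.
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/whoami.container <<'EOF'
[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload
systemctl --user start whoami.service   # Quadlet generates whoami.service from the .container file
```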

[–] Ducks@ducks.dev 2 points 1 year ago

k3s with Rancher. I was using k8s before but redid everything. K3s is overkill for what I do and causes millions of headaches, but I enjoy learning through brute force.

I use k8s at work, so it's good experience to run my own k3s.

[–] ShittyKopper@lemmy.w.on-t.work 2 points 1 year ago* (last edited 1 year ago)

Rootful Podman & podman-compose. Waiting on the version of Podman that supports passt to hit Debian Bookworm or backports to attempt rootless. Deployed with Ansible, except for a few manual parts like creating the Postgres databases themselves.

No auto updates or notifications so far, as there seem to be a couple of incompatibility issues left between Watchtower and Podman. Although since I switched CrowdSec to monitor journald instead of the Podman socket, I don't really have a reason to keep the daemon running anymore, and I think that's for the best.
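If nothing talks to the Podman API socket anymore, switching it off is a one-liner; this assumes the standard systemd units shipped with Podman (use `--user` instead for a rootless setup):

```bash
# Stop and disable the Podman API socket once CrowdSec reads journald instead.
sudo systemctl disable --now podman.socket
```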

[–] NewDataEngineer@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

Rootless Docker via Terraform. I can create all my containers with Traefik and dashboard configs at the click of a button.

[–] hitagi@ani.social 1 points 1 year ago

I use rootless docker and dump everything in the home directory. I do manual updates and receive weekly email notifications via newreleases.io

[–] SheeEttin@lemmy.world 1 points 1 year ago

I run Docker on AlmaLinux on Proxmox. Nothing is exposed to the Internet. Yes, I do automatic updates for everything, but reboots are manual.
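One way the "automatic updates, manual reboots" combination can look on AlmaLinux; dnf-automatic is a generic choice here, not confirmed as this poster's setup:

```bash
# Install and enable unattended package updates on an EL-based host.
sudo dnf install -y dnf-automatic
sudo sed -i 's/^apply_updates.*/apply_updates = yes/' /etc/dnf/automatic.conf
sudo systemctl enable --now dnf-automatic.timer
# Reboots stay manual; `dnf needs-restarting -r` (from yum-utils) reports when one is due.
```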

[–] easeKItMAn@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

I’m using network overlays for individual containers and separation.
Secondly, fail2ban is installed on the host to secure the Docker services: ban in the Docker-specific FORWARD chains instead of the INPUT chain (see "Configure Fail2Ban for a Docker Container" on seifer.guru). Use 2FA for services where available.
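A sketch of pointing a jail at the Docker forwarding path instead of INPUT; the jail name, filter and log path are placeholders and would need a matching filter definition:

```bash
# Jail whose bans land in the DOCKER-USER chain rather than INPUT.
sudo tee /etc/fail2ban/jail.d/myservice.local >/dev/null <<'EOF'
[myservice]
enabled = true
filter  = myservice
logpath = /var/log/myservice/access.log
port    = http,https
chain   = DOCKER-USER
EOF
sudo systemctl reload fail2ban
```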

Rootless Docker has limitations when it comes to exposing ports, storage drivers, network overlays, etc.

The host auto-installs security patches but is only rebooted manually.
Docker containers are updated manually too. I build all containers from my own files and don't pull them, because most are modified (plugins, minimized sizes, dedicated user rights, etc.).

[–] sunbeam60@lemmy.one 1 points 1 year ago

Docker and a Synology NAS. Everything is accessed through a WireGuard VPN.

[–] tupcakes@lemmy.world 1 points 1 year ago

Nomad, consul, and gluster. Not as easy as a simple docker compose, but definitely not as annoying as kubernetes.
