this post was submitted on 07 Sep 2023
44 points (94.0% liked)

Selfhosted

Currently I’m planning to dockerize some web applications, but I didn’t find a reasonably easy way to create the images to be hosted in my repository, so I can pull them onto my server.

What I currently have is:

  1. A local computer with a directory where the application that I want to dockerize is located
  2. A “docker server” running Portainer without shell/ssh access
  3. A place where I can upload/host the Docker images and where I can pull the images from on the “Docker server”
  4. Basic knowledge on how to write the needed Dockerfile

What I now need is a sane way to build the images WITHOUT setting up a fully featured Docker environment on the local computer.

Ideally something where I can build the images and upload them but without that something “littering Docker-related files all over my system”.

Something like a VM that resets on every start maybe? So … build the image, upload to repository, close the terminal window, and forget that anything ever happened.

What is YOUR solution to create and upload Docker images in a clean and sane way?

top 44 comments
[–] JackLSauce@lemmy.world 37 points 1 year ago (2 children)
[–] nutbutter@discuss.tchncs.de 0 points 1 year ago (1 children)

What does your economic status have to do with creating docker images?

[–] JackLSauce@lemmy.world 7 points 1 year ago

My code ain't the only thing that's broke

[–] orizuru@lemmy.sdf.org 10 points 1 year ago* (last edited 1 year ago) (3 children)

For the littering part, just type crontab -e and add the following line:

@daily docker system prune -a -f
[–] vegetaaaaaaa@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

Careful, this will also delete your unused volumes (a volume not attached to a running container counts as unused, even if the container is only stopped for whatever reason). For this reason alone, always use bind mounts for volumes you care about.

[–] orizuru@lemmy.sdf.org 1 points 1 year ago

Yes.

All my self hosted containers are bound to some volume (since they require reading settings or databases).

[–] A10@kerala.party 1 points 1 year ago* (last edited 1 year ago) (1 children)

As a user with root permission, or as root?

[–] orizuru@lemmy.sdf.org 8 points 1 year ago* (last edited 1 year ago) (1 children)

You shouldn't need sudo to run docker; you can just create a docker group and add your user to it. This will give you the steps on how to run docker without sudo.

Edit: as pointed out below, please make sure that you're comfortable with giving these permissions to the user you're adding to the docker group.
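
For reference, the usual steps look roughly like this (a minimal sketch; check the official docs for your distro before running them):

sudo groupadd docker          # the group may already exist
sudo usermod -aG docker $USER # add your user to it
newgrp docker                 # or log out and back in again
docker run hello-world        # verify docker works without sudo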

[–] vegetaaaaaaa@lemmy.world 3 points 1 year ago (1 children)

run docker without sudo.

Doing that, you effectively give the user account root access without a password:

docker run -it --volume /etc:/host_etc debian /bin/bash -> can read/write anything below the host's /etc directory, including the shadow file, etc.

[–] orizuru@lemmy.sdf.org 2 points 1 year ago* (last edited 1 year ago)

True.

But I assume OP was already running docker from that user, so they are comfortable with those permissions.

Maybe I should have made it clearer. Added it to my other post. Thanks!

[–] Cyberflunk@lemmy.world -1 points 1 year ago* (last edited 1 year ago) (1 children)
[–] orizuru@lemmy.sdf.org 3 points 1 year ago* (last edited 1 year ago) (2 children)

Genuinely curious, what would the advantages be?

Also, what if the Linux distro does not have systemd?

The chances that I'm going to manage a Linux distro without systemd are low, but some systems (Arch, for example) don't have cron out of the box.

Not that big of a deal since it's easy to translate them all, but that's one of the reasons why I default to systemd/timer units.

[–] Cyberflunk@lemmy.world 1 points 1 year ago

I was just making a meme, dude. Personally, I like systemd; it's more complicated to learn, and I ended up reading books to really learn it properly. There's 100% nothing wrong with cron.

One of the reasons I like timers is journalctl integration. I can see everything in one place. Small thing.
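
For anyone curious, a daily docker system prune as a timer unit could look roughly like this (a minimal sketch; the unit name is made up and the docker path may differ on your system):

sudo tee /etc/systemd/system/docker-prune.service >/dev/null <<'EOF'
[Unit]
Description=Prune unused Docker data
[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -af
EOF
sudo tee /etc/systemd/system/docker-prune.timer >/dev/null <<'EOF'
[Unit]
Description=Run docker-prune daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now docker-prune.timer

Logs for each run then show up under journalctl -u docker-prune.service.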

[–] agressivelyPassive@feddit.de 7 points 1 year ago (2 children)

I use Gitea and a Runner to build Docker images from the projects in the git repo. Since I'm lazy and only have one machine, I just run the runner on the target machine and mount the docker socket.

BTW: If you manage to "litter your system with docker related files", you've fundamentally misused Docker. That's exactly what Docker is supposed to prevent.

[–] smik@discuss.tchncs.de 6 points 1 year ago

Self-hosting your own CI/CD is the key for OP. Littering is solved too, because litter is only a problem on long-running servers, which are an anti-pattern in a CI/CD environment.

[–] Dirk@lemmy.ml 1 points 1 year ago (2 children)

I already have Forgejo (a soft fork of Gitea) in a Docker container. I guess I need to check how I can access the exact same Docker server that it is itself hosted on …

With littering I mean several Docker dotfiles and dot-directories in the user’s home directory and other system-wide locations. When I installed Docker on my local computer, it created various images, containers, and volumes when I created an image.

This is what I want to prevent. Neither do I want nor do I need a fully-featured Docker environment on my local computer.

[–] agressivelyPassive@feddit.de 5 points 1 year ago (1 children)

Maybe you should read up a bit on how docker works; you seem to misunderstand a lot here.

For example the "various images" are kind of the point of docker. Images are layered, and each layer is its own image, so you might end up with 3 or 4 images despite only building one image.

This is something you can't really prevent. It's just how docker works.

Anyway, you can mount the docker socket into a container, and using that socket you can then build an image within the running container. That's essentially how most ci/cd systems work.

You could maybe look into podman and buildah, as far as I know, these can build images without a running docker daemon. That might be a tad "cleaner", but comes with other problems (like no caching).
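
For illustration, a one-off build through the mounted socket could look roughly like this (a sketch; myapp:latest is a placeholder tag, and docker:cli is the official CLI-only image):

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/src -w /src \
  docker:cli docker build -t myapp:latest .

The build still happens on the host's daemon (so the image lands there); the daemonless route would be something like podman build -t myapp:latest . instead.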

[–] Dirk@lemmy.ml 1 points 1 year ago (1 children)

I have no problem with Docker creating several images and containers and volumes for building a single-image application. The problem is that it does not clean up afterwards and leaves me with multiple things I don’t need for anything else.

I also don’t care about caching or any “magic” stuff. I just ideally want to run one command (or script doing it for me) to build an image resulting in just this one image without any other traces left. … I just like a clean environment and the build process ideally being self-contained.

But I’ll look into your suggestions, thanks!

[–] agressivelyPassive@feddit.de 1 points 1 year ago (1 children)

I seriously don't understand what leftovers you're talking about.

You essentially have a Dockerfile that describes how you want to build your image, you run docker build with the path of your Dockerfile and the path of the context, and the rest is completely up to you. Docker does not leave that many traces around - only the built images within docker itself, but as I said, that's the point of building them.

You can even export the image into a tar file and run docker prune afterwards, that should only leave the exported tar file.

[–] Dirk@lemmy.ml 1 points 1 year ago (1 children)

When I built an image last time, there were several other unused images with just hashes as names and two unused volumes, as well as multiple cache files and other files in various subfolders of the user’s home directory.

[–] adora@kbin.social 1 points 1 year ago

It's very possible they weren't unused.
Docker builds its images out of layers, and all the layers are used at runtime:
https://sweetcode.io/understanding-docker-image-layers/

The idea is that you can essentially change PARTS of an image, without rebuilding it entirely, which saves space and bandwidth.

[–] fhein@lemmy.world 1 points 1 year ago (1 children)

Do you mean that you want to build the docker image on one computer, export it to a different computer where it's going to run, and there shouldn't be any traces of the build process on the first computer? Perhaps it's possible with the --output option. Otherwise you could write a small script which combines the commands for docker build, export to file, delete the local image, and clean up the system.
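
Such a script could be as small as this (a sketch; myapp and the tar file name are placeholders):

#!/bin/sh
set -e
docker build -t myapp:latest .          # build from the Dockerfile in the current directory
docker save -o myapp.tar myapp:latest   # export the image to a tar file
docker rmi myapp:latest                 # drop the local copy
docker system prune -f                  # clean up dangling images and build cache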

[–] Dirk@lemmy.ml 1 points 1 year ago

I want to export the image to my repository/registry and then use it somewhere else. I also don't want to set up a complete docker environment with all the "magic" things. Just build an image and upload it.

[–] Ucinorn@aussie.zone 6 points 1 year ago

GitLab has a great set of CI tools for deploying Docker images, and includes an internal registry of images automatically tied to your repo and available in CI.

[–] demesisx@infosec.pub 6 points 1 year ago* (last edited 1 year ago) (1 children)

I build, configure, and deploy them with nix flakes for maximum reproducibility. It’s the way you should be doing it for archival purposes. With this tech, you can rebuild any docker image identically to today’s in 100 years.

https://youtu.be/0uixRE8xlbY?si=NIIFyzRhXDmcU8Kh

And here’s a link to a blog post showing how to create a Docker image and a Rust dev environment:

https://johns.codes/blog/rust-enviorment-and-docker-build-with-nix-flakes

[–] happy_saw@lemmy.world 3 points 1 year ago (1 children)

I knew you were going to mention Nix before reading your post.

[–] demesisx@infosec.pub 3 points 1 year ago* (last edited 1 year ago)

::Robert Redford nodding gif::

[–] SomeKindaName@lemmy.world 6 points 1 year ago

For local testing: build and run tests on whatever computer I'm developing on.

For deployment: I have a self-hosted GitLab instance in a Kubernetes cluster. It comes with a registry all set up. Push the project, let the CI/CD pipeline build, test, and deploy through staging into prod.

[–] pbsds@lemmy.ml 5 points 1 year ago* (last edited 1 year ago)

Nix + dockerTools.

Doesn't even need Docker, and if built with flakes I don't even have to check out the repo.

[–] A10@kerala.party 3 points 1 year ago

I use Drone CI. You can also use Woodpecker, which is a community fork of Drone CI. https://github.com/woodpecker-ci/woodpecker

[–] SpaceCadet@feddit.nl 3 points 1 year ago

VM with a docker build environment.

As for "littering", a simple docker system prune -f after a build gets rid of most of it.

[–] gamer@lemm.ee 2 points 1 year ago

I use podman, and the standalone tool “buildah” can build images from Dockerfiles, while the tool “skopeo” can upload them to an image registry.
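
Roughly, that workflow could look like this (a sketch; registry.example.com/myapp is a placeholder for your own registry and repo):

buildah bud -t myapp:latest .   # build from the Dockerfile, no daemon needed
skopeo copy containers-storage:localhost/myapp:latest \
  docker://registry.example.com/myapp:latest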

[–] skookumasfrig@sopuli.xyz 2 points 1 year ago

I use portainer, and when I deploy an image, I write a short bash script for it.

  • stop the image if running
  • pull the image
  • run the image

This lets me easily do updates. I have a script for each image I run; it's less than a dozen. They're all from public repositories.
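
Something along these lines, presumably (a sketch; the container and image names are placeholders):

#!/bin/sh
docker stop myapp 2>/dev/null || true           # stop the container if it's running
docker rm myapp 2>/dev/null || true             # remove the old container
docker pull registry.example.com/myapp:latest   # grab the latest image
docker run -d --name myapp registry.example.com/myapp:latest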

[–] avidamoeba@lemmy.ca 1 points 1 year ago

Docker, Jenkins, Docker-in-Docker (dind)

[–] xtremeownage@lemmyonline.com 1 points 1 year ago

For public projects, I use GitHub build pipelines.

For private, I use Ansible.

[–] Irisos@lemmy.umainfo.live 1 points 1 year ago

For development, I have a single image per project tagged "dev" running locally in WSL that I overwrite over and over again.

For real builds, I use pipelines on my Azure DevOps server to build the image on an agent using a remote buildkit container and push it to my internal repository. All three components are hosted in the same Kubernetes cluster.

[–] SexualPolytope@lemmy.sdf.org 1 points 1 year ago

Nowadays, I build them locally and upload stable releases to the registry. I have in the past used GitHub runners to do it, but building locally is just easier and faster for testing.

[–] thevoiceofra@mander.xyz 1 points 1 year ago
[–] ALERT@sh.itjust.works 1 points 1 year ago

PyCharm + self-hosted Docker registry

[–] ExLisper@linux.community 1 points 1 year ago

For private projects, a free GitLab account. At work, self-hosted GitLab.

[–] Kangie@lemmy.srcfiles.zip 1 points 1 year ago* (last edited 1 year ago)

Oh hey.

I've done this in a ton of different ways.

Manually, via GitLab CI/CD, and CI/CD with Kaniko.

My current favourite though is Kubler; I did a write-up for Lemmy a little while ago: https://lemmy.srcfiles.zip/post/32334

[–] Serinus@lemmy.world 0 points 1 year ago