this post was submitted on 12 Jul 2023
119 points (93.4% liked)

Selfhosted

40183 readers
657 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago

For the vast majority of docker images, the documentation only mentions a super long, hard-to-understand "docker run" one-liner.

Why is nobody putting an example docker-compose.yml in their documentation? It's tidy and easy to understand, and much easier to run again in the future: just set and forget.

If every image shipped a yml to just copy, I could get it running in a few seconds; instead I have to decode the one-liner to turn it into a yml.

I want to know: am I just out of touch and should use "docker run", or does a one-liner simply look tidier in the docs? As if to say "hey, just copy and paste this line to run the container. You don't understand what it does? Who cares."

The worst are the ones that are piping directly from curl to "sudo bash"...
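For example, here's a made-up "docker run" one-liner for nginx and the docker-compose.yml it unpacks into (the service name, ports and paths are placeholders, not from any real project's docs):

```yaml
# Equivalent of:
#   docker run -d --name myapp -p 8080:80 \
#     -v ./data:/usr/share/nginx/html -e TZ=UTC \
#     --restart unless-stopped nginx:alpine
services:
  myapp:
    image: nginx:alpine
    container_name: myapp
    ports:
      - "8080:80"
    volumes:
      - ./data:/usr/share/nginx/html
    environment:
      - TZ=UTC
    restart: unless-stopped
```

Every flag maps to a named key, which is exactly why the yml version is easier to read and tweak later.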

[–] OmltCat@lemmy.world 76 points 1 year ago (3 children)

Because it's a "quick start": the least effort to get a taste of it. For actual deployment I would use Compose as well.

Many projects also have an example docker-compose.yml in the repository if you dig a little into it.

There is https://www.composerize.com to convert a run command to Compose. It works ~80% of the time.

I honestly don't understand why anyone would make "curl and bash" the official installation method these days, with Docker around. Unless it's the ONLY thing you install on the system, so many things can go wrong.
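For anyone unfamiliar with the pattern being criticized, here it is next to a slightly safer habit (example.com is a placeholder, of course):

```shell
# Risky: whatever the server sends is executed immediately as root
curl -fsSL https://example.com/install.sh | sudo bash

# Safer: download first, read what it actually does, then run it
curl -fsSL -o install.sh https://example.com/install.sh
less install.sh
sudo bash install.sh
```

Even then you're trusting the script with root on the host; a container at least keeps the mess in one place.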

[–] Shrek@lemmy.world 32 points 1 year ago (2 children)

I used to host composerize. Now I host it-tools, which has its own version of it and many other super helpful tools!

[–] Heastes@lemmy.world 10 points 1 year ago (1 children)

I was going to mention it-tools. It's great!
And if you need more stuff in a similar vein, cyberchef is also pretty neat.

[–] Shrek@lemmy.world 1 points 1 year ago

Nice! I wonder if there's anything one has that the other doesn't.

[–] beaumains@programming.dev 4 points 1 year ago (1 children)

You have changed my life today.

[–] Shrek@lemmy.world 2 points 1 year ago

No, the creator of it-tools did. I just told you about it. Give them a star on GitHub and maybe donate if you can ❤️

Omg I never knew about composerize or it-tools. This would save me a ton of headaches. Absolutely using this in the future.

[–] anonymoose@lemmy.ca 1 points 1 year ago (1 children)

Out of curiosity, is there much overhead to using Docker compared to installing via curl and bash? I'm guessing there are some redundant layers that Docker adds?

[–] Shrek@lemmy.world 5 points 1 year ago (1 children)

Of course, but the amount of overhead depends entirely on the container. The reason I'm willing to accept the (in my experience) very small overhead is that the repeatability you get with Docker is amazing.

My first server ran unRAID (a Slackware-based Linux distro); I set up Proxmox (Debian with a web UI) later. I took my unRAID server down for maintenance but wanted a certain service to stay up, so I copied a backup from unRAID to the other server and had the service running in minutes. If it had been a native package, there would be no guarantee that it was built for both OSes, that both builds were the same version, or that they used the same libraries.

My favorite way to extend the above is Docker Compose. I create a folder with a docker-compose.yml file and can keep EVERYTHING for that service in a single folder. unRAID doesn't use Docker Compose in its web UI, so I try to keep things in Proxmox for ease of transfer.
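A sketch of that one-folder-per-service layout (the service name and folder names are hypothetical):

```
jellyfin/
├── docker-compose.yml   # the service definition
├── config/              # bind-mounted app config
└── media/               # bind-mounted data
```

Back up or move the folder and you've backed up or moved the whole service.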

[–] anonymoose@lemmy.ca 2 points 1 year ago (1 children)

Makes sense! I have a bunch of services (plex, radarr, sonarr, gluetun, etc) on my media server on Armbian running as docker containers. The ease of management is just something else! My HC2 doesn't seem to break a sweat running about a dozen containers, so the overhead can't be too bad.

[–] Shrek@lemmy.world 1 points 1 year ago (1 children)

Yeah, that comes down entirely to the containers you're running and the people who designed them. If a container is built on Alpine Linux, you can pretty much trust it will have barely any overhead. But if it's built on an Ubuntu base image, it will carry a bunch of services that probably aren't needed in a typical Docker container.
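A quick way to see that difference yourself (sizes are approximate and vary by tag and architecture):

```shell
docker pull alpine:latest
docker pull ubuntu:latest
docker images alpine ubuntu
# alpine is typically under ~10 MB; ubuntu is closer to ~80 MB,
# before any extra packages or services are layered on top
```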

[–] anonymoose@lemmy.ca 1 points 1 year ago (1 children)

Good point. Most containers I've used do seem to use Alpine as a base. Found this StackOverflow post that compared native vs container performance, and containers fare really well!

[–] Shrek@lemmy.world 2 points 1 year ago

It seems that data is from 2014, too. I'm sure the numbers have improved in almost ten years!