
How do you set up a server? Do you do any automation or do you just open up an SSH session and YOLO? Any containers? Is docker-compose enough for you or are you one of those unicorns who had no issues whatsoever with rootless Podman? Do you use any premade scripts or do you hand craft it all? What distro are you building on top of?

I'm currently in the process of "building" my own server, and I'm kinda wondering how "far" most people go, where y'all take shortcuts, and what you spend effort getting just right.

[-] SpaceNoodle@lemmy.world 6 points 1 year ago

I'm a lazy piece of shit and containers give me cancer, so I just keep iptables aggressive and spin up whatever on an Ubuntu box that gets upgrades when I feel like wasting a weekend in my underwear.

[-] Adubya@lemmy.world 2 points 1 year ago
[-] SpaceNoodle@lemmy.world 1 points 1 year ago

I get paid to do shit with rigor; I don't have the time, energy, or help to make something classy for funsies. I'm also kind of a grumpy old man, such that while I'll praise and embrace Python's addition of f-strings, which make life better in myriad ways, I eschew the worse laziness of the containerize-everything attitude that we see around deployment.

Maybe a day shall come when containers are truly less of a headache than just thinking shit through the first time, and I'll begrudgingly adapt and grow, but that day ain't today.

[-] ppp@lemmy.one 4 points 1 year ago* (last edited 1 year ago)

Debian + nginx + docker (compose).

That's usually enough for me. I keep all my docker compose files in their respective directories under my home directory, like ~/red-discordbot/docker-compose.yml.

The only headache I've dealt with is permissions, because I have to run docker as root and it leaves messy root-owned files all over the home directories. I've started trying rootless docker and it's been great so far.

edit: I also use rclone for backups.
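
In case it helps anyone, the rootless switch plus the rclone backup boils down to roughly this; the packages come from Docker's rootless docs, and the rclone remote name is just a placeholder:

    # one-time rootless setup for the current user (needs docker-ce-rootless-extras and uidmap installed)
    dockerd-rootless-setuptool.sh install
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

    # bring a service up from its compose directory in the home directory
    cd ~/red-discordbot && docker compose up -d

    # back the compose directory up with rclone ("gdrive" is only an example remote)
    rclone sync ~/red-discordbot gdrive:backups/red-discordbot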

[-] varchar@lemmy.world 4 points 1 year ago

I use NixOS on almost all my servers, with declarative configuration. I can also install my config in one command with NixOS-Anywhere

It allows me to improve my setup bit by bit without having to keep track of what I did on specific machines
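
A rough sketch of what that one command looks like; the flake attribute and IP are placeholders:

    # install the flake's configuration onto a fresh machine over SSH
    nix run github:nix-community/nixos-anywhere -- --flake .#webserver root@192.0.2.10

    # later tweaks go out as a rebuild against the same flake
    nixos-rebuild switch --flake .#webserver --target-host root@192.0.2.10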

[-] mordred@lemmy.world 3 points 1 year ago

Raspberry Pi, Arch Linux, docker-compose. I really need to look into Ansible.

[-] clavismil@lemmy.world 3 points 1 year ago

I use Debian VMs and create rootless podman containers for everything. Here's my collection so far.
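
For anyone trying the same, the rootless prep on a fresh Debian VM is roughly the following; the username and container are only examples:

    # packages plus per-user UID/GID ranges for rootless containers
    sudo apt install -y podman uidmap slirp4netns
    sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 deploy
    # keep that user's containers running after logout
    sudo loginctl enable-linger deploy

    # then, as that user, containers run without root
    podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma:1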

I'm currently in the process of learning how to combine this with ansible... that would save me some time when migrating servers/instances.

[-] VexCatalyst@lemmy.fmhy.ml 3 points 1 year ago

Generally, it's Proxmox, Debian, then whatever is needed for what I'm spinning up. Usually Docker Compose.

Lately I've been playing with Ansible a bit, but its use is far from common for me right now.

[-] nzeayn@lemmy.world 3 points 1 year ago

About two years ago my setup had gotten out of control, as these things do. A closet full of crap, all running VMs, all poorly managed by Chef. Different Linux flavors everywhere.

Now it's one big physical Ubuntu box, and everything gets its own Ubuntu VM. These days, if I can't do it in shell scripts and XML, I'm annoyed; for anything fancier than that I'd better be getting paid. I document in markdown as I go and rsync the important stuff from each VM to an external drive every night. If something goes wrong, I just burn the VM, copy-paste it back together in a new one from the mkdocs site, then get on with my day.
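
The nightly pull can be as small as one crontab line per VM, something like this (host and paths are made up):

    # crontab on the backup box: pull each VM's important dirs at 02:00
    0 2 * * * rsync -a --delete vm-jellyfin:/srv/appdata/ /mnt/external/vm-jellyfin/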

[-] philip@kbin.chat 2 points 1 year ago

I use the following procedure with ansible.

  1. Set up the server with the things I need for k3s to run
  2. Set up k3s
  3. Bootstrap and create all my services on k3s via ArgoCD (roughly the commands sketched below)
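
Stripped of the Ansible wrapping, steps 2 and 3 come down to roughly the following; these are the upstream install commands, and the app-of-apps repo would be your own:

    # install single-node k3s
    curl -sfL https://get.k3s.io | sh -

    # bootstrap ArgoCD from the upstream manifest
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

    # from here, an app-of-apps Application pointed at your Git repo pulls in everything else
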
[-] redcalcium@c.calciumlabs.com 1 points 1 year ago

People like to diss running Kubernetes on your personal servers, but once you have enough services running, managing them with docker compose no longer cuts it, and Kubernetes is the next logical step. Tools such as k9s make navigating a Kubernetes cluster a breeze.

[-] Krafting@lemmy.world 2 points 1 year ago

Proxmox, then create LXCs for everything (mostly Debian and a bit of Alpine), no automation, full YOLO; if it breaks I have backups (problems are for future me, eh)

[-] asjmcguire@kbin.social 2 points 1 year ago

This.
Proxmox and then LXCs for anything I need.

and yes - I cheat a bit, I use the excellent Proxmox scripts - https://tteck.github.io/Proxmox/ because I'm lazy like that haha

[-] wasney@kbin.social 1 points 1 year ago

Mostly the same. Proxmox with several LXC, two of which are running docker. One for my multimedia, the other for my game servers.

[-] arkcom@kbin.social 1 points 1 year ago

I used to do the same, but nowadays I just run everything in docker, within a single lxc container on proxmox. Having to setup mono or similar every time I wanted to setup a game server or even jellyfin was annoying.

[-] ANuStart@kbin.social 2 points 1 year ago

I use Proxmox, then stare at the dashboard realizing I have no practical use for a home lab.

[-] Stijn@lemmy.antemeridiem.xyz 1 points 1 year ago

So I'm not alone. I am trying to better myself.

[-] dotslashme@infosec.pub 2 points 1 year ago

Usually Debian as the base, then Ansible to set up OpenSSH for access. For the longest time I just ran docker-compose straight on bare metal; these days, though, I prefer k3s.

[-] leopardboy@netmonkey.tech 2 points 1 year ago

For personal Linux servers, I tend to run Debian or Ubuntu, with a pretty simple "base" setup that I just run through manually in my head.

  • Set up my personal account.
  • Upload my SSH keys.
  • Configure the hostname (usually named after something in Star Trek 🖖).
  • Configure the /etc/hosts file.
  • Make sure it is fully patched.
  • Set up ZeroTier.
  • Set up Telegraf to ship some metrics.
  • Reboot.

I don't automate any of this because I don't see a whole lot of point in doing it.
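
Spelled out as commands, that run-through looks roughly like this; the hostname, usernames, and network ID are placeholders, and Telegraf assumes the InfluxData apt repo is already configured:

    adduser leopardboy && usermod -aG sudo leopardboy   # personal account
    ssh-copy-id leopardboy@enterprise                   # upload SSH keys (run from the workstation)
    hostnamectl set-hostname enterprise                 # Star Trek hostname
    echo '127.0.1.1 enterprise' >> /etc/hosts           # /etc/hosts entry
    apt update && apt full-upgrade -y                   # fully patched
    curl -s https://install.zerotier.com | bash         # ZeroTier
    zerotier-cli join <network-id>
    apt install -y telegraf                             # metrics shipper
    reboot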

[-] myersguy@lemmy.simpl.website 1 points 1 year ago

Super interesting to me that you swap between Debian and Ubuntu. Is there any rhyme or reason to why you use one over the other?

[-] leopardboy@netmonkey.tech 1 points 1 year ago

I tend to prefer installing Debian on a server, but recently I did install Ubuntu's recent LTS on a box because I was running into an issue with the latest version of Debian. I didn't want to revert to an earlier version of Debian or spend a bunch of time figuring out the problem I was having with Python, so I opted to use Ubuntu, which worked.

Ubuntu is based on Debian, so it's like using the same operating system, as far as I'm concerned.

[-] tr00st@lemmy.tr00st.co.uk 2 points 1 year ago* (last edited 1 year ago)

Up until now I've been using Docker and mostly configuring things manually by dumping docker compose files in /opt/whatever and calling it a day. Portainer is running, but I mainly use it for monitoring and the occasional admin task. Yesterday, though, I spun up machine number 3, and I'm strongly considering setting up something better for provisioning/config. Once it's all set up right it's never been a big problem, but there are a couple of bits of initial setup that are a bit of a pain (mostly hooking up WireGuard, which I use as a tunnel for remote admin and off-site reverse proxying).
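
The initial hookup that causes the pain is roughly this on each new box; the interface name and key paths follow wg-quick defaults, everything else is an example:

    # generate the machine's WireGuard keypair
    wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

    # after adding the peer details to /etc/wireguard/wg0.conf, bring the tunnel up and keep it up
    wg-quick up wg0
    systemctl enable wg-quick@wg0

    # then the usual routine for each service
    cd /opt/whatever && docker compose up -d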

Salt is probably the strongest contender for me, though that's just because I've got a bit of experience with it.

[-] EmptyRadar@kbin.social 2 points 1 year ago

After many years of tinkering, I finally gave in and converted my whole stack over to UnRAID a few years ago. You know what? It's awesome, and I wish I had done it sooner. It automates so many of the more tedious aspects of home server management. I work in IT, so for me it's less about scratching the itch and more about having competent hosting of services I consider mission-critical. UnRAID lets me do that easily and effectively.

Most of my fun stuff is controlled through Docker and VMs via UnRAID, and I have a secondary external Linux server which handles some tasks I don't want to saddle UnRAID with (pfSense, ad blocking, etc.). The UnRAID server itself has 128GB RAM and dual Xeon CPUs, so plenty of go for my home projects. I'm at 12TB right now but I was just on Amazon eyeing some 8TB drives...

[-] jlh@lemmy.jlh.name 1 points 1 year ago

Kubernetes.

I deploy all of my container/Kubernetes definitions from Github:

https://github.com/JustinLex/jlh-h5b/tree/main/applications

[-] CanofBeanz@lemmy.world 1 points 1 year ago

I use Unraid and its Docker and VM integration. Works great for me as a home user with mixed drives. Most of the Docker containers I want already have Unraid templates, so they need less configuration. It does everything I want, and the templates and mixed-drive support make it all a bit easier for me.

[-] th3raid0r@tucson.social 1 points 1 year ago

I use a heterogeneous environment, with some things hosted at various cloud providers and others locally. Often I can find the package I need, but if I can't, I go for Docker and docker-compose. That's frequently the case on Oracle Linux on OCI, where Docker just makes things so much easier.

For my static stuff I just use Cloudflare Pages and forget about it.

On my homelab it is Arch Linux with my own set of scripts. I used to do VFIO gaming a lot (less now), so I had the host only be a hypervisor and used a separate Arch VM to host everything in a docker-compose stack. The VM makes my server operations a lot more tidy.

My RPI is using dietpi and is natively running the pihole software and a couple other things.

I know some folks swear by UnRaid and Proxmox, but I've always found those platforms limited me vs building things my way. Also borking my own system unintentionally on occasion is a thrilling opportunity to learn!

[-] Illecors@lemmy.cafe 1 points 1 year ago

Xen on Gentoo with Gentoo VMs. I've scripted the provisioning in bash; it's fairly straightforward: create an LVM volume, extract the latest root, tell Xen which kernel to boot.
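
Roughly what that bash does, with the volume group, sizes, and file names as examples; the tarball is whatever "latest root" you keep around:

    # carve out storage and drop the root filesystem onto it
    lvcreate -L 20G -n guest1 vg0
    mkfs.ext4 /dev/vg0/guest1
    mount /dev/vg0/guest1 /mnt/guest1
    tar xpf /srv/roots/latest-root.tar.xz -C /mnt/guest1 --xattrs-include='*.*' --numeric-owner
    umount /mnt/guest1

    # guest1.cfg names the kernel to boot and points the disk at the LVM volume
    xl create /etc/xen/guest1.cfg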

Ideally I'd like to netboot a read-only root off NFS and apply config from some source. Probably bash :D

Some things like opnsense are much more handcrafted because they're a kind of unicorn compared to the rest of the stuff.


NixOS instances running Nomad/Vault/Consul. Each service behind Traefik with LE certs. Containers can mount NFS shares from a separate NAS which optionally gets backed up to cloud blob storage.

I use SSH and some CLI commands for deployment, but only because that's faster than CI/CD. For the most part I'm only running 'nomad run …'.
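
In practice that's little more than the following; the server address and job file are placeholders:

    # point the CLI at a server node and push the job
    export NOMAD_ADDR=https://nomad-1.lan:4646
    nomad job run jobs/jellyfin.nomad.hcl
    nomad job status jellyfin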

The goal was to be resilient to single node failures and align with a stack I might use for production ops work. It’s also nice to be able to remove/add nodes fairly easily without worrying about breaking any home automation or hosting.

[-] saplyng@kbin.social 1 points 1 year ago

I've set up some godforsaken combination of docker, podman, nerdctl and bare metal at work for stuff I needed since they hired me. Every day I'm in constant dread something I made will go down, because I don't have enough time to figure out how I was supposed to do it right T.T

[-] master@lem.serkozh.me 1 points 1 year ago

A series of VPSes running AlmaLinux. I have a relatively big Ansible playbook to set up everything after a server comes online. The idea is that I can at any time wipe the server, install the OS, put back all the persistent data (Docker volumes and the /srv partition with all the heavy data), and run the playbook.
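
The rebuild is essentially this; host names and inventory paths are made up:

    # after a fresh OS install: restore the persistent data from the backup host
    rsync -a backup-host:/srv/ /srv/
    rsync -a backup-host:/var/lib/docker/volumes/ /var/lib/docker/volumes/

    # then re-apply everything else with the playbook
    ansible-playbook -i inventory/production.ini site.yml --limit new-vps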

Docker Compose for services, last time I checked Podman, podman-compose didn't work properly, and learning a new orchestration tool would take an unjustifiable amount of time.

I try to avoid shell scripts as much as possible because it's hard to write them in a way that handles all possible scenarios, they're difficult to debug, and they can make a mess when not done properly. Premade scripts are usually the big offenders here, and they're a nice way to leave you without a single clue about how the stuff they set up actually works.

I don't have a selfhosting addiction.

[-] augentism@thaumatur.ge 1 points 1 year ago* (last edited 1 year ago)

Right now, I just flash Ubuntu Server onto whatever computer it is, SSH in, and YOLO, lmao. No containers, no managers, just me, my servers, and a VPN, raw-dogging the internet lmao. The box is running a NAS, Jellyfin, Lemmy, and a print server; the laptop a Minecraft server; and the Pi is running a Pi-hole plus a website that drives GPIO to control the lights. In this setup I don't have access to the apartment complex's router, so I VPN through an OpenVPN server I set up on a DigitalOcean server.

I didn't even know what a container was until I set up the Lemmy server, which I just used Ansible for.

I still don't really know what Ansible is.

[-] entropicshart@lemmy.world 1 points 1 year ago

I run Unraid on my server box with a few 8TB HDDs and an NVMe for cache. From there it is really easy to spin up Docker containers or stacks using compose, as well as VMs using your ISO of choice.

For automation, I use Ansible for one-click machine setup; it is great for any cloud provider work too.

[-] ZippyWonderdust@kbin.social 1 points 1 year ago

A bunch of old laptops running Ubuntu Server and docker-compose. Laptops are great: built-in screen, keyboard, and UPS (the battery), and they're more than capable of handling the kind of light workloads I run.

[-] Laura@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

For me it's Ubuntu Server as the OS base, SWAG as the reverse proxy, and docker-compose for the services. So mostly SSH and YOLO, but with containers. I'd guess having something like Portainer running would probably be useful, but for me the terminal was enough.

As for folder structure, I just have a services directory with subfolders for each app/service.

Sqlite where possible, nginx, linux, no containers. I hate containers.

[-] railsdev@programming.dev 3 points 1 year ago

I'm somewhere in between. I hated containers for a long time, but now I work with Kubernetes a lot at work.

For my personal projects I’ve always hated containers a lot. Once I started learning how to build them and build them well however I really started enjoying it.

Using others’ containers is always hit or miss because a lot of them are WAY bloated. I especially hate all the docker-compose files that come with some database included as if I’m dying to run a ton of containerized database servers. Usually the underlying software supports the Postgres I run on the host itself.

[-] thomas@lemmy.douwes.co.uk 1 points 1 year ago

I have a stupidly overcomplicated networking script that never works, so every time I set up a new server I need to fix a myriad of weird issues I've never seen before. Usually I set up a server with a keyboard and mouse attached, because SSH needs working networking; if it's a cloud machine, it's the QEMU console or hundreds of reboots.

[-] thecdc1995@lemmy.world 1 points 1 year ago

I use SSH to manage docker compose. I'm just using a raspberry pi right now so I don't have room for much more than Syncthing and Dokuwiki.

[-] fristislurper@feddit.nl 1 points 1 year ago

Don't underestimate a pi! If you have a 3 or up, it can easily handle a few more things.

[-] thecdc1995@lemmy.world 1 points 1 year ago

I forgot to mention I also have a samba share running on it and it's sooooooo sloooooow. I might need to reflash the thing just to cover my bases but it's unusable for large or many files.

[-] ptz@dubvee.org 1 points 1 year ago

Debian netinst via PXE, SSH/YOLO, docker + compose (formerly swarm), scripts are from my own library, Debian.

[-] neo@lemmy.hacktheplanet.be 1 points 1 year ago

I've recently switched my entire self hosted infrastructure to NixOS, but only after a few years of evaluation, because it's quite a paradigm shift but well worth it imho.

Before that I used to stick to a solid base of Debian with some docker containers. There are still a few of those remaining that I have yet to migrate to my NixOS infra (namely mosquitto, gotify, nodered and portainer for managing them).

[-] skywhale241@lemmy.one 1 points 1 year ago

Ansible and docker compose.

[-] cablepick@lemmy.cablepick.net 1 points 1 year ago

Proxmox and shell scripts. I have everything automated from base install to updates.

All the VMs are Debian, installed with a custom seed file. Each VM has a config script that completely sets up all users, iptables, software, mounts, etc. SSL certs are updated on one machine with acme.sh and then pushed out as necessary.
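
The cert piece is roughly the following; the domain, DNS provider, and target host are placeholders, and the exact cert paths depend on the acme.sh defaults in use:

    # issue/renew on the one cert machine via a DNS-01 challenge
    acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'

    # push the renewed cert to a host that needs it and reload its web server
    scp ~/.acme.sh/example.com_ecc/fullchain.cer web1:/etc/ssl/example.com/fullchain.pem
    scp ~/.acme.sh/example.com_ecc/example.com.key web1:/etc/ssl/example.com/privkey.pem
    ssh web1 systemctl reload nginx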

One of these days I’ll get into docker but half the fun is making it all work. I need some time to properly set it up and learn how to configure it securely.
