this post was submitted on 24 Jul 2023
44 points (94.0% liked)

Selfhosted


TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. Is that bad practice, or bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but has only a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it's slower to run off HDDs compared to SSDs, but I do not have a large SSD. The question is: (why) would it be "bad practice" to separate CPU and storage this way? Isn't that pretty much what a data center also does?
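For reference, the setup described above can be expressed in a Compose file as a named volume backed by an NFS export. This is a hypothetical sketch; the IP address, export path, and NFS options are placeholders, not the poster's actual values:

```yaml
# Hypothetical docker-compose fragment: a named volume backed by an
# NFS share on the NAS, mounted into a container at /data.
services:
  app:
    image: nginx:alpine
    volumes:
      - media:/data

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      # addr is the NAS IP; device is the exported path on the NAS.
      o: "addr=192.168.1.10,nfsvers=4,rw"
      device: ":/volume1/media"
```

Docker mounts the NFS share lazily when the first container using the volume starts, so the NAS must be reachable at container start time.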

[–] dbaines@lemmy.world 2 points 1 year ago (2 children)

I use this approach myself but am starting to reconsider.

I have an Asus PN51 (NUC-like) minipc as my server / brains, hosting all my Docker containers etc. All containers and their volumes are locally stored on the device. I have a networked QNAP NAS for storage for things like Plex / Jellyfin.

It's mostly ok, but the NAS is noticeably slower to start up than the NUC, which has caused issues after power loss: Plex thinks all the movies are gone, so it empties out the library, and when the NAS comes back up it reindexes and throws off all the dates for everything. It also empties out tags (collections), and things like Radarr and Sonarr will start fetching items they think don't exist anymore. I've stopped those problematic services from starting on boot to hopefully fix those issues, and I've also added a UPS to avoid minor power outs.

[–] tburkhol@lemmy.world 3 points 1 year ago

You might be able to solve some of these issues by editing the systemd unit files. Change or add an After= directive to make sure the network storage is fully mounted before the dependent service starts.

https://www.golinuxcloud.com/start-systemd-service-after-nfs-mount/
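As a sketch of what that could look like: a drop-in override for the service that launches the containers. The mount-unit name and path here are assumptions (systemd derives `mnt-nas.mount` from a mount point of /mnt/nas; substitute your own):

```ini
# /etc/systemd/system/docker-compose-app.service.d/override.conf
# Hypothetical drop-in: delay the service until the NFS mount is up.
[Unit]
Wants=network-online.target
After=network-online.target mnt-nas.mount
# RequiresMountsFor pulls in and orders after the mount unit for this path.
RequiresMountsFor=/mnt/nas
```

After adding the drop-in, run `systemctl daemon-reload` so systemd picks up the change.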

[–] celebdor@lemmy.sdf.org 3 points 1 year ago

Might be worth having the systemd service that runs the Docker container (if you run it like that) include an ExecStartPre= statement that checks for availability of the NAS.
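A minimal sketch of that idea, assuming the NAS export is mounted at /mnt/nas (a hypothetical path):

```ini
# Hypothetical [Service] additions: block startup until the NAS mount
# responds, and retry the whole service if it still fails.
[Service]
ExecStartPre=/bin/sh -c 'until mountpoint -q /mnt/nas; do sleep 5; done'
Restart=on-failure
RestartSec=10
# TimeoutStartSec bounds how long the pre-start loop may wait overall.
TimeoutStartSec=300
```

`mountpoint -q` exits non-zero until the path is an active mount, so the loop simply polls; TimeoutStartSec keeps it from waiting forever if the NAS never comes back.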