this post was submitted on 24 Jul 2023
44 points (94.0% liked)

Selfhosted


TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. Is that bad for performance, or otherwise bad practice?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but only has a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it's slower to run off HDDs than off an SSD, but I do not have a large SSD. The question is: (why) would it be "bad practice" to separate CPU and storage this way? Isn't that pretty much what a data center does too?
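For reference, this is roughly how I map the volumes (a sketch; the NAS address and export path here are placeholders, not my exact config):

```sh
# Named Docker volume backed by an NFS export on the NAS
# (192.168.1.10 and /volume1/appdata are placeholder values)
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/volume1/appdata \
  app-data

# Containers then mount it like any other named volume
docker run -d -v app-data:/data some-image
```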

[–] carzian@lemmy.ml 9 points 1 year ago (2 children)

What no one else has touched on is that the protocol used for network drives interferes with databases. Protocols like SMB lock files during reads/writes so other clients on the network can't corrupt the file by interacting with it at the same time.

It is bad practice to put the Docker files on a NAS because it's slower, and because the protocol used can and will lead to Docker issues.

That's not to say that no files can be remote: Jellyfin's media library, for example, obviously supports connecting to network drives. But the Docker volumes and other config files need to be on the local machine.
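As a rough sketch of that split (the host paths are just examples, not a tested config):

```sh
# Config dir (which holds Jellyfin's SQLite databases) stays on the local SSD;
# only the bulky, read-mostly media is served from the NAS over NFS
docker run -d --name jellyfin \
  -v /opt/jellyfin/config:/config \
  -v /mnt/nas/media:/media:ro \
  jellyfin/jellyfin
```

Here /mnt/nas/media would be an NFS mount on the host, e.g. set up via /etc/fstab.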

Data centers get around this by:

  • running actual separate databases with load balancing
  • using cluster storage like Ceph, and VMs that can be moved across hypervisors
  • a lot more stuff that's very complicated

My advice is to buy a new, bigger SSD and clone the existing one over. They're dirt cheap, and you'll save yourself a lot of headache.
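If you go that route, something like this works (device names below are examples; triple-check them with lsblk first, since dd will happily overwrite the wrong disk):

```sh
# Clone the old SSD (/dev/sda) onto the new one (/dev/sdb) - placeholders!
sudo dd if=/dev/sda of=/dev/sdb bs=64M status=progress conv=fsync
# Then grow the partition/filesystem into the extra space,
# e.g. with growpart and resize2fs, or a partitioning tool of your choice.
```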

[–] marcos@lemmy.world 8 points 1 year ago

> Data centers get around this by:

Using network-mapped disks instead of network-mapped filesystems.

They use a SAN, not a NAS. The database and VM architecture don't fundamentally change the behavior of the disks, and there isn't much more complicated stuff beyond that.
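To illustrate the difference (a sketch with placeholder addresses; iSCSI via open-iscsi is just one common way to do this):

```sh
# NAS: the server exports a *filesystem*; locking semantics are NFS's
sudo mount -t nfs 192.168.1.10:/volume1/data /mnt/nas

# SAN: the server exports a raw *block device* (here via iSCSI); the client
# formats it with a local filesystem, so locking behaves like a local disk
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10
sudo iscsiadm -m node --login
sudo mkfs.ext4 /dev/sdX   # /dev/sdX = whichever device the LUN shows up as
sudo mount /dev/sdX /mnt/san
```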

[–] mea_rah@lemmy.world 1 points 1 year ago

In the context of self-hosting, it's probably worth pointing out that SQLite specifically mentions NFS on its How To Corrupt An SQLite Database File page.

SQLite is used in many popular services people run at home, often as the only or default option, because it does not require an external service to work.
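A quick way to check whether a given app's data directory (and thus its SQLite file) is actually sitting on NFS (the path here is just an example):

```sh
# Prints the filesystem type of the directory; "nfs"/"nfs4" means the
# corruption caveats from the sqlite.org page apply
df -T /var/lib/docker/volumes/myapp/_data
```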