I recently acquired two used blade servers and a short rack to put them in. I'm planning to use one or the other as the replacement for a media server that died on me a bit ago. The old media server was just a little refurb dell workstation, with a single SSD in it, but the servers have 6 and 8 bays, respectively.

I would like to RAID them so that one drive dying doesn't lose any of my media, and I was leaning towards Ubuntu server as an OS. I'm not sure how to do that, and I'm kind of poking around for info and advice. Hit me with it.

top 23 comments
[–] n2burns@lemmy.ca 5 points 7 months ago (1 children)

It sounds like what you're looking for is backup, and RAID is not backup; it's redundancy to maintain uptime (as well as data integrity and, in some cases, performance). I'd highly recommend you look into backup options, with the best case being a 3-2-1 backup strategy.

To be fair, I'm being a little hypocritical. I've been working on my backup strategy for years and still don't have any remote backups. Personally, I have a JBOD system with 8 drives ranging from 2TB to 8TB, so my setup might be a bit complicated for your purposes. I'm not worried about uptime, and am focused on data integrity. I'm not using actual RAID because, in the case of a catastrophic failure, I don't want to lose all my data at once; each disk stays independently readable. I use snapRAID to create some redundancy, and I pool my data drives using mergerfs.
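
For reference, a minimal sketch of that kind of setup, with hypothetical data disks mounted at /mnt/disk1 and /mnt/disk2 and a parity disk at /mnt/parity1. The snapRAID side is a few lines in /etc/snapraid.conf:

parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

Then snapraid sync computes parity and snapraid scrub verifies it. The mergerfs pool can be a single fstab line:

/mnt/disk1:/mnt/disk2 /mnt/pool fuse.mergerfs defaults,allow_other 0 0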

If you are still interested in RAID, I would recommend staying away from hardware RAID, as I've commented elsewhere in this post. It has its place in data centres but really doesn't make sense for consumers anymore.

There is a lot of good advice in the rest of the comments about RAID, so I'll summarize my thoughts. If you only plan on having 2 drives, RAID 1 is a good option, though it's often chosen for its read performance, and that's probably not necessary on a media server. My current server is running on decade-old, lower-end consumer hardware, and even in that extreme case, media only sometimes takes a second to start 1080p content remotely. If you want to add drives and are willing to expand in redundant pairs, you can either add another RAID layer (RAID 1+0) or pool the partitions together. If you want to expand one drive at a time, or realize more than 50% of your raw capacity, you could look at RAID 5/6 or ZFS/btrfs. Note that for RAID 5/6, drives should be equal size, since the array is limited by the smallest one.
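
If the two-drive route sounds right, the mirror itself is one command with mdadm (device names here are hypothetical; there's a fuller walkthrough further down the thread):

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1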

[–] blackstampede@sh.itjust.works 1 points 7 months ago (1 children)

I'm mainly concerned about:

  1. Not losing data if one drive dies on me.
  2. Fast reads
  3. Easy plug and play expansion

Since I'll have 8 drives (or 6, if I use the smaller server), it would be nice if I could swap out one of them without losing data and add a larger one, which would then get used automatically. Is that something that RAID is good for?

I'm hesitant to set up backups because it's going to be a lot of data.

[–] n2burns@lemmy.ca 3 points 7 months ago

I’m mainly concerned about:

  1. Not losing data if one drive dies on me.

Sure, that's what RAID is designed to do. However, I'd suggest also looking into what happens when your array is degraded and how to rebuild it.
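
For example, the basic degraded-array drill with mdadm looks something like this (device names are hypothetical):

# Check array health and watch any rebuild in progress:
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# Mark a dying member as failed, pull it, and add the replacement:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sudo mdadm /dev/md0 --add /dev/sdg1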

  2. Fast reads

I'm a bit surprised you need fast reads with a media server. You're probably going to have to clarify your needs a bit more.

  3. Easy plug and play expansion

Since I’ll have 8 drives (or 6, if I use the smaller server), it would be nice if I could swap out one of them without losing data and add a larger one, which would then get used automatically. Is that something that RAID is good for?

Standard RAID levels generally don't handle mixed drive sizes: you can swap in a larger drive, but the extra capacity goes unused until every drive in the array has been upgraded. I'm also not sure what you mean by "plug and play". I'm pretty sure almost all setups will involve a fair bit of configuration.

I’m hesitant to set up backups because it’s going to be a lot of data.

It's also a lot of data to lose if things go more wrong than you expected (multi-drive failure, bit-rot, etc.).

[–] ikidd@lemmy.world 5 points 7 months ago* (last edited 7 months ago) (2 children)

Hardware raid is missing many features of modern software raid like ZFS. Expansion is harder, replication and snapshotting options don't exist like they do on a COW raid, speed improvements with ZIL and caching aren't there, the list goes on.

Ubuntu supports ZFS very well. And if you're going software raid, for the love of dog, don't use md. It's ancient.
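
As a rough sketch of what that looks like on Ubuntu (the pool name "tank" and the device names are hypothetical; /dev/disk/by-id paths are usually preferred over /dev/sdX):

sudo apt install zfsutils-linux
# Two-way mirror, the RAID 1 equivalent:
sudo zpool create tank mirror /dev/sdb /dev/sdc
# Or double-parity raidz2 across six disks, roughly RAID 6:
# sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
sudo zfs set compression=lz4 tank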

[–] AbidanYre@lemmy.world 5 points 7 months ago

HW raid also screws you if the controller dies.

[–] lemmyvore@feddit.nl 4 points 7 months ago (1 children)

There's nothing wrong with MD. It's old but also rock solid. It's super flexible, can do array configurations that can't be done in other ways. And most importantly it decouples redundancy from other features and allows you to pick and choose your feature set.

[–] ikidd@lemmy.world 1 points 7 months ago (1 children)

Is there a way to checksum in md?

[–] lemmyvore@feddit.nl 3 points 7 months ago

Depends what you mean by that. Some of the traditional RAID levels use stripe parity. If you mean file checksums, then no: mdadm only deals with block devices; it doesn't know anything about the filesystems above it. You can use a filesystem with built-in checksums, or other methods. For example, I use RAID 1 and take incremental backups with Borg.
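
In case it's useful, the Borg side is only a few commands (the repository path and archive name are hypothetical):

borg init --encryption=repokey /backup/borg-repo
borg create --stats /backup/borg-repo::media-{now} /mnt/media
borg prune --keep-daily 7 --keep-weekly 4 /backup/borg-repo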

[–] RandomChain@lemm.ee 3 points 7 months ago

I use openmediavault for my home NAS; it does all the heavy lifting for you with a nice GUI, so you don't have to configure everything yourself. I'd recommend checking it out if you don't have a lot of experience with RAID setups or don't want to do it manually.

Just please remember that RAID is not a backup solution, it's a redundancy solution. If you have data corruption on one side, it can copy itself to the other mirror and then you're screwed. If your media is important, keep a proper separate backup.

[–] limelight79@lemm.ee 2 points 7 months ago

Basically you need the mdadm package. I use Debian, but Ubuntu is based on Debian, so it should be pretty similar. It's likely mdadm will already be installed, but if not, apt install mdadm as root should do it.

The one thing I strongly, strongly, strongly recommend, after a harrowing week or so a few months back: do not use the entire disk for the RAID arrays. Partition each disk with a single Linux partition, then use those partitions as the array. If you use the entire disk, you run the risk of losing the array if the BIOS decides those drives are messed up, which is what happened to me. I was able to recover, fortunately, but it was EXTREMELY stressful, and I was to the point where I was starting to figure out what I had lost.
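
A sketch of that partitioning step, assuming a hypothetical /dev/sda (repeat for each drive in the array):

sudo parted /dev/sda --script mklabel gpt mkpart primary 1MiB 100%
# Optionally tag the partition so it's recognizable as a RAID member:
sudo parted /dev/sda --script set 1 raid on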

When you issue the command to build the array, such as:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1

Keep a copy of that command somewhere so you know how you created it, in case you ever need to recreate it.

I also kept copies of the output of /proc/mdstat, blkid (for the RAID drives and partitions), and mdadm --examine for each drive, just in case. Doing this probably means I'll never need it, so that's a good tradeoff.
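
Something like this captures all of that in one file (the file name and device list are just examples):

cat /proc/mdstat > ~/raid-notes.txt
sudo blkid /dev/md0 /dev/sd[a-f]1 >> ~/raid-notes.txt
sudo mdadm --examine /dev/sd[a-f]1 >> ~/raid-notes.txt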

And, as always, RAID is not a backup. In my case, my array is small enough that a single drive can back it up (which wasn't the case when I originally built it ~5 years ago), so I have a large drive in my desktop machine that backs up the array every night.

It's pretty straightforward, though. Install Ubuntu on a drive that isn't part of the array and get that working, which should be pretty easy. Partition the array drives as I said above (use gparted or other tools, which will be installed with Ubuntu). Issue an mdadm command similar to the one above; note that your partitions will very likely be different. Do not overwrite your Ubuntu partitions with it. That is Bad.

mdadm will create a /dev/md0 or /dev/md127. Some versions do one or the other. It'll tell you.

After mdadm finishes, do a mkfs.ext4 /dev/md0 (or md127) to create the filesystem on the array, assuming you want to use ext4.
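
It's also worth recording the array in mdadm's config so it assembles with a stable name at boot; on Debian/Ubuntu that's typically:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u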

Add a line like this to your fstab: /dev/md0 /mnt/media ext4 defaults 0 1
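
A slightly more robust variant is to mount by filesystem UUID, since the md device number can change between boots. The UUID below is a placeholder; blkid /dev/md0 prints the real one:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/media ext4 defaults 0 1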

Reboot and go.

There are a bunch of more detailed guides out there, I've just given the high level steps.

[–] hperrin@lemmy.world 1 points 7 months ago* (last edited 7 months ago)

mdadm is the tool you can use to create and manage software RAIDs on Linux. You can also manage them with Cockpit.

If you do go with mdadm, my advice is to create a partition on each drive that is slightly smaller than the drive itself, and use that partition as the device in mdadm. That way, if you need to replace a drive and the new one is a few MB/GB smaller, you'll still be able to use it.
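
With parted, that could look like this (hypothetical device, leaving ~100MiB of headroom at the end of the disk):

sudo parted /dev/sdb --script mklabel gpt mkpart primary 1MiB -100MiB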

[–] HumanPerson@sh.itjust.works 1 points 7 months ago

I did this yesterday for my media server on Debian, and it was very easy. Use mdadm to create the RAID; there are guides online that are very easy to follow. You may want to use partitions on the drives as another comment recommended (mdadm supports whole drives and partitions, so do whatever you think is best). Next, you should have a device /dev/mdX that you can format to your filesystem of choice. After that, just use lsblk to get the UUID of the array and mount it in fstab like any other drive.

[–] originalucifer@moist.catsweat.com 1 points 7 months ago (1 children)

You would want to use the hardware RAID that likely already exists. It's been a minute since I set up a Dell, but you should be able to boot into the RAID controller BIOS (some Ctrl-key sequence) and configure your RAID there... then you just install whatever you want on the defined logical drives (Linux/Windows/hypervisor).

[–] gigatexal@mastodon.social 2 points 7 months ago (1 children)

@originalucifer @blackstampede If you can, just do software RAID, and if possible get the disks to look like JBOD (just a bunch of disks). CPUs are so much faster these days that software RAID, even ZFS, offers so much more than hardware RAID.

[–] originalucifer@moist.catsweat.com 1 points 7 months ago (1 children)

I wouldn't on a non-JBOD retail server box. If this was a random workstation without onboard hardware RAID, then sure.

I'm not sure why you think sharing the main processor with the RAID, when there's already a perfectly good set of processors dedicated to the RAID, is going to be faster.

[–] gigatexal@mastodon.social 0 points 7 months ago (1 children)

@originalucifer @blackstampede I'd rather have ZFS for the data integrity stuff than anything else.

[–] originalucifer@moist.catsweat.com 1 points 7 months ago (1 children)

What specific feature of ZFS are you frothing over that's worth sacrificing your primary processor for?

The hardware RAID in this box was designed for business use and would be more than adequate for the requested purpose.

[–] n2burns@lemmy.ca 1 points 7 months ago (1 children)

You're right, hardware RAID still has some use for businesses, but it's generally a bad idea for consumers. The main reason is the procedure when the RAID controller fails. Commercial operations keep spare, compatible controllers on hand, so a quick hardware swap and you're back up and running; you don't even need to rebuild the array. Consumers generally don't have a spare, and they can't just grab any controller; they need a compatible one or the array is lost. If a system running a software RAID has a hardware failure, the array can be moved to a new host and mdadm can reassemble it without needing specific hardware.
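
For reference, on the new host that migration is usually just (assuming the old disks are attached and visible):

sudo mdadm --assemble --scan
cat /proc/mdstat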

[–] originalucifer@moist.catsweat.com -1 points 7 months ago (1 children)

but this guy is specifically not using consumer hardware

[–] n2burns@lemmy.ca 1 points 7 months ago (1 children)

Yes, but they're using it in a consumer setting. That was the whole point of my comment. It sounds like they may have 2 identical RAID controllers, which means they might have a spare. However, if one dies, they'd be looking at obtaining another spare, migrating their data to a new setup, or risking complete data loss.

[–] mark3748@sh.itjust.works 1 points 7 months ago (1 children)

They’ll have to get a new SAS controller unless the RAID controller has an HBA mode. Running ZFS under a RAID controller is the best way to lose all of your data.

ZFS is wonderful, but it takes quite a bit of planning and specialized knowledge to implement properly. Your fear of a failed RAID controller is a bit much, too. I've had to deal with a single controller failure in 30 years of IT (and I've done warranty work for all of the major OEMs in corporate IT for most of those 30 years).

[–] n2burns@lemmy.ca 1 points 7 months ago

Is HBA mode that rare? It seems pretty common. Either way, we don't know OP's hardware.

And I'm not scared of RAID controller failure; I'm scared of a single point of failure. I know it's highly unlikely, but the risk of stranded data is unacceptable IMHO, unless you're recommending OP make sure they have a spare on hand.

Also, I never even mentioned ZFS (I've actually never even used it).

[–] bizdelnick@lemmy.ml 1 points 7 months ago