i've installed opensuse tumbleweed a bunch of times in the last few years, but i always used ext4 instead of btrfs because of previous bad experiences with it nearly a decade ago. every time, with no exceptions, the partition would crap itself into an irrecoverable state

this time around i figured that, since so many years had passed since i last tried btrfs, the filesystem would be in a more reliable state, so i decided to try it again on a new opensuse installation. already, right after installation, os-prober failed to set up opensuse's entry in grub, but maybe that's on me, since my main system is debian (turns out the problem was due to btrfs snapshots)

anyway, after a little more than a week, the partition turned read-only in the middle of a large compilation and then, after i rebooted, the partition died and was irrecoverable. could be due to some bad block or read failure from the hdd (it is supposedly brand new, but i guess it could be busted), but shit like this never happens to me on extfs, even if the hdd is literally dying. also, i have an ext4 and a ufs partition on the same hdd without any issues.
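
for what it's worth, this is roughly how i'd check whether the drive itself is to blame (the device name is just an example, and smartctl comes from the smartmontools package):

```
# look for pending/reallocated sectors and read errors in the drive's SMART data
sudo smartctl -a /dev/sda

# check the kernel log for I/O or ATA errors around the time of the failure
sudo dmesg | grep -iE 'i/o error|ata[0-9]'
```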

even if we suppose this is the hardware's fault and not btrfs's, shouldn't a file system be a little bit more resilient than that? at this rate, i feel like a cosmic ray could set off a btrfs corruption. i hear people claim all the time how mature btrfs is and that it no longer makes sense to create new ext4 partitions, but either i'm extremely unlucky with btrfs or the system is in fucking perpetual beta state and it will never change, because it is just good enough for companies who can, in the case of a partition failure, just quickly swap the old hdd for a new one and copy the nightly backup over to it

in any case, i am never going to touch btrfs ever again and i'm always going to advise people to choose ext4 instead of btrfs

[–] bunitor@lemmy.eco.br 1 points 1 month ago (2 children)

not sure what the relation would be. my ram is fine afaik

[–] WalnutLum@lemmy.ml 15 points 1 month ago (1 children)

Typically when there are "can't mount" issues with btrfs it's because the write log got corrupted, and memory errors are usually the culprit.

BTRFS needs a clean write log to guarantee the state of the blocks it puts the filesystem overlay on top of, so if the log is corrupted, btrfs usually chooses not to mount until you do some manual remediation.
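
If it helps, the usual remediation steps look roughly like this (the device path is just an example, and zero-log throws away the last few seconds of writes, so it's a last resort):

```
# Try a read-only mount from an older tree root first
# (rescue=usebackuproot on newer kernels; older ones use plain usebackuproot)
sudo mount -o ro,rescue=usebackuproot /dev/sdX2 /mnt

# Inspect the filesystem without modifying it
sudo btrfs check --readonly /dev/sdX2

# If the log tree itself is corrupted, clearing it often makes the fs mountable again,
# at the cost of the last few seconds of writes
sudo btrfs rescue zero-log /dev/sdX2
```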

If the data verification stuff seems like more of a pain in the ass than it's worth, you can turn most of those features off with mount options.
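
For example, something like this in /etc/fstab (the UUID is a placeholder; note that nodatacow also implies nodatasum, i.e. no checksumming, for newly written files):

```
# example fstab entry - UUID is a placeholder
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  defaults,noatime,nodatacow  0  0
```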

[–] bunitor@lemmy.eco.br 2 points 1 month ago (2 children)

oh wow, that's crazy. thanks for the info, but it's a little fucked up that btrfs can make a memory failure cause a filesystem corruption

[–] Atemu@lemmy.ml 7 points 1 month ago

It's the other way around: The memory failure causes the corruption.

Btrfs is merely able to detect it, while e.g. extfs is not.
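
For example, a scrub re-reads everything and compares it against the stored checksums, which ext4 has no equivalent for (the mount point is just an example):

```
sudo btrfs scrub start /mnt      # verify all data and metadata against checksums
sudo btrfs scrub status /mnt     # progress and number of checksum errors found
sudo btrfs device stats /mnt     # per-device running error counters
```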

[–] ky56@aussie.zone 5 points 1 month ago* (last edited 1 month ago)

Not really. Even TrueNAS Core (ZFS) highly recommends ECC memory to keep this from happening. After reading more about filesystems in general, and when money allowed, I took this advice as gospel when upgrading my server from junk I found lying around to a proper Supermicro ATX server mobo.

The difference, I think, is that BTRFS is more vulnerable to becoming unmountable, whereas other filesystems have a better chance of still being mountable but containing missing or corrupted data. The latter is usually preferable.

For desktop use, some people don't recommend ZFS because, if the right memory corruption conditions are met, it can eat your data as well. It's why Linus Torvalds goes on a rant every now and then about how bullshit it is that Intel normalized paywalling ECC memory support behind server platforms.

I disagree and think the benefits of ZFS on a desktop without ECC outweigh a rare possibility that can be mitigated with backups.

[–] zarkanian@sh.itjust.works 4 points 1 month ago

Run memtest86+. I had similar issues and it was due to faulty RAM.
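
On a Debian-based system that's roughly the following (package name and steps can differ per distro, and the test only runs from the boot menu, not from a running OS):

```
sudo apt install memtest86+   # adds a Memtest86+ entry to GRUB
sudo update-grub              # regenerate the menu if it doesn't happen automatically
# then reboot, pick the Memtest86+ entry, and let it run a few full passes
```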