- cross-posted to:
- linux@programming.dev
There’s been some Friday night kernel drama on the Linux kernel mailing list… Linus Torvalds has expressed regrets over merging the Bcachefs file-system, prompting a back-and-forth with the file-system’s maintainer.
Like there isn’t a bunch of stuff ext4 can’t do that btrfs and whatever this other acronym soup can do.
It’s the entire point of my post. ext4 does work, but it doesn’t do the stuff these other file systems do, so they are an advantageous choice for some things.
One point: ext4 has a maximum file size of 16 TiB (with the default 4 KiB block size). To a regular user that is stupidly huge and of no concern, but it’s exactly the type of thing you overlook if you “just use ext4” on anything and everything, and then end up with your database broken at work because of said bad advice.
Use the filesystem that makes the most sense for your use case. Consider it every single time you format a disk. Don’t become complacent! Also, fuck around with the new shit from time to time! I decided to format my Linux desktop partitions with btrfs over a decade ago, and as a result I’m an excellent user of that filesystem. But you know what? I’m thinking I’ll try bcachefs soon and fiddle around more with the zfs partition on my HTPC.
BTW: If you’re thinking about trying out btrfs, I would encourage you to learn about its non-trivial maintenance tasks. btrfs needs you to fuck with it from time to time or you’ll run out of disk space “for no reason”. You can schedule cron jobs to take care of everything (as I have done), but you still need to learn how it all works. It’s not a “set it and forget it” FS like ext4.
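For anyone wondering what those cron jobs might look like, here’s a minimal crontab sketch run as root; the mount point (/) and the usage thresholds are assumptions you’d tune for your own pool:

```
# Weekly balance: compact half-empty data/metadata chunks so btrfs doesn't
# report "no space left" while the disk still has plenty of free space.
0 3 * * 0  /usr/bin/btrfs balance start -dusage=50 -musage=50 /
# Monthly scrub: re-read everything and verify checksums; -B keeps it in the
# foreground so cron captures the exit status and any errors.
0 4 1 * *  /usr/bin/btrfs scrub start -B /
```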
For a few years I used a distro that had btrfs as default, including scheduled automatic maintenance. I never had to bother with manual balancing or fiddling with the FS.
I have 52 terabytes of BTRFS; I’ve been on it for about 5 years.
I think we’re just talking about different priorities. For me, stability is the most important thing in production. For you, features seem to matter more. For me it’s enough if a file system can store, write, read, and not lose files. I guess it depends on what the use case and the budget are.
Yeah, some people have needs that you don’t have. That’s why I commented on your blanket statement to “just use ext4”.
I have BTRFS in production all over the place. Snapshots are extremely useful for what I do.
Well, yes, use-case is key. But interestingly, ext4 will never detect bitrot/errors/corruption. BTRFS will detect corrupted files because its target users want to know. That makes it difficult to say which is the more reliable FS, because first we’d have to define “reliable”, the perception of it, and who or what we blame when the FS tells us it detected a corrupted file. Do we shoot the messenger?
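To illustrate how that detection actually surfaces, here’s a rough sketch assuming a btrfs filesystem mounted at /mnt/data (a placeholder path):

```
# Re-read all data and metadata, verifying checksums as it goes.
btrfs scrub start -B /mnt/data
# Per-device error counters; non-zero corruption_errs means bad data was found.
btrfs device stats /mnt/data
# The kernel log usually identifies the affected extents on checksum failures.
dmesg | grep -iE 'btrfs.*(csum|corrupt)'
```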
Which is exactly why you’d want to run a CoW filesystem with redundancy.
ext4 aims to not lose data under the assumption that the single underlying drive is reliable. btrfs/bcachefs/ZFS assume that one or many of the perhaps dozens of underlying drives could fail entirely or start returning garbage at any time, and try to ensure that the bad drive can be kicked out and replaced without losing any data or interrupting the system. They’re both aiming for stability, but the stability requirements at scale are beyond what a “dumb” filesystem can offer: once you have enough drives, one of them WILL fail, and ext4 cannot save you in that situation.
Complaining that datacenter-grade filesystems are unreliable when using them in your home computer is like removing all but one of the engines from a 747 and then complaining that it’s prone to crashing. Of course it is, because it was designed under the assumption that there would be redundancy.
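For what that redundancy looks like in practice, here’s a minimal btrfs sketch; the device names and mount point are placeholders:

```
# Two-disk mirror: every data and metadata block is stored on both devices.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# If /dev/sdb later fails or starts returning garbage, swap it out while
# the filesystem stays mounted and in use.
btrfs replace start /dev/sdb /dev/sdd /mnt/pool
btrfs replace status /mnt/pool
```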