Simon said on Wed, 12 Jul 2023 18:22:14 +0100
>Steve Litt <slitt@???> wrote:
>
>> You just reminded me why I don't use RAID.
>
>I find that statement interesting. What do you use instead - rely on
>backups ?
Anyone who doesn't rely on backups, whether they have RAID or not, is
cruisin for a bruisin. RAID is no substitute for a backup.
>
>
>md on Linux is actually quite easy to use.
Your eight-step procedure didn't look easy.
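To be fair, the happy-path setup is only a handful of commands. Something
like this, if I remember the mdadm syntax right (a sketch only - device
names are placeholders, not tested here):

  # build a two-disk RAID1 mirror out of two existing partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  # put a filesystem on the new array and record it so it assembles at boot
  mkfs.ext4 /dev/md0
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # config path varies by distro

It's the failure modes and the repair procedures where the step count
climbs, which is my point.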
[snip]
>I’ll add that my viewpoint is coloured by having had my backside saved
>by raid so many times. Yes, if a disk fails you can replace it,
>restore your backups, and carry on.
No doubt restoring everything from backup, including system files
(which I don't back up), is a PITA. But all these layers of abstraction
heaped on top of the simplicity of ext4 make a mistake during repair a
lot more likely.
> But in a business world
s/business world/big business world/
>that takes
>on a whole new meaning: Firstly, there WILL be lost data, typically
>everything all your staff did today since last night’s backup.
>Secondly, you’ll have people sat around twiddling their thumbs while
>waiting to get everything back up again - just the restore typically takes
>“hours”. In business,
Big business
>there really is no case for not using raid on
>anything but things like mass desktops/laptops that can be rebuilt
>easily and which don’t hold any data.
For big business, this is absolutely true. The business has the money
for all the extra disks, and they have the money to hire and/or train
talent proficient in RAID and LVM. With lots of users, and with quick
customer response on the line, there's no alternative.
>
>And at home, I have raid on my Linux boxes. I just don’t want the
>hassle, lost time, and lost data from having to restore (e.g.) my mail
>server (or rather, the host it’s a VM on) from a backup if a disk
>fails. And I have had disk failures, but I’ve been able to replace
>them and do a hot rebuild without any downtime or lost data. That’s
>been well worth the extra cost of hardware, and the time taken to set
>it up.
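For anyone who hasn't seen it, the hot replacement Simon describes is, as
far as I understand it, roughly the following with md (a sketch from
memory, device names are placeholders):

  # mark the dying member failed and pull it out of the array
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1
  # partition the replacement the same way, then add it;
  # md rebuilds in the background while the array stays online
  mdadm /dev/md0 --add /dev/sdc1
  cat /proc/mdstat     # watch the resync progress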
IIRC it's been over a decade since I had a disk failure on my Daily
Driver Desktop (ddd). I replace my ddd every 5 years, and usually buy
all new, high-quality Western Digital or other quality disks. No
Seagate ever touches my equipment. But of course, I'm a one-man band.
>That’s not to say I don’t have backups as well - raid only protects you
>against a failed disk, not against other mishaps such as “oops, that
>wasn’t the directory I intended to delete”.
Yes!
>Of course, the operations I and others have suggested have come about
>from a desire to repartition disks into different arrays. That’s not a
>common requirement - and again, the fact that with md it’s possible to
>do so while keeping your data on the internal disks is almost magical
>compared to the other raids I’ve previously used, where the ONLY option
>would be to nuke the array and your data before building new ones.
>
>And a way to avoid the requirement the OP asked for is, as I do, to
>add LVM on top of the array - though I think LVM can handle the raid
>as well now? So two levels of abstraction between partitions on disk
>and filesystems - I would say in the Linux world we are blessed with
>this richness of capabilities.
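For completeness, the stacking Simon describes comes out to roughly this,
as far as I can tell (a sketch only - the volume group and LV names are
made up, nothing here is tested):

  # turn the md array into LVM storage and carve filesystems out of it
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 50G -n mail vg0
  mkfs.ext4 /dev/vg0/mail
  # or skip md entirely: put two raw disks in the VG and let LVM mirror
  #   lvcreate --type raid1 -m 1 -L 50G -n mail vg0

So yes, two (or even one) layers of abstraction - exactly the kind of
stack I was grumbling about above.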
>
>And I confess I keep thinking I should have a proper look at ZFS - but
>I just never find the time, and I already know md + lvm “well enough”
>for my now fairly modest needs.
I consider ZFS from time to time.
SteveT
Steve Litt
Autumn 2022 featured book: Thriving in Tough Times
http://www.troubleshooters.com/bookstore/thrive.htm