Author: Simon
Date:
To: Devuan ML
Subject: Re: [DNG] [OT] Help on change (split) partitions of an md raid
tito via Dng <dng@???> wrote:
>
> On Wed, 12 Jul 2023 20:44:08 -0400
> Steve Litt <slitt@???> wrote:
>
>> Simon said on Wed, 12 Jul 2023 18:22:14 +0100
>>
>>> Steve Litt <slitt@???> wrote:
>>>
>>>> You just reminded me why I don't use RAID.
>>>
>>> I find that statement interesting. What do you use instead - rely on
>>> backups ?
>>
>> Anyone who doesn't rely on backups, whether they have RAID or not, is
>> cruisin for a bruisin. RAID is no substitute for a backup.
>
> Yes of course, best if off site - you need them if your shop
> burns down or yours is hit by lightning.


And I didn’t mean to imply that raid removes the need for backups.

>>> md on Linux is actually quite easy to use.
>>
>> Your eight step procedure didn't look easy.


Everything is relative. And of course, there are multiple ways to achieve the end result, so you can choose whichever method is “best” according to your knowledge level and weighting factors.
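
For comparison, here’s a minimal two-disk mirror with mdadm - a rough sketch only, with example device names (sda1/sdb1), and note that --create is destructive to whatever is on those partitions:

  # create a two-disk RAID1 array (overwrites the named partitions!)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # put a filesystem on it and mount as usual
  mkfs.ext4 /dev/md0
  mount /dev/md0 /mnt

  # record the array so it assembles at boot (Debian/Devuan paths)
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u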

>> [snip]
>>
>>> I’ll add that my viewpoint is coloured by having had my backside saved
>>> by raid so many times. Yes, if a disk fails you can replace it,
>>> restore your backups, and carry on.
>>
>> No doubt that restoring everything from backup including system files
>> (which I don't back up) is a PITA. But all these layers of abstraction
>> heaped on top of the simplicity of EXT4 make a mistake during repair a
>> lot more likely.
>
> Isn't it the other way around: the simplicity of EXT4 is sitting on
> top of a fully transparent (to EXT4) layer of md abstraction?
>
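
Quite so - ext4 just sees an ordinary block device, with md sitting transparently underneath it. On a hypothetical two-disk mirror (sizes and mountpoint invented for illustration) the stacking shows up in lsblk:

  $ lsblk
  NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
  sda           8:0    0  5.5T  0 disk
  └─sda1        8:1    0  5.5T  0 part
    └─md0       9:0    0  5.5T  0 raid1 /srv
  sdb           8:16   0  5.5T  0 disk
  └─sdb1        8:17   0  5.5T  0 part
      └─md0     9:0    0  5.5T  0 raid1 /srv
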
>>> But in a business world
>>
>> s/business world/big business world/


NO ! “business”, not just “big business”.
Clearly the costs of lost work and delays will be higher for a big business, but for a small business even those lower costs can be proportionally more significant.
But that’s something that will vary between businesses, depending on the owner’s attitude to risk and how they value their time and information. I guess there will be some who figure that all they might lose is a day or two, and that they can remember what little they did in that time - and for them, it may be cost effective not to bother with raid.

>>
>>> that takes
>>> on a whole new meaning : Firstly, there WILL be lost data, typically
>>> everything all your staff did today since last night’s backup.
>>> Secondly, you’ll have people sat around twiddling their thumbs while
>>> you get everything back up again - just the restore typically takes
>>> “hours”. In business,
>>
>> Big business


And small business too - unless the computers are almost incidental to their routine work.

>>> there really is no case for not using raid on
>>> anything but things like mass desktops/laptops that can be rebuilt
>>> easily and which don’t hold any data.
>>
>> For big business, this is absolutely true. The business has the money
>> for all the extra disks, and they have the money to hire and/or train
>> talent proficient in RAID and LVM. With lots of users, and with quick
>> customer response on the line, there's no alternative.
>
> Extra disks: the minimum you need for redundancy of data is 2 disks,
> so extra costs are not that high, plus maybe a spare disk you keep
> for fast replacement (shared between a few boxes).


Indeed - just one extra disk.
Clearly, if there’s a load of desktops and one fails, its user can temporarily switch to another machine for a while, so raid isn’t justified there. But for a server (or a single desktop) supporting the whole business, I would consider it “poor management” not to have raid.

>> IIRC it's been over a decade since I had a disk failure on my Daily
>> Driver Desktop (ddd). I replace my ddd every 5 years, and usually buy
>> all new, high quality Western Digital or other high quality disks. No
>> seagate ever touches my equipment. But of course, I'm a one man band.
>
> You are a lucky man, I had disk failures at home and work and raid
> always saved my ass, especially at work it avoided downtime and thus
> loss of revenue.


Ditto, I’ve had many failures - including with fairly new drives, and including one quite new SSD (unfortunately, that was in my primary laptop, which doesn’t have room for a second drive, so I had to restore from backup).


Incidentally, the “Western Digital or other high quality disks. No seagate ever touches my equipment” bit had me going off to look at Backblaze’s stats - a very interesting topic if you are into that. The most recent are at https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2023/

If you scroll down to “The Average Age of Drive Failure”, things look very interesting. Seagate drives are 2nd and 3rd in the table, behind WDC. But at the bottom, the last 3 places are taken by WDC (13 months), Seagate (12 months), and Toshiba (9 months). Overall, both Backblaze and Secure Data Recovery (cited in the Backblaze page) had an average drive failure age of under 3 years.

I too went right off Seagate - especially when I found that one drive didn’t just generate lots of bad blocks, but would “fail” if asked to read any bad block, in such a way as to make recovery of the readable blocks impossible - but I think they’ve improved since then. But other “good” makes have also had their problem drives. The only hard and fast rule is to assume the drive could fail and be prepared for it - IMO raid is the easiest option, as the drive can be swapped out and the raid system will take care of rebuilding onto the new drive with minimal (or even zero) downtime.
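
To illustrate the swap-out, replacing a failed member is typically just a few commands - again a sketch, with example device names:

  # mark the dying disk as failed and pull it from the array
  mdadm /dev/md0 --fail /dev/sdb1
  mdadm /dev/md0 --remove /dev/sdb1

  # physically swap the disk, partition it to match, then add it back
  mdadm /dev/md0 --add /dev/sdc1

  # watch the rebuild; the filesystem stays mounted throughout
  cat /proc/mdstat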


> I would also not buy batches of new disks for a raid array all of the same
> model and brand but rather mix different brands to avoid a faulty
> or problematic batch having disks failing at the same time making
> recovery difficult or impossible.


Indeed, but guess what you get when you buy any storage appliance !
The last drives I bought were used 6TB ones - apart from the cost, I figured they’d be “run in”.
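
With used drives it’s also worth seeing what they’ve been through before trusting them - smartmontools will report things like power-on hours and reallocated sectors (device name is just an example):

  # check accumulated wear on a second-hand drive
  smartctl -a /dev/sda | grep -E 'Power_On_Hours|Reallocated_Sector'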


Simon