Author: Simon
Date:  
To: Devuan ML
Subject: Re: [DNG] [OT] Help on change (split) partitions of an md raid
Didier Kryn <kryn@???> wrote:

>     I've never found a document making md RAID easy for the beginner. The only usable thing is 'man mdadm'. It takes some time to find your way and understand what you are doing, but the software is rock-solid and well integrated into the system.


I vaguely recall having looked at man mdadm, then hitting ${favourite_search-engine} to find examples. It can’t have been “too hard”, as I managed to pick it up. Then each time I needed to expand my horizons (e.g. when I needed to repair something!) I would hit the search engine again, and thankfully there are lots of examples out there.
Now it’s getting harder as I don’t use it much - my day job doesn’t involve sysadmin at all now and so the knowledge is rusting away.
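For anyone starting from the same place, the handful of commands below cover most of what I ever needed. This is a sketch from memory (device and array names are placeholders), so check 'man mdadm' before pointing it at real disks:

    # create a two-disk RAID1 array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # inspect the state of an array
    mdadm --detail /dev/md0

    # replace a failed member: mark it failed, remove it, add the new disk
    mdadm --manage /dev/md0 --fail /dev/sda1
    mdadm --manage /dev/md0 --remove /dev/sda1
    mdadm --manage /dev/md0 --add /dev/sdc1

    # the array then rebuilds onto the new disk; watch progress here
    cat /proc/mdstat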

>     I've written a little web server which can display the status of md RAID devices by just browsing /proc, and makes monitoring easy; but, when it comes to operations, I never considered creating a graphical menu for them, because it would require a comprehensive understanding of all the possibilities, and these are too many. The complexity of mdadm reflects the variety of RAID configurations and states.


That’s the issue with almost all the nice GUI stuff - by necessity it restricts what you can do to a manageable (in terms of GUI design/build) subset of features.
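Monitoring, at least, is the easy subset to expose - md publishes everything as plain text under /proc, so something like this (again from memory, not Didier’s actual code) gets you most of what a status page needs:

    # the [UU] markers show member health; an underscore means a failed or missing disk
    cat /proc/mdstat

    # or let mdadm's exit status do the talking (0 means the array is healthy)
    mdadm --detail --test /dev/md0

It’s the operational side, as Didier says, where the combinations multiply beyond what a menu can sensibly cover.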

>     md is the only subsystem to handle all RAID configurations comprehensively. Even better: it does nothing else. In both respects, it cannot compare with LVM, btrfs or zfs.


I agree - mostly.
In terms of implementing RAID, I absolutely agree - it’s rock solid, has some very powerful features, and doesn’t suffer from Swiss-army-knife bloat. But it has no concept of what’s happening above it, so there are situations where (e.g.) ZFS has advantages: because it understands the semantics of the files being stored, there are things ZFS can do that md and/or LVM can’t.
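A concrete example (pool name and devices are hypothetical, and this is from memory rather than a recipe): because ZFS checksums every block and knows which blocks belong to live files, a scrub reads and repairs only data that is actually in use, and a resilver after a disk swap copies only live data - md has to rebuild the whole device because it can’t know which sectors matter.

    # create a mirrored pool from two disks
    zpool create tank mirror /dev/sda /dev/sdb

    # verify checksums of all live data, repairing bad blocks from the good copy
    zpool scrub tank

    # report pool health and any repaired or unrecoverable errors
    zpool status tank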
I think a prime example of where it’s better to do the redundancy in the filesystem is GoogleFS. It’s a specialised requirement, but it builds redundancy into a cluster of perhaps 20k nodes on the assumption that nodes will fail often, and so it automatically ensures that each chunk is stored in several places (IIRC from a talk some time ago, it even understands racks, so it can ensure the copies don’t all sit in the same rack of nodes).


Simon