Author: tito
To: dng
Subject: Re: [DNG] Some RAID1's inaccessible after upgrade to beowulf from ascii
On Tue, 9 Nov 2021 14:56:59 -0500
Hendrik Boom via Dng <dng@???> wrote:

> I upgraded my server to beowulf.
> After rebooting, all home directories except root's are no longer
> accessible.
> They are all on an LVM on software RAID.
> The problem seems to be that two of my three RAID1 systems are not
> starting up properly. What can I do about it?
> hendrik@april:/$ cat /proc/mdstat
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> [raid4] [raid10]
> md1 : inactive sda2[3](S)
>       2391296000 blocks super 1.2

> md2 : inactive sda3[0](S)
>       1048512 blocks

> md0 : active raid1 sdf4[1]
>       706337792 blocks [2/1] [_U]

> unused devices: <none>
> hendrik@april:/$

I would try:
1) detect which drive is missing from md0, re-add it to the array,
    and wait until the rebuild of the array has completed.

    To detect which drive to re-add you can use this command:

      mdadm --examine /dev/sd* | grep -E "(^\/dev|UUID)"

    Output is similar to:

     Array UUID : 423e4c05:aa5f073c:19d557d7:2d8ada5a
    Device UUID : 6335644d:1c23a8cb:8098f60b:893546f1
     Array UUID : 423e4c05:aa5f073c:19d557d7:2d8ada5a
    Device UUID : e4234284:f7e12df4:9cc00061:13143756

look for drives with the same Array UUID, then run:
mdadm --manage /dev/md0 -a /dev/sdxx
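To spot matching members more quickly, the grep above can be wrapped in
a small helper. A minimal sketch, assuming the usual mdadm --examine
output layout; pair_uuids is a made-up name, not an mdadm command:

```shell
# Hypothetical helper (not part of mdadm): print "ArrayUUID device"
# pairs so that members of the same array sort next to each other.
pair_uuids() {
    awk '/^\/dev/     { dev = $1; sub(/:$/, "", dev) }
         /Array UUID/ { print $4, dev }' "$@" | sort
}

# On the real system (as root) you would run:
#   mdadm --examine /dev/sd* 2>/dev/null | pair_uuids
```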

2) when you are done with md0, repeat the array-member detection
      for md1; as the arrays are inactive, I would stop them
      and restart them with the correct members:

         mdadm --stop /dev/mdX

    Then try to reassemble the array manually:

    mdadm --assemble /dev/mdX /dev/sdxx /dev/sdyy

      if necessary, use --force:

        mdadm --assemble --force /dev/mdX /dev/sdxx /dev/sdyy

     wait for the rebuild to finish
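Rather than watching /proc/mdstat by hand, a small check can tell when
the rebuild is done. A sketch, assuming mdstat shows "resync" or
"recovery" progress lines while rebuilding; the name rebuilding is mine:

```shell
# Hypothetical helper: succeed while a resync/recovery is still running,
# judged from the mdstat file given as $1 (the real file is /proc/mdstat).
rebuilding() {
    grep -qE 'resync|recovery' "${1:-/proc/mdstat}"
}

# On the real system you could wait like this:
#   while rebuilding; do sleep 60; done
```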

3) then repeat the same process for md2

4) when everything is restored, recreate mdadm.conf (in /etc/mdadm):

    cp mdadm.conf mdadm.conf.bak
    mdadm --examine --scan > mdadm.conf.new

add the other configuration parameters you like
(--examine --scan emits only ARRAY lines, so e.g. the DEVICE line must be kept), then:


     mv mdadm.conf.new mdadm.conf
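The conf rewrite in step 4 can be sketched end to end. This assumes the
Debian location /etc/mdadm/mdadm.conf; build_conf is a made-up helper
that re-adds the DEVICE line, since --examine --scan prints only ARRAY
lines:

```shell
# Hypothetical helper: read "mdadm --examine --scan" output on stdin
# and emit a minimal mdadm.conf body with the DEVICE line prepended.
build_conf() {
    echo "DEVICE partitions"
    cat
}

# On the real system (as root):
#   cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
#   mdadm --examine --scan | build_conf > /etc/mdadm/mdadm.conf.new
#   mv /etc/mdadm/mdadm.conf.new /etc/mdadm/mdadm.conf
```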

5) rebuild your initramfs if you use one

      update-initramfs -k all -u

There is no guarantee that this will work; you have been warned.
Hope this helps, at least as inspiration for finding a way to recover your arrays.


> hendrik@april:/$ cat /etc/mdadm/mdadm.conf
> DEVICE partitions
> ARRAY /dev/md0 level=raid1 num-devices=2
> UUID=4dc189ba:e7a12d38:e6262cdf:db1beda2
> ARRAY /dev/md1 metadata=1.2 name=april:1
> UUID=c328565c:16dce536:f16da6e2:db603645
> ARRAY /dev/md2 UUID=5d63f486:183fd2ea:c2a3a88f:cb2b61de
> hendrik@april:/$
> The standard recommendation seems to be to replace lines
> in /etc/mdadm/mdadm.conf by lines produced by mdadm --examine --scan:
> april:~# mdadm --examine --scan
> ARRAY /dev/md/1 metadata=1.2 UUID=c328565c:16dce536:f16da6e2:db603645
> name=april:1
> ARRAY /dev/md2 UUID=5d63f486:183fd2ea:c2a3a88f:cb2b61de
> ARRAY /dev/md0 UUID=4dc189ba:e7a12d38:e6262cdf:db1beda2
> april:~#
> But this replacement involves changing a line that does work (md0),
> not changing one that did not (md2),
> and changing another one that did not work (md1).
> Since --examine's suggested changes seem uncorrelated
> with the active/inactive record, I have little faith in
> this alleged fix without first gaining more understanding.
> -- hendrik