Re: [DNG] mdadm 4.5 for daedalus
Author: tito
Date:  
To: dng
Subject: Re: [DNG] mdadm 4.5 for daedalus
On Tue, 24 Feb 2026 20:39:35 +0000
Rainer Weikusat via Dng <dng@???> wrote:

> In case this is of interest to someone: I've created a fork of the
> Devuan unstable mdadm package which can be built on daedalus. This has
> mdadm version 4.5 which is required for creating persistent
> superblocks which work with newer kernels (at least 6.18 and 6.19, I'm
> presently running 6.19.1). I've also done some cleanup work on this
> (refreshed patches), made working with the package a little easier
> and removed all traces of systemd support from the generated package
> (this is not a meaningul technical improvement for any definition of
> that, I just wanted it gone for spiritual reasons).
>
> I'm planning to do some more work on this but - so far - have no idea
> what exactly. Probably go over the bug list and fix whatever I can.
>
> This is available from GitHub:
>
> https://github.com/rweikusat/mdadm-devuan


Hi,
thanks for your effort. One minor bug that hit me just yesterday,
after migrating a NAS box from daedalus to excalibur with
mdadm/stable,now 4.4-11devuan3, was a daily email with these
harmless but alarming messages:

mdadm: DeviceDisappeared event detected on md device /dev/md/md0
mdadm: NewArray event detected on md device /dev/md0

which seems to be triggered by the script in /etc/cron.daily/mdadm
and is reproducible every time

mdadm --monitor --scan --oneshot

is run.

This seems to be Bug#1040638 ("Bothersome message every day from cron"),
which is rather old but not yet fixed:

"the bug affects entries of the form `/dev/md0`
in `mdadm.conf`, and was introduced by commit 84d969be, which now always
looks for `/dev/md/$(basename)`, resulting in failures when attempting
to access the non-existent `/dev/md/md0`."

My mdadm.conf, generated by the same version of mdadm, is:
cat /etc/mdadm.conf
DEVICE partitions
MAILADDR root@localhost
MAILFROM NAS-mdadm
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 spares=1 UUID=705d30b8:1b64bf1b:8d5bf391:65e26d6f

"(Until this is resolved, a simple `../md0` symlink seems to do the job
as a temporary fix.)"
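For reference, the quoted symlink workaround done by hand would look like
this (assuming the array node is /dev/md0, as in my mdadm.conf):

```shell
# Manual version of the quoted workaround (run as root).
# /dev is normally a devtmpfs recreated at boot, so this link is gone
# after a reboot; it only silences the alerts until the next boot.
mkdir -p /dev/md
ln -sfn ../md0 /dev/md/md0   # /dev/md/md0 -> /dev/md0
```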

As suggested in https://discourse.ubuntu.com/t/mdadm-mdmonitor-oneshot-service-daily-devicedisappeared-newarray-alerts/56076
I fixed it for now by creating a udev rules file:

cat /etc/udev/rules.d/md0.rules
# Work around an array detection bug in mdmonitor
SUBSYSTEM=="block", ACTION=="add|change", ENV{MD_UUID}=="705d30b8:1b64bf1b:8d5bf391:65e26d6f", SYMLINK+="md/%k"

which creates the missing path that mdadm looks up: /dev/md/md0

ls -la /dev/md/
total 0
drwxr-xr-x  2 root root   60 Feb 24 10:50 .
drwxr-xr-x 18 root root 4100 Feb 24 10:50 ..
lrwxrwxrwx  1 root root    6 Feb 24 10:50 md0 -> ../md0


This is not an optimal solution, as you have to add a rule for every
RAID array and keep it updated with the correct UUID.
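To avoid maintaining the rules by hand, something like this hypothetical
sketch could generate one rule per array from the "ARRAY ... UUID=..."
lines that `mdadm --detail --scan` (or an existing mdadm.conf) prints.
The function name and the rules filename are my own invention:

```shell
#!/bin/sh
# Sketch: read "ARRAY ... UUID=..." lines on stdin, emit one udev
# SYMLINK rule per array UUID on stdout.
gen_md_rules() {
    while read -r keyword _dev rest; do
        [ "$keyword" = "ARRAY" ] || continue
        # Extract the colon-separated hex UUID from the remainder.
        uuid=$(printf '%s\n' "$rest" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p')
        [ -n "$uuid" ] || continue
        printf 'SUBSYSTEM=="block", ACTION=="add|change", ENV{MD_UUID}=="%s", SYMLINK+="md/%%k"\n' "$uuid"
    done
}

# Typical use (as root):
#   mdadm --detail --scan | gen_md_rules > /etc/udev/rules.d/90-md-compat.rules
# Demo with a literal ARRAY line:
gen_md_rules <<'EOF'
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 spares=1 UUID=705d30b8:1b64bf1b:8d5bf391:65e26d6f
EOF
```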
It would be nice to find a better fix for this, because
DeviceDisappeared events at breakfast really make for
a bad start to the day.

Ciao,
Tito