Author: Simon
Date:
To: Devuan ML
Subject: Re: [DNG] install on a raid 1 array
o1bigtenor via Dng <dng@???> wrote:

> First attempt
> set up 2 raid 1s
> except now I can't partition the drives


You don’t partition the drives after creating an array with them - you partition the array (or just use it as a filesystem or LVM PV).

> second attempt
> set up 2 drives with some spacer partitions (4.0 MB each) and some 8 partitions


You don’t need the spacers.

> set up 2 drives with same spacer partitions and a large /home partition
> then wanted to make 2 raid arrays
> - - - - except I'm only allowed to use 2 partitions  - - - -  one from
> each member
>   of the array.
>   (There was also complaining that there were 2 /root partitions
> before I tried to
>    create the array.)


I’m not quite following what you are describing here.


> So - - - how do I achieve 2 raid 1 arrays?
> #1 has partitions for /efi, /boot, /root/, swap, /tmp, /var, /usr, /usr/local
> with a spacer of 4.0 MB between (and before the first and after each)
> #2 has a partition for /home
> with a spacer of 4.0 MB between (and before the first and after)


To be clear, you want array #1 to use all of the first pair of disks, and partition it with those partitions?

Partition each disk with just one partition, and set its type to Linux RAID. So let’s assume you have sda1 and sdb1.
Create a RAID 1 array, 2 members, using the two partitions.
Partition the array.
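
Roughly (and from memory, so check the man pages), the commands look something like this - the array name “data” and the filesystem label are just examples:

# mark each partition as Linux RAID (in fdisk: t, then type fd; in gdisk: FD00),
# then mirror the pair, giving the array a name
mdadm --create /dev/md/data --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# and either put a filesystem straight on the array ...
mkfs.ext3 -L data /dev/md/data

# ... or partition the array itself and use those partitions instead
fdisk /dev/md/data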

Let me show you the setup on one of my machines (it’s a Xen host):

# fdisk -lu /dev/sda

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002427f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          64      204863      102400   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2          204864     4399167     2097152   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3         4401810   473146379   234372285   fd  Linux raid autodetect


Here I have a dedicated boot partition, which ends up as part of an array, a dedicated swap partition (also part of an array), and a main partition that will be part of an array and then an LVM PV.
sdb is partitioned identically.


# cat /proc/mdstat 
Personalities : [raid1] 
md125 : active raid1 sda3[0] sdb3[2]
      234371125 blocks super 1.2 [2/2] [UU]


md126 : active raid1 sda2[0] sdb2[1]
      2096116 blocks super 1.2 [2/2] [UU]


md127 : active raid1 sda1[0] sdb1[1]
      102336 blocks [2/2] [UU]


So the 3 arrays, all healthy.
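
If you want more detail on any one of them, mdadm will show it - for example:

# report state, members, and metadata version for one array
mdadm --detail /dev/md127

# or look at the raid superblock on an individual member partition
mdadm --examine /dev/sda1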



# ls -l /dev/md/
total 0
lrwxrwxrwx 1 root root 8 Jan 22 2021 boot -> ../md127
lrwxrwxrwx 1 root root 8 Jan 22 2021 main -> ../md125
lrwxrwxrwx 1 root root 8 Jan 22 2021 swap -> ../md126

When I created the arrays I gave them names - in most situations I can refer to them as /dev/md/name, which is a lot easier than trying to remember the numbers.



# lvm pvdisplay
  --- Physical volume ---
  PV Name               /dev/md125
  VG Name               vgmain
  PV Size               223.51 GiB / not usable 2.05 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              57219
  Free PE               28291
  Allocated PE          28928
  PV UUID               h0JXAi-y6Uq-cKGn-1AxH-1FfY-DWiD-eWqjsc


Here’s the single LVM PV defined.


# lvm vgdisplay -C
  VG     #PV #LV #SN Attr   VSize   VFree  
  vgmain   1   9   0 wz--n- 223.51g 110.51g


The VG that’s using it.


# lvm lvdisplay -C
  LV           VG     Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  root         vgmain -wi-ao  2.00g                                      
  var          vgmain -wi-ao 20.00g                                      


And some of the LVs defined on it.



# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>         <dump>  <pass>
proc            /proc           proc    defaults          0       0
LABEL=root      /               ext3    errors=remount-ro 0       1
LABEL=boot      /boot           ext3    defaults          0       2
LABEL=var       /var            ext3    defaults          0       2


My personal preference is to use filesystem labels - though there are some potential security issues there (specifically, what if a guest applies a matching filesystem label to one of its filesystems?) when it’s a host for untrusted VMs. But as all the VMs are my own, it’s not a problem.
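
For reference, the labels are just set on the filesystems - either when they’re created or afterwards. Something like this, using the label names from my fstab above:

# set a label when creating the filesystem
mkfs.ext3 -L boot /dev/md/boot

# or add/change one later on an existing ext2/3/4 filesystem
e2label /dev/mapper/vgmain-var var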



# mount
/dev/mapper/vgmain-root on / type ext3 (rw,errors=remount-ro)
/dev/md127 on /boot type ext3 (rw)
/dev/mapper/vgmain-var on /var type ext3 (rw)

And here’s what I have mounted (with the irrelevant bits omitted). Yeah, I’m a bit old-school and still use ext3!



Now, back to your specific setup.
I suspect you really want your efi partition to be a plain partition - so that’ll be sda1. I’d create a matching sdb1 partition - and possibly clone the contents of /efi onto it at some point so you have a working boot setup from sdb.
Personally I’d keep to having a separate array for /boot - so that’ll be sda2 & sdb2. Create those two, then create an md array using them, then create a filesystem on it.
Then you can either continue creating partition pairs, creating a raid array on each pair, and putting a filesystem on each array - though at some point you’ll hit limits on how many partitions you can create.
Or you can just create one partition for “the rest of the disk” (minus a little, to allow for a replacement disk not being exactly the same size), create an array across the pair of partitions (sda3 & sdb3), then use the array as an LVM PV, create an LVM VG on that PV, and finally create the LVM LVs you want for each filesystem. Using LVM is more flexible, as it’s easy to resize an LV later.
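
As a very rough, untested sketch of that second route (the array, VG, and LV names and the sizes are just placeholders - adjust to taste):

# optionally clone the working ESP onto the matching partition on the other disk
# (assumes sdb1 was created the same size as sda1)
dd if=/dev/sda1 of=/dev/sdb1 bs=1M

# mirror the big sda3/sdb3 pair
mdadm --create /dev/md/main --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# turn the array into an LVM PV, build a VG on it, then carve out LVs
pvcreate /dev/md/main
vgcreate vgmain /dev/md/main
lvcreate -L 2G  -n root vgmain
lvcreate -L 4G  -n swap vgmain
lvcreate -L 20G -n var  vgmain

# and put filesystems (and swap) on the LVs
mkfs.ext3 -L root /dev/vgmain/root
mkfs.ext3 -L var  /dev/vgmain/var
mkswap   -L swap  /dev/vgmain/swap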

For /home, partition each disk (sdc & sdd) with a single partition covering the whole disk. Put the two partitions (sdc1 and sdd1) into an array, and put a filesystem on the array.
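
Again as a sketch (assuming the /home pair really is sdc & sdd; the partition name and label are just examples):

# one Linux RAID partition per disk, spanning the disk
# (or stop a little short of 100%, as above, to allow for a replacement disk)
parted -s /dev/sdc -- mklabel gpt mkpart home ext4 1MiB 100% set 1 raid on
parted -s /dev/sdd -- mklabel gpt mkpart home ext4 1MiB 100% set 1 raid on

# mirror the pair and put a filesystem straight on the array
mdadm --create /dev/md/home --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mkfs.ext3 -L home /dev/md/home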


There are two ways to do this with the Debian installer (from memory).

Using the installer, do the partitioning, but at this stage specify to do nothing with each partition (apart from efi - I don’t use such new-fangled stuff, so I don’t know what it does by default with that). Exit the partitioner and it’ll ask if you want to apply the partitioning - say yes.

Now go into the raid setup option in the partitioner and, when prompted, create the arrays, then exit the raid setup.
The partitioner should now show you some physical partitions as being in raid arrays, plus the raid arrays.

Go into the LVM setup section, create an LVM VG and use the array previously created (sda3 & sdb3 in the above example) for its single PV. Then create LVs as required.

You can configure the /boot array as “create filesystem, mount on /boot”.
If you have created any more arrays (e.g. for /), then configure these accordingly.
For /home, use the array containing sdc1 & sdd1.
Similarly, configure each LVM LV.
You should now see “something” for each filesystem you want - the volume used will be either a partition (/efi), an array (/boot, possibly /), or an LV (swap, /var, ...).
Now, when you exit the partitioner, it’ll create all the filesystems, and mount them all on the target directory it uses for the installation.


As an alternative to using the installer, IIRC you can switch to another console and it’ll give you a root shell. You can now do all the above steps manually using the command line - partition the disks, create the arrays, create the LVM VG & LVs, create the filesystems.
When you switch back to the installer, you may have to exit the partitioner tool and re-enter it to see all the now existing partitions, arrays, LVs, and filesystems. Flag each filesystem according to where you want it mounted, but keep the existing filesystem. When you exit the partitioner you’ll be at the same step as doing it all via the installer.

In either case, you can switch to another console and see what’s mounted where - if there’s something wrong then go back and correct it now.
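
For example, from the shell on the other console (the installer mounts the new system under /target):

# check the arrays are assembled and everything is mounted where you expect
cat /proc/mdstat
mount | grep /target
df -h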

Can I tell you what the commands are to create an MD array? Can I **** - I do it so infrequently that I have to look it up each time :-( “man” is your friend, along with “mdadm --help”.





Hendrik Boom <hendrik@???> wrote:

> I set up a separate RAID 1 pair for /boot.
>
> I don't know if it is necessary now, but there's an older mdadm partition
> format where the RAID signature is placed at the end of the partition.
>
> I specified this older format when I created the RAID pair for /boot.
>
> So when I boot, there's no problem finding the boot partition and reading it, because the important stuff that's used at boot time (i.e. the file system) is found at the start of the partition. And it doesn't matter which of the two copies I boot from because they are identical. All this booting is done before RAID assembly. If one of the disks is missing because of hardware failure, it just boots from the other.


It’s my understanding that GRUB now understands both md arrays and LVM - so you can use the newer metadata format for the array. But as you point out, if you use the older format (which only puts its metadata right at the end of each member partition) then something that doesn’t understand raid will just see two identical partitions with the same contents.
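
If you do want the end-of-device layout, you ask for it when creating the array - something along these lines (device names as in the earlier example):

# metadata 1.0 (like the old 0.90 format) puts the md superblock at the end of
# each member, so the filesystem still starts at the start of the partition
mdadm --create /dev/md/boot --metadata=1.0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2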


Regards, Simon