Newbie with MDADM - quick question while rebuilding PC

night_2004

Here's my stupid question of the day. I have a new Fractal R3 to replace my NAS's Antec 300. I've got Ubuntu Server and mdadm set up with a 5x 1TB RAID6 array. Here are the specific details of my mdadm configuration.

Code:
> cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 sdd1[2] sde1[3] sdf1[4] sdb1[0] sdc1[1]
      2930280960 blocks super 1.2 level 6, 128k chunk, algorithm 2 [5/5] [UUUUU]

Code:
> cat /etc/mdadm/mdadm.conf
ARRAY /dev/md0 UUID=9de23957:7afe0da8:489f8918:50727432
PROGRAM /home/raid/raid-email.py
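
That UUID comes from the array superblocks themselves, so a quick cross-check is just the standard mdadm query commands:

Code:
# Full array status - the UUID reported here should match the ARRAY line above
sudo mdadm --detail /dev/md0

# Each member partition carries the same array UUID in its superblock
sudo mdadm --examine /dev/sdb1 | grep -i uuid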

Since everything is identified by UUID, does hard drive order even matter? I don't want to screw up the array just because I accidentally cross a SATA cable or two. That being said, the only data on the array is backups of other PCs anyway. Easy enough to recreate if there's a problem, but it would be a pain.

Should I also assume I need to keep the boot HDD on the same SATA port? Or do I just need to make sure the BIOS is set to boot from the right hard drive?
 
No, just make sure all the drives are accessible to Linux. You won't screw up the array if mdadm can't put it together on a given boot; just don't re-create it if you encounter errors.
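
If you want to see for yourself that assembly goes by the superblock UUIDs rather than the /dev names, the standard mdadm scan commands show it (note: --examine is read-only, --assemble actually starts the array, which the boot scripts normally do for you):

Code:
# Scan all block devices for md superblocks and print an ARRAY line
# (with its array UUID) for each array found, regardless of port order
sudo mdadm --examine --scan

# Assemble arrays from those superblocks, matching members to arrays
# by UUID rather than by which SATA port they happen to be on
sudo mdadm --assemble --scan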

As for the boot drive, you'll probably want that on the first SATA port just for the sake of consistency, but I'm pretty sure GRUB uses UUIDs now as well, so it should work on a different port. I can't say for sure, though.
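
If you want to double-check before moving any cables, a few greps will tell you whether the install boots by UUID (paths below are the Ubuntu defaults; adjust if yours differ):

Code:
# GRUB 2 normally locates /boot by filesystem UUID
grep -m1 'search.*--fs-uuid' /boot/grub/grub.cfg

# The kernel line should use root=UUID=... rather than /dev/sdXN
grep 'root=' /boot/grub/grub.cfg | head -n 3

# And fstab should mount by UUID as well
grep '^UUID' /etc/fstab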
 
As long as the UUID is not modified, you are safe :)
Do not forget to save your mdadm configuration, just for safety :)

Current GRUB is UUID-aware...
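
To make that concrete, on Ubuntu saving the config usually amounts to the following (check mdadm.conf afterwards for duplicate ARRAY lines before you rebuild the initramfs):

Code:
# Append the current ARRAY definition(s) to mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled the same way at boot
sudo update-initramfs -u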
 
@OP

Since you are running mdadm with metadata 1.2, you're good to go. If, like me, you're running it with metadata 0.9x, then drive order does tend to matter, since the older metadata scheme stores the /dev entries on the RAID drives themselves, so if the drive order changes... :rolleyes:

As an example, here is mdadm --examine output for a drive in a 1.2 metadata array:

Code:
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : dadad284:f74b116a:fa1be0ac:65f91d59
           Name : :1
  Creation Time : Fri Feb 24 11:57:21 2012
     Raid Level : raid0
   Raid Devices : 2

 Avail Dev Size : 976767986 (465.76 GiB 500.11 GB)
  Used Dev Size : 0
    Data Offset : 16 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 931fbacf:888efdf7:2229b073:67b63c3f

    Update Time : Fri Feb 24 11:57:21 2012
       Checksum : f17a8fdf - correct
         Events : 0

     Chunk Size : 256K

    Array Slot : 1 (0, 1)
   Array State : uU

And here is one under the old 0.9x scheme:

Code:
/dev/sdk:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 1d69e328:c47acef4:83e06544:665800a8 (local to host xxx.yyy.com)
  Creation Time : Sun Jul 10 10:51:46 2011
     Raid Level : raid5
  Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
     Array Size : 11721086976 (11178.10 GiB 12002.39 GB)
   Raid Devices : 7
  Total Devices : 7
Preferred Minor : 0

    Update Time : Sun May  6 00:45:29 2012
          State : clean
 Active Devices : 7
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 8781e462 - correct
         Events : 196958

         Layout : left-symmetric
     Chunk Size : 256K

      Number   Major   Minor   RaidDevice State
this     6       8      160        6      active sync   /dev/sdk

   0     0       8       96        0      active sync   /dev/sdg
   1     1       8      112        1      active sync   /dev/sdh
   2     2       8      128        2      active sync   /dev/sdi
   3     3       8       64        3      active sync   /dev/sde
   4     4       8       80        4      active sync   /dev/sdf
   5     5       8      144        5      active sync   /dev/sdj
   6     6       8      160        6      active sync   /dev/sdk

Big difference. :D
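
If anyone following along wants to check which scheme their own array uses, the metadata version is right in the output (substitute your own md device and member drive):

Code:
# Metadata version of the running array
sudo mdadm --detail /dev/md0 | grep -i version

# Or read it straight from a member's superblock
sudo mdadm --examine /dev/sdb1 | grep -i version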
 