Mini Software RAID Howto

Prologue

  • with the instructions that follow we move a complete root system disk from bare partitions onto a software RAID 1 layer
  • the same trick can of course be used to migrate to a software RAID 5 system
  • all the work is done without taking the server off-line, but it is better to stop the services before starting to copy the files from the old disk to the new one
  • only one reboot is needed, provided the disks are already installed in the server and the whole procedure is carried out correctly

The second disk is already installed and its partition table is empty

otherwise, remove the partitions on the second disk by hand before starting this procedure


  • /dev/sda: the original disk the system boots from
  • /dev/sdb: the second (new) disk

create the partitions we need on the second disk

  • 100 MB for the boot partition
  • 2 GB for the swap partition (there is no need to put the swap partition on a raid device)
  • the rest of the disk space for the root and the other partitions

start fdisk and enter the following commands

 fdisk /dev/sdb
 n        # new partition
 p        # primary
 1        # partition number 1
 [enter]  # accept the default first cylinder
 +100M    # 100 MB boot partition
 n        # new partition
 p        # primary
 2        # partition number 2
 [enter]
 +2048M   # 2 GB swap partition
 n        # new partition
 p        # primary
 3        # partition number 3
 [enter]
 [enter]  # use the rest of the disk
 t        # change a partition type
 2
 82       # partition 2: Linux swap
 t        # change a partition type
 3
 fd       # partition 3: Linux raid autodetect
 w        # write the partition table and exit
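
  • optionally, double-check the new partition table before going on (a plain listing, nothing is modified)
 fdisk -l /dev/sdb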

now we need to create the raid 1 devices for the boot and for the other partitions

  • /dev/md0 will be the first raid 1 device, used only for the /boot directory
  • /dev/md1 will be the raid 1 device used for everything else
 mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
 mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb3
 echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > /etc/mdadm.conf
 mdadm --detail --scan >> /etc/mdadm.conf
  • we created the raid devices (in degraded mode, with one member still missing)
  • the information about the devices is written to the /etc/mdadm.conf file; you can verify the arrays as shown below
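  • a quick optional check with the standard mdraid tools: both arrays should appear active but degraded
 cat /proc/mdstat
 mdadm --detail /dev/md0
 mdadm --detail /dev/md1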

the /dev/md0 device will be used directly, without lvm, for the boot partition

  • so we can format it right now
  mkfs.ext3 /dev/md0

the /dev/md1 device will be split up using the lvm layer

  • if the volume group name vg1 is already in use, change it in the example below
  pvcreate /dev/md1
  vgcreate vg1 /dev/md1
  lvcreate --name root -L 4G vg1
  lvcreate --name usrlocal -L 4G vg1
  lvcreate --name var -L 4G vg1
  lvcreate --name export -L <larger than the free space>G vg1   # fails on purpose, reporting how many free extents are left
  lvcreate --name export -l <free extents reported by the previous command> vg1
  • we assigned the /dev/md1 device to lvm and created the different logical volumes
  • now we need to format the partitions just created; a quick check of the resulting layout is shown after the format commands
  mkfs.ext3 /dev/vg1/root
  mkfs.ext3 /dev/vg1/usrlocal
  mkfs.ext3 /dev/vg1/var
  mkfs.ext3 /dev/vg1/export
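  • optional check of the lvm layout with the standard reporting commands
  vgdisplay vg1
  lvs vg1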

time to mount the new partitions and copy the system from the original disk

  mkdir /1
  mount /dev/vg1/root /1
  cd /1
  mkdir -p boot usr/local var export proc sys dev
  mount /dev/md0 boot
  mount /dev/vg1/usrlocal usr/local
  mount /dev/vg1/var var
  mount /dev/vg1/export export
  • ok, the new partitions are mounted and ready to receive all the contents
  rsync -aH --exclude=/1 --exclude=/proc --exclude=/sys --exclude=/dev / /1/
  • wait until the system is copied

Create the new initrd image file

  • after the copy we need to create a new initrd and adjust the /1/etc/fstab file
  • avoid the use of labels inside this file; use the device names instead
  • you should end up with something like this:
vi /1/etc/fstab

/dev/vg1/root           /                       ext3    defaults        1 1
/dev/md0                /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/vg1/export         /export                 ext3    defaults        1 2
/dev/vg1/var            /var                    ext3    defaults        1 2
/dev/vg1/usrlocal       /usr/local              ext3    defaults        1 2
#LABEL=SWAP-sda2         swap                    swap    defaults        0 0
LABEL=SWAP-sdb2         swap                    swap    defaults        0 0
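  • the LABEL=SWAP-sdb2 entry assumes the swap partition on the new disk has already been initialised with that label; if it has not, do it now (one line, assuming /dev/sdb2 as partitioned above)
  mkswap -L SWAP-sdb2 /dev/sdb2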
  • the reference to the swap partition on the first disk must be commented out
  • now we can create the new initrd image, preloading the raid modules and, if necessary, the serial-ATA driver (see the hint after the command)
mkinitrd --preload raid1 --preload raid456 --preload sata_nv --fstab /1/etc/fstab /boot/initrd-$(uname -r)-raid.img $(uname -r)
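  • sata_nv is simply the driver of this particular machine; if you are unsure which disk-controller module your system needs, it can usually be found like this (on Fedora-era systems the driver is listed as scsi_hostadapter in /etc/modprobe.conf)
grep scsi_hostadapter /etc/modprobe.conf
lsmod | grep -i -E 'sata|ahci|ata_piix'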

Edit the grub configuration file in the original disk

  • change the file, adding an entry for the new disk
  • this is the original entry:
title Fedora (2.6.23.15-80.fc7) RAID
        root (hd0,0)
        kernel /vmlinuz-2.6.23.15-80.fc7 ro root=/dev/vg1/root vga=791 selinux=0 noipv6 panic=30
        initrd /initrd-2.6.23.15-80.fc7.img
  • it must be changed into:
title Fedora (2.6.23.15-80.fc7) RAID
        root (hd0,0)
        kernel /vmlinuz-2.6.23.15-80.fc7 ro root=/dev/vg1/root vga=791 selinux=0 noipv6 panic=30
        initrd /initrd-2.6.23.15-80.fc7.img

title Fedora (2.6.23.15-80.fc7) RAID
        root (hd1,0)
        kernel /vmlinuz-2.6.23.15-80.fc7 ro root=/dev/vg1/root vga=791 selinux=0 noipv6 panic=30
        initrd /initrd-2.6.23.15-80.fc7-raid.img
  • leave the default boot entry pointing to the old disk, so that in case of trouble we can still boot from it (see the note below)
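  • in grub.conf this simply means keeping the default directive pointing at the first entry; grub counts the entries from zero
default=0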

Install the boot loader on the second disk

  • we need to install the grub boot loader on the new disk using the grub shell
  • avoid the grub-install command: it does not work in complex situations like this one
  grub
  root (hd1,0) [enter]
  setup (hd1) [enter]
  quit [enter]

update the data on the second disk

  • we need to copy the latest changes (the new initrd image and the updated grub configuration) from the original /boot to the new one

  rsync -aH /boot/ /1/boot/

Reboot

  • time to reboot and see what happens
  reboot
  • at the grub menu select the second boot entry and verify that it is the right one by checking that it contains the root (hd1,0) line
  • if everything was done right, the system should boot from the raid 1 disk (a quick check is shown below)
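  • to confirm it, the root filesystem should now be on the lvm volume built on top of /dev/md1, and /boot on /dev/md0
  cat /proc/mdstat
  mount | grep -E 'md0|vg1'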

The New System

  • now we have the system running on the software raid disk, but the raid is in degraded mode (only one disk is active at the moment)
  • we need to attach the original disk to the arrays next to the current one
  • for this reason we need to remove all the partitions on the old disk and recreate a partition map identical to the one of the /dev/sdb disk we are using now
  • in our example vg0 is the old lvm volume group (on /dev/sda3)

Deleting the old lvm partitions

  • we deactivate the old logical volumes to avoid confusion
 lvchange -a n vg0/root
 lvchange -a n vg0/usrlocal
 lvchange -a n vg0/var
 lvchange -a n vg0/export
  • now we can remove the logical volumes
  lvremove vg0/root
  lvremove vg0/usrlocal
  lvremove vg0/var
  lvremove vg0/export
  • then we need to remove the volume group and the physical volume
  vgremove vg0
  pvremove /dev/sda3

Creating the new partition table from the second disk

  • now we can recreate the partition table, copying the partitioning of /dev/sdb to /dev/sda

 sfdisk -d /dev/sdb | sfdisk /dev/sda

add the second disk to the raid system .....

  mdadm --manage /dev/md0 --add /dev/sda1
  mdadm --manage /dev/md1 --add /dev/sda3
  • the raid subsystem now synchronizes the content of the two disks; the progress can be followed as shown below
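  • the rebuild status of both arrays is reported by /proc/mdstat (watch repeats the check every few seconds)
  cat /proc/mdstat
  watch -n 5 cat /proc/mdstat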

.... and update the grub code

  grub
  root (hd0,0)
  setup (hd0)
  quit

the last task is to create the second swap partition

  mkswap -L SWAP-sda2 /dev/sda2
  swapon /dev/sda2
  • uncomment the SWAP-sda2 line inside /etc/fstab, so that both swap entries look like the lines below
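  • the swap section of /etc/fstab should then contain both labels used in this howto
LABEL=SWAP-sda2         swap                    swap    defaults        0 0
LABEL=SWAP-sdb2         swap                    swap    defaults        0 0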

Kernel upgrade

During a kernel upgrade the grub configuration file must be modified by hand to add the entry that boots from the second disk.
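
As a sketch, assuming a hypothetical new kernel version 2.6.24.1-99.fc7, the stanza to add for the second disk would mirror the entry that the upgrade creates for the first disk:

title Fedora (2.6.24.1-99.fc7) RAID disk2
        root (hd1,0)
        kernel /vmlinuz-2.6.24.1-99.fc7 ro root=/dev/vg1/root vga=791 selinux=0 noipv6 panic=30
        initrd /initrd-2.6.24.1-99.fc7.img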
