Create & Convert to RAID1 Setup on an Existing CentOS 7.4 System

    This tutorial explains how to convert an existing single-disk CentOS 7.4 system into a two-disk RAID1 system without losing data or reinstalling the system. RAID1 produces a mirrored set, which can tolerate a single disk failure. The GRUB2 bootloader is also configured so that the system can still boot if either hard drive fails.

    Before starting, let's familiarize ourselves with the current system configuration.

    Operating System: CentOS 7.4

    Hard disk partitions:

    2GB /boot partition as /dev/sda1
    16GB / (root) partition as /dev/sda2
    2GB swap space as /dev/sda3
    Partition type: ext4

    We are adding a second, identical disk /dev/sdb for the RAID1 setup. The RAID will be a Linux software RAID managed by "mdadm". If the "mdadm" command is not available, install it using yum. Attach the new disk to the server motherboard before proceeding with the steps below.

    yum install mdadm
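Before cloning the partition table in the next step, a quick size sanity check can save trouble. This is a sketch, not part of the original tutorial; it assumes the device names /dev/sda and /dev/sdb used throughout this guide, and `blockdev` from util-linux.

```shell
# Hedged sketch: confirm the new disk is at least as large as the original
# before cloning the partition table onto it.
same_or_larger() {
    # $1 = size of the original disk in bytes, $2 = size of the new disk in bytes
    [ "$2" -ge "$1" ]
}

# blockdev may fail without root or outside a real system; fall back to 0 for the demo
src_bytes=$(blockdev --getsize64 /dev/sda 2>/dev/null || echo 0)
dst_bytes=$(blockdev --getsize64 /dev/sdb 2>/dev/null || echo 0)

if same_or_larger "$src_bytes" "$dst_bytes"; then
    echo "OK: /dev/sdb can hold the cloned layout"
else
    echo "WARNING: /dev/sdb is smaller than /dev/sda" >&2
fi
```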

    Now let's get started. Create an identical partition scheme on the new disk by copying the layout of the current disk /dev/sda:

    sfdisk -d /dev/sda | sfdisk /dev/sdb

    Check the changes using the command below.

    fdisk -l

    Change the partition type to "Linux raid autodetect". To do so, run "fdisk" on the new disk.

    fdisk /dev/sdb

    Use the "t" command to change all three partitions to type "fd".

    Verify the result using the command below.

    fdisk -l

    Now we are going to create degraded RAID1 arrays. We create the RAID1 devices in a degraded state because one disk is missing: /dev/sda, which currently holds the installed OS and is used to boot.

    In our case we have three partitions, so we do this for all three partitions on the new disk /dev/sdb:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3

    Check the result with:

    cat /proc/mdstat

    Create the filesystems on the new RAID devices.

    mkfs.ext4 /dev/md0
    mkfs.ext4 /dev/md1
    mkswap /dev/md2

    Now we need to manually replicate the existing data on the /dev/sda partitions to the new software RAID partitions.

    Mount both / and /boot. Mount the root array first, then create and mount the /boot mount point inside it:

    mount /dev/md1 /mnt/
    mkdir /mnt/boot
    mount /dev/md0 /mnt/boot/

    Copy the existing data using the rsync command:

    rsync -auxHAXSv --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/mnt/* /* /mnt

    During the rsync we got errors like the ones below:

    rsync: rsync_xal_set: lremovexattr("/mnt/mnt","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/rwtab","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/log","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state/etc","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state/etc/sysconfig","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state/etc/sysconfig/network-scripts","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state/var","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state/var/lib","security.selinux") failed: Permission denied (13)
    rsync: rsync_xal_set: lremovexattr("/mnt/run/initramfs/state/var/lib/dhclient","security.selinux") failed: Permission denied (13)
    run/user/0/
    rsync: rsync_xal_set: lremovexattr("/mnt/run/user/0","security.selinux") failed: Permission denied (13)

    We noticed similar errors reported as a bug:

    https://bugzilla.redhat.com/show_bug.cgi?id=873467

    What I understood is that rsync is trying to synchronize the extended attributes on the destination partition to match those on the source partition by removing "extra" attributes, but it lacks the privilege to do so even when run as root.

    In my case the permission-denied errors are all for files inside the /run directory. /run is typically implemented as a tmpfs (check with `mount | fgrep run`), i.e. a temporary filesystem holding volatile runtime data, so its contents do not survive a reboot anyway (which is a good thing).

    So I ignored the rsync errors and proceeded with the next steps.

    Now bind-mount the system pseudo-filesystems:

    mount --bind /proc /mnt/proc
    mount --bind /dev /mnt/dev
    mount --bind /sys /mnt/sys
    mount --bind /run /mnt/run

    Chroot into /mnt, the mounted root of the new /dev/sdb disk:

    chroot /mnt/

    Edit fstab with the new RAID devices' UUID information. Get the UUIDs with:

     blkid /dev/md*

    /dev/md0: UUID="bdf72caa-716c-431d-ab23-4699feb14bdf" TYPE="ext4"

    /dev/md1: UUID="cef39ac1-e53b-4a25-9608-7baf20fade02" TYPE="ext4"

    /dev/md2: UUID="45b6e2b2-cd79-4fdd-8288-80e1db0da874" TYPE="swap"

    Now open the /etc/fstab file and add entries like the ones below (replace your-UUID with the UUIDs from blkid).

    vim /etc/fstab

    UUID=your-UUID /                       ext4    defaults        1 1
    UUID=your-UUID /boot                   ext4    defaults        1 2
    UUID=your-UUID swap                    swap    defaults        0 0
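As a sketch (not part of the original tutorial), the three fstab lines can also be generated from the UUIDs that blkid printed; the hypothetical helper below simply formats one entry per filesystem, using the UUIDs shown above (md1 = root, md0 = /boot, md2 = swap):

```shell
# Hypothetical helper, for illustration only: format one fstab entry
make_fstab_entry() {
    # $1 = UUID, $2 = mount point, $3 = fs type, $4 = dump flag, $5 = fsck order
    printf 'UUID=%s %s %s defaults %s %s\n' "$1" "$2" "$3" "$4" "$5"
}

# UUIDs taken from the blkid output above
make_fstab_entry cef39ac1-e53b-4a25-9608-7baf20fade02 /     ext4 1 1
make_fstab_entry bdf72caa-716c-431d-ab23-4699feb14bdf /boot ext4 1 2
make_fstab_entry 45b6e2b2-cd79-4fdd-8288-80e1db0da874 swap  swap 0 0
```

On a real system the output would be appended to /etc/fstab after reviewing it.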

    Now create an mdadm.conf from the current RAID configuration:

    mdadm --detail --scan > /etc/mdadm.conf
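A quick sanity check, sketched here rather than taken from the original tutorial: the generated /etc/mdadm.conf should contain one ARRAY line per md device, i.e. three in this setup. The sample lines below use made-up array UUIDs for demonstration.

```shell
# Count the ARRAY lines in mdadm.conf-style input; on a live system run
# `grep -c '^ARRAY' /etc/mdadm.conf` instead of feeding a here-doc.
count_arrays() {
    grep -c '^ARRAY'
}

count_arrays <<'EOF'
ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
ARRAY /dev/md1 metadata=1.2 name=localhost:1 UUID=aaaaaaaa:bbbbbbbb:cccccccc:eeeeeeee
ARRAY /dev/md2 metadata=1.2 name=localhost:2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:ffffffff
EOF
# → 3
```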

    Back up the current initrd, then rebuild the initramfs with the new mdadm.conf:

    cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bck
    dracut --mdadmconf --fstab --add="mdraid" --filesystems "xfs ext4 ext3" --add-drivers="raid1" --force /boot/initramfs-$(uname -r).img $(uname -r) -M

    Now edit the GRUB defaults file and add the following parameters.

    vim /etc/default/grub

    GRUB_CMDLINE_LINUX="crashkernel=auto rd.auto rd.auto=1 rhgb quiet"
    GRUB_PRELOAD_MODULES="mdraid1x"

    Generate the new grub config:

    grub2-mkconfig -o /boot/grub2/grub.cfg

    Install grub on the new disk /dev/sdb:

    grub2-install /dev/sdb

    At this point we reboot the system, choosing the new disk /dev/sdb from the BIOS. If all worked out, the system
    will boot from the new disk /dev/sdb. After that, check the mount points and RAID status using the commands below.

    swapon -s
    mount -t ext4
    cat /proc/mdstat
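As an extra check (a sketch, not from the original tutorial): after the reboot the root filesystem should live on an md device. On a live system you would run `findmnt -n -o SOURCE /` and inspect the result; the helper below just encodes that expectation.

```shell
# Return success if the given device looks like a Linux software RAID device
root_is_md() {
    case "$1" in
        /dev/md*) return 0 ;;
        *)        return 1 ;;
    esac
}

# On a live system: root_is_md "$(findmnt -n -o SOURCE /)"
root_is_md "/dev/md1" && echo "root is on a RAID device"
```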

    In our case we switched the disk ports, so after the reboot the new disk became /dev/sda and the old disk became
    /dev/sdb. So don't get confused by the screenshot results; if you opted to boot from the new disk via the BIOS
    instead of swapping the disk ports, the mdstat results will show the new disk as /dev/sdb itself.

    Now we need to add the old disk to the RAID array. First change its partition type to "Linux raid autodetect".

    fdisk /dev/sdb

    Use "t" to convert all 3 partitions to type "fd" (in our case the old disk is now sdb).

    Now add the old disk's partitions to the RAID1 arrays (in our case it's sdb):

    mdadm --manage /dev/md0 --add /dev/sdb1
    mdadm --manage /dev/md1 --add /dev/sdb2
    mdadm --manage /dev/md2 --add /dev/sdb3

    Check the rebuild status using the command below and see that it is running.

    watch -n1 "cat /proc/mdstat"
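For scripting the rebuild check, a hedged sketch (not from the original tutorial): the resync/recovery percentage can be extracted from /proc/mdstat text. It is demonstrated here against a sample here-doc; on a live system pipe `cat /proc/mdstat` into the function instead.

```shell
# Pull the first resync/recovery percentage out of /proc/mdstat-style text
mdstat_progress() {
    grep -o '[0-9.]*%' | head -n1
}

mdstat_progress <<'EOF'
md1 : active raid1 sda2[2] sdb2[0]
      16776192 blocks super 1.2 [2/1] [U_]
      [====>...............]  recovery = 23.5% (3942656/16776192) finish=2.1min
EOF
# → 23.5%
```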

    Reinstall grub on the old disk (in our case it's /dev/sdb):

    grub2-install /dev/sdb

    This concludes the RAID1 setup on an existing CentOS 7.4 install. Once the rebuild has completed, the mdstat output will confirm that both disks are active and the arrays are running fine. Try rebooting again and check whether any issues are encountered.


    By |December 18th, 2017|Linux



    Yours truly,

    Tiruchirappalli Sivashanmugam
