Auto Mounting RAID Arrays on Linux Server Startup
Most (if not all) Linux System Administrators configure their servers to auto mount any RAID arrays at server boot time. This way, if the server fails or is rebooted, the filesystems will come back up with the server without any human intervention (unless there are errors, of course). Doing this is a straightforward two-step process.
Updating the /etc/fstab File
The /etc/fstab file is read by Linux at boot time to determine which filesystems it should mount and how. Each line in /etc/fstab specifies a single filesystem, so in order to mount our new RAID drive, we only need to append a single line to the file in the following format:
<Logical Volume Address> <Directory to mount as> <filesystem type> defaults 0 0
In our example, the line added would be:
# Added RAID-1 Data Drive (Fred Bloggs, 19th February 2011)
/dev/mapper/MYDATA01-lv01 /mnt/raid1 ext4 defaults 0 0
This specifies that Linux should mount the logical volume /dev/mapper/MYDATA01-lv01 at the directory /mnt/raid1, that the filesystem is in ext4 format and should be treated in the default manner, and that it should not be backed up by dump or checked by the fsck utility prior to mounting.
Note: it's always a good idea to comment any changes you make to system files, so you can see exactly what you changed later on.
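Before relying on a reboot, the new entry can be tested straight away by asking mount to process /etc/fstab immediately. The commands below are a quick sanity check rather than part of the two steps themselves; /mnt/raid1 is the example mount point used above and must already exist:
$ sudo mkdir -p /mnt/raid1   # create the example mount point if it does not already exist
$ sudo mount -a              # mount everything listed in /etc/fstab that is not already mounted
$ df -h /mnt/raid1           # confirm the logical volume is mounted and reports the expected size
If mount -a reports an error, fix the fstab line before rebooting; a bad entry here is typically what causes the boot-time pause described in the next section.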
Updating the /etc/mdadm/mdadm.conf File
For Ubuntu Server, it seems it is also necessary to update the /etc/mdadm/mdadm.conf file. If this is not done, the RAID device will not be mounted when you reboot the server (in fact, the whole server startup is paused until you choose to "fix it manually" or "skip the mounting"). The solution is to run the following command on your system once the RAID drive has been configured:
mdadm --examine --scan
This should yield a line for your device (actually, one line per RAID device) in the following format:
$ mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b1201c63:a3befd4b:372cd4fe:26600417 spares=1
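If the scan prints nothing, the array may not be assembled, or mdadm may need root permission to read the component devices (in which case prefix the command with sudo). The array state can be checked first; these are standard mdadm and /proc inspection tools rather than part of this walkthrough, and /dev/md0 is the example device from the output above:
$ cat /proc/mdstat               # summary of all active md arrays and their resync status
$ sudo mdadm --detail /dev/md0   # detailed state, member disks and UUID of the example array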
We will need to add this line (or lines, if you have added multiple RAID devices) to the /etc/mdadm/mdadm.conf file. First, make a backup of the file in case we need to revert back to it:
$ sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.orig
Now, edit the /etc/mdadm/mdadm.conf file and paste in the output from mdadm --examine --scan under the "definitions of existing MD arrays" section as shown:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
# Example - added by Fred Bloggs, 1st January 2011
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b1201c63:a3befd4b:372cd4fe:26600417

# This file was auto-generated on Sat, 19 Feb 2011 16:22:47 +0000
# by mkconf $Id$
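Alternatively, the scan output can be appended to the file directly rather than pasted in by hand. Note the use of tee: a plain shell redirect would be performed by your unprivileged shell and fail. The update-initramfs step is an assumption beyond the original instructions, but on Debian/Ubuntu systems it rebuilds the initramfs so the updated mdadm.conf is also seen during early boot:
$ sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf   # append the ARRAY line(s) to the config
$ sudo update-initramfs -u                                          # rebuild the initramfs so early boot picks up the new config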
Save the changes, then reboot your machine to check that the device is mounted correctly at startup.