SCENARIO:
OS: Linux, Debian 6 (Squeeze)
Software RAID: RAID 1 created with mdadm
Failed drive: 500 GB SATA drive (/dev/sda)
New drive: SATA drive of the same size or bigger; in my case a 1 TB SATA drive
WHAT TO DO:
1. Shut down the server,
2. Replace the failed disk,
3. Start the server,
4. Rebuild the RAID 1 array,
5. Update GRUB.
LOOK FOR WORKING DISK
With fdisk I looked at how the working disk in the RAID 1 array is partitioned.
root@server:~# fdisk /dev/sdb
Command (m for help): p
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 13 104391 fd Linux raid autodetect
/dev/sdb2 14 293 2249100 fd Linux raid autodetect
/dev/sdb3 294 60801 486030510 fd Linux raid autodetect
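Before touching anything, it helps to confirm which array member is actually missing. A degraded mdadm mirror shows an underscore in the [UU] status field of /proc/mdstat, so a quick grep (a generic read-only check, not specific to this setup) finds it:

```shell
# A healthy two-disk mirror shows [UU]; a degraded one shows [_U] or [U_].
# Print the status line of any degraded array (read-only, safe to run).
grep '\[.*_.*\]' /proc/mdstat
```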
PREPARE NEW DISK
I used fdisk to create RAID partitions of the same size on the new disk:
# fdisk /dev/sda
n p 1
n p 2
n p 3
n - new partition
p - primary partition
t - change partition type (set fd, "Linux raid autodetect", on all 3 partitions of /dev/sda)
a - toggle the bootable flag on the first partition
w - write changes to disk and exit
REBUILD RAID 1 with MDADM:
# mdadm /dev/md0 -a /dev/sda1
# mdadm /dev/md1 -a /dev/sda2
# mdadm /dev/md2 -a /dev/sda3
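While the mirror resyncs you can keep an eye on /proc/mdstat. This awk one-liner (my own convenience, not part of the original procedure) pulls out just the completion percentage:

```shell
# Show only the resync/recovery percentage from /proc/mdstat (read-only).
awk '/recovery|resync/ { for (i = 1; i <= NF; i++) if ($i ~ /%/) print $i }' /proc/mdstat
```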
NEW DISK:
root@server:~# fdisk /dev/sda
Command (m for help): p
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 fd Linux raid autodetect
/dev/sda2 14 293 2249100 fd Linux raid autodetect
/dev/sda3 294 60801 486030510 fd Linux raid autodetect
BUILDING RAID 1:
root@server:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
      486030400 blocks [2/1] [_U]
      [==>..................]  recovery = 14.0% (68114752/486030400) finish=147.5min speed=47214K/sec
md1 : active raid1 sda2[0] sdb2[1] 2249024 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1] 104320 blocks [2/2] [UU]
unused devices: <none>
ALL DISKS UP AND RUNNING
root@server:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1] 486030400 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1] 2249024 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1] 104320 blocks [2/2] [UU]
unused devices: <none>
UPDATE CONFIGURATION
On Debian the mdadm config lives in /etc/mdadm/mdadm.conf; append the scanned ARRAY lines and delete any stale entries for the old disk:
# mdadm --examine --scan >> /etc/mdadm/mdadm.conf
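For reference, the ARRAY lines that --examine --scan emits for 0.90-metadata arrays like these look roughly as follows (the UUIDs below are placeholders, not values from this machine):

```
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
```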
GRUB INSTALL/UPDATE ON NEW DISK
# grub-install /dev/sda
# update-grub
Example of how to remove a failed disk from an array (mark the member failed first, then remove it):
# mdadm --fail /dev/md2 /dev/sdb3
# mdadm --remove /dev/md2 /dev/sdb3
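If the whole disk is dying, every partition has to be failed out of its own array. A small loop can generate the commands; this is a sketch assuming the md0=sdb1, md1=sdb2, md2=sdb3 layout from this howto, and it only prints the commands so you can review them before piping the output to sh:

```shell
# Print the fail/remove commands for each partition of the dying disk.
# Review the output, then append "| sh" to actually run them (needs root).
for n in 0 1 2; do
  echo mdadm --fail /dev/md$n /dev/sdb$((n + 1))
  echo mdadm --remove /dev/md$n /dev/sdb$((n + 1))
done
```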
IF update-grub FAILS with "no such disk"
SOLUTION
The stale /boot/grub/device.map still references the replaced disk; move it aside and rerun:
# mv /boot/grub/device.map /boot/grub/device.map.old
# update-grub