Software RAID recovery How To

Postby brucehohl » Sun Nov 08, 2009 1:03 pm

The following procedure was used to restore an Ubuntu 8.04 'SERVER' that used software RAID (RAID-1 via mdadm). The root partition was restored from a partimage image and the data was restored from a backup server. Lots of notes are included. Much of this is specific to the install described, but it can likely be adapted to other situations. The formatting may be somewhat difficult to follow but was 'perfect' in the original text document :).

==========================================================================
Recreate RAID using mdadm:
==========================================================================
1- Get new disk or create new virtual disk image.
$ qemu-img create -f qcow blank.qcow 10G


2- Boot with Linux SystemRescueCD with new disk available as sda1.


3- Setup access to image:
set-up networking, make mount point and mount SMB share on host:

Setup networking:
# net-setup eth0
With Qemu guest IP address = 10.0.2.15
Host IP address = 10.0.2.2
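
If net-setup is not available, or a static setup is preferred, the same
addressing can be configured by hand. A minimal sketch using the guest/host
addresses above; the interface name and netmask are assumptions:

% ifconfig eth0 10.0.2.15 netmask 255.255.255.0 up
% route add default gw 10.0.2.2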

Mount a CIFS share:
mkdir /mnt/public
mount //10.0.2.2/public /mnt/public -t smbfs

- Note 1
CIFS is limited to 2GB files. Partimage allows images to be
written in 2GB volumes to accommodate this limitation. Options
to avoid this limit (see the sketch after this note):
* use package partimage-server
* use package curlftpfs & mount an FTP server
* use package nfs-server & mount an NFSv3 share
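
For example, the NFS or curlftpfs routes would look roughly like this.
The server address matches the Qemu host above, but the export path and
FTP account are assumptions:

% mkdir /mnt/public
% mount -t nfs 10.0.2.2:/export/public /mnt/public
or
% curlftpfs ftp://user:[email protected] /mnt/public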

- Note 2 - To log into Qemu VM guest from host:
Qemu start-up parameter: -redir tcp:10022::22
Set system rescue root passwd = admin.
From host terminal: $ ssh -p 10022 [email protected]

- Note 3 - To mount sda1 or [md0]:
# mkdir /mnt/sda1[md0]
# mount /dev/sda1[md0] /mnt/sda1[md0]


4- Create disk partitions:

(1) New disk is exactly the same size and geometry as the old disk:
Then restore the partition table from partimage.

If PartImage cannot 'see' the disk, first add a label and a small partition:
% parted /dev/sda mklabel msdos
% parted /dev/sda mkpart primary fat32 0 10

Open PartImage: '% partimage'
1- Partition to save/restore = sda1
2- Image to create/use = /mnt/public/SERVER_yyyy-mm-dd.img.000
3- Restore MBR from the image file = Y
4- F5
5- Disk with the MBR to be restored = /dev/sda
6- Original MBR to use = 000: /dev/sda
7- What to restore: Y = The primary partition table
8- F5

(2) All other disk sizes and geometries:

Note: Writing an incorrect or ill-fitting partition table to
a disk requires expert-level knowledge to correct. Tools like
parted and cfdisk will detect such tables and refuse to operate on them.

Write a new partition table:
Use Gparted + Xserver (frontend for GNU parted):
% wizard (select Xvesa-run)
Start Gparted from menu and maximize.
---
First create partitions and filesystems,
Then set flags (New_UUIDs):

Size Type FS Flags For
6 GB primary ext3 raid boot /
1 GB primary swap raid swap
3 GB primary ext3 raid /home
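
The same layout can be created without the GUI using parted from the
SystemRescueCD shell. A sketch only: the sizes come from the table above,
but the start/end offsets (in MB) and the device name are assumptions, and
parted here only creates the partitions and flags; the filesystems are
written later by partimage / mkfs:

% parted /dev/sda mklabel msdos
% parted /dev/sda mkpart primary ext3 0 6144
% parted /dev/sda mkpart primary linux-swap 6144 7168
% parted /dev/sda mkpart primary ext3 7168 10240
% parted /dev/sda set 1 raid on
% parted /dev/sda set 1 boot on
% parted /dev/sda set 2 raid on
% parted /dev/sda set 3 raid on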


5- Write image to disk (New_UUIDs):
% partimage

Write Image to sda1 partition:
1- Partition to save/restore = sda1
2- Image file to create/use = /mnt/public/SERVER_yyyy-mm-dd.img.000
3- Restore partition from an image file = Y
4- F5
5- If finished successfully = 'Wait = Y'
6- F5

Note:
PartImage does not resize partitions so do the following to move
an image to a larger partition.

1- Use 'parted /dev/sda print' to get the original partition size.
2- Create a new partition as close to that size as possible.
3- Restore the image to that partition. Partition space in excess of
the image size will not be available.
4- Use [G]parted to increase the partition size (see the filesystem
resize sketch after this note).

Or just copy all files to the resized partition!
# rsync -av --delete [email protected]:/ /
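
Filesystem resize sketch: when the partition was grown with plain parted
(GParted normally resizes the filesystem for you), the ext3 filesystem can
be grown to fill the new space with the standard e2fsprogs tools. The
device name here is an assumption:

% e2fsck -f /dev/sda1
% resize2fs /dev/sda1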


6- Install MBR with GRUB:

(1) [email protected] /root % grub

GNU GRUB version 0.97 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first
word, TAB lists possible command completions. Anywhere else
TAB lists the possible completions of a device/filename. ]

(2) grub> find /boot/grub/stage1
(hd0,0)

(3) grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83

(4) grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are
embedded.
succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p
(hd0,0)/boot/grub/
stage2 /boot/grub/menu.lst"
... succeeded
Done.

(5) grub> quit


7- Recreate RAID devices:

Note !!!
PERFORM THIS STEP FROM (initramfs) NOT SystemRescueCD.
md devices created by SystemRescueCD are not recognized by Ubuntu.
When done, type 'exit'.

(1) Check for md info:
% mdadm --examine /dev/sda1 [/dev/sda2] [/dev/sda3]
RESULT:
mdadm: No md superblock detected on /dev/sda1.


(2) Create and start md devices (New_UUIDs):

% mdadm --create /dev/md0 --level=1 --raid-devices=2 \
/dev/sda1 missing
% mdadm --create /dev/md1 --level=1 --raid-devices=2 \
/dev/sda2 missing
% mdadm --create /dev/md2 --level=1 --raid-devices=2 \
/dev/sda3 missing


(3) Check md devices:
% mdadm --examine /dev/sda1 [/dev/sda2] [/dev/sda3]
% cat /proc/mdstat
RESULT:
Output as expected.


(4) To stop and start md devices manually:

STOP:
% mdadm --stop --scan
mdadm: stopped /dev/md2
mdadm: stopped /dev/md1
mdadm: stopped /dev/md0

START:
# mdadm --assemble /dev/md0 --uuid=<uuid>
# mdadm --assemble /dev/md1 --uuid=<uuid>
# mdadm --assemble /dev/md2 --uuid=<uuid>
or
add the appropriate entries to /etc/mdadm/mdadm.conf and run:
# mdadm --assemble --scan


8- Create swap and file systems on RAID devices not recovered from
image (NEW-UUIDs):

1- mkswap /dev/md1
swapon /dev/md1
swapon -s

2- mkfs -t ext3 /dev/md2


9- /etc/fstab
Fix UUID disk addressing for md1 and md2 for mount at boot:

(1) Get md disk (dash-format) UUIDs:
Used in: /etc/fstab, /boot/grub/menu.lst

% blkid | grep /dev/md
/dev/md0 UUID="c2d96ec7-6926-4dd1-9d47-5fb027692d9b" TYPE="ext3"
/dev/md1 UUID="d7902a10-c7f9-4c86-8c03-b1e88ce59b36" TYPE="swap"
/dev/md2 UUID="16b54159-6b0e-4d15-9823-538596119ec2" TYPE="ext3"

(2) Add dash-format hd UUIDs from 'blkid | grep /dev/md'
to devices in /etc/fstab.
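
With the UUIDs above, the relevant /etc/fstab lines would look roughly
like this. The mount points follow the layout in step 4; the mount
options are assumptions:

UUID=c2d96ec7-6926-4dd1-9d47-5fb027692d9b /     ext3 defaults,errors=remount-ro 0 1
UUID=d7902a10-c7f9-4c86-8c03-b1e88ce59b36 none  swap sw       0 0
UUID=16b54159-6b0e-4d15-9823-538596119ec2 /home ext3 defaults 0 2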


10- /etc/mdadm/mdadm.conf (not required for normal operation):
Fix UUID md device addressing so 'mdadm --assemble --scan' works:

(1) Get raid (colon-format) UUIDs:
Used in: /etc/mdadm/mdadm.conf

% mdadm --detail /dev/md0 | grep UUID
% mdadm --examine /dev/sda1 | grep UUID
UUID : 64233e4e:0ac89767:8749f37c:6b7e4f90

% mdadm --detail /dev/md1 | grep UUID
% mdadm --examine /dev/sda2 | grep UUID
UUID : a61c1c14:5488b821:e62b1ac6:155f2a05

% mdadm --detail /dev/md2 | grep UUID
% mdadm --examine /dev/sda3 | grep UUID
UUID : 672f00bc:04af3375:d13bce29:bf9b239f

% mdadm --detail --scan
Returns UUID info in correct format for mdadm.conf.

(2) Add the colon-format md UUIDs from 'mdadm --detail /dev/md0' (etc.)
to /etc/mdadm/mdadm.conf (for auto-assembly at boot).
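
The resulting ARRAY lines (using the colon-format UUIDs shown above)
would look roughly like the sketch below; 'mdadm --detail --scan' prints
them in this format and its output can simply be appended:

% mdadm --detail --scan >> /etc/mdadm/mdadm.conf

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=64233e4e:0ac89767:8749f37c:6b7e4f90
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a61c1c14:5488b821:e62b1ac6:155f2a05
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=672f00bc:04af3375:d13bce29:bf9b239f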


11- Add second disk (one array per partition; steps (1) and (2) apply
only if the sdb partitions were already stale array members):
(1) % mdadm /dev/md0 --fail /dev/sdb1
    % mdadm /dev/md1 --fail /dev/sdb2
    % mdadm /dev/md2 --fail /dev/sdb3
(2) Repeat the above with --remove in place of --fail.
(3) % mdadm /dev/md0 --add /dev/sdb1
    % mdadm /dev/md1 --add /dev/sdb2
    % mdadm /dev/md2 --add /dev/sdb3

(4) Write MBR to new disk with GRUB (see above).
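
Note: the new disk needs the same partition layout as sda before its
partitions can be added. Cloning the table and then watching the rebuild
might look like this (a sketch assuming sdb is the new, empty disk):

% sfdisk -d /dev/sda | sfdisk /dev/sdb
(then run the --add commands in (3) above)
% watch -n 5 cat /proc/mdstat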


12- Edit mdadm configuration (Update to Ubuntu 8.04.3 for boot
degraded fix):

(0) Update system:
# aptitude update
# aptitude upgrade

(1) /etc/mdadm/mdadm.conf
See man mdadm.conf
Run '/etc/init.d/mdadm reload' after any change.

(2) Boot Degraded T or F:
Add 'BOOT_DEGRADED=true' to /etc/initramfs-tools/conf.d/mdadm
And run '# update-initramfs -u'

OR
Run the 'dpkg-reconfigure mdadm' script:
Select 'Yes' to the boot-degraded question.
If 'No', the system will drop to (initramfs) when the RAID is degraded.

This script appears to do the following:
1- Queries user for config options.
2- Edits /etc/mdadm/mdadm.conf
Runs '/etc/init.d/mdadm reload'
3- Edits /etc/initramfs-tools/conf.d/mdadm
Runs 'update-initramfs -u'


13- Copy /home from daily backup to the new SERVER:
# rsync -av --delete [email protected]:/home/backup/Thame/SERVER/ /home/

14- If disk sizes or partitions are different, do this:
[email protected]:~/raid# cat /proc/mdstat > raid.production.config

15- Write OS image to /home/backup/SERVER_yyyy-mm-dd.img.000


==========================================================================
Documents and Misc Notes:
==========================================================================
man mdadm
man mdadm.conf

/usr/share/doc/mdadm/

Ubuntu Server Book (Chap 12)
https://help.ubuntu.com/8.10/serverguid ... ation.html
http://www.debian-resources.org/files/r ... debian.pdf
http://wiki.clug.org.za/wiki/RAID-1_in_ ... _and_mdadm


--------------------------------------------------------------------------
Ubuntu Server Book (Chap 12) - Rescue operations
--------------------------------------------------------------------------
1- fsck

Automated repair:
# fsck -y -C /dev/md0

To list the backup superblock locations (without creating a filesystem):
# mke2fs -n /dev/md0

Automated repair from alternate superblock:
# fsck -b 8193 -y -C /dev/md0

2- uuid mismatches (non-RAID):
(1) Change /boot/grub/menu.lst
root=UUID=xxxx to root=/dev/md0

(2) Find UUIDs from 'ls -l /dev/disk/by-uuid'
then fix menu.lst file.

3- Fix grub

4- Recover files with sleuthkit.

5- Restore partition table with gpart.
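
A typical gpart session is sketched below; gpart only guesses the old
layout, so review its output carefully before writing anything back:

# gpart /dev/sda                 (scan the disk and print the guessed table)
# gpart -W /dev/sda /dev/sda     (write the guessed table back to /dev/sda)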


--------------------------------------------------------------------------
/usr/share/doc/mdadm/md.txt.gz
--------------------------------------------------------------------------
1- Arrays are "created" by writing appropriate superblocks to all
devices.

2- Partitions of type 0xfd are automatically assembled into RAID arrays
at boot.
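
On an msdos-labelled disk the type can be set with sfdisk (or with
fdisk's 't' command). A sketch; the device and partition numbers are
assumptions:

% sfdisk --id /dev/sda 1 fd
% sfdisk --id /dev/sda 2 fd
% sfdisk --id /dev/sda 3 fd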


--------------------------------------------------------------------------
https://help.ubuntu.com/8.10/serverguid ... ation.html
--------------------------------------------------------------------------
1- If the array has become degraded, by default Ubuntu Server Edition
will boot to the initramfs after thirty seconds. Once the initramfs
has booted there is a fifteen-second prompt giving the option to
boot the system or attempt manual recovery.

2- The dpkg-reconfigure utility can be used to configure the default
behavior and settings related to the array such as monitoring, email
alerts, etc. To reconfigure mdadm enter the following:

# sudo dpkg-reconfigure mdadm

The dpkg-reconfigure mdadm process will change these files:

/etc/default/mdadm
/etc/initramfs-tools/conf.d/mdadm

These files pre-configure the system's behavior and can be
edited manually.

3- After a drive has been replaced and synced, grub must be installed.
To install grub on the new drive, enter the following:

# sudo grub-install /dev/md0

This command will write grub onto all disks related to the md device,
for example, /dev/sda and /dev/sdb in the case of raid 1.


--------------------------------------------------------------------------
Qemu start commands for VM testing:
--------------------------------------------------------------------------
1- To boot OS on 'cdrom':
qemu-system-x86_64 \
-m 384 \
-boot d \
-cdrom /home/other/ISO-files/systemrescuecd-x86-1.1.3.iso \
-hda /home/guest/Desktop/SERVER/SERVER.qcow \
-redir tcp:10022::22 &

2- To boot OS on 'hda':
qemu-system-x86_64 \
-m 384 \
-hda /home/guest/Desktop/SERVER/SERVER.qcow \
-redir tcp:10022::22 &

--------------------------------------------------------------------------
