We have two RAIDs on the HPC:
  * Linux kernel software RAID
  * 3ware hardware RAID
==== Drive numbering ====
  
If you're looking at the front of the HPC you'll see four rows of drives.  From the bottom:
  * Rows 0 - 2 are SATA, connected to the hardware 3ware RAID card
  * Row 3 is IDE
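A rough way to confirm this from the OS (a minimal check; the device names are the ones used elsewhere on this page) is to list the kernel's block devices: the IDE drives appear as ''hd*'', while the 3ware card presents each of its hardware RAID units as a single ''sd*'' device:
<code># cat /proc/partitions</code>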
A snapshot of the software RAID's health:
  
<code># cat /proc/mdstat 
Personalities : [raid1] [raid0] 
md3 : active raid1 hdd1[1] hda1[0]
      200704 blocks [2/2] [UU]
      
md1 : active raid1 hdd3[1] hda3[0]
      26627648 blocks [2/2] [UU]
      
md2 : active raid0 hdd5[1] hda5[0]
      36868608 blocks 256k chunks
      
md4 : active raid1 hdd6[1] hda6[0]
      2168640 blocks [2/2] [UU]
      
md0 : active raid1 hdd2[1] hda2[0]
      30716160 blocks [2/2] [UU]
      
unused devices: <none></code>
==== Repair RAID ====
When a disk is failing you need to replace it.  First, look at the RAID configuration (the ''/proc/mdstat'' output above) to see which partitions are in use by which arrays.
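For a more detailed, per-array view that names each member partition and its state (a hedged example; substitute whichever array you are investigating for ''/dev/md0''):
<code># mdadm --detail /dev/md0</code>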
  
If ''/dev/hda'' is having problems, set all of its RAID1 partitions as failed and remove them:
<code># mdadm /dev/md0 --fail /dev/hda2 --remove /dev/hda2
# mdadm /dev/md1 --fail /dev/hda3 --remove /dev/hda3
# mdadm /dev/md3 --fail /dev/hda1 --remove /dev/hda1
# mdadm /dev/md4 --fail /dev/hda6 --remove /dev/hda6</code>
''/dev/md2'' is a RAID0 stripe mounted as ''/scratch'', so we have to unmount it and then stop it (you can't remove volumes from a stripe):
<code># umount /dev/md2
# mdadm --stop /dev/md2</code>
<note warning>You must shut the server down before you physically remove the drive!</note>
Shut the server down and replace the faulty drive with a new one.  After booting, the drive letters may have shifted around, so verify which drive is which before proceeding.
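One way to be sure which drive is which (a sketch, assuming ''hdparm'' is installed; ''/dev/hdd'' is only an example) is to print the drive's model and serial number and compare them with the label on the physical disk:
<code># hdparm -i /dev/hdd</code>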
Clone the partition table from the good drive to the new one:
<code># sfdisk -d /dev/hda | sfdisk --force /dev/hdd</code>
Verify the new partitions can be seen:
<code># partprobe -s
/dev/hda: msdos partitions 1 2 3 4 <5 6>
/dev/hdd: msdos partitions 1 2 3 4 <5 6>
/dev/sda: msdos partitions 1
/dev/sdb: msdos partitions 1
/dev/sdc: msdos partitions 1
</code>
Re-create the scratch partition (RAID0):
<code># mdadm --create --verbose /dev/md2 --level=0 --raid-devices=2 /dev/hda5 /dev/hdd5
# mkfs.ext3 /dev/md2
# mount /dev/md2 /scratch</code>
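Re-creating the stripe gives ''/dev/md2'' a new UUID.  If this system assembles its arrays from an ''mdadm.conf'' file rather than kernel autodetection (an assumption, so check which applies here before editing anything), regenerate the ARRAY lines and update the config file to match:
<code># mdadm --detail --scan</code>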
You can now add the new partitions back to the RAID1 arrays:
<code># mdadm /dev/md0 --add /dev/hdd2
# mdadm /dev/md1 --add /dev/hdd3
# mdadm /dev/md3 --add /dev/hdd1
# mdadm /dev/md4 --add /dev/hdd6</code>
After adding the partitions you can monitor the progress of the RAID rebuilds by looking in ''/proc/mdstat'':
<file>Personalities : [raid1] [raid0] 
md3 : active raid1 hdd1[1] hda1[0]
      200704 blocks [2/2] [UU]
      
md1 : active raid1 hdd3[2] hda3[0]
      26627648 blocks [2/1] [U_]
      [===================>.]  recovery = 95.4% (25407552/26627648) finish=0.7min speed=28648K/sec
      
md2 : inactive hda5[0]
      18434304 blocks
      
md4 : active raid1 hdd6[2] hda6[0]
      2168640 blocks [2/1] [U_]
        resync=DELAYED
      
md0 : active raid1 hdd2[1] hda2[0]
      30716160 blocks [2/2] [UU]
      
unused devices: <none></file>
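To follow a rebuild in real time rather than re-running ''cat'' by hand (a small convenience, assuming the standard ''watch'' utility is available):
<code># watch cat /proc/mdstat</code>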
===== Hardware RAID =====
  
The hardware RAID is a 3ware 9500S-12 SATA RAID card with 12 channels, driven by the 3w-9xxx kernel module.  The HPC uses RAID5 for all of the arrays on this card.
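A quick way to confirm the card and driver are actually present (a minimal check; note that ''lsmod'' lists the module with an underscore, as ''3w_9xxx''):
<code># lsmod | grep 3w
# dmesg | grep -i 3w</code>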
  
==== Physical Disk Layout ====
  
We have one RAID controller, 'c1'.  Disks are plugged into ports 'p0' - 'p11', and the disks are then grouped into units (basically the rows), 'u0' - 'u2'.
  
| Port 8 | Port 9 | Port 10 | Port 11 |
==== Repairing 'degraded' arrays ====
  
There is a utility, ''tw_cli'', which can be used to control and monitor the hardware RAID controller.
  
Study the output of ''show'' to know which controller to manage.  Then you can use ''/c1 show'' to show the status of that particular controller.  Things to look for:
  * Which port is inactive or missing? (p1, p5, etc)
  
<note warning>The controller supports hot swapping but you **must** remove a faulty drive through the ''tw_cli'' tool before you can swap drives.</note>
  
Remove the faulty port:
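The exact command depends on which port the failed drive is on.  As a sketch only (this assumes the per-port ''remove'' command of the 3ware CLI, and borrows port 3 from the kernel log excerpt below):
<code>//hpc-ilri> /c1/p3 remove</code>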
<file>3w-9xxx: scsi1: AEN: ERROR (0x04:0x0002): Degraded unit detected:unit=0, port=3</file>
  
<code>$ sudo tw_cli
Password: 
//hpc-ilri> /c1 show