====== RAID ======

  * Linux kernel software RAID
  * 3ware hardware RAID
==== Drive numbering ====

If you're looking at the front of the HPC you'll see four rows of drives.  From the bottom:
  * Rows 0 - 2 are SATA, connected to the 3ware hardware RAID card
  * Row 3 is IDE
  
===== Software RAID =====
The Linux kernel has the ''md'' (multiple devices) driver for software RAID devices.  There are currently two 80 GB IDE hard drives connected to the server, ''/dev/hda'' and ''/dev/hdd''.  These were set up as five RAID devices during the install of Rocks/CentOS.
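
Each device can also be inspected directly with ''mdadm'' (assuming it is installed, as it is on a stock Rocks/CentOS system); the device names here match the arrays shown below:

<code># mdadm --detail /dev/md0     # members, state, and sync status of one array
# mdadm --examine /dev/hda2   # RAID superblock of a single member partition</code>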
  
Here is information on their configuration:
  
<code># mount | grep md
/dev/md0 on / type ext3 (rw)
/dev/md3 on /boot type ext3 (rw)
/dev/md2 on /scratch type ext3 (rw)
/dev/md1 on /export type ext3 (rw)
# df -h | grep md
/dev/md0               29G   11G   17G  39% /
/dev/md3              190M   60M  121M  34% /boot
/dev/md2               35G  177M   33G   1% /scratch
/dev/md1               25G  5.5G   18G  24% /export</code>
  
It should be noted that ''/dev/md4'' is being used as swap:
<code># swapon -s
Filename                                Type        Size     Used  Priority
/dev/md4                                partition   2168632  0     -1</code>
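
For reference, swap on an md device is normally enabled by an ''/etc/fstab'' entry of the usual form (this line is illustrative, not copied from this machine):

<code>/dev/md4   swap   swap   defaults   0 0</code>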

A snapshot of the software RAID's health:

<code># cat /proc/mdstat
Personalities : [raid1] [raid0]
md3 : active raid1 hdd1[1] hda1[0]
      200704 blocks [2/2] [UU]

md1 : active raid1 hdd3[1] hda3[0]
      26627648 blocks [2/2] [UU]

md2 : active raid0 hdd5[1] hda5[0]
      36868608 blocks 256k chunks

md4 : active raid1 hdd6[1] hda6[0]
      2168640 blocks [2/2] [UU]

md0 : active raid1 hdd2[1] hda2[0]
      30716160 blocks [2/2] [UU]

unused devices: <none></code>
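
If ''/proc/mdstat'' ever reports a degraded mirror (e.g. ''[2/1] [U_]''), the failed member can be swapped out with ''mdadm''.  A rough sketch using ''md1'' and ''hdd3'' as examples (substitute the real device names):

<code># mdadm /dev/md1 --fail /dev/hdd3     # mark the failing member as faulty
# mdadm /dev/md1 --remove /dev/hdd3   # detach it from the array
# (replace the physical drive and recreate its partitions)
# mdadm /dev/md1 --add /dev/hdd3      # re-add; the mirror resyncs automatically
# cat /proc/mdstat                    # watch the rebuild progress</code>
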
===== Hardware RAID =====
  
There is a utility, ''tw_cli'', which can be used to control the hardware RAID.  The hardware RAID has three arrays, all RAID 5.  Each "unit" (row) is one array:

| 8 | 9 | 10 | 11 |
| 4 | 5 | 6 | 7 |
| 0 | 1 | 2 | 3 |
  
Study the output of ''show'' to know which controller to manage.  Then you can use ''/c1 show'' to show the status of that particular controller.  Things to look for (see the examples after this list):
  * Which controller is active? (c0, c1, etc.)
  * Which unit is degraded? (u0, u1, u2, etc.)
  * Which port has failed? (p0, p1, etc.)
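
For example (the controller and unit numbers here are illustrative; use the ones that ''show'' reports):

<code># tw_cli show          # list all 3ware controllers
# tw_cli /c1 show      # units (u0, u1, ...) and ports (p0, p1, ...) on c1
# tw_cli /c1/u1 show   # details for a single unit</code>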
  
Remove the faulty port:
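
A sketch, assuming the failed drive is port ''p4'' on controller ''c1'' (substitute the numbers found above):

<code># tw_cli /c1/p4 remove   # detach the failed drive from the controller
# (physically swap the drive)
# tw_cli /c1 rescan      # detect the replacement drive
# (the rebuild often starts automatically; otherwise check "tw_cli help"
#  for the "start rebuild" syntax on this version)</code>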