raid
  * Linux kernel software RAID
  * 3ware hardware RAID
==== Drive numbering ====

If you're looking at the front of the HPC you'll see four rows of drives.

  * Rows 0 - 2 are SATA, connected to the hardware 3ware RAID card
  * Row 3 is IDE
===== Software RAID =====
The Linux kernel has the ''md'' software RAID driver; this machine uses several ''md'' devices. Here is information on their configuration:

<code>
/dev/md0 on / type ext3 (rw)
/dev/md3 on /boot type ext3 (rw)
/dev/md2 on /scratch type ext3 (rw)
/dev/md1 on /export type ext3 (rw)
# df -h | grep md
</code>

Swap is configured as follows:

<code>
Filename                        Type            Size    Used    Priority
</code>
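For scripting against this layout, the mount lines above can be parsed into a device-to-mount-point map. A minimal Python sketch — the sample text is copied from the output above, and ''parse_mounts'' is a hypothetical helper name:

```python
# Map each md device to its mount point by parsing `mount`-style output.
# Sample lines are taken verbatim from the output above.
mount_output = """\
/dev/md0 on / type ext3 (rw)
/dev/md3 on /boot type ext3 (rw)
/dev/md2 on /scratch type ext3 (rw)
/dev/md1 on /export type ext3 (rw)"""

def parse_mounts(text):
    """Return {device: mount_point} for lines like 'DEV on DIR type FS (OPTS)'."""
    mounts = {}
    for line in text.splitlines():
        parts = line.split()
        # Expected shape: [dev, 'on', dir, 'type', fs, opts]
        if len(parts) >= 3 and parts[1] == "on":
            mounts[parts[0]] = parts[2]
    return mounts

print(parse_mounts(mount_output))
# {'/dev/md0': '/', '/dev/md3': '/boot', '/dev/md2': '/scratch', '/dev/md1': '/export'}
```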
A snapshot of the software RAID's health:

<code>
Personalities : [raid1] [raid0]
md3 : active raid1 hdd1[1]

md1 : active raid1 hdd3[1] hda3[0]

md2 : active

md4 : active raid1 hdd6[1] hda6[0]
      2168640 blocks [2/2] [UU]

md0 : active

unused devices: <none>
</code>
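The ''[2/2] [UU]'' field above means two of two mirror members are active; a degraded array shows fewer active members (e.g. ''[2/1] [U_]''). A minimal Python sketch that flags degraded arrays in ''/proc/mdstat'' text — the healthy sample is from the snapshot above, the degraded line is a made-up illustration:

```python
import re

# Flag degraded md arrays by checking the "[n/m]" field in /proc/mdstat:
# n = devices the array expects, m = devices currently active.
def degraded_arrays(mdstat_text):
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"(md\d+) :", line)
        if m:
            current = m.group(1)
        m = re.search(r"\[(\d+)/(\d+)\]", line)
        if m and current:
            expected, active = int(m.group(1)), int(m.group(2))
            if active < expected:
                degraded.append(current)
    return degraded

healthy = "md4 : active raid1 hdd6[1] hda6[0]\n      2168640 blocks [2/2] [UU]\n"
# Hypothetical degraded array, for illustration only:
broken = "md1 : active raid1 hda1[0]\n      1000000 blocks [2/1] [U_]\n"

print(degraded_arrays(healthy + broken))  # ['md1']
```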
There is a utility, ''tw_cli'', which can be used to control the hardware RAID. The hardware RAID has three arrays, all RAID 5. Each "row" of drives corresponds to the following port numbers:

| 8 | 9 | 10 | 11 |
| 4 | 5 | 6 | 7 |
| 0 | 1 | 2 | 3 |

Study the output of ''tw_cli'' to see the state of the arrays and drives.
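Reading the table as printed, a port number maps to a chassis position as ''row = port / 4'' and ''column = port % 4''. A small sketch, assuming rows are numbered 0 at the bottom (ports 0 - 3) and columns run left to right — the numbering convention is an assumption, not stated on this page:

```python
# Locate a 3ware port number in the chassis, per the port table above.
# Assumption (not stated on the page): rows count 0 at the bottom
# (ports 0-3) up to 2 at the top (ports 8-11); columns run left to right.
def port_position(port):
    if not 0 <= port <= 11:
        raise ValueError("SATA ports are 0-11")
    row, column = divmod(port, 4)
    return row, column

print(port_position(10))  # (2, 2): top row, third drive from the left
```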
Rebuild the degraded array with ''tw_cli''.

Check the status of the rebuild by monitoring the system logs:
<code>
3w-9xxx: scsi1: AEN: INFO (0x04:
</code>
This sucks:
<code>
3w-9xxx: scsi1: AEN: INFO (0x04:
3w-9xxx: scsi1: AEN: ERROR (0x04:
</code>
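These 3w-9xxx AEN messages can be picked out of the kernel log programmatically. A sketch that groups them by severity — the sample lines mirror the truncated excerpts above, and ''aen_events'' is a hypothetical helper name:

```python
# Filter kernel log lines for 3ware (3w-9xxx) AEN events, grouped by severity.
# Sample lines mirror the (truncated) log excerpts above.
log = """\
3w-9xxx: scsi1: AEN: INFO (0x04:
3w-9xxx: scsi1: AEN: ERROR (0x04:
kernel: something unrelated"""

def aen_events(text):
    events = {"INFO": [], "WARNING": [], "ERROR": []}
    for line in text.splitlines():
        if "3w-9xxx" in line and "AEN:" in line:
            # Severity is the first word after "AEN:".
            severity = line.split("AEN:")[1].split()[0]
            events.setdefault(severity, []).append(line)
    return events

print(len(aen_events(log)["ERROR"]))  # 1
```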
<code>
Password:
//

Unit  UnitType  Status
------------------------------------------------------------------------------
u0    RAID-5
u1    RAID-5
u2    RAID-5

Port   Status
---------------------------------------------------------------
p0
p1
p2
p3
p4
p5
p6
p7
p8
p9
p10    OK
p11    OK
</code>

Looks like another drive failed.
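The port listing can be scanned for anything that is not ''OK''. A sketch in Python — note the ''DEGRADED'' status below is an assumption for illustration, since the failed port's actual status string is truncated in the capture above:

```python
# Find ports that are not OK in tw_cli port-listing output.
# "DEGRADED" is illustrative: the real failed-port status string
# is truncated in the capture above.
tw_cli_ports = """\
p9     DEGRADED
p10    OK
p11    OK"""

def unhealthy_ports(text):
    bad = []
    for line in text.splitlines():
        parts = line.split()
        # Port rows look like "pN  STATUS ..."; keep any non-OK status.
        if len(parts) >= 2 and parts[0].startswith("p") and parts[1] != "OK":
            bad.append(parts[0])
    return bad

print(unhealthy_ports(tw_cli_ports))  # ['p9']
```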
raid.txt · Last modified: 2010/09/19 23:58 by aorth