...
```
1I:1:1: OK: 500 GB: HP MM0500FAMYT : HPD3
1I:1:2: OK: 500 GB: HP MM0500FAMYT : HPD3
2E:1:1: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:2: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:3: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:4: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:5: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:6: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:7: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:8: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:9: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:10: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:11: OK: 600 GB: HP EF0600FARNA : HPD2
2E:1:12: OK: 600 GB: HP EF0600FARNA : HPD2
```
How Karl Monitors the NFS RAID Status
There is a cron job on suncatfs1 that checks the status of the RAID cards
every two hours. The current status is written to this file:
```
$ cat /afs/slac/g/scs/systems/system.info/suncatfs1/cciss_vol_status
/dev/cciss/c0d0: (Smart Array P410i) RAID 1 Volume 0 status: OK.
/dev/cciss/c2d0: (Smart Array P812) RAID 6 Volume 0 status: OK. At least one spare drive designated. At least one spare drive remains available.
/dev/cciss/c2d0: (Smart Array P812) Enclosure D2600 SAS AJ940A (S/N: CN894700M7 ) on Bus 2, Physical Port 2E status: OK.
```
If the status changes (that is, if the output differs at all), you and I will receive an email.
The internal OS disks make up the RAID 1 volume; the NFS disks make up the RAID 6 volume.
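The check described above can be sketched as a small shell script suitable for a cron job. This is a minimal sketch, not the actual script: the mail recipients, the `run` argument convention, the temp-file handling, and the exact `cciss_vol_status` invocation are all assumptions; only the status-file path comes from the text above.

```shell
#!/bin/sh
# Hypothetical sketch of the RAID-monitoring cron job on suncatfs1.
# Only STATUS_FILE is taken from the document; everything else is assumed.

STATUS_FILE=/afs/slac/g/scs/systems/system.info/suncatfs1/cciss_vol_status
MAILTO="karl@example.org you@example.org"   # assumed addresses

# Compare the stored status ($1) against fresh output ($2).
# If they differ at all, mail the diff and record the new status.
check_status_change() {
    if diff -u "$1" "$2" > /dev/null 2>&1; then
        return 0                             # unchanged, nothing to do
    fi
    diff -u "$1" "$2" | mail -s "suncatfs1 RAID status changed" $MAILTO
    cp "$2" "$1"                             # update the stored status
    return 1                                 # changed
}

# Only run the hardware check when invoked with "run", e.g. from cron:
#   0 */2 * * * /usr/local/sbin/check_raid_status run   (assumed schedule/path)
if [ "${1:-}" = "run" ]; then
    tmp=$(mktemp) || exit 1
    cciss_vol_status /dev/cciss/c0d0 /dev/cciss/c2d0 > "$tmp" 2>&1
    check_status_change "$STATUS_FILE" "$tmp"
    rm -f "$tmp"
fi
```

Storing the last-seen output in a file and diffing against it is what makes the check stateful: cron itself only knows how to run the script every two hours, so the "alert only on change" behavior has to live in the script.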