RAID Cards
Adaptec
CLI notes
All of our 2450 servers and a handful of old FreeBSD boxes run Adaptec RAID cards.
The CLI program is aaccli on FreeBSD and afacli on Linux.
Once you're in the CLI, open the controller - `open aac0` on FreeBSD or `open afa0` on Linux - everything else is the same regardless of the OS.
Here are the most common things we do:
AAC0> container list /full
Executing: container list /full=TRUE

Num          Total  Oth Chunk  Scsi   Partition                                  Creation        System
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size   State RO Lk Task    Done% Ent Date   Time     Files
----- ------ ------ --- ------ ------ ------ ------------- ----- -- -- ------- ----- --- ------ -------- ------
 0    Mirror 33.9GB            Open   0:01:0 64.0KB:33.9GB Normal                0   071002 05:39:32
 /dev/aacd0          mirror0          0:00:0 64.0KB:33.9GB Normal                1   071002 05:39:32
 1    Mirror 33.9GB            Open   0:02:0 64.0KB:33.9GB Normal                0   071002 05:39:50
 /dev/aacd1          mirror1          0:03:0 64.0KB:33.9GB Normal                1   071002 05:39:50

AAC0> disk list /full
Executing: disk list /full=TRUE

B:ID:L Device Type    Removable media Vendor-ID Product-ID       Rev   Blocks    Bytes/Block Usage            Shared Rate
------ -------------- --------------- --------- ---------------- ----- --------- ----------- ---------------- ------ ----
0:00:0 Disk           N               FUJITSU   MAJ3364MC        3702  71390320  512         Initialized      NO     160
0:01:0 Disk           N               FUJITSU   MAJ3364MC        3702  71390320  512         Initialized      NO     160
0:02:0 Disk           N               FUJITSU   MAJ3364MC        3702  71390320  512         Initialized      NO     160
0:03:0 Disk           N               FUJITSU   MAJ3364MC        3702  71390320  512         Initialized      NO     160

AAC0> disk show smart
Executing: disk show smart

        Smart   Method of        Enable
        Capable Informational    Exception Performance Error
B:ID:L  Device  Exceptions(MRIE) Control   Enabled     Count
------  ------- ---------------- --------- ----------- ------
0:00:0  Y       6                Y         N           0
0:01:0  Y       6                Y         N           0
0:02:0  Y       6                Y         N           0
0:03:0  Y       6                Y         N           0
0:06:0  N

AAC0> task list
Executing: task list
Controller Tasks

TaskId Function Done%   Container State Specific1 Specific2
------ -------- ------- --------- ----- --------- ---------
No tasks currently running on controller

AAC0> controller details
Executing: controller details

Controller Information
----------------------
         Remote Computer: S
             Device Name: S
         Controller Type: PERC 3/Si
             Access Mode: READ-WRITE
Controller Serial Number: Last Six Digits = 8C01D0
         Number of Buses: 1
         Devices per Bus: 15
          Controller CPU: i960 R series
    Controller CPU Speed: 100 Mhz
       Controller Memory: 64 Mbytes
           Battery State: Not Present

Component Revisions
-------------------
                CLI: 1.0-0 (Build #5263)
                API: 1.0-0 (Build #5263)
    Miniport Driver: 2.7-1 (Build #3571)
Controller Software: 2.7-1 (Build #3571)
    Controller BIOS: 2.7-1 (Build #3571)
Controller Firmware: (Build #3571)
Type `exit` to leave the CLI.
All of the mirrors above are healthy, so even if a drive fails you should be OK. If an unexplained system crash occurs, after restarting the system check the mirrors with `container list /f` to see whether one has degraded (one disk has died).
Once in a while a degrading mirror will cause the system to crash, but not usually - usually the failure is silent.
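Since the failure is usually silent, it's worth checking periodically. A minimal sketch of such a check (a hypothetical helper, not an existing tool): save the output of `container list /full` from an aaccli/afacli session to a file and scan it for a dead member. The "Missing" marker comes from the sample output in the "Replace a dead drive" section; "Failed" is an assumption, so verify the exact wording on your controller.

```shell
#!/bin/sh
# check_mirrors: hypothetical post-crash/periodic health check.
# Argument: a file holding saved `container list /full` output.
# Flags a mirror member reported as Missing (per the sample output on
# this page) or Failed (assumed wording - verify on your controller).
check_mirrors() {
    if grep -qE 'Missing|Failed' "$1"; then
        echo "DEGRADED: mirror member missing or failed"
        return 1
    fi
    echo "OK: no missing members"
}
```

The nonzero return makes it easy to wire into cron or a monitoring wrapper.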
Creating/removing mirror
via CLI
First, make sure the new disk(s) are wiped clean; a disk with containers from another system will crash the system when inserted. You can do this by initializing the drive(s) in another system.
Get into the CLI, rescan the controller:
AAC0> controller rescan
to get it to find the newly inserted disks
AAC0> container list
Executing: container list

Num          Total  Oth Chunk  Scsi   Partition
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Mirror 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/sda                             0:01:0 64.0KB:68.3GB

AAC0> disk list
Executing: disk list

B:ID:L Device Type    Blocks    Bytes/Block Usage            Shared Rate
------ -------------- --------- ----------- ---------------- ------ ----
0:00:0 Disk           143374738 512         Initialized      NO     160
0:01:0 Disk           143374738 512         Initialized      NO     160
0:02:0 Disk           143374738 512         Initialized      NO     160
0:03:0 Disk           143374738 512         Initialized      NO     160
If the disks aren't initialized:
AAC0> disk initialize (0,2,0)
AAC0> disk initialize (0,3,0)
AAC0> disk show space
Executing: disk show space

Scsi
B:ID:L      Usage      Size
----------- ---------- -------------
0:00:0      Container  64.0KB:68.3GB
0:00:0      Free       68.3GB:7.50KB
0:01:0      Container  64.0KB:68.3GB
0:01:0      Free       68.3GB:7.50KB
Finally, create the mirror:
AAC0> container create volume /label=MIRROR1 ((0,2,0))
Executing: container create volume /label="MIRROR1" (BUS=0,ID=2,LUN=0)
Container 1 created at /dev/sdb

AAC0> container list
Executing: container list

Num          Total  Oth Chunk  Scsi   Partition
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Mirror 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/sda                             0:01:0 64.0KB:68.3GB
 1    Volume 8.47GB            Valid  0:02:0 64.0KB:8.47GB
 /dev/sdb            MIRROR1

Create the volume on one of the two disks that will be in the mirror (fdisk, format). Then, to mirror:

AAC0> container create mirror 1 (0,3,0)
Executing: container create mirror 1 (BUS=0,ID=3,LUN=0)

AAC0> container list
Executing: container list

Num          Total  Oth Chunk  Scsi   Partition
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Mirror 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/sda                             0:01:0 64.0KB:68.3GB
 1    Mirror 8.47GB            Valid  0:02:0 64.0KB:8.47GB
 /dev/sdb            MIRROR1         0:03:0 64.0KB:8.47GB

AAC0> task list
Executing: task list
Controller Tasks

TaskId Function Done%   Container State Specific1 Specific2
------ -------- ------- --------- ----- --------- ---------
 100   Bld/Vfy  1.3%    1         RUN   00000000  00000000
To remove: MAKE SURE THE DRIVE IS NOT MOUNTED FIRST!
Example: delete 2nd mirror installed in slots 2-3
AAC0> container delete 1
Executing: container delete 1

AAC0> container list
Executing: container list

Num          Total  Oth Chunk  Scsi   Partition
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Mirror 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/sda                             0:01:0 64.0KB:68.3GB

AAC0> enclosure prepare slot 0 2
Executing: enclosure prepare slot 0 2
AAC0> enclosure prepare slot 0 3
Executing: enclosure prepare slot 0 3
You may now remove drives
To create a mirror from a volume (see the next section, where we add drives as volumes):
CLI > open aac0
AAC0> disk list
Executing: disk list

C:ID:L Device Type    Blocks    Bytes/Block Usage            Shared Rate
------ -------------- --------- ----------- ---------------- ------ ----
0:00:0 Disk           143374738 512         Initialized      NO     160
0:01:0 Disk           143374738 512         Initialized      NO     40
0:02:0 Disk           286749488 512         Initialized      NO     320
0:03:0 Disk           286749488 512         Initialized      NO     320

AAC0> container list
Executing: container list

Num          Total  Oth Stripe Scsi   Partition
Label Type   Size   Ctr Size   Usage  C:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Volume 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/aacd0          MIRROR0
 1    Volume 136GB             Open   0:02:0 64.0KB: 136GB
 /dev/aacd1          MIRROR1

AAC0> container create mirror 1 (0,3,0)
Executing: container create mirror 1 (BUS=0,ID=3,LUN=0)

AAC0> container list
Executing: container list

Num          Total  Oth Stripe Scsi   Partition
Label Type   Size   Ctr Size   Usage  C:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Volume 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/aacd0          MIRROR0
 1    Mirror 136GB             Open   0:02:0 64.0KB: 136GB
 /dev/aacd1          MIRROR1         0:03:0 64.0KB: 136GB

AAC0> task list
Executing: task list
Controller Tasks

TaskId Function Done%   Container State Specific1 Specific2
------ -------- ------- --------- ----- --------- ---------
 100   Bld/Vfy  6.5%    1         RUN   00000000  00000000
If this happens:
AAC0> container create mirror 1 (0:3:0)
Executing: container create mirror 1 (BUS=0,ID=3,LUN=0)
Command Error: <The controller was not able to mirror the specified container.>

AAC0> disk initialize /always (0,3,0)
Executing: disk initialize /always=TRUE (BUS=0,ID=3,LUN=0)
Then:
AAC0> container create mirror 1 (0:3:0)
Executing: container create mirror 1 (BUS=0,ID=3,LUN=0)
In the case of a drive that won't remirror due to a "Mirror Failover Container 0 no failover assigned" error:
container set failover 0 (0,0,0)
where 0,0,0 is the new empty drive
via BIOS configuration utility
Press Ctrl-A to enter the configuration utility while the Adaptec BIOS is booting.
Container configuration utility -> Initialize drives
  (Ins to select all drives you wish to use - DO NOT CHOOSE DRIVES IN USE IN AN EXISTING ARRAY)
Create container -> (select the newly-initialized drives)
  Container type: RAID1
  Container label: MIRROR0
  (confirm correct size)
  Read caching: Y
  Write caching: enable when protected
(Done)
Repeat the process if you're building another mirror. Call it MIRROR1
Procedure for setting up two half-mirrors (two volumes) which you can later convert to two mirrors:
Create array -> (select 1st drive) (Enter)
  Container type: Volume
  Container label: MIRROR0
  (confirm correct size)
  Read caching: Y
  Write caching: enable always (answer Y to both warnings)
(Done)
Create array -> (select 3rd drive)
  Container type: Volume
  Container label: MIRROR1
  (confirm correct size)
  Read caching: Y
  Write caching: enable always (answer Y to both warnings)
(Done)
(Exit the utility)
To remove a drive:
Go to the "Manage Arrays" screen, move the cursor over the mirror you want to delete and press the delete key.
Remove a live drive
Remove, via CLI:
AAC0> container list
Executing: container list

Num          Total  Oth Chunk  Scsi   Partition
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Mirror 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/aacd0                          0:01:0 64.0KB:68.3GB

AAC0> container unmirror 0

AAC0> container list
Executing: container list

Num          Total  Oth Stripe Scsi   Partition
Label Type   Size   Ctr Size   Usage  C:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Volume 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/aacd0          MIRROR0

AAC0> enclosure prepare slot 0 1
(remove the drive)
To replace the drive:
Put in the new drive and, if necessary, run `controller rescan`.
AAC0> container create mirror 0 1
AAC0> task list
(scrubbing)
Note: if you unmirror an unhealthy mirror, it will keep the healthy drive; otherwise it will remove (whichever is listed as) the 2nd drive in the mirror.
Replace a dead ("missing") drive
AAC0> container list
Executing: container list

Num          Total  Oth Chunk  Scsi   Partition
Label Type   Size   Ctr Size   Usage  B:ID:L Offset:Size
----- ------ ------ --- ------ ------ ------ -------------
 0    Mirror 68.3GB            Open   0:00:0 64.0KB:68.3GB
 /dev/aacd0                          0:01:0 64.0KB:68.3GB
 1    Mirror 68.3GB            Open   0:02:0 64.0KB:68.3GB
 /dev/aacd1          MIRROR1         ?:??:? - Missing -

(put in new drive)
No tasks showing up after 2min? Rescan:
AAC0> controller rescan
AAC0> task list
Executing: task list
Controller Tasks

TaskId Function Done%   Container State Specific1 Specific2
------ -------- ------- --------- ----- --------- ---------
 100   Rebuild  0.0%    1         RUN   00000000  00000000
AAC0>
To replace a failed drive on the Dells, prepare the slot and put in the new (replacement) drive; it should start scrubbing automatically.
If it doesn't start scrubbing (there is a delay, so wait 2 min):
AAC0> container set failover (scsi id of new disk)
Or if that fails
AAC0> container unmirror 1
That will demote it to a volume; then `container create`, etc.
Enabling/disabling write cache
Write cache should ideally only be enabled when there is a backup battery present. Write cache speeds up I/O performance by allowing the RAID card to write data to a buffer before committing it to disk. This data lives in RAM. If power is lost and there is no backup battery, that data is lost. However, you can force write cache to be on regardless of whether a battery is present.
Turning write cache on:
AFA0> container show cache 1
Executing: container show cache 1
Global Container Read Cache Size  : 0
Global Container Write Cache Size : 51380224
Read Cache Setting                : ENABLE
Write Cache Setting               : DISABLE
Write Cache Status                : Inactive, battery not present

AFA0> container set cache /unprotected=1 /write_cache_enable=1 0
Executing: container set cache /unprotected=TRUE /write_cache_enable=TRUE 0
AFA0> container set cache /unprotected=1 /write_cache_enable=1 1
Executing: container set cache /unprotected=TRUE /write_cache_enable=TRUE 1

AFA0> container show cache 0
Executing: container show cache 0
Global Container Read Cache Size  : 483328
Global Container Write Cache Size : 51380224
Read Cache Setting                : ENABLE
Write Cache Setting               : ENABLE ALWAYS
Write Cache Status                : Active, not protected, battery not present

AFA0> container show cache 1
Executing: container show cache 1
Global Container Read Cache Size  : 483328
Global Container Write Cache Size : 51380224
Read Cache Setting                : ENABLE
Write Cache Setting               : ENABLE ALWAYS
Write Cache Status                : Active, not protected, battery not present
Shutting off write cache:
FASTCMD> open afa0
Executing: open "afa0"
AFA0> container set cache 0 /write_cache_enable=0
Too many parameters
AFA0> container set cache /write_cache_enable=0 0
Executing: container set cache /write_cache_enable=FALSE 0
AFA0> container set cache /write_cache_enable=0 1
Executing: container set cache /write_cache_enable=FALSE 1

AFA0> container show cache 0
Executing: container show cache 0
Global Container Read Cache Size  : 483328
Global Container Write Cache Size : 51380224
Read Cache Setting                : ENABLE
Write Cache Setting               : DISABLE
Write Cache Status                : Inactive, battery not present

AFA0> container show cache 1
Executing: container show cache 1
Global Container Read Cache Size  : 483328
Global Container Write Cache Size : 51380224
Read Cache Setting                : ENABLE
Write Cache Setting               : DISABLE
Write Cache Status                : Inactive, battery not present

AFA0> exit
Executing: exit
LSI/MegaRaid
CLI notes
There are 2 shell-based options to manage the RAID card: megamgr and megarc
megamgr is a curses-based program which closely mimics the BIOS config utility. To run it, you must be in the directory where it's installed:
cd /usr/local/sbin/; megamgr
When you press ESC in megamgr, nothing happens immediately; you must press another key before the ESC takes effect. So for instance, after pressing ESC press the up or down arrow (which is innocuous since it won't cause any undesired changes).
There is also a CLI-based program called megarc
Creating/removing mirror
via CLI
This will add a new mirror from drives located in the 2nd and 3rd slots:
cd /usr/local/sbin/; megarc -addCfg -a0 -R1[0:2,0:3] WB CIO
echo "scsi add-single-device 0 0 1 0" > /proc/scsi/scsi
To delete that mirror:
cd /usr/local/sbin/; megarc -delLD -a0 -L1
echo "scsi remove-single-device 0 0 1 0" > /proc/scsi/scsi
This will add a new mirror from drives located in the 4th and 5th slots:
cd /usr/local/sbin/; megarc -addCfg -a0 -R1[0:4,0:5] WT CIO
echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi
To delete that mirror:
cd /usr/local/sbin/; megarc -delLD -a0 -L2
echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi
None of the above is data destructive. You can remove a mirror and re-create it and all the data will still be there...as long as you don't initialize the drive.
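The megarc + `/proc/scsi/scsi` pairing above can be wrapped in a small helper. This is a sketch, not a tested tool: the flags are copied verbatim from the examples above (WB = write-back cache; the 4th/5th-slot example used WT instead), `-a0` assumes adapter 0, and `MEGARC`/`PROC` are overridable so you can dry-run it with stubs before touching a live box.

```shell
#!/bin/sh
# Sketch wrapper for megarc mirror add/remove plus the Linux SCSI
# hotplug echo. MEGARC and PROC are overridable for dry runs.
MEGARC=${MEGARC:-/usr/local/sbin/megarc}
PROC=${PROC:-/proc/scsi/scsi}

add_mirror() {  # usage: add_mirror LD SLOT_A SLOT_B   e.g. add_mirror 1 2 3
    $MEGARC -addCfg -a0 "-R1[0:$2,0:$3]" WB CIO
    echo "scsi add-single-device 0 0 $1 0" > "$PROC"
}

del_mirror() {  # usage: del_mirror LD   (unmount its filesystems first!)
    $MEGARC -delLD -a0 "-L$1"
    echo "scsi remove-single-device 0 0 $1 0" > "$PROC"
}
```

Run it with `MEGARC=echo PROC=/tmp/out` first to see exactly what it would do.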
via BIOS configuration utility or Megamgr
Totally new configuration:
Configure -> New configuration
  (Space twice to select the 2 drives, Enter to end selection)
  F10 to configure
  (Space to select Span-1)
  F10 to configure
    RAID = 1
    StripeSize = 64k
    Write policy = WRBACK
    Read policy = NORMAL
    Cache policy = Cached IO
  ESC, Accept
NOTE: when adding an additional drive, choose view/add not New Configuration – otherwise you'll wipe out the existing mirror/config.
Adding to existing configuration:
Configure -> View/Add Configuration
  ([Space] on the 2 drives) -> [Enter]
  F10
  [Space] Span-1
  F10
  Advanced -> Cache Policy = CachedIO
  Accept
  Save
  Exit
If this was added to a running machine, to get the device to show up (in linux) run:
echo "scsi add-single-device 0 0 1 0" > /proc/scsi/scsi
Make sure the SCSI transfer rate is 320M under Objects -> Channel -> Channel 1.
Go to Objects -> Adapter and make sure the alarm is enabled.
To remove a mirror:
Objects->Logical Drive->(move cursor to drive)->F5
If on a running server (assuming you unmounted it first):
echo "scsi remove-single-device 0 0 1 0" > /proc/scsi/scsi
---
Create mirror from volume:
- Remove drive (megarc -delLD -a0 -L1)
- Then re-add config (I did it with megamgr)
- Then fail the new drive and rebuild it
Move drives from one LSI card to another
MAKE SURE ALL DRIVES ARE REMOVED BEFORE COMPLETING!!
On new card:
Configure -> New configuration
F10, Save
Power down
Insert drives
Reboot
DO NOT INITIALIZE DRIVES!
Alternate (if the drives are still in when you clear the config): you can re-create mirrors without destroying data as long as they are created the same way and the drives are not initialized.
Configure -> Clear configuration
Easy configuration (create as normal) - DON'T INITIALIZE
Reboot
Converting a broken mirror to a volume and back to a mirror
Assumes drive #1 is dead/out
- convert #0 to volume. New config, #0 = volume, #2,#3 = R1
- put a cleaned drive into slot #1: with all drives out, put the drive in #1; new config, #1 = volume, init, delete LD
- THIS DOESN’T WORK: new config, #0,#1 = R1, #2,#3 = R1. After reboot end up in grub shell – RAID card confused about which is boot drive
- new config, #0,#1 = R1, #2,#3 = R1. Power down. Remove #1. Power up then insert #1 = mirror 0 rebuilds
Move 2 mirrors from 2 different machines into 1
Premise: Orig machine has 2 R1’s, taking mirror #1 from other machine and replacing mirror #1 on target machine
- power off target machine, remove #1 mirror drives. Unseat #0 mirror
- power up. Clear config. New config. Reboot to confirm
- power down. Reinsert mirror #0 and mirror #1 from other machine
- power up. View/Add config, select DISK config, save, reboot
- will see original #0 mirror and #1 mirror from other machine
3ware
CLI (9xxx)
Replacing a failed drive
tw_cli /c0 show all

Port   Status     Unit   Size      Blocks      Serial
---------------------------------------------------------------
p0     OK         u0     1.82 TB   3907029168  WD-WCAVY0647904
p1     OK         u0     1.82 TB   3907029168  WD-WCAVY0608298
p2     OK         u1     1.82 TB   3907029168  WD-WCAVY0629856
p3     OK         u1     1.82 TB   3907029168  WD-WCAVY0627316
p4     OK         -      1.82 TB   3907029168  WD-WCAVY0564054
then you should run:
tw_cli /c0 rescan
tw_cli /c0 show all

Port   Status     Unit   Size      Blocks      Serial
---------------------------------------------------------------
p0     OK         u0     1.82 TB   3907029168  WD-WCAVY0647904
p1     OK         u0     1.82 TB   3907029168  WD-WCAVY0608298
p2     OK         u1     1.82 TB   3907029168  WD-WCAVY0629856
p3     OK         u1     1.82 TB   3907029168  WD-WCAVY0627316
p4     DEGRADED   u1     1.82 TB   3907029168  WD-WCAVY0564054
and the array should show that it is rebuilding
BUT, if you attempt to rebuild with a command like that, and you get:
Error: The following drive(s) cannot be used [4].
you need to remove it, then rescan, then rebuild:
tw_cli /c0/p4 remove
tw_cli /c0 rescan
tw_cli /c0/u0 start rebuild disk=4
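The remove/rescan/rebuild dance above can be wrapped as a sketch helper. `TW` is overridable (e.g. `TW=echo`) so you can dry-run the sequence before touching a live controller; the argument order mirrors the tw_cli paths used above.

```shell
#!/bin/sh
# Sketch of the 9xxx remove/rescan/rebuild sequence as one helper.
# TW is overridable for dry runs (TW=echo).
TW=${TW:-tw_cli}

force_rebuild() {  # usage: force_rebuild CONTROLLER PORT UNIT  e.g. force_rebuild c0 4 u0
    $TW "/$1/p$2" remove
    $TW "/$1" rescan
    $TW "/$1/$3" start rebuild "disk=$2"
}
```

For example, `TW=echo force_rebuild c0 4 u0` just prints the three commands it would run.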
CLI (8xxx)
Reference guide: ftp://ftp.rackable.com/public/Technical%20Support/Pdf%20files/3Ware/7000_8000/CLI-UserGuide.pdf
Replacing a failed drive
After replacing the 2 dead drives, we see:
tw_cli info c0

Controller: c0
-------------
Driver:   1.50.01.002
Model:    7500-8
FW:       FE7X 1.05.00.068
BIOS:     BE7X 1.08.00.048
Monitor:  ME7X 1.01.00.040
Serial #: F11605A3180172
PCB:      Rev3
PCHIP:    1.30-33
ACHIP:    3.20

# of units: 3
Unit 0: JBOD   186.31 GB (  390721968 blocks): OK
Unit 1: RAID 5 465.77 GB (  976790016 blocks): DEGRADED
Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED

# of ports: 8
Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)
Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)
Port 2: WDC WD2000 0.00 MB (0 blocks): OK(NO UNIT)
Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)
Port 4: WDC WD2000 0.00 MB (0 blocks): OK(NO UNIT)
Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)
Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)
Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)
which is to say there are 2 drives which don't belong to any units and don't have any size. They need to be removed from the controller:
tw_cli maint remove c0 p2
tw_cli maint remove c0 p4
although it seems like we should be able to add them back in with
tw_cli maint add c0 p2 jbod
tw_cli maint add c0 p2 spare
it doesn't work, and there's no rescan command, so we resort to rebooting.
We won't be able to boot all the way up, however, because the 2 drives will turn into JBODs and throw off the device ordering:
twed0: <Unit 0, JBOD, Normal> on twe0
twed0: 190782MB (390721968 sectors)
twed1: <Unit 1, RAID5, Degraded> on twe0
twed1: 476948MB (976790016 sectors)
twed2: <Unit 2, JBOD, Normal> on twe0
twed2: 239372MB (490234752 sectors)
twed3: <Unit 4, JBOD, Normal> on twe0
twed3: 239372MB (490234752 sectors)
twed4: <Unit 5, RAID5, Degraded> on twe0
twed4: 715422MB (1465185024 sectors)
twed5: <Unit 0, RAID5, Normal> on twe1
twed5: 715422MB (1465185024 sectors)
twed6: <Unit 4, RAID5, Normal> on twe1
twed6: 715422MB (1465185024 sectors)
So whereas twed2 used to be a RAID5 device, it got pushed down (to twed4?) by the standalone JBOD drives. We could edit the fstab and stop the OS from trying to mount the other devices; after (or during) the rebuild we could reboot, and the next time the system comes up the devices will fall back to their regular ordering. We usually opt instead, over the serial console, to enter single-user mode (which is automatic with the failed mounts) and do the remirroring:
mount /dev/twed0s1g /usr    (tw_cli is in /usr, so we need to mount it manually)

tw_cli info c0

Controller: c0
-------------
Driver:   1.50.01.002
Model:    7500-8
FW:       FE7X 1.05.00.068
BIOS:     BE7X 1.08.00.048
Monitor:  ME7X 1.01.00.040
Serial #: F11605A3180172
PCB:      Rev3
PCHIP:    1.30-33
ACHIP:    3.20

# of units: 5
Unit 0: JBOD   186.31 GB (  390721968 blocks): OK
Unit 1: RAID 5 465.77 GB (  976790016 blocks): DEGRADED
Unit 2: JBOD   233.76 GB (  490234752 blocks): OK
Unit 4: JBOD   233.76 GB (  490234752 blocks): OK
Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED

# of ports: 8
Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)
Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)
Port 2: WDC WD2500SB-01RFA0 WD-WMANK3040813 233.76 GB (490234752 blocks): OK(unit 2)
Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)
Port 4: WDC WD2500SB-01RFA0 WD-WMANK3356318 233.76 GB (490234752 blocks): OK(unit 4)
Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)
Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)
Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)
The 2 new drives show up as JBODs, which is fine; we can allocate them into a mirror, but first we must delete the JBOD units. BE VERY CAREFUL about this: double-check that the unit you are deleting has one member and that its member is on the port which contains the new drive.
tw_cli maint deleteunit c0 u2
Deleting unit /ct0/u2 ...
twed2: detached
Done.

tw_cli maint deleteunit c0 u4
Deleting unit /ct0/u4 ...
twed3: detached
Done.

tw_cli info c0

Controller: c0
-------------
Driver:   1.50.01.002
Model:    7500-8
FW:       FE7X 1.05.00.068
BIOS:     BE7X 1.08.00.048
Monitor:  ME7X 1.01.00.040
Serial #: F11605A3180172
PCB:      Rev3
PCHIP:    1.30-33
ACHIP:    3.20

# of units: 3
Unit 0: JBOD   186.31 GB (  390721968 blocks): OK
Unit 1: RAID 5 465.77 GB (  976790016 blocks): DEGRADED
Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): DEGRADED

# of ports: 8
Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)
Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)
Port 2: WDC WD2500SB-01RFA0 WD-WMANK3040813 233.76 GB (490234752 blocks): OK(NO UNIT)
Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)
Port 4: WDC WD2500SB-01RFA0 WD-WMANK3356318 233.76 GB (490234752 blocks): OK(NO UNIT)
Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)
Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)
Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)
Now the 2 drives show up as full size and available. We can rebuild:
tw_cli maint rebuild c0 u1 p2
Rebuild started on unit /c0/u1
AEN: <twed1: rebuild started>

tw_cli maint rebuild c0 u5 p4
Rebuild started on unit /c0/u5
AEN: <twed1: rebuild started>

tw_cli info c0

Controller: c0
-------------
Driver:   1.50.01.002
Model:    7500-8
FW:       FE7X 1.05.00.068
BIOS:     BE7X 1.08.00.048
Monitor:  ME7X 1.01.00.040
Serial #: F11605A3180172
PCB:      Rev3
PCHIP:    1.30-33
ACHIP:    3.20

# of units: 3
Unit 0: JBOD   186.31 GB (  390721968 blocks): OK
Unit 1: RAID 5 465.77 GB (  976790016 blocks): REBUILDING (0%)
Unit 5: RAID 5 698.65 GB ( 1465185024 blocks): REBUILDING (0%)

# of ports: 8
Port 0: WDC WD2000JB-00KFA0 WD-WCAMT1451690 186.31 GB (390721968 blocks): OK(unit 0)
Port 1: WDC WD2500JB-00GVC0 WD-WCAL78219488 232.88 GB (488397168 blocks): OK(unit 1)
Port 2: WDC WD2500SB-01RFA0 WD-WMANK3040813 233.76 GB (490234752 blocks): OK(unit 1)
Port 3: WDC WD2500JB-00GVC0 WD-WMAL73882417 232.88 GB (488397168 blocks): OK(unit 1)
Port 4: WDC WD2500SB-01RFA0 WD-WMANK3356318 233.76 GB (490234752 blocks): OK(unit 5)
Port 5: WDC WD2500JB-00GVA0 WD-WMAL71338097 232.88 GB (488397168 blocks): OK(unit 5)
Port 6: WDC WD2500JB-32EVA0 WD-WMAEH1301595 232.88 GB (488397168 blocks): OK(unit 5)
Port 7: WDC WD2500JB-00GVC0 WD-WCAL78165566 232.88 GB (488397168 blocks): OK(unit 5)
And now we can reboot; the rebuild will continue in the background, and the devices will come up in the correct order:
twed0: <Unit 0, JBOD, Normal> on twe0
twed0: 190782MB (390721968 sectors)
twed1: <Unit 1, RAID5, Rebuilding> on twe0
twed1: 476948MB (976790016 sectors)
twed2: <Unit 4, RAID5, Rebuilding> on twe0
twed2: 715422MB (1465185024 sectors)
twed3: <Unit 0, RAID5, Normal> on twe1
twed3: 715422MB (1465185024 sectors)
twed4: <Unit 4, RAID5, Normal> on twe1
twed4: 715422MB (1465185024 sectors)
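A hypothetical nightly check (a sketch, not an existing tool) works for either generation of card: both `tw_cli /c0 show all` (9xxx) and `tw_cli info c0` (8xxx) print DEGRADED or REBUILDING for unhealthy units in the transcripts above, so grepping saved output is enough to raise an alert.

```shell
#!/bin/sh
# check_3ware: sketch health check over saved tw_cli output.
# Argument: a file holding `tw_cli /c0 show all` or `tw_cli info c0` output.
# The state strings are taken from the transcripts on this page; other
# unhealthy states may exist - check the CLI guide for your firmware.
check_3ware() {
    if grep -qE 'DEGRADED|REBUILDING' "$1"; then
        echo "ALERT: 3ware unit not healthy"
        return 1
    fi
    echo "3ware units OK"
}
```

Save the tw_cli output to a file from cron, then call `check_3ware` on it and mail yourself if it returns nonzero.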
Areca
Downloads http://www.areca.com.tw/support/main.htm
CLI Manual http://www.areca.us/support/download/RaidCards/Documents/Manual_Spec/CLIManual.zip
CLI
cli64 vsf info
cli64 rsf info
cli64 disk info
cli64 event info
cli64 vsf check vol=1
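The status commands above can be run together as a daily report. This is a sketch: `CLI` is overridable (e.g. `CLI=echo` for a dry run), and cli64's path and non-interactive invocation should be verified on the box before relying on it.

```shell
#!/bin/sh
# raid_report: sketch daily status report using the cli64 commands above.
# CLI is overridable for dry runs (CLI=echo).
CLI=${CLI:-cli64}

raid_report() {
    for cmd in "vsf info" "rsf info" "disk info" "event info"; do
        echo "== $cmd =="
        $CLI $cmd
    done
}
```

Pipe the output of `raid_report` to mail from cron to get the controller state every morning.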
Updating F/W
cd /tmp
wget http://www.areca.us/support/download/RaidCards/BIOS_Firmware/ARC1160.zip
(unzip the archive, then:)
cli64 sys updatefw path=/tmp/ARC1160/149-20101202/ARC1160FIRM.BIN