> 50 hours? Even under medium/high load mine can rebuild in 24-26 hours and about 5 hours for low/average load.

5 hours? That can't be right. Should be closer to 50 hours.
50 hours? Even under medium/high load mine can rebuild in 24-26 hours and about 5 hours for low/average load.
> It's not realistic because it has to read 44TB from the other drives in that example. That comes out to ~2400 MB/s and the IOP is not capable of that.

2TB in 5 hours comes to 111 MB/s. That is pretty good!
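The back-of-the-envelope numbers in the posts above check out; a minimal sketch of the arithmetic (the 2TB and 44TB figures are the ones mentioned in the thread):

```python
# Sanity-check the rebuild-time arithmetic from the posts above.

def rebuild_throughput_mb_s(data_tb: float, hours: float) -> float:
    """Average throughput (MB/s) needed to move data_tb terabytes in the given hours."""
    return data_tb * 1_000_000 / (hours * 3600)

# A single 2TB drive rebuilt in 5 hours: ~111 MB/s written to the new drive.
print(round(rebuild_throughput_mb_s(2, 5)))   # 111

# Reading 44TB of surviving data in the same 5 hours would need ~2444 MB/s
# aggregate, which is why the 5-hour figure is unrealistic for a large array.
print(round(rebuild_throughput_mb_s(44, 5)))  # 2444
```

So both posters are right: 111 MB/s is a plausible single-drive rebuild rate, but scaling the same 5-hour window to a large array's total read load is not.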
Is there any info on how the 1280 sets up a RAID10? How do I tell which drive has which parts on it? Is the info stored on the drives or in the card?
> I have 16 of them in a RAID6 on an Areca 1880iX. Bought them last week. They initialized very fast and are working fine. They run VERY cool.

Has anyone tried the 5K3000's yet?
http://www.newegg.com/Product/Product.aspx?Item=N82E16822145475
They are on sale for 79.99 right now and I picked up four of them. I just wanted to know if I should proceed.
-Brian
I am running an ARC-1880iX-24 card in a Norco RPC-4224 case with a Supermicro X8DT6 motherboard. I built a RAID6 array of 12 Seagate ST32000542AS LP drives using foreground initialization. It initialized fine, then dropped two drives. The drives showed "failed". On reboot only one was missing, and it showed "free". I made it a global hot spare and the array rebuilt without error. The data was mostly good, though a couple of directories were corrupted.
I disabled all power saving modes, flashed all of the drives to CC35 (some had to be forced from CC34), updated the Areca from 1.48 to 1.49, and rebuilt the array using background initialization. It has run fine since (about two weeks). I built a second array (RAID6) with 9 of the same drives (also flashed with CC35) using foreground initialization about ten days ago. It has also been running fine.

Last Friday I decided to expand the second RAID set from 9 to 11 drives - again using the same Seagate drives. It showed "Migrating" all weekend, then last night at about 64% migrated the alarm started beeping constantly on the controller. Both RAID sets were missing to the O/S. When I restarted the server, the first RAID set was up and normal; the second showed "failed migration". One of the original 9 drives was listed as "free" and my global hot spare drive had been assigned in its place. The array was not recoverable using RESCUE or SIGNAT. It is now being rebuilt with 12 drives.
The disk failures have been random, not tied to a particular backplane, drive, or slot. Areca tells me that "only enterprise drives supported" - which I already know. I have had great luck with these drives so far, especially compared to their WD and Samsung counterparts - albeit not in a RAID array.
The questions:
- Are the Seagate drives really not going to work?
- Could it be a bad 1880 card?
- Any other thoughts?
> Have you upgraded to the latest expander firmware for the ARC-1880iX-24?

All four binaries for the v1.49 update were installed. I am not sure which of them was for the expander. The HP card was flashed with 2.06 from the vendor prior to shipping.
Okay, I've read as many posts as I can regarding getting the two to work together, but I'm still turning up nothing. I've upgraded to the latest version 1.49 and even figured out how to finally upgrade the 1680's expander w/ the newer firmware 5.89.1.39 from their ftp site. When I console into the expander and type 'li' it sees the HP drives attached via the external cable, but I get no love and the system reboots after 300 seconds. I'm using a Tekram 2m long SFF-8088 to SFF-8088 cable to attach the two together. I've even set the external cable length using 'dr -O 0x2' and saved it w/ 'st 0xff'.
Now if I attach the two together using an SFF-8087 to SFF-8087 cable, it works fine and the HP expander shows up along w/ the drives. (It shows up as external enclosure #3, HP Expander 2.02... which I assume to be the firmware version?) Bad part: the performance is lousy. If I keep all 20 drives on the HP expander w/ 0 on the 1680, I can expect around 250 MB/s-ish. If I move 8 HDs over to the 1680 w/ 12 on the HP expander, I get 450 to 500 MB/s-ish. But if I only use 12 HDs on the 1680 w/ the HP expander unplugged, I get 650+ MB/s (but have 8 HDs just sitting on the shelf!)
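One way to sanity-check whether the single 4-lane cable to the expander is the raw bottleneck is to compare the link's wire speed against what the drives can collectively deliver. A rough sketch, with assumed numbers (3 Gb/s SAS-1 lanes as on the 1680, ~300 MB/s usable per lane after 8b/10b encoding, and an assumed ~130 MB/s sequential per drive - none of these figures come from the thread):

```python
# Rough oversubscription check for drives behind a single 4-lane SAS link.
# Assumptions (not measured in the thread): 3 Gb/s SAS-1 lanes with 8b/10b
# encoding (~300 MB/s usable per lane), ~130 MB/s sequential per drive.

LANE_MB_S = 300          # usable bandwidth per 3 Gb/s lane after 8b/10b
LANES_PER_CABLE = 4      # SFF-8087/8088 carries four lanes
DRIVE_MB_S = 130         # assumed sequential rate of one drive

def link_limit_mb_s(cables: int = 1) -> float:
    """Total wire-speed ceiling of the cable(s) to the expander."""
    return cables * LANES_PER_CABLE * LANE_MB_S

def expected_array_mb_s(drives_on_link: int, cables: int = 1) -> float:
    """Aggregate rate is the smaller of what the drives or the link can do."""
    return min(drives_on_link * DRIVE_MB_S, link_limit_mb_s(cables))

# 20 drives behind one cable: the drives could do ~2600 MB/s, but the link
# caps out around 1200 MB/s -- so the ~250 MB/s reported above is well below
# even the wire limit, pointing at the expander/firmware rather than the cable.
print(expected_array_mb_s(20))  # 1200.0
```

Under these assumptions, even a fully saturated single cable should allow roughly 1200 MB/s, so the ~250 MB/s figure suggests the loss is happening in the expander path itself, not in raw link bandwidth.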
Any suggestions besides just buying a 1680ix-24 or 1880ix-24?
> My seagate drives work fine on the 1680ix.

I'd connect all the drives to the HP expander, connect the 1680ix-12 to the HP expander with an SFF-8088 cable, and then play with the SAS Mux setting in the Areca management GUI. Whatever you have it set to now, reverse it - I think it either needs to be enabled or disabled to work correctly with something on the external SFF-8088.
I used to have a 1680ix-24 and don't quite remember if I was ever able to simultaneously use both the internal SFF-8087 ports on the Areca card *and* the HP expander ports, so I just connected all drives to the HP expander and then I think I disabled SAS Mux, which disabled the internal ports on the Areca card. The story might be different with a different model expander, like the Intel, perhaps.
NOTE: DO NOT use Seagate drives, and some models of WD drives, on the internal Areca 1680ix-12 ports. They're known to be problematic with the integrated expander on those cards.
My seagate drives work fine on the 1680ix.
ST31000340NS and ST32000444SS
Has anyone placed a fan on the 1880iX series cards? I noticed there is a small place for a fan to connect, and it would be ideal to help cool the processor. Anyone have any recommendations for a fan?
I have 16 of them in a RAID6 on an Areca 1880iX. Bought them last week. They initialized very fast and are working fine. They run VERY cool.