ARECA Owner's Thread (SAS/SATA RAID Cards)

You say you got to the end of an initialization, but don't mention what you initialized. New volume? Existing volume? If an existing volume, did you change stripe size? Or change RAID level?

If I'm not mistaken, when you migrate an existing volume to a different RAID level or stripe size it's a 2-stage process, and this would be normal. Granted, it's been a while since I've done one and I'm fuzzy on it - I usually just re-create the array and restore from backup since it's faster than migration. In any case, let it finish, obviously. If the log doesn't have any entries like "Raidset degraded" or "Volume degraded" then it's likely the second stage of whatever modification you kicked off.
 
Everyone should e-mail Areca asking for a 'drop drive' feature.

It's seriously annoying having to manually pull known problem drives just to get the drive out of the array, since Areca cards apparently don't like dropping bad drives on their own.

Over the past several years I've had about 3 real drive failures, and not once did the Areca card drop them. All 3 times it got so bad that the server locked up entirely with I/O errors, and even then the drives remained undropped, with the event log showing dozens of errors for those drives.

The only time Areca has dropped a drive was when I had a few DOAs and the drives were really messed up.

Something really seems to be wrong with their drive-dropping process...
 
You say you got to the end of an initialization, but don't mention what you initialized. New volume? Existing volume? If an existing volume, did you change stripe size? Or change RAID level?

Thanks for the reply - sorry, it was a new volume, hence my surprise. I would have expected to see some kind of error message before that point. I am assuming the error log within the GUI / CLI is about as detailed as it gets? :)
 
Next question from me, I'm afraid - if you are running an initialisation and a rebuild in tandem (2 volumes, obviously), will the rebuild go faster if I temporarily take the volume being initialised offline?
 
The external ports on the 'ix' cards have always been direct to the IOP instead of the expander. I've been a bit curious myself about the 1880ixl cards, though. I couldn't spot a chip for an expander anywhere, and I thought the IOP only had 8 lanes.

about the "ix" cards... how are the lanes divided?
multiplexing here as well as I expect the expander has 8 lanes to the IOP and the external has 4 lanes?
 
about the "ix" cards... how are the lanes divided?
multiplexing here as well as I expect the expander has 8 lanes to the IOP and the external has 4 lanes?
I believe the expander only has 4 lanes going to it, without multiplexing being used. There should be 4 lanes going to the external connector and 4 lanes going to the expander (which has 28 lanes).
 
OK, so it's similar to hooking up the HP expander with one cable?
Pretty much. The only thing the Areca card has over the HP one is a serial console for the expander itself (though I can't really see that being used often, if ever).
 
I believe the expander only has 4 lanes going to it, without multiplexing being used. There should be 4 lanes going to the external connector and 4 lanes going to the expander (which has 28 lanes).

I think that might be true for the 1680ix series, but the 1880ix in fact has 8 lanes between the IOP and the LSI expander -- I'm 99.9% sure of it based on benchmarks I've seen of people running 8 x SATA-II SSDs and pushing 1700MB/s+ throughput; they would've hit a ceiling at 960-1000MB/s if it were only 4 lanes with SATA devices.
 
The connection would be 2400MB/s instead of 1200MB/s with 4 lanes, though, because it's SAS2.
 
I believe max speed between the IOP and expander becomes irrelevant with SATA-I/SATA-II drives dragging things down. 2400MB/s is a theoretical speed which doesn't factor in encoding overhead, which brings it down to 1920. And you're only going to see that speed with SAS-2 drives and possibly SATA-III drives, or with SAS-1 drives and TDM (time division multiplexing - muxing two 3Gbps full-duplex devices into a 6Gbps link), and that depends on whether Areca chose to implement TDM - I doubt it.

Bottom line, I'd be extremely surprised (and yes, crying into a pillow for a few nights) if Areca chose to squander half the lanes to the IOP on an external connector that most people won't ever use. I think they employed the same technique as previous models, with the IOP link-switching the 4 external lanes.
 
Encoding brings it down to 2400MB/s, not 1920MB/s. 6Gbit is 750MB/s, and 80% of that is 600MB/s. Unless STP is also done with 8b/10b?
 
Everyone should e-mail Areca asking for a 'drop drive' feature.

It's seriously annoying having to manually pull known problem drives just to get the drive out of the array, since Areca cards apparently don't like dropping bad drives on their own.

Over the past several years I've had about 3 real drive failures, and not once did the Areca card drop them. All 3 times it got so bad that the server locked up entirely with I/O errors, and even then the drives remained undropped, with the event log showing dozens of errors for those drives.

The only time Areca has dropped a drive was when I had a few DOAs and the drives were really messed up.

Something really seems to be wrong with their drive-dropping process...

Holy shit dude, I have had this same issue. We have 500 servers with Areca controllers, and when they are in the DC it's really annoying that I can't kick out a drive remotely, without pulling it manually. The last email correspondence I had with Areca on this, on 10/04:

Code:
Dear Sir/Madam,

thank you for your feedback, i will forward it to our marketing team and
engineers for product improvement.
as i known, we do have such plan to implement this feature, but this feature
will available in web manager console first, this feature in cli or bios
console may not available soon.


Best Regards,


Kevin Wang

I am really hoping this will be implemented. This is one of the only bitches I have with Areca controllers right now.

That being said, I see drive failures every day, and usually the controller does kick out the drive, but just now and then it doesn't, even when the SMART info shows the reallocated sector count went below threshold. Example:

Good drive:

===============================================================
Device Type : SATA(5001B4D40F7D1010)
Device Location : Enclosure#1 Slot#1
Model Name : ST31000528AS
Serial Number : 9VP4XXXXX
Firmware Rev. : CC38
Disk Capacity : 1000.2GB
Device State : NORMAL
Timeout Count : 0
Media Error Count : 0
SMART Read Error Rate : 111(6)
SMART Spinup Time : 96(0)
SMART Reallocation Count : 100(36)
SMART Seek Error Rate : 72(30)
SMART Spinup Retries : 100(97)
SMART Calibration Retries : N.A.(N.A.)
===============================================================

Bad drive:

===============================================================
Device Type : SATA(5001B4D40F7D1011)
Device Location : Enclosure#1 Slot#2
Model Name : ST31000528AS
Serial Number : 9VP4XXXX
Firmware Rev. : CC38
Disk Capacity : 1000.2GB
Device State : NORMAL
Timeout Count : 0
Media Error Count : 0
SMART Read Error Rate : 97(6)
SMART Spinup Time : 96(0)
SMART Reallocation Count : 2(36)
SMART Seek Error Rate : 72(30)
SMART Spinup Retries : 100(97)
SMART Calibration Retries : N.A.(N.A.)
===============================================================

Event history:

2010-10-01 14:38:07 H/W MONITOR Power On With Battery Backup
2010-10-01 13:53:08 Enc#1 Slot#2 Time Out Error
2010-10-01 10:02:59 SW API Interface API Log In
2010-09-30 22:27:46 Enc#1 Slot#2 Time Out Error
2010-09-30 22:27:36 Enc#1 Slot#2 Time Out Error
2010-09-30 22:27:26 Enc#1 Slot#2 Time Out Error
2010-09-30 22:27:15 Enc#1 Slot#2 Time Out Error
2010-09-30 22:26:40 Enc#1 Slot#2 Time Out Error
2010-09-30 22:26:30 H/W MONITOR Power On With Battery Backup
2010-09-30 22:25:36 Enc#1 Slot#2 Time Out Error
2010-09-30 22:25:26 Enc#1 Slot#2 Time Out Error
2010-09-30 22:25:11 Enc#1 Slot#2 Time Out Error
2010-09-30 22:25:01 Enc#1 Slot#2 Time Out Error
2010-09-30 22:24:20 Enc#1 Slot#2 Time Out Error
2010-09-30 22:24:10 Enc#1 Slot#2 Time Out Error
2010-09-30 22:23:51 Enc#1 Slot#2 Time Out Error
2010-09-30 22:23:41 Enc#1 Slot#2 Time Out Error
2010-09-30 22:23:31 H/W MONITOR Power On With Battery Backup
2010-09-30 21:17:50 Enc#1 Slot#2 Time Out Error
2010-09-30 15:55:14 Enc#1 Slot#2 Time Out Error
2010-09-30 12:26:08 Enc#1 Slot#2 Time Out Error
2010-09-30 12:25:58 Enc#1 Slot#2 Time Out Error
2010-09-30 10:07:56 Enc#1 Slot#2 Time Out Error
2010-09-29 19:43:17 Enc#1 Slot#2 Time Out Error
2010-09-29 18:22:48 Enc#1 Slot#2 Time Out Error
2010-09-29 18:22:38 Enc#1 Slot#2 Time Out Error
2010-09-29 18:22:28 Enc#1 Slot#2 Time Out Error
2010-08-13 01:47:32 H/W MONITOR Raid Powered On
2010-08-12 16:31:11 H/W MONITOR Raid Powered On
2010-08-12 16:27:29 H/W MONITOR Raid Powered On
2010-08-12 16:15:24 H/W MONITOR Raid Powered On
2010-08-12 15:55:23 H/W MONITOR Raid Powered On
2010-08-12 15:54:27 H/W MONITOR Raid Powered On
 
Re: manual removal of RAID Set member disks, I sent a nag today to Areca as well. I suggested it go like this:

1. Under Raid Set Functions -> "Remove RAID Set Disk" or "Delete RAID Set Disk"
2. On the disk selection screen, above the "Confirm The Operation" checkbox, there is a "Remove RAID Set Disk Permanently" checkbox.

The "Remove permanently" checkbox would wipe the raidset signature from the disk so it doesn't rejoin the array on the next reboot. Reason for the option is I can think of scenarios where you'd want to remove the disk but not wipe the raidset signature.
 
Has anyone tried the new 1880 Areca controllers with the Supermicro chassis with the built-in SAS expander on the backplane? Specifically the 847E1-R1400LPB (3 Gbps) and the 847E16-R1400LPB (6 Gbps) chassis? Any issues? Do the SES2/I2C features work with the backplane's LEDs?
 
Has anyone tried the new 1880 Areca controllers with the Supermicro chassis with the built-in SAS expander on the backplane? Specifically the 847E1-R1400LPB (3 Gbps) and the 847E16-R1400LPB (6 Gbps) chassis? Any issues? Do the SES2/I2C features work with the backplane's LEDs?
I'll be able to tell you in a couple of days. I have an 846E2 coming in the mail and a 216E1 sitting on top of my rack. I'll probably test them both then. I know that enclosure management did work with my 1280ML and 1680ix-24 when I used them both in an 846TQ.
 
Encoding brings it down to 2400MB/s, not 1920MB/s. 6Gbit is 750MB/s, and 80% of that is 600MB/s. Unless STP is also done with 8b/10b?

I don't know what STP is doing or what is responsible for which overhead - like whether there's a double translation layer and a double encoding penalty (once for SATA and again for STP?) - I just go by the following rule of thumb based on real-world performance numbers.

EDIT: Through a SAS2 expander with SATA hard disks:

6Gbps = 600MB/s * .8 (due to 8B/10B encoding overhead) = 480MB/s
SAS2 wide = 480MB/s * 4 = 1920MB/s
SAS wide = 960MB/s
 
6Gbps = 600MB/s * .8 (due to 8B/10B encoding overhead) = 480MB/s
SAS2 wide = 480MB/s * 4 = 1920MB/s
SAS wide = 960MB/s
1 byte = 8 bits. Divide 6000 by 8 and you get 750. Take 80% of that for 8b/10b encoding and you have 600. By your method, 4 lanes = 960MB/s, which means 1 lane = 240MB/s. SSDs are definitely capable of more than 240MB/s. I don't know why you think that 1Gbit = 100MB/s, because it just isn't correct.
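
To put concrete numbers on the disagreement, here is a minimal Python sketch of the arithmetic. The link rates are the standard SAS/SATA figures, the only overhead modeled is 8b/10b encoding (real links lose a bit more to protocol framing), and nothing here is Areca-specific:

Code:
# Per-lane payload bandwidth under 8b/10b encoding: every data byte
# is sent as 10 bits on the wire, so MB/s = (line rate in Mbit/s) / 10.

def lane_mb_per_s(line_rate_gbps):
    return line_rate_gbps * 1000.0 / 10.0

for name, gbps in [("SAS-1 / SATA-II", 3.0), ("SAS-2 / SATA-III", 6.0)]:
    lane = lane_mb_per_s(gbps)
    print("%s: %.0f MB/s per lane, %.0f MB/s on a x4 wide port"
          % (name, lane, lane * 4))

# Output:
#   SAS-1 / SATA-II: 300 MB/s per lane, 1200 MB/s on a x4 wide port
#   SAS-2 / SATA-III: 600 MB/s per lane, 2400 MB/s on a x4 wide port
#
# The 1920MB/s figure above comes from applying the 0.8 encoding factor
# a second time to 600MB/s, a number that already includes it
# (6000 Mbit / 10 = 600 MB): 600 * 0.8 * 4 = 1920.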
 
I know they're capable of more than 240MB/s, because I get 1400MB/s with four C300s in RAID0 direct-connected to an 1880i.

I was talking specifically about the numbers with an expander. Like you said, maybe STP is also doing 8b/10b and it's happening twice.

In any case once I get 8 of the new Hitachi 6Gbps drives I'll repeat the test and see if SATA-III makes any difference over SATA-II in the numbers I'm seeing. Maybe that will provide another clue.
 
Nice catch. But to be clear, the function isn't exclusive to the archttp utility. You can use it whether you reach the WebGUI through archttp or through the IP address of the card's ethernet port.

Not sure I'm thrilled with the idea of entering a command like that versus being able to select the disk from a list; the chance of error just seems higher. I also wonder whether the function wipes the raidset signature from the disk or not, since if it doesn't, the drive would theoretically rejoin the array after a reboot.
 
Holy shit dude, I have had this same issue. We have 500 servers with Areca controllers, and when they are in the DC it's really annoying that I can't kick out a drive remotely, without pulling it manually. [...] That being said, I see drive failures every day, and usually the controller does kick out the drive, but just now and then it doesn't, even when the SMART info shows the reallocated sector count went below threshold. [SMART readouts and event history quoted in full above]
Couldn't someone with some scripting ability write something to auto-kick a drive the moment it starts to experience timeout errors? Would one timeout error indicate a problem, or what would a sensible threshold be? I am considering scripting this myself on my Linux RAID system when I get my 1880, if this is a known problem with the Arecas and isn't corrected yet.
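
Something along those lines is certainly scriptable. Below is a minimal, untested Python sketch, assuming the Areca CLI (cli64) is installed and that its `event info` output looks like the log pasted above; the regex, the threshold, and the warn-only behavior are all assumptions to tune. Note there is no supported CLI command to actually fail a drive yet (see the posts that follow), so this only alerts:

Code:
# Hypothetical watchdog: count "Time Out Error" events per slot in the
# Areca event log and warn when a slot crosses a threshold.
import re
import subprocess
from collections import Counter

TIMEOUT_THRESHOLD = 3  # how many timeouts before we consider a drive bad

def read_event_log():
    # Assumption: `cli64 event info` dumps lines formatted like
    #   2010-09-30 22:27:46 Enc#1 Slot#2 Time Out Error
    result = subprocess.run(["cli64", "event", "info"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def count_timeouts(log_text):
    slot_re = re.compile(r"(Enc#\d+ Slot#\d+)\s+Time Out Error")
    return Counter(m.group(1) for m in slot_re.finditer(log_text))

if __name__ == "__main__":
    for slot, n in sorted(count_timeouts(read_event_log()).items()):
        if n >= TIMEOUT_THRESHOLD:
            # No supported way to fail the drive from the CLI (yet),
            # so just flag it for a human to pull or FailDisk via the GUI.
            print("WARNING: %s has %d timeout errors -- "
                  "consider failing/replacing this drive" % (slot, n))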
 
Has anyone tried the new 1880 Areca controllers with the Supermicro chassis with the built-in SAS expander on the backplane? Specifically the 847E1-R1400LPB (3 Gbps) and the 847E16-R1400LPB (6 Gbps) chassis? Any issues? Do the SES2/I2C features work with the backplane's LEDs?

I dunno if it's the 3Gbps or the 6Gbps version, but I can tell you that the one I used (an LSILOGICSASX36 SAS expander) in the 24-disk Supermicro case was NOT compatible with the ARC-1880ix. Under heavy I/O it got lots of short 5-second pauses, and the event info showed timeouts. It would rebuild OK when it was pretty idle, but the array/volume became failed under very heavy I/O while rebuilding.

I took the exact disks that were listed as 'failed' and tested them (8 of them) in another case with no expander, and I had no timeouts, no errors, and it rebuilt fine under super-heavy load (because of the load, the rebuild took 80+ hours).

Areca requested pics of the web interface, but I didn't hear back:

http://box.houkouonchi.jp/archttp/

That being said, the lights (for locate) as well as rebuilding status and all that did work correctly with the Supermicro SAS expander (well, LSI).
 
Well, I just randomly started going through our machines and found several to test this on, but it didn't work. Either I am doing something wrong or it's not supported on the ARC-1222.
 
So Sam called Todd (an Open-e GURU)

Reid, this is incorrect! Also, Sam is a partner of Open-E, and all partners are trained in the products they sell; this is clearly noted on the product comparison page linked below. Reid, what you wrote was not fair, so allow my point of view.
http://www.open-e.com/service-and-support/products-archive/products/nas-r3/comparison/

The partner should have stated that yes, you can have over 16TB in a Volume Group, but each Logical Volume can only be 16TB, as this is a 32-bit product. What this means - and it is clearly noted on their website, since the EOL product NAS-R3 is 32-bit - is that you can have a 20TB Volume Group and from that create many 16TB NAS Logical Volumes. So when you have a 20TB Volume Group on your "End Of Life" NAS-R3 product that is only licensed for 16TB, you need to expand it with a 4TB license key to support the 20TB Volume Group, not a single 20TB NAS Logical Volume.

You also forgot to mention that Todd personally paid for the license key, NOT Open-E. I know this to be true, as we processed the order and Todd pulled out his Visa, and he informed you and Sam of this. So when you make a false accusation, it would be nice to be able to back it up in front of the world here; feel free to ask if you wish proof of this. It is also not true that Open-E took all day, as Sam was the delay; all he had to do was call me, since Open-E does not sell directly to the reseller. So this is wrong as well, and neither true nor fair!

Also, Todd offered you a FREE upgrade to DSS V6 that is worth $520.00 - and to prove it, how else is it that he talked you through the conversion, since you needed DSS V6 to go 64-bit? So not only was he willing to go out of his way for you and the partner to quickly resolve this for both of you, he still ate the cost and time personally. You may ask why he had to pay for it: because it would have cost more to write the report on why Sam, the partner, did not know how to sell products that he owns, plus the customer return, me as the distributor, and the internal database people at Open-E killing the SN#. When you add Todd's 2 hrs (not everyone else's time), plus Sam's time, your time, and both our time during the NetViewer session he had with you, it exceeds everything we make on it, or the $200.00.

Concerning the transfer from 32-bit to 64-bit volumes: copying the data off was for your protection. Try doing this in Windows, where you have an existing 32-bit NTFS volume and convert it to 64-bit. Open-E has a tool, but Todd always likes a backup in case something happens, and even though you had a RAID 6, that is not a backup. Todd wanted you to protect your data. Most of the programmers I know would be able to Google that.

Also, he can't take the license off the system; as he informed both you and Sam, this can't be done except through an RMA exchange or a free upgrade to DSS V6. And a NetViewer session is like a WebEx session, so he can't do it that way either, as he would need root access - so this is just FALSE. He would need to issue an RMA with a new key, and he was more than happy to offer you all a free upgrade instead, so this is not true; good thing we have proof of this.

Reid, what you are spouting is wrong and unfair to Todd and to Open-E. Anyone who knows Todd knows he will do anything for any engineer that needs help; just go to the Open-E forums, where he is the moderator, and search for "Thanks Todd" or "Thanks To-M" and you will see hundreds of guys all over the world that he has helped, who know Todd to be honorable and someone who never lets an engineer down. Just look at some of the times this guy has been there late nights and weekends.

So I really can't sit back and walk away from these false accusations, especially when wronged! The worst part is that even though Todd paid for it personally, he never got a thank-you for paying for it out of his own pocket.

Sorry you all had to see this, but many of you probably would not put up with it either, especially given how hard we all work.

Just providing the truth :)
THINKMAN
 
Hi all :),

I have been reading through this thread and it has already proven to be a very valuable learning experience for me.
I'm going to build my first home storage server, but I'm still in doubt about some components.
The case I have ordered is the Norco RPC-4224, which will arrive here next week!
Motherboard is a P5Q-WS, CPU a C2D E6750, RAM 6GB DDR2 PC6400 (these will all be replaced by a Xeon configuration in the future).
For the PSU I'm thinking about the Corsair AX850, or maybe the AX1200, although the latter might be overkill ....
The RAID controller I would like to buy is the ARC-1880ix-24 :)
Now, since I would like to run all HDDs directly from the Areca (hence the ...ix-24), I'm a bit puzzled about which drives to get.
I have read several horror stories about the Areca cards and drive compatibility, so I'm asking here before I run to the store and buy a couple of drives ;)
For the moment I'm considering either the Hitachi 7K2000 2TB or the Samsung F4EG 2TB ... or should I wait a bit for the Hitachi 7K3000 2TB or the 5K3000 2TB?
Reliability is the most important factor to me, then comes price, and then performance (although a nice fast RAID6 is very welcome :cool:).

Also, I would like to start off with 8 drives in a RAID6 configuration and then add more drives as the need for space grows. Is online expansion bad for the performance of the array, or does it not make a difference?

Sorry for the many questions, but as I said, this is my first attempt at a home storage server ...
Thanks in advance for any help and replies :)!!!
 
Well, I just randomly started going through our machines and found several to test this on, but it didn't work. Either I am doing something wrong or it's not supported on the ARC-1222.


I got a response from Kevin this morning about adding a fail-drive option to the GUI, after I indicated that the undocumented "FailDisk" keyword you enter in "Rescue RAID Set" wasn't enough, since most people will never know about it. His response:

as i know, the remove hard drive failed feature will available in next official release firmware version.
Best Regards,
Kevin Wang
 
I duno if its the 3 gbps or the 6 gbps version but I can tell you that the one that I used (LSILOGICSASX36) SAS exapnder that was in the 24 disk supermicro case was NOT compatible with the ARC-1880ix. Under heavy I/O it got lots of short 5 second pauses and event info showed a timeout. It would rebuild ok when it was pretty idle but the array/volume became failed under very heavy I/O while rebuilding.

What exact model of 1880ix card were you using, and how did you connect the 1880ix to the expander backplane on the Supermicro case? If you're really talking about an 1880ix card and not an 1880i card, and you connected the Areca to the backplane via the internal SFF-8087 connectors, then you're daisy-chaining expanders, since the 1880ix has one onboard. And it wouldn't surprise me at all that you were having issues in that scenario, since SAS-1 expanders like the LSISASX36 seem to be a little less "compatible" overall for things like daisy-chaining compared to newer SAS-2 offerings.
 
What exact model of 1880ix card were you using, and how did you connect the 1880ix to the expander backplane on the Supermicro case? If you're really talking about an 1880ix card and not an 1880i card, and you connected the Areca to the backplane via the internal SFF-8087 connectors, then you're daisy-chaining expanders, since the 1880ix has one onboard. And it wouldn't surprise me at all that you were having issues in that scenario, since SAS-1 expanders like the LSISASX36 seem to be a little less "compatible" overall for things like daisy-chaining compared to newer SAS-2 offerings.

I guess I said ARC-1880ix out of habit, but actually it was the ARC-1880i. This is the exact card I am testing:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816151071

It was bought from that exact page when I emailed someone at the company to order it for me. The sticker with the barcode on the box also says ARC-1880i.

And yeah, I would guess the SAS expander in the case is SAS-1.
 
Quick question:

With an 1880ix-16, if I use all 4 internal 8087s with fanout cables, can I still use the 8088 with another array, or do I have to give up one of the internal 8087s?
 
You should have no problems with drives on all 4 internal 8087 ports plus an expander on the 8088 port. There's normally a "SAS Mux" setting in the System Configuration menu on Areca cards with an integrated expander (at least there is on the 1680ix -- I assume the 1880ix is similar), and as long as it's Enabled (the default) you can use both internal and external connectors simultaneously.

And in case anyone wondered, setting "SAS Mux" to Disabled disables your internal ports, leaving only the 8088 connector as a means of connectivity.
 
Just a little warning for everyone. Apparently my 1880i does not like it when you disconnect 20 drives at once, and it wound up dropping all 40 drives instead. It didn't break anything, but I still had to reboot, so a bit of a pain. So you shouldn't do that either, if you like uptime.
 
Has anyone tested the Samsung 2TB F4s with the 1880? With the news of Drive Extender being dropped from WHS, I believe I'm finally going to move to hardware RAID, and I need to decide what drives to buy.
 
Has anyone tested the Samsung 2TB F4s with the 1880? With the news of Drive Extender being dropped from WHS, I believe I'm finally going to move to hardware RAID, and I need to decide what drives to buy.

I am currently building a configuration using those Samsung F4 drives.
I've got 34 drives that will be connected via:
  • 20 drives connected directly to the Areca 1880ix-24 via fanout cables.
  • 14 drives via SATA fanout cable, connected internally to the Areca card through the Astek A33606-PCI SAS expander.
So far, I've tested building an array using 24 drives connected directly to the 1880. The drives were recognized, configured into a RAID6 array, and 100% initialized without any issues.
I have not yet booted fully into the OS to mount and test the array. If you have any requests/suggestions on how I can test, I'd be more than willing to do it.

In a few weeks, I'll be installing the Astek card and testing all 34 drives in a RAID6 array.
 