ARECA Owner's Thread (SAS/SATA RAID Cards)

Too vague - you haven't stated whether you chose "Foreground" or "Background" initialization, but it sounds like you chose Background, in which case the timing is normal. I always init in Foreground mode unless I need the space available right away.

Thank you for the input - I did choose background. I was planning to start moving data to it after it was fully initialized, so that was my noobie mistake. I appreciate the pointers you've given me above - I've followed them exactly which got me this far! So I will just have to wait until the initialization is over with.

Any recommendations on how to set these options for the best read performance?

Thanks
 
Right now there are 8 x 2TB drives in one large RAID5 array, but I have additional drives on hand for expansion. For now, all drives are physically in the main chassis and directly connected to the internal connectors on the RAID card via SATA fanout cables. Future expansion will have to move into the other chassis and will be on the 8026 expander. I'm having timeout and read errors, plus odd issues where a specific port (it doesn't seem to be limited to a specific drive - I tried swapping in a drive from my backup/expansion stock) will "reset", with the RAID log showing "device removed" then "device inserted", or even "failed".

what are your "Hdd Power Management" settings? I used to see false Timeout errors on my 1680ix-24, only solution was backing off the "Stagger On Power Control" to 1.5 or 2.0 seconds. Also leave "Time To Hdd Low Power Idle" and "Time To Hdd Low RPM Mode" disabled. The false-timeout errors would appear in the log when the array was spinning up from sleep mode and they didn't respond to the Areca quick enough and got marked as timed out. Lastly you might try a different miniSAS to SATA fanout cable to try to eliminate possibilities. I have seen the phenomenon of a particular port # acting erratic when I still had my 1680, and it drove me nuts, but it only occurred if I used a miniSAS to miniSAS cable. If I used miniSAS to SATA fanout the problem disappeared. I sent the card in to Areca twice and they would just send it back without being able to find any problem, as it turned out because they only tested with miniSAS to SATA fanout cables on their bench.

Long term I would look into selling the 1680 for an 1880; the weird false-Timeout issues pretty much went away when I transitioned my RAID cards to 1880i's. Areca never looked into the problem because they claimed Hitachi wouldn't support them with issues pertaining to a desktop-class drive on a RAID card.
 
Thank you for the input - I did choose background. I was planning to start moving data to it after it was fully initialized, so that was my noobie mistake. I appreciate the pointers you've given me above - I've followed them exactly which got me this far! So I will just have to wait until the initialization is over with.

Any recommendations on how to set these options for the best read performance?

Thanks

Based on your statement that getting to 57% took 24 hrs: if the init isn't yet at, say, 75%, I would just delete the array and recreate it in Foreground mode; it should be done in 5-6 hours. Also, I think a hot spare on a 5-drive RAID6 is overkill unless you travel a lot, but OTOH drives are cheap enough now that $80 isn't a bad insurance policy against not noticing a failed drive right away and having to manually swap in a spare -- assuming you also have enough empty slots in your case to hold it without cramping your disk space requirements.

There's nothing more you really need to fine-tune that would make a perceivable difference for read performance on spinning disks. You asked about upgrading to a 4GB cache module, and I think it's a waste - you won't notice a difference if all you're doing is serving large media files to multiple client systems. The usage pattern where a cache upgrade is noticeable is someone running a desktop PC with their O/S installed on an array, or a server hosting VMs or multi-user data - basically any usage pattern where small amounts of data are read and re-read frequently ("hotspots"). The "re-read" part is where the cache gives you the boost, and media files like videos tend to be big sequential files, so the cache is just along for the ride and doesn't really get to do much - unless, say, you're delivering the same 1GB video to many client systems simultaneously.
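For anyone curious, here's a rough sketch of why that is - a toy LRU read-cache model (an assumed model for illustration, not Areca's actual firmware logic) with a hypothetical 4GB cache and 1MB blocks:

Code:
from collections import OrderedDict

def hit_rate(accesses, cache_blocks):
    """Fraction of block reads served from a simple LRU cache."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # refresh recency
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least-recently-used block
    return hits / len(accesses)

CACHE_BLOCKS = 4096  # e.g. 4GB cache / 1MB blocks

# Streaming large media once: every block is new, so the cache never helps.
sequential = list(range(100_000))
# Hotspot workload: the same 1,000 blocks read and re-read constantly.
hotspot = [i % 1_000 for i in range(100_000)]

print(f"sequential stream hit rate: {hit_rate(sequential, CACHE_BLOCKS):.0%}")  # ~0%
print(f"hotspot re-read hit rate:   {hit_rate(hotspot, CACHE_BLOCKS):.0%}")     # ~99%

The sequential stream never re-reads a block, so the cache can't contribute; the hotspot workload is served almost entirely from cache once it warms up.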
 
Thank you so much odditory, it is about 62% complete at this time. I'll just wait it out - I am in no hurry... Next time I'll know better.

What about the power management options? I don't have any idea what settings are best here... I have these options set:
[screenshot: Hdd Power Management settings (pwrmgmnt.png)]

Is there anything you suggest, or maybe a way for me to test and find what is best suited to my needs?

Thanks so much!
 
Stagger Power On Control is the amount of time the card gives each drive to spin up before moving to the next one, so multiply that value by the number of drives in the array and that's roughly how long it takes the array to wake up and the data to become accessible. I have no problems with mine set at 0.4 or 0.7 with Hitachis (some of my arrays have 24 drives, so I need a low value or I'm waiting forever for the array to wake up), and I like keeping "Time To Spin Down Idle HDD" maxed at 60 so that I'm minimizing unnecessary spindown/spinup cycles - just a personal preference.
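As a quick worked example of that multiplication (back-of-envelope only - each drive's own spin-up time adds a little on top):

Code:
# Approximate array wake-up time: stagger interval x number of drives.
def wake_time_seconds(drives, stagger_seconds):
    return drives * stagger_seconds

for drives in (8, 24):
    for stagger in (0.4, 0.7, 1.5, 2.0):
        print(f"{drives:>2} drives @ {stagger}s stagger -> "
              f"~{wake_time_seconds(drives, stagger):.1f}s to wake the array")

So a 24-drive array at a 2.0s stagger takes roughly 48 seconds just to power the spindles back up, which is why a low value matters on big arrays.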

Not much more you need to do. Just make sure you choose GPT when initializing the disk in Windows Disk Management, set the cluster size to 16K when formatting (for reasons I explained on the previous page), and a quick format is fine. Also check the Areca event log after the init is complete to make sure no drives reported bad sectors.

Setting an NTP server is also advised so your logs have accurate timestamps; I use 72.18.205.156.
 
Also make sure the drives have had the 1.5Gb/s jumpers removed (if present and appropriate). I missed one in my 16-drive array, probably when I had one fail, and it seemed to affect performance - although admittedly I didn't benchmark before/after, so I can't say for sure.
 
I have contacted Areca and asked about ESXi 5 drivers. They said that the beta-driver may be ready in a few weeks and the certification from VMware could take a few months.
 
oh boy... I spent a few hours searching for ESXi 5.0 drivers for Areca, just to find your post.. :(
 
Hoping for some suggestions as to what to try next...

I have the 1680ix-24 running with 20 disks set up as pass-through for a WHS media server at home. A couple of weeks ago things started going to crap, with disks showing up as failing or missing. Right now all disks are WD Green drives, all modified (recently) with wdidle3 set to disabled. I went through two attempts at removing all drives from the system; starting with a single empty drive in my WHS disk pool, I tried copying files from the prior DATA drives into the pool. Somewhere along the way things always seem to crap out. Moving data between drives attached to the ARC-1680ix seems to work fine at times, but if I try to write data to the WHS setup from another PC on my home network it will always crap out, i.e. the WHS box freezes up and disappears from the network.

So this past weekend I tried another approach where I didn't use any drives on the 1680ix, but rather connected a 2TB Green to a SATA port on the mobo. I then added that to the pool and tried copying about 600GB across the network, which worked fine. I used FastCopy with verification turned on and all was fine. I ran the same test again and it worked fine again. So then I put the 2TB back on the 1680ix, re-ran the experiment, and once again the server locked up. So then I installed a 1TB Seagate drive, re-ran the experiment, and it failed again.

So I'm down to the point where I believe the 1680ix has crapped out on me. I realize there are many posts discussing the merits (or lack thereof) of using WD Green drives with the Areca, and even the Seagates. But keep in mind this setup had been working fine for me for a couple of years and only just started having these problems. My 1680ix-24 has 2GB memory, so I ordered a new memory module to swap in to see if perhaps it's a memory issue. I'll get that later in the week, so for now I just wait.

But in the mean time have any other users on this forum run into similar issues? Does anyone have any suggestions for what else I could try?

Thanks!

P.S. I have the latest version 1.49 firmware installed on the card.
 
Is there any difference between the 1880i-8 and the 1880ix-8 cards apart from the external port? I am looking at ordering one of these in the next few days for an ESXi box. I won't need the external port for a while, as I have a Norco 4220 case and plan on using an HP expander card (which has an external port).
 
I have an Areca 1880IX-24 and 24 drives as follows:
8x 2TB Hitachi Ultrastar in Raid-6, one raidset, one volume
8x 3TB Hitachi Deskstar 5400rpm in Raid-6, one raidset, one volume
8x 3TB Hitachi Deskstar 5400rpm in Raid-6, one raidset, one volume
that gives 3 volumes.. one ~12TB and two ~18TB
I was able to create the volumes just fine; however, when booting, only two of the volumes are presented in the list, and if I switch the SFF-8087 cables between them on the RAID card, the list of volumes presented may change (although only 2 of them show up, just as before). The OS does not see the missing volume, of course - just the 2 that are presented by the RAID card at boot time.

Any idea why this happens? I've had 5 volumes connected before (2 of them via a SAS expander), all Hitachi Ultrastar 2TB drives, and they all showed up just fine. The problem appeared after I created these volumes with 3TB drives... is there a workaround for it, maybe?

I'd really appreciate your help!

P.S. If I use the web interface to connect to the RAID card after the OS is loaded, I can see all 3 volumes there just fine. It's just that the RAID card doesn't 'pass' all 3 volumes to the OS... just two of them.
 
I have an Areca 1880IX-24 and 24 drives as follows:
8x 2TB Hitachi Ultrastar in Raid-6, one raidset, one volume
8x 3TB Hitachi Deskstar 5400rpm in Raid-6, one raidset, one volume
8x 3TB Hitachi Deskstar 5400rpm in Raid-6, one raidset, one volume
that gives 3 volumes.. one ~12TB and two ~18TB
I was able to create the volumes just fine; however, when booting, only two of the volumes are presented in the list, and if I switch the SFF-8087 cables between them on the RAID card, the list of volumes presented may change (although only 2 of them show up, just as before). The OS does not see the missing volume, of course - just the 2 that are presented by the RAID card at boot time.

Any idea why this happens? I've had 5 volumes connected before (2 of them via a SAS expander), all Hitachi Ultrastar 2TB drives, and they all showed up just fine. The problem appeared after I created these volumes with 3TB drives... is there a workaround for it, maybe?

I'd really appreciate your help!

P.S. If I use the web interface to connect to the RAID card after the OS is loaded, I can see all 3 volumes there just fine. It's just that the RAID card doesn't 'pass' all 3 volumes to the OS... just two of them.

If they're on two different cards, I'm guessing the SCSI ID/LUN/etc. are not unique. If it's one card, then I have seen this happen when the volume sets are named the same.
 
If they're on two different cards, I'm guessing the SCSI ID/LUN/etc. are not unique. If it's one card, then I have seen this happen when the volume sets are named the same.

You were right, thanks for the suggestion!

The LUN IDs were the same in some cases.
What happened was that I took out the previous volumes created on the 2TB drives, then inserted the 3TB drives and created new volumes on those.
When I started mixing volumes created on the 2TB drives with volumes created on the 3TB drives (so I could move data from the old volumes to the new ones), some of them had the same LUN IDs. This happens because the Areca starts LUN numbering from 0 when no other volumes exist.
What I did: renumbered some LUN IDs so all the IDs were unique, and now I can see all LUNs ;)
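A simplified illustration of how the clash comes about (just a sketch of the numbering behaviour described above, not actual Areca firmware logic):

Code:
# Volumes created while no others exist get LUNs starting from 0, so volumes
# created in separate "sessions" can end up with identical Ch/Id/Lun triples.
def assign_luns(count, in_use=()):
    """Hand out the lowest free LUN numbers, skipping any already in use."""
    used, assigned, lun = set(in_use), [], 0
    while len(assigned) < count:
        if lun not in used:
            assigned.append(lun)
            used.add(lun)
        lun += 1
    return assigned

old_2tb_volumes = assign_luns(3)                    # [0, 1, 2]
new_3tb_volumes = assign_luns(3)                    # also [0, 1, 2] - created after the old ones were removed
print(set(old_2tb_volumes) & set(new_3tb_volumes))  # {0, 1, 2} -> clash once both are attached
print(assign_luns(3, in_use=old_2tb_volumes))       # the fix: renumber to [3, 4, 5]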
 
FYI: I just went through the RMA process on a 1680ix-24 that went bad. Over 5 weeks later it's being delivered today. I had to proactively email them for status updates. They had to ship the card to Taiwan for repair. Since the card is over 1 year old, they would not provide a loaner card, and would only ship it out for repair.

The next time I buy a RAID card, if any other manufacturer is even close to Areca with a better RMA process, I'll be switching from Areca.

The RMA was done through Tekram, the distributor for the US.
 
Tekram might seem to move slowly - I've had an 1880IX-24 off with them for a couple of weeks now and no word on when it'll return. I used a crash-kit spare as a stopgap, but I don't see how another vendor is going to be a whole lot better, because even Seagate takes over a month to replace single drives for me.

What is an acceptable turnaround time?
 
An acceptable turnaround time is a couple of weeks for a $1300 add-in card. If they had provided a loaner card I'd have zero complaints.
 
It doesn't happen to be possible to get Wake-on-LAN support through the Areca 1880ix-16 network port, is it?

I have had a helluva time going through motherboards to try to get all the features I want. I'm currently on the Gigabyte G1.Sniper. I like the Marvell 9182 controller, as it works nicely with my Corsair Force 3 SSD and SATA 6Gb/s, but the board has the Bigfoot Killer NIC on it, and those evidently don't support WoL - which I didn't even consider, since every board I've used for the past few years has supported WoL.

Before that I had an ASRock Extreme6, but after trying two of them I found their 3.3V regulators were bad: they continually undervolt below the minimum spec of 3.14V, causing drives to drop from the 1880i. This was while running 2x ATI 6950 as well. They also had the crappy Marvell 9128 controller. My original ASRock X58 Extreme had a fan header go out, and I've had 3 warranty claims on it so far and keep getting boards that won't POST, so I'm done with ASRock. Oh, and I also tried the EVGA FTW3; that board also suffered from the crappy Marvell 9128, and beyond that it doesn't have enough boot ROM space to POST with all these drives attached (including 16 on the Areca). I contacted both EVGA support and ASRock and confirmed these problems with their engineers. I believe I could disable the Marvell 9128 on the EVGA and get it to work, but it sucks to lose the onboard controller and I wasn't going to deal with that. All their boards suffer from this same problem.
This is so frustrating and expensive!
 
No, WoL is not possible. The Ethernet port on the Areca controller is not a network interface for your computer; it's for managing the card itself. If you want WoL, just buy another NIC.
 
FYI: I just went through the RMA process on a 1680ix-24 that went bad. Over 5 weeks later it's being delivered today. I had to proactively email them for status updates. They had to ship the card to Taiwan for repair. Since the card is over 1 year old, they would not provide a loaner card, and would only ship it out for repair.

The next time I buy a RAID card, if any other manufacturer is even close to Areca with a better RMA process, I'll be switching from Areca.

The RMA was done through Tekram, the distributor for the US.

Sorry man, part of this might be my fault. I RMA'd 220 cards to Areca in the last 2 months or so which I am sure bogged them down quite a bit.
 
Guys,

My Intel SSD holding my OS died on me yesterday... what a piece of junk. Though I have to say I am thoroughly impressed with this 1880ix card paired with the Hitachi 5K3000s - at least these drives have been working flawlessly.

So this is sort of a noob question, but yes this is my first hardware raid... I need to get a new OS drive and install windows on it. Afterwards, how do I get my existing raid volume back into the OS without losing all my data?

Side question - anyone have any preference on OS for a media server?
 
You just mount the volume in Windows. As for the OS, you really can't go wrong with Windows Server 2008.
 
What do you mean "get it back into the OS"?

Do you mean just have it remount as another drive letter after your boot OS drive? Simply install the RAID card driver once the OS is set up. That will recognize the card. The card itself handles the RAID array, and once the card is active the array should become available to the OS. You "may" have to use Disk Management (diskmgmt.msc in Windows) to import the foreign disk.

Simple as that.

Now, if you mean something else, you'll have to better describe what you're seeking to set up.
 
@cw84: The 2TB and 3TB Hitachi 5K3000 drives combined with an Areca 1880 are the perfect combo for what you want to do. This is an old topic that comes up again and again in long threads like these - and the forum in general - and we've been discussing the success of Hitachi + Areca storage solutions for many years, some of us with hundreds of Hitachi drives across multiple generations of Areca cards as you'd find if you decide to poke around in the forum a bit more.

How would these drives pair with my existing 7K2000 drives which were recommended just over a year ago? Will they be bottlenecked? (Areca 1880 + HP Expander)
 
Also I'd like to ask: is there any benefit to getting a controller with upgradeable cache, such as the 1880ix-12, vs the base 1880i? I've read you wouldn't want a card with a built-in expander when using the HP expander - is there no way to disable the internal expander?

Really, the only two things that made me hesitate about just getting the base model when using an 1880-series card in conjunction with the HP expander were the two network interface ports (what is the purpose of two? Can you access the array through them, or are they just management ports?) and the cache module.

I also thought maybe these two features would be of little to no use if you aren't using high-end SAS drives, and I am not going to be doing that until SAS drives are ~$100 a pop, which doesn't seem to be happening anytime soon.
 
Vulcan-
The Areca only has one Ethernet port, not two. The external port on the SAS expander is an SFF-8088 port, not an Ethernet port. As to the expander: there is no reason to go with the 12-port card unless you need all 8 ports on the card in addition to the 32 ports the expander provides (minus the SFF-8087 you lose on the 1880, and on the expander if you single-link), or unless you are buying just the 1880i now without the expander and need 12 ports now. The only other benefit of the 12-port card is the expandable cache, which, if you are just using this as a home server/workstation (and serving mostly media/sequential items), will not make enough of a difference for the additional cost.
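To put rough numbers on the port math (a sketch assuming the commonly quoted HP SAS Expander figures: 9 SFF-8087 connectors, 4 lanes each, 36 lanes total, with 1 or 2 of those connectors used as the uplink to the Areca):

Code:
LANES_PER_CONNECTOR = 4
EXPANDER_LANES = 36

def total_drive_ports(card_lanes, uplink_connectors):
    """Expander lanes left after the uplink, plus card lanes not used by it."""
    uplink = uplink_connectors * LANES_PER_CONNECTOR
    return (EXPANDER_LANES - uplink) + (card_lanes - uplink)

for card, lanes in (("1880i (8 ports)", 8), ("1880ix-12 (12 ports)", 12)):
    print(f"{card}: single-link -> {total_drive_ports(lanes, 1)} drives, "
          f"dual-link -> {total_drive_ports(lanes, 2)} drives")
# 1880i:      single-link -> 36, dual-link -> 28
# 1880ix-12:  single-link -> 40, dual-link -> 32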
 
On closer inspection (the Newegg photo was dubious), it's an RJ11 port? So you can call the card and ask how it's doing? http://www.starline.de/uploads/pics/ARC-1880ix-12-SAS-Logo_01.jpg

I am more concerned about the cache, however, and how it would affect performance having 512MB on the base card vs 1-4GB. The cost difference is modest if it actually boosts performance noticeably. I never know - maybe one day I will use it for SSDs and might notice it then? I wouldn't want the two expanders to interfere, however, and I've read some posts about that. I'm not sure if there is a way to not use the onboard expander.
 
The RJ11 port is a serial port. The card comes with an RJ11-to-DB9 cable.
 
On closer inspection (the Newegg photo was dubious), it's an RJ11 port? So you can call the card and ask how it's doing? http://www.starline.de/uploads/pics/ARC-1880ix-12-SAS-Logo_01.jpg

I am more concerned about the cache, however, and how it would affect performance having 512MB on the base card vs 1-4GB. The cost difference is modest if it actually boosts performance noticeably. I never know - maybe one day I will use it for SSDs and might notice it then? I wouldn't want the two expanders to interfere, however, and I've read some posts about that. I'm not sure if there is a way to not use the onboard expander.

It might make a difference in a heavy multi-user environment, where specific, often-requested data could come from the cache (ACLs, databases, etc.), but otherwise I don't see you noticing any difference. I would much sooner take the $200-300 difference and buy more RAM for the machine, a new processor, or a faster video card - you will see so much more benefit from your machine for the money.
 
Thanks guys, you convinced me! Sorry, just want to be sure before dropping 500 bones. Luckily Newegg just mailed me a 20% off coupon for Areca (and other brand) controllers, so it lessens the sting >.<
 
After buying my Areca 1880ix-24 a couple of months ago, I finally started putting together my first large array - but of course, I'm having problems.

I have 12 x 2TB Hitachi drives; unfortunately, in Australia I could only get 8x 5K3000 and had to use 4x 7K3000 to fill the gap.

[attached screenshot: raidproblems2.jpg]


I built a RAID 6 array with a 64KB stripe size and installed Server 2008 on a separate RAID controller, and when I go to format my 20TB RAID array I see this:

[attached screenshot: raidproblems.jpg]


I cannot do anything with the unallocated space, and I cannot extend the 2TB partition.

Any ideas what's going on with this? This is my first large array and I want to iron out these problems before I put in my next array.
 
I'm using an Areca 1880ix-24 and have one array with 8 Hitachi 7K2000s. Now I have 12 5K3000s and started initializing them in Foreground mode; I'm putting them in a new array as I don't want to mix the drives. I cannot see my 8-drive 7K2000 array anymore - it's gone... Is that expected when using Foreground init, or what? The volume state is Normal, but I can't see the array in Windows anymore.
 
Any ideas what's going on with this? This is my first large array and I want to iron out these problems before I put in my next array.

Talon-
When you created the volume set, did you choose 64-bit LBA (in the Greater Than 2TB section)? If you chose No, it will only allow 2TB per volume, which is what it looks like happened. Also, do not choose the 4K option; it will top you out at 16TB per volume, which is also undesirable in your situation. Delete your volume set if this is the case and recreate it. Also, choose GPT instead of MBR if the Areca is not your boot device.
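To make those ceilings concrete, here's the arithmetic (option names vary between firmware revisions, but both the 2TB and 16TB limits fall out of 32-bit addressing multiplied by the block size; 64-bit LBA plus a GPT partition table removes the limit for any practical array size):

Code:
# Maximum addressable size with 32-bit block addressing = block_size * 2**32.
MAX_BLOCKS_32BIT = 2**32

for label, block_bytes in (("512-byte sectors (no 64-bit LBA, or an MBR partition table)", 512),
                           ("4K blocks / 4K clusters", 4096)):
    limit_tib = block_bytes * MAX_BLOCKS_32BIT / 1024**4
    print(f"{label}: ~{limit_tib:.0f} TiB maximum")
# 512-byte sectors -> ~2 TiB, 4K blocks -> ~16 TiB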
 
After buying my Areca 1880ix-24 a couple of months ago, I finally started putting together my first large array - but of course, I'm having problems.

I have 12 x 2TB Hitachi drives; unfortunately, in Australia I could only get 8x 5K3000 and had to use 4x 7K3000 to fill the gap.

[attached screenshot: raidproblems2.jpg]


I built a RAID 6 array with a 64KB stripe size and installed Server 2008 on a separate RAID controller, and when I go to format my 20TB RAID array I see this:

[attached screenshot: raidproblems.jpg]


I cannot do anything with the unallocated space, and I cannot extend the 2TB partition.

Any ideas what's going on with this? This is my first large array and I want to iron out these problems before I put in my next array.

Since the block device size is correct, it looks like you simply didn't use a GPT partition table; it's using MSDOS/MBR, which has a 2TB limit. Delete the partitions and re-partition the disk as GPT to use the full disk space.

EDIT: It looks like (from the system volume) that even though it's D:, you might be booting off the Areca array? Windows doesn't support booting from GPT on a BIOS (non-UEFI) system. If that is the case you will need to re-install. If you want to run Windows off the RAID array you should probably delete the volume set and this time create two volume sets: one small 80-120GB volume set which will be your boot drive (or whatever space you need for the OS/programs), and all the rest of the space in a second volume set.

Here is what mine look like:

Code:
root@dekabutsu: 11:10 AM :~# cli64 vsf info
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 WINDOWS VOLUME   40TB RAID SET   Raid6    129.0GB 00/00/00   Normal
  2 MAC VOLUME       40TB RAID SET   Raid6     30.0GB 00/00/01   Normal
  3 LINUX VOLUME     40TB RAID SET   Raid6    129.0GB 00/00/02   Normal
  4 DATA VOLUME      40TB RAID SET   Raid6   35712.0GB 00/00/03   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> GuiErrMsg<0x00>: Success.

CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 DATA 2 VOLUME    90TB RAID SET   Raid6   84000.0GB 00/01/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI>
 
Hello, I am currently looking to build a new file storage server for my company to replace an aging and filled-up 6-year-old Xserve. I have read through all 70-plus pages of this thread and received quite a bit of helpful advice. I am going to list the components that I plan to buy for our new server and would welcome any advice that anyone has.

It will be used as a file server for an office of 20 employees on a gigabit network. The files that we work with range from Word documents, PowerPoints, PDFs, etc., to video files that could be as large as or larger than a GB each.

I am looking to go either RAID 6 or RAID 60 and would welcome any thoughts on both. RAID 6 seems like the smart move over RAID 5 because of the extra protection against total failure. RAID 60 seems enticing because we would get that extra protection plus enhanced performance over RAID 6.
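For a rough feel of the capacity trade-off with 12 x 2TB drives (a sketch that assumes RAID 60 would be built as two striped 6-drive RAID 6 spans):

Code:
DRIVES, SIZE_TB = 12, 2

raid6_usable  = (DRIVES - 2) * SIZE_TB              # two drives' worth of parity for the whole set
raid60_usable = 2 * ((DRIVES // 2) - 2) * SIZE_TB   # two drives' worth of parity per 6-drive span

print(f"RAID 6 : {raid6_usable} TB usable, survives any two drive failures")
print(f"RAID 60: {raid60_usable} TB usable, survives two failures per span "
      f"(only two arbitrary failures guaranteed)")
# RAID 6 : 20 TB usable; RAID 60: 16 TB usable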

I had originally thought about getting a bigger case and just growing the array over time; however, I decided against that for the following reason. The files we store are normally accessed for a short time during the year and then sit idle, but they need to remain accessible if needed. If it takes us two years to fill this array up, then my plan would be to build another similar array (perhaps with 3TB drives if those are the sweet spot at the time) and throw the new array in the rack. The old array would stay online but would most likely only be accessed once a month or so if we needed an old file, so for the most part the drives can spin down. I believe another advantage of this would be that if I get the same RAID controller for the second array, it would give me a backup card that is getting used rather than just sitting on the shelf. Meaning that once we have two of these 2U cases in place, if the main one went down I could take the older array offline and put that card into the newer array until a replacement card could get here. I would welcome any thoughts on this concept.

We are also going to look into some UPS systems but I have not researched those yet.

Hard drives
12 HITACHI Deskstar 5K3000 Product Page

Raid Controller
Areca ARC-1880ix-12 Product Page

Case
NORCO RPC-2212 Black 2U Rackmount Server Case with 12 Hot-Swappable SATA/SAS Drive Bays Product Page

Motherboard
SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX Product Page

CPU
Intel Xeon E3-1220 Sandy Bridge 3.1GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1155 80W Quad-Core Server Processor Product Page

Fan Heatsink
Noctua NH-D14 120mm & 140mm SSO CPU Cooler Product Page

RAM - 8GB
Kingston 8GB (2 x 4GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) Server Memory Product Page

Power Supply
Athena Power AP-RRU2ATX70 2U EPS-12V 2 x 700W Mini Redundant Server Power Supply - OEM Product Page

A few questions I already have
  1. I am currently looking at the 1880ix, which I believe has an expander built in and will allow me to connect my 12 drives via the SFF-8087 mini-SAS connectors that the case supports. Is this a wise move rather than going with a card that does not include the expander and then adding an HP expander?
  2. Does anyone have any experience with this motherboard, or have a suggested motherboard that is known to work with the card and drives I have selected?
  3. Should I go ahead and load the server up with 16GB of RAM over the 8GB I currently have listed? Will I see much increase in performance? For the 80 dollars it would cost, is it almost a no-brainer?
  4. Does the type of power supply I have selected seem wise? We would like a redundant power supply, and I went with one rated as a server power supply.
  5. Are the Hitachi 5K3000s still the recommended drives for this type of configuration? I had originally looked at the 7K3000, which was around $109.00 more per drive - is that overkill?
  6. Does the case look good for what I want to do? I looked at the Norco 4220; however, I am not sure I want to go larger than 12 drives with this array. My thought is that once we fill this one up, we will build another 2U rack-mounted array with 12 drives (perhaps 3TB will be the sweet spot at the time).
  7. I plan to order more than 12 drives so we have backups on the shelf. Any recommendations on how many backup drives might be wise to have on hand? Also, should I order a couple extra in case some arrive DOA, so getting the array up and running is not dependent on waiting for RMA drives to come back?

Thank you for the time and a big thank you for everyone who has contributed to this thread, it is a wealth of information.

James Newman
 
1) You're putting 12 drives on a card with 12 ports, if I'm following correctly. Why do you think you need an expander?

2) Supermicro boards are generally stable and they're well-supported. You might do a bit better with an Intel board, but it's not worth fretting over.

3) For $80 I'd do it without thinking much. It's been 3 years since I've built any machines -- even a desktop -- with less than 12 gigs of memory. You'll have more file cache; on a file server, that's a good thing.

4) I haven't used the Athenas myself. If you want DIY redundant supplies, it seems like an appropriate choice.

5) It doesn't sound like you need a lot of IOPS. What are your estimates? What capacity planning did you do?

6) I don't like Norco cases. We usually get the Supermicro cases from our vendor (again, when we're not buying from a server vendor). I think you'll save yourself some trouble by not running the OS drives on the Areca card and instead doing RAID1 on the motherboard. A 16- or 24-drive server might be more appropriate.

7) With 12 drives, I don't think you're too likely to have infant mortality or DOA. Are you thinking of ordering an extra drive because you're in a huge rush? That's up to you -- time vs. money, and so on. If you're ordering cold spares anyway, what does a DOA matter? You immediately consume a cold spare and get deployed, then order a replacement for the cold spare. You don't care: you're up and running. I'd get one cold spare, I guess. How many hot spares will you run? What RAID configuration are you going to use? Seems like RAID1 over 2 drives for the OS leaves 10 drives; RAID6 over those with zero or one hot spare would be the way to go.
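Rough usable-space numbers for that suggested layout (assuming the 2TB drives from the parts list above):

Code:
SIZE_TB, BAYS, OS_MIRROR_DRIVES = 2, 12, 2

for hot_spares in (0, 1):
    data_drives = BAYS - OS_MIRROR_DRIVES - hot_spares
    usable_tb = (data_drives - 2) * SIZE_TB   # RAID6 reserves two drives' worth of parity
    print(f"{hot_spares} hot spare(s): {data_drives}-drive RAID6 -> {usable_tb} TB usable")
# 0 hot spares: 10-drive RAID6 -> 16 TB; 1 hot spare: 9-drive RAID6 -> 14 TB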
 
James911, the ARC-1880IX-12 is pretty big, I'm not sure if it'll fit in a 2U chassis with vertical slots. You may want to double check that before placing an order.
 