ARECA Owner's Thread (SAS/SATA RAID Cards)

The motherboard should definitely work fine with the Areca. Arecas are one of the RAID cards that are better than others compatibility-wise, and Supermicro is one of the motherboard brands that is also better than others compatibility-wise. I have never seen an Areca controller not work with a Supermicro board, and we use them with a variety of different motherboard models.


6) I don't like Norco cases. We usually get the Supermicro cases from our vendor (again, when we're not buying from a server vendor). I think you'll save yourself some trouble by not running the OS drives on the Areca card and instead doing RAID1 on the motherboard. A 16- or 24-drive server might be more appropriate.

7) With 12 drives, I don't think you're too likely to have infant mortality or a DOA. Are you thinking of ordering an extra drive because you're in a huge rush? That's up to you -- time vs. money, and so on. If you're ordering cold spares anyway, what does a DOA matter? You immediately consume a cold spare and get the box deployed, then order a replacement for the cold spare. You don't care: you're up and running. I'd get one cold spare, I guess. How many hot spares will you run? What RAID configuration are you going to use? RAID1 over 2 drives for the OS leaves 10 drives; RAID6 over those with zero or one hot spares seems like the way to go.
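
As a rough sketch of how the space works out under that layout (plain Python; the drive count and 3TB size are just the figures being discussed here, and RAID6 usable space is simply (n - 2) x capacity):

Code:
# Back-of-the-envelope capacity for the layout suggested above (assumed figures).
DRIVE_TB = 3              # per-drive capacity being discussed
TOTAL_BAYS = 12

for hot_spares in (0, 1):
    data_drives = TOTAL_BAYS - 2 - hot_spares    # 2 drives go to the RAID1 OS mirror
    usable_tb = (data_drives - 2) * DRIVE_TB     # RAID6 gives up two drives to parity
    print(f"{hot_spares} hot spare(s): {data_drives} data drives, {usable_tb} TB usable")
# -> 0 hot spare(s): 10 data drives, 24 TB usable
# -> 1 hot spare(s): 9 data drives, 21 TB usable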

If it's at a business where someone is around all the time, or at least every day, I prefer cold spares to hot spares. When you use hot spares it messes up your RAID order, and god forbid you ever have to recover the array from scratch, that can cause complications. I say go cold spare, especially with RAID6. I usually just keep one spare drive on hand.

I got my 3TB 5700 RPM Hitachi drives for around $120 apiece. Definitely worth going 3TB over 2TB for the long run, IMHO.

I think there is no reason to use motherboard RAID1 unless you run Windows. Even then I would be reluctant to do any RAID that way, as I don't really trust software RAID at all. The only time it ever makes sense is when the OS is Windows, since you can't easily boot off a live CD and access the RAID array the way you can with a Linux distro or Solaris, etc...

James911, the ARC-1880IX-12 is pretty big, I'm not sure if it'll fit in a 2U chassis with vertical slots. You may want to double check that before placing an order.

He is right. You are going to have problems with that chassis. I think I would just go with the 4U unless you are limited in rack space or something. The 2U is going to have problems with an 1880ix-12 because it's full height, and with an ARC-1880i + HP SAS expander because the SAS expander is full height. You might be able to get away with an ARC-1880i + Intel SAS expander, though.

If you must have 2U and/or want to use full-height cards, you are going to need something like this Supermicro chassis:

SC826TQ-R800UB

Of course, it doesn't look like there are any socket 1155 Supermicro motherboards that really support the risers you'd need for that chassis. You would then probably want to go with something like this:

http://www.supermicro.com/products/motherboard/Xeon3000/3400/X8SIU.cfm?IPMI=Y

The cheaper and simpler solution is probably just to get the 4U chassis. The Norco 4U isn't THAT much more expensive compared to the other routes, you get extra slots for future expansion, and it's probably quieter too (especially if you swap out the fans for bigger ones).
 
Thank you everyone for your feedback. I have taken it into account, made some changes to my current build plans, and put them below.


mikeblas
To answer a couple of your questions.

A high estimate would be that we will be generating about 6 to 8 TB of data a year that will need to go on this array.

My worry about DOA drives is just that when I read over the Newegg reviews there seem to be a lot of people complaining about DOA drives. However, I guess like anything else, a much larger percentage of people who have a bad experience spend the time to post than do the people who have a good experience. So it probably is not as bad as it seems from reading those reviews.


  1. I went with the Norco 4220 chassis thanks to the advice from a couple of people that the 2U case I had chosen was going to be too small for the Areca card
  2. Went with the 3TB Hitachi drives over the 2TB. This will push my raid volume to around 24TB in a raid 60 configuration, which is great for us.
  3. Doubled the ram for a total of 16GB for an extra $80.00

I'm still looking for any advice on the following
  1. Motherboard choice - I am not tied to this motherboard in any way; I just wanted a server motherboard and this one had a nice price
  2. Redundant power supply - Again, I am not tied to this power supply; I just wanted a server-rated redundant power supply and this one seemed reasonable. Is 700 watts enough for this config?
  3. CPU choice - The CPU choice was basically made based on the motherboard choice.

Raid Drives
12 x HITACHI Deskstar 5K3000 HDS5C3030ALA630 (0F12460) 3TB 32MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive -Bare Drive
$120.00 Product Page

OS Drives
2 x SAMSUNG Spinpoint MP4 HM250HJ 250GB 7200 RPM Product Page

Case NORCO RPC-4220 4U Rackmount Server Chassis Product Page

Raid Controller
Areca ARC-1880ix-12 Product Page

Motherboard
SUPERMICRO MBD-X9SCM-F-O LGA 1155 Intel C204 Micro ATX Intel Xeon E3 Server Motherboard Product Page

CPU
Intel Xeon E3-1220 Sandy Bridge 3.1GHz Product Page

Fan Heatsink
Noctua NH-D14 120mm & 140mm SSO CPU Cooler Product Page

Ram
2 x Kingston 8GB (2 x 4GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333. Total of 16 GB. Product Page

Power Supply Athena Power AP-RRU2ATX70 2U EPS-12V 2 x 700W Mini Redundant Server Power Supply - OEM Product Page



I have not spent any time shopping around for the best prices yet. I want to finalize the build first, but without taxes and shipping this currently comes out to just under $4000.00. That is with two extra 3TB drives to put on the shelf. Does that seem pretty good for this type of system?
 
If it's at a business where someone is around all the time, or at least every day, I prefer cold spares to hot spares. When you use hot spares it messes up your RAID order
What does "messes up your RAID order" mean?

I think there is no reason to use motherboard RAID1 unless you run Windows.
Lots of reasons to use motherboard RAID1 for the OS. The most obvious is capacity. If you use two ports on a 12 port card, you're left with 10 ports for your primary storage. The motherboard is perfectly capable of handling access to the OS volume with its onboard RAID controller but those same ports can't participate in the presumably high-performance application dedicated to the RAID controller card. Using the RAID ports on the motherboard lets the two ports on the RAID card be available for the core functionality of the machine -- in this case, the RAID60 array.

That is, the compelling reason to use the motherboard RAID is that doing so substantially increases the capacity of the RAID card while slightly increasing its performance, too.
A high estimate would be that we will be generating about 6 to 8 TB of data a year that will need to go on this array.
Sounds like you haven't done any capacity planning. If you don't have a demanding application, it's going to be hard to go wrong.

My worry about DOA drives is just that when I read over the Newegg reviews there seem to be a lot of people complaining about DOA drives. However, I guess like anything else, a much larger percentage of people who have a bad experience spend the time to post than do the people who have a good experience. So it probably is not as bad as it seems from reading those reviews.
From the reviews, the reader doesn't know how many units of a particular drive model have been shipped or how many of those units actually had problems. You only know how many people thought they had a problem with the drive and decided to post about it -- you can't even tell how many drives have shipped. You also don't know if the cause is NewEgg's infamously bad packaging for disk drives, a compatibility issue, a setup issue, or a problem endemic to the drive model itself.

It's pretty obvious, I think, that the newegg reviews are worthless for determining reliability.
 
What does "messes up your RAID order" mean?

It means just that. Say you have a 20-disk chassis and slots 1-19 comprise a RAID6 array. Let's say slot 20 is a hot spare. Six months down the line slot 5 fails, and since you have a hot spare the array immediately rebuilds to slot 20 (that's its job).

Well, now slot 20 has taken the place of slot 5, and the physical order of the array is no longer the same as the logical order, as it would be:

Physical slot in logical order now becomes:

1 2 3 4 20 6 7 8 9 10 11 12 13 14 15 16 17 18 19.

Now let's say you replace the bad disk (slot 5) with a new drive and it becomes a hot spare again, and a little while later slot 12 fails and immediately starts rebuilding to slot 5.

Now the physical slot in logical order looks like:

1 2 3 4 20 6 7 8 9 10 11 5 13 14 15 16 17 18 19
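
A toy sketch of that bookkeeping (plain Python, using the hypothetical slot numbers from the example above) -- every hot-spare rebuild swaps another physical slot into the logical order:

Code:
# Toy model of how hot-spare rebuilds scramble the physical vs. logical order.
logical_order = list(range(1, 20))   # physical slots 1-19 make up the array
hot_spares = [20]                    # physical slot 20 starts out as the hot spare

def fail_and_rebuild(failed_slot):
    """The first available hot spare takes the failed member's logical position."""
    spare = hot_spares.pop(0)
    logical_order[logical_order.index(failed_slot)] = spare
    hot_spares.append(failed_slot)   # the replaced slot becomes the next hot spare

fail_and_rebuild(5)                  # first failure: slot 20 takes slot 5's place
print(logical_order)                 # [1, 2, 3, 4, 20, 6, ..., 19]

fail_and_rebuild(12)                 # later: slot 5 (now the spare) takes slot 12's place
print(logical_order)                 # [1, 2, 3, 4, 20, 6, ..., 11, 5, 13, ..., 19]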

If something happened and the array metadata got lost, or the array went into a failed state (like a backplane problem taking 4 drives offline at once), you would normally recover the array by re-creating it with a no-init/rescue option. Well, that isn't going to work unless the logical/physical order is the same. Basically, it complicates the recovery process if you ever have to do this. A hot spare is only really useful, IMHO, when it's a colo'd box that you don't have easy physical access to and it would normally take many days or a week+ to get the drive replaced. If it's somewhere you work every day, then it's maybe half a day slower than when a hot spare would have rebuilt.

The situation where it makes sense is when you have something very dependent on random I/O, as RAID6/5 random I/O goes to shit when degraded (sequential reads/writes for most file-server operations like the OP's would still be fine). In this case you might want the rebuild to happen ASAP so the machine isn't running with degraded performance as long.

For most home/small business use, the added complexity/difficulty of recovering the array outweighs the advantage of a hot spare IMHO, not to mention you have a slot you can't use for live space. A cold spare is a good idea IMHO, though.


Lots of reasons to use motherboard RAID1 for the OS. The most obvious is capacity. If you use two ports on a 12 port card, you're left with 10 ports for your primary storage. The motherboard is perfectly capable of handling access to the OS volume with its onboard RAID controller but those same ports can't participate in the presumably high-performance application dedicated to the RAID controller card. Using the RAID ports on the motherboard lets the two ports on the RAID card be available for the core functionality of the machine -- in this case, the RAID60 array.

That is, the compelling reason to use the motherboard RAID is that doing so substantially increases the capacity of the RAID card while slightly increasing its performance, too.

Sounds like you haven't done any capacity planning. If you don't have a demanding application, it's going to be hard to go wrong.

Honestly, I still don't quite get your point. Going the route of using shitty onboard RAID (which can break if you replace the motherboard, and weird driver issues can cause corruption) seems to just waste disks, disk space, and power, and isn't nearly as efficient. Not to mention that going the RAID array route you get superior performance.

Yes, I suppose if you still did RAID1 for the OS and had two disks dedicated to the OS (I would never do this), then it might make sense to use the onboard controller, but this is not what I am saying.

I usually buy a controller with as many ports as my chassis has disk slots, so the limitation is usually the chassis, not the controller, anyway. I don't want to waste one slot, let alone two, just to have my OS on a dedicated array/disk.

Are you suggesting that, say, in the OP's case with 12 disks he does 2 disks in RAID1 for the OS and another 10 disks for RAID, so he has another 2 ports later to expand? That is indeed true if he isn't chassis-limited as well (like you would be in, say, that 12-disk Supermicro chassis), but I say just use 12 high-capacity disks in RAID6. You can then slice up the array and make a relatively small boot partition that is only 50 or 80 GB (or whatever you need). Most OS volumes do not need to be all that large. The disadvantage of this method (no dedicated disks) is that you lose that 50 or 80 GB of space from your 10TB+ volume. For most this is not a big deal.

The advantage is that your OS volume can now be RAID6 instead of RAID1. RAID1 has a higher chance of corruption during a rebuild, as your other disk can have some bad sectors, where RAID6 would have double parity. Your OS volume is also on very high speed RAID (most software RAID1 gives zero speed advantage), so your boot times, software installs, etc. are faster.


Again here is how I have my raid volumes laid out:

Code:
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 WINDOWS VOLUME   40TB RAID SET   Raid6    129.0GB 00/00/00   Normal
  2 MAC VOLUME       40TB RAID SET   Raid6     30.0GB 00/00/01   Normal
  3 LINUX VOLUME     40TB RAID SET   Raid6    129.0GB 00/00/02   Normal
  4 DATA VOLUME      40TB RAID SET   Raid6   35712.0GB 00/00/03   Normal
===============================================================================

All of these volumes are going over the same set of 20x2TB drives. My 36000 GB of usable space went down to 35712 GB in order to support two ~120GB volumes for Linux/Windows (I barely use Windows but made it anyway) and one for Mac OS if I ever wanted to. If I went the route you said, I would need another 2 disks in non-hot-swap slots (as this is a 20-disk Norco with 2x non-hot-swap slots), and it would just mean more power and less performance from my OS arrays.

I also have a JBOD array hooked up to two JBOD chassis via SFF-8088 on a different controller. I don't ever want to boot off that one, so I just made it a single volume and got all 30 of my disks in usable space:

Code:
CLI>   # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 DATA 2 VOLUME    90TB RAID SET   Raid6   84000.0GB 00/01/00   Normal
===============================================================================
GuiErrMsg<0x00>: Success.
 
If something happened and the array metadata got lost, or the array went into a failed state (like a backplane problem taking 4 drives offline at once), you would normally recover the array by re-creating it with a no-init/rescue option. Well, that isn't going to work unless the logical/physical order is the same.
This doesn't seem like a sensible concern. If it is an issue for a particular application, it's easy enough to back up the controller configuration... which should be done for disaster recovery, anyway.

Honestly I still don't quite get your point.
Sorry; I'm not sure how to present it in a simpler or more direct way. The issue might be that you're hung up on insisting that motherboard RAID performance is so bad that it's not acceptable for use in mounting the OS drives. In some applications, where data is being written to or read from the OS volume, that can be a concern. Lots of applications -- such as this file server -- won't read to or write from the OS much after boot. Instead, data is going to or coming from the RAID-hosted storage.

your OS volume is also on very high speed RAID (most software RAID1 gives zero speed advantage), so your boot times, software installs, etc. are faster.
Mainline server performance is important; a slow file server is slow because it's slow at serving files, not because it's slow at installing applications or rebooting.
 
This doesn't seem like a sensible concern. If it is, it's easy enough to backup the controller configuration.

Sorry; I'm not sure how to present it in a simpler or more direct way. The issue might be that you're hung up on insisting that motherboard RAID performance is so bad that it's not acceptable for use in mounting the OS drives. In some applications, where data is being written to or read from the OS volume, that can be a concern. Lots of applications -- such as this file server -- won't read to or write from the OS much after boot. Instead, data is going to or coming from the RAID-hosted storage.

Mainline server performance is important; a slow file server is slow because it's slow at serving files, not because it's slow at installing applications or rebooting.

Ok. So what is the advantage of doing it your way vs doing it the way I said?


It's not like I think RAID1 is dogshit slow and unusable.

My point is, here are the advantages of doing it the way I said:
  • Using the hardware RAID means you get that super fast performance on both the OS AND data volumes (not just one).
  • You can make it RAID6, so you get better redundancy and are less likely to hit corruption on a single-disk-failure rebuild (which is what most rebuilds will be).
  • If the OS volume is the first volume it will rebuild very quickly (way faster than a RAID1 array would). For an 80 GB volume on a 20x2TB disk array it would rebuild in under 3 minutes (a rough sanity check is sketched after this list).
  • You don't have to buy two extra drives (less cost).
  • You don't have to waste a bunch of unused space (as the drives you buy are likely going to be a little overkill in size for just the OS, and thus lower your space usage efficiency).
  • You don't have to power two extra drives just for the OS, so less power usage.
  • You don't have to worry about the driver ever just writing to one disk and causing corruption. I have seen software RAID do strange things, so this is my biggest worry right here.
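
A rough sanity check on that rebuild-time figure (plain Python; the ~100 MB/s per-drive rate is an assumed ballpark, not a measurement, and real controllers add overhead):

Code:
# Rough estimate: rebuilding only a small first volume on a wide RAID6 set.
volume_gb = 80
drives = 20
data_drives = drives - 2          # RAID6: two drives' worth of parity
per_drive_mb_s = 100              # assumed sustained rate

per_drive_gb = volume_gb / data_drives            # that volume's footprint per drive
seconds = per_drive_gb * 1024 / per_drive_mb_s
print(f"~{per_drive_gb:.1f} GB per drive, ~{seconds:.0f} s of raw rebuild writing")
# -> ~4.4 GB per drive, ~45 s -- comfortably inside the "under 3 minutes" quoted.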


Disadvantages:

  • Instead of the OS taking none of your RAID card storage, you are losing (in the case of a large array) a small percentage of total storage to allocate to your OS. In the case of my 20x2TB disks I made 3 OS volumes totalling 288 GB of space (way more than I needed), which was less than 1% of the space of my combined disks.
  • If your array is in a failed state you can no longer boot the OS, as the OS volume goes down with the data volume (unless a drive failed mid-rebuild, since the OS volume rebuilds first and can be merely degraded while the data volume is failed). If Linux/Unix, you can easily boot from a live CD instead.

So is there some other disadvantage I am not seeing? It seems like the advantages far outweigh the disadvantages.

It's not that I need you to explain things more simply. When I say I don't get it, I just still don't see why the reasons you listed make it such a good way to go.
 
iSCSI/NFS/CIFS SAN build for use at home or a small business office. I've been using this build for several years and it has been really reliable. It's configured as RAID5 using 5x 1TB drives. Performance has been great considering the drives are WD Green models. I recently added another RAID cage for a total of 10 swappable bays. I would absolutely go the Areca route the next time around.

Check it out:
How To Build a Custom SAN
 
Hi, I would like to know the average self-initialization and HDD detection time for the ARC-1231ML. Can anyone assist?

I used an Adaptec 31205 before, and the self-initialization took about 40 seconds and HDD detection 30 seconds, which is extremely annoying to me as a PC user.

The Adaptec card is SAS/SATA, and the ARC-1231ML is SATA only. I hope this might make the boot time shorter.

I am looking for a hardware RAID controller that can initialize quickly at boot.

Thank you.
 
So is there some other disadvantage I am not seeing? It seems like the advantages far outweigh the disadvantages.
Since you've provided nothing quantitative, you're succumbing to your own admitted bias with subjective evaluations of the outcomes. One in particular -- that you don't think motherboard RAID is stable -- seems to be completely coloring your position. To help weight your other "Advantages", you've assumed dumb decisions, such as buying the wrong drives, to reinforce them.

The weight of the advantages and disadvantages are application dependent. The main problem might be that I mistook James911's chassis for a 14-bay unit, when it really has 12 bays. Sticking with that case, it makes sense to use the 12 ports on the RAID card. Given a 14-bay chassis, I'd stick with the 12-port card and use two ports on the motherboard to run the OS drives in RAID1.

It's pretty clear his application isn't performance critical, so I think either solution for the OS drives would be completely adequate.
 
Disadvantage: if your OS is on a 20-drive array (vs. 1 or 2 drives), it has to spin up those drives whenever the server does anything at all, even if you're not accessing anything on the large storage portion. If your server only acts as a file server, fine, but if, say, it's also a SAB or web server, you might want those 20 drives spun down most of the time for power savings.
 
Hi, I would like to know the average self-initialization and HDD detection time for the ARC-1231ML. Can anyone assist?

I used an Adaptec 31205 before, and the self-initialization took about 40 seconds and HDD detection 30 seconds, which is extremely annoying to me as a PC user.

The Adaptec card is SAS/SATA, and the ARC-1231ML is SATA only. I hope this might make the boot time shorter.

I am looking for a hardware RAID controller that can initialize quickly at boot.

Thank you.

On a current box with a 1231ML connected to 12 Seagate ES.2 1TB Drives, from the time the Areca BIOS message flashes up on the screen until the time that the next option ROM screen pops up is 28 seconds if I just let it sit there, 25 seconds if I hit escape during Firmware Init message.

How often do you reboot that it becomes that annoying?
 
On a current box with a 1231ML connected to 12 Seagate ES.2 1TB Drives, from the time the Areca BIOS message flashes up on the screen until the time that the next option ROM screen pops up is 28 seconds if I just let it sit there, 25 seconds if I hit escape during Firmware Init message.

How often do you reboot that it becomes that annoying?

Thanks for the prompt feedback. Does the 28 sec include HDD detection time as well?

Rebooting the PC at the moment happens only 2-3 times a day. The frequency is normally high during new system setup, where I'm repeatedly getting into the BIOS or rebooting to boot from an optical drive or USB. With the Adaptec the additional 40+30 seconds were just too long. How I miss the old SCSI card that took only 10 seconds for everything :D
 
The 28 seconds are from the time the Areca message first pops up on the screen until the next option ROM message pops up on the screen.
 
Hi, I would like to know the average self-initialization and HDD detection time for the ARC-1231ML. Can anyone assist?

I used an Adaptec 31205 before, and the self-initialization took about 40 seconds and HDD detection 30 seconds, which is extremely annoying to me as a PC user.

The Adaptec card is SAS/SATA, and the ARC-1231ML is SATA only. I hope this might make the boot time shorter.

I am looking for a hardware RAID controller that can initialize quickly at boot.

Thank you.

I have a 1280, so double the ports and slower (because of more ports), but you can see how long it took here:

http://www.youtube.com/watch?v=LpM40a684cM

Keep in mind it's using a Supermicro board, which itself takes quite a while to initialize (before the machine gets video).

I would expect a 1230 with 0.4s staggered power-up to be around 4-5 seconds faster. It is slower if not all the disks are hooked up.

When I added an ARC-1880x hooked up to two SAS expanders, my initialization got really long, lol:

http://www.youtube.com/watch?v=NAb1ZUwHxg8



Since you've provided nothing quantitative, you're succumbing to your own admitted bias with subjective evaluations of the outcomes. One in particular -- that you don't think motherboard RAID is stable -- seems to be completely coloring your position. To help weight your other "Advantages", you've assumed dumb decisions, such as buying the wrong drives, to reinforce them.

The weight of the advantages and disadvantages are application dependent. The main problem might be that I mistook James911's chassis for a 14-bay unit, when it really has 12 bays. Sticking with that case, it makes sense to use the 12 ports on the RAID card. Given a 14-bay chassis, I'd stick with the 12-port card and use two ports on the motherboard to run the OS drives in RAID1.

It's pretty clear his application isn't performance critical, so I think either solution for the OS drives would be completely adequate.

If I had to choose someone who hasn't provided anything 'quantitative', I would have to say that fits more with you, who has barely provided any reason for adding the additional two disks with software RAID on the onboard controller. I am seeing a lot more advantages going the route I described than what you're listing. OK, so you disagree with software RAID and corruption. Fine, strike that one off the list; things still seem to lean towards the advantages.

You say I have assumed 'dumb decisions' such as buying the wrong drives.

I don't think I would agree with that. I am just thinking exactly how I would do things if I were to go that route and what the downside would be.

OK, so the cheapest drive on Newegg is around $35-40 in the 160-250GB size range. That is already overkill in disk space for an OS drive for me. When for barely $10 more you can double or triple your disk size up to 1TB, I would probably go that route just on the off chance I ever needed the space, since the price difference is very small for several multiples of the capacity; otherwise you are paying a big premium in price per GB for those small disks.

So it seems to me you are either paying a huge premium in cost/GB for really tiny drives (even the smallest/cheapest ones I can buy are still overkill), or I can spend a tiny bit more money and get triple the space, which is even more overkill. One of the biggest reasons I would go with bigger drives is possibly repurposing them for another use in the future, where the extra disk space would be usable.

I would think many people would opt to pay the extra $10-15 for 1TB drives instead of 160GB and I don't consider that to be a 'dumb decision.'

At least LatexRat lists a valid reason/possible advantage to doing it your way. Of course in my case it would never help, because my main storage array (36TB) and my OS storage array (128GB) are always active, so enabling any kind of power management/spin-down would be useless as it would never kick in anyway, but I could see how it could be advantageous in some configurations.
 
I am seeing a lot more advantages going the route I described than what you're listing.
You've said this before, and your posts drive that fact home. Because you're also over-weighting the slightest advantage since it's an advantage to your preference, I think you're not open to alternative solutions or haven't experienced enough scenarios to realize that different compromises are more appropriate for alternative situations. That is particularly demonstrated by the fact that you're eager to eliminate one perceived complexity only to exchange it for another that's actually at least as strenuous to manage or recover.
 
So it seems to me you are either paying a huge premium in cost/GB for really tiny drives (even the smallest/cheapest ones I can buy are still overkill), or I can spend a tiny bit more money and get triple the space, which is even more overkill. One of the biggest reasons I would go with bigger drives is possibly repurposing them for another use in the future, where the extra disk space would be usable.

Not only that, you could short stroke the 1TB drives and gain a significant speed advantage by narrowing the track usage and therefore the head movement. You can double or more your performance for random I/O, which is much of what an OS drive does.

More Info Here:
http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html
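
To put rough numbers on the trade-off (plain Python; the prices are just example figures in line with this thread, and the seek model is deliberately crude -- average seek distance over a uniformly used region scales with the region's width):

Code:
# Toy look at short-stroking a 1TB drive to its first 100GB (assumed figures).
options = {
    "160GB drive, full":        (41.0, 160),    # example price, usable GB
    "1TB drive, full":          (50.0, 1000),
    "1TB drive, short-stroked": (50.0, 100),
}
for name, (price, usable_gb) in options.items():
    print(f"{name:>26}: ${price / usable_gb:.3f}/GB")

# Crude seek model: confining I/O to the first 10% of the LBA range cuts the
# average head travel to roughly 10% of the full-stroke case.
print(f"average seek span vs. full stroke: {100 / 1000:.0%}")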
 
Not only that, you could short stroke the 1TB drives and gain a significant speed advantage by narrowing the track usage and therefore the head movement. You can double or more your performance for random I/O, which is much of what an OS drive does.
It might, but it shouldn't. If you're thrashing the OS volume, then there's probably something wrong with your architecture. Meanwhile, short-stroking completely trashes the cost per gigabyte ratio, which houkou onchi has been trying to optimize.
 
It might, but it shouldn't. If you're thrashing the OS volume, then there's probably something wrong with your architecture.
It's refreshing to see such an apt comment here.

[Aside: That Tom's article has some major flaws. E.g., the access time test. I don't know if it's the amateur nature of that readership, or a general sign of the times. But, you'd be ridiculed into oblivion if you presented that paper at a Usenix conference 15+ years ago.]

Meanwhile, short-stroking completely trashes the cost per gigabyte ratio, which houkou onchi has been trying to optimize.
Careful. It sounds like your emotions are causing you to do the kind of "selective sniping" that you attributed to houkouonchi.

[From my perspective, you (2) guys are having a very productive (and educational) discussion/argument. If you can keep it focused on the technical, everyone will benefit.]

Anyway, what is this "short-stroking"? Sounds like a schoolyard joke:). [Please don't answer. (I was using the technique of "cylinder-ranging" before most of you existed.)]

-- UhClem "Forward! ... into the past."
 
It might, but it shouldn't. If you're thrashing the OS volume, then there's probably something wrong with your architecture. Meanwhile, short-stroking completely trashes the cost per gigabyte ratio, which houkou onchi has been trying to optimize.

It would only matter if data gets spread out on the filesystem/partition and isn't all written near the beginning of the drive anyway. Of course, with Windows there is a good chance that can happen.

Of course, going the array method you're basically doing an extreme version of short-stroking, because in a case like mine your OS volume is all in the top 1% of the disk.

You've said this before, and your posts drive that fact home. Because you're also over-weighting the slightest advantage since it's an advantage to your preference, I think you're not open to alternative solutions or haven't experienced enough scenarios to realize that different compromises are more appropriate for alternative situations.

You say that I am over-weighting the slightest advantage? I mean, you say to use onboard RAID and two other disks in RAID1 just so you don't lose that 50-100GB (or whatever it is you use for an OS volume) out of your total RAID storage. Correct me if I am wrong, but that seems to be pretty much your entire argument here?

So in a 12x3TB array with an 80GB boot slice, you only get to use 29920GB out of your 30000GB. The array is now at only 99.73% of its full capacity, and to get that back you'd pay the price of buying two extra drives, powering two extra drives, and using physical slots on the chassis for two extra drives as well. I am sorry, but a 0.27% increase in RAID capacity is what I would consider a very slight advantage.
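
Double-checking that arithmetic (plain Python, using the figures above):

Code:
# Boot-slice overhead on a 12 x 3TB RAID6 set.
drives, drive_gb, boot_slice_gb = 12, 3000, 80

raid6_usable = (drives - 2) * drive_gb       # 30000 GB
data_volume = raid6_usable - boot_slice_gb   # 29920 GB
print(f"data volume: {data_volume} GB, "
      f"boot slice overhead: {boot_slice_gb / raid6_usable:.2%}")
# -> data volume: 29920 GB, boot slice overhead: 0.27%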

Also, I am sorry, but your boot volume being RAID6 instead of RAID1 is not a slight advantage in my book but a big one. I am sure others would agree. I have seen so many cases where data was lost or an array wouldn't rebuild due to bad sectors on both RAID1 and RAID10, where RAID6 would have handled it just fine with no data loss.

I do realize there are some very specific situations in which doing what you said makes sense; however, for a lot of situations it doesn't. The only situation in which I myself would ever use a separate disk or RAID1 array for a boot volume is with Solaris/ZFS, and that is simply because I can't carve up the ZFS pool and take a small percentage from all the disks to use as a boot volume, so I am kind of forced to go that route.

That is particularly demonstrated by the fact that you're eager to eliminate one perceived complexity only to exchange it for another that's actually at least as strenuous to manage or recover

I am sorry but this is just simply incorrect. I still standby my statement that hot-spares should not be used unless there is a very specific reason to like I stated (degraded performance causes major issues and rebuild must happen ASAP, etc..).

Granted, if you don't have all that many disk failures and/or the array never gets fucked up in a way that it has to be restored, then yes, using hot spares does not matter.

I am guessing you have not had to restore RAID arrays from systems that got into a fucked state very often from someone pulling the wrong disk, a RAID controller dying, etc., like I have. I am the go-to guy when it comes to RAID recovery at the company I work for, and we have over 1000 servers with RAID arrays on Areca, LSI, and 3ware controllers. I can't even count how many times I have had to recover an array, and I will tell you right now, when you do, disk order is always one of the most important things you need to get right.

Yes, it is true the average user will never get their array into a fucked state where it has to be recovered, but it can still happen. I had a 20-disk array based on 1TB Seagate drives, and within only 2 years of operation I had 5 disk failures. If I had been using a hot spare, that would have caused some major complexity if I ever had to restore the array.

When you said back up the configuration earlier, I assume you are talking about Areca controllers, right? Because the only way that would ever be useful in a hot-spare setup is if you made sure it got backed up (and kept the old copies) every time a disk failed. Oh yeah, and if you ever recover an array with hot spares, you are pretty much 100% going to have to get the drives back in the original order, which you had better be pretty damn careful doing so as not to f*ck it up. And it's not like you can just 'import' this backup, which has all the RAID disk order and everything, so you are back to 100%; it doesn't work that way.

So I gotta ask: how many times have you had to recover an array that was in a failed state or simply couldn't be read/imported/etc.? I have done it on 3ware, LSI, and Areca at least 10 times, but I am sure more, as it is now too many to remember.
 
It might, but it shouldn't. If you're thrashing the OS volume, then there's probably something wrong with your architecture. Meanwhile, short-stroking completely trashes the cost per gigabyte ratio, which houkou onchi has been trying to optimize.

Well, it doesn't make it THAT much worse. A 160GB Seagate at Newegg is about $41 delivered, where a 1TB Seagate is about $50 delivered. Percentage-wise it is a decent swing, but for the performance you get now, and the extra 840GB you get later when you repurpose it as a whole drive, $9 isn't all that much in the grand scheme of things. Also, whether or not you are thrashing the OS volume, it makes a difference. Before SSDs hit big, I used to short-stroke the 150GB and then 300GB VelociRaptors to 100GB each and then RAID0 them, and it made a HUGE difference, much more than just the RAID0.
 
Thanks for all the help, guys. Looks like I did install the OS on the wrong array, so that fixed most of the issues. Can't believe I did such a stupid thing. I redid the volume and that fixed the rest.

Now I just have to get the backplane to work properly on my Norco 4224, since three bays aren't working.
 
...
Of course, going the array method you're basically doing an extreme version of short-stroking, because in a case like mine your OS volume is all in the top 1% of the disk.
Shouldn't you have put it in the middle 1%? (Resist the gut response that "outer is faster"--penny-wise & pound-foolish).

--UhClem
 
Shouldn't you have put it in the middle 1%? (Resist the gut response that "outer is faster"--penny-wise & pound-foolish).

--UhClem

I don't know if that really would make a difference unless seeks concentrated in the middle of the platter are faster than ones in the outer edge. I know sequential reads/writes are faster on the outer edge (beginning of the disk), but I'm not sure in the case of seeks/random I/O.

Also in the case of a raid array this is not necessarily feasible because then you have to break up your huge chunk of space into multiple volumes which some people (like me) would not want to do.
 
I don't know if that really would make a difference unless seeks concentrated in the middle of the platter are faster than ones in the outer edge.
It does make a difference (further below), but ...
Seeks concentrated in the outer edge are faster than seeks concentrated in the middle, because there are more sectors per track on the outer edge. Therefore, the heads need to traverse fewer (outer) cylinders to "travel across" a given amount of data.

[Here's the key] The overall access pattern is not concentrated. It's actually pretty (pseudo-)random. Even moreso when all data is 'jumbled" into one large volume. To make matters worse, the need to access the OS volume arises, in addition to explicit program invocations, for various implicit and asynchronous reasons (paging, forking, dynamic libraries, etc. [assume for this discussion that tmp & swap are either combined with, or adjacent to, the OS volume]).

Question: Will a colony of bees, collectively, fly further, to gather a given amount of nectar, if the hive is in the middle of the orchard, or at its edge?
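
To put numbers on the bee question, a toy calculation (plain Python; positions are fractions of the full stroke, data accesses are uniformly random, and the model deliberately ignores everything else about real drives):

Code:
# Average head travel between an OS-region access and a random data access.
import random

random.seed(1)

def avg_travel(os_center, trials=100_000):
    return sum(abs(random.random() - os_center) for _ in range(trials)) / trials

print(f"OS region at the outer edge: {avg_travel(0.0):.3f} of full stroke")
print(f"OS region in the middle:     {avg_travel(0.5):.3f} of full stroke")
# -> ~0.50 vs. ~0.25: putting the hive in the middle of the orchard roughly
#    halves the average trip.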

Also in the case of a raid array this is not necessarily feasible because then you have to break up your huge chunk of space into multiple volumes which some people (like me) would not want to do.

TANSTAAFL
If it is your own personal system, the tradeoff is certainly yours to make. I was thinking more of a large organization (your other life:)), where the investment of time and effort (by the system honcho), to analyze the initial and future needs, and organize the data accordingly, would result in a sizable return (ROI) in terms of system efficiency, responsiveness and manageability.

--UhClem
 
You say that I am over-weighting the slightest advantage? I mean you say to use onboard raid and two other disks in raid1 Just so you don't lose that 50-100GB (or whatever it is you use for a OS volume) out of your total raid storage. Correct me if I am wrong but that seems to be pretty much your entire argument to this thing?
Nope; that's not the reasoning I've offered. At this point, it appears that you're so focused on your own position that you're not even making an effort to read posts. Meanwhile, James911 is extremely unlikely to benefit from anything you're offering.
 
Nope; that's not the reasoning I've offered. At this point, it appears that you're so focused on your own position that you're not even make an effort to read posts. Meanwhile, James911 is extremely unlikely to benefit from anything you're offering.

Well that is how I take it with using two onboard ports vs taking a small percentage of the raid array for the OS instead.

I have read your posts; however, you don't really back up anything you say.

James911 is also extremely unlikely to benefit from anything you have said or offered either. It's not like he is the OP, so I wasn't just saying things for his benefit either.
 
I have read your posts; however, you don't really back up anything you say.
It's all there. I'm not sure how I could've made that clearer.

James911 is also extremely unlikely to benefit from anything you have said or offered either. It's not like he is the OP, so I wasn't just saying things for his benefit either.
I mistook his 12 bay case for a 14-bay case. I pointed that out, too -- not sure how you missed it.
 
Hey everyone, I wanted to get some feedback from folks who have more experience than I do on the performance of my first home-built RAID5 system. I want to throw it into production and retire my old single-drive server, but I want to ensure it's up to spec before doing so!

Before I get into it, I want to thank everyone for the wealth of information here!! I was very confident in my choice of raid controller and HD combo when I pulled the trigger on the egg, based on all of your posts.

Here's the hardware:

CPU: E6300
MB: Gigabyte P35-DS3L
Mem: 4 Gig Corsair DDR2
Raid: Areca-1222 (No Battery) w/ 1.49 Firmware
HD's: 4x 3TB Hitachi 5K3000
Raid: 5

Some other notes: the stripe size is 64k and write cache is disabled. The OS is W7 on a separate single drive.

Does the speed look correct before I load this puppy up? My main concern here is that CrystalMark shows a *higher* write than read. I also don't understand the "Benchmark" versus "File Benchmark" (file is much higher) performance difference. Take a look at the screenies and let me know what you guys think:

arcwritecache.jpg


crystalsa.jpg


hdtunet.jpg


File Benchmark - too high??!?!?
hdtune2.jpg


Thanks!

-Nuts
 
Without a BBU, if your system suddenly loses power some of your data will be lost or corrupted anyway (assuming the data was in a cache), so you might as well enjoy the speed and enable the disk write cache.

Write speed is higher because the controller uses its own cache for writes, whereas the cache is of almost no use for reads unless the controller receives multiple requests for the same data blocks.

Before you trust your data to the new storage, test it well: verify that whatever you place there can be read back with no corruption. Hit it hard with a TB worth of data and see if it survives. Then go by your own gut feeling.
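
One simple way to run that kind of burn-in is to write files with known checksums and read them back (a minimal sketch in Python; the mount point and sizes are placeholders to adapt):

Code:
# Minimal burn-in: write random files, record SHA-256, re-read and compare.
import hashlib, os

TARGET = "/mnt/newarray/burnin"    # placeholder: a directory on the new volume
FILE_MB, FILE_COUNT = 256, 40      # ~10 GB total; scale up to taste

os.makedirs(TARGET, exist_ok=True)
expected = {}
for i in range(FILE_COUNT):
    path = os.path.join(TARGET, f"chunk_{i:04d}.bin")
    data = os.urandom(FILE_MB * 1024 * 1024)
    with open(path, "wb") as f:
        f.write(data)
    expected[path] = hashlib.sha256(data).hexdigest()

# Re-read and verify; to check again after a reboot, save the digests to a file.
for path, digest in expected.items():
    with open(path, "rb") as f:
        ok = hashlib.sha256(f.read()).hexdigest() == digest
    print(path, "OK" if ok else "CORRUPTED")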
 
Thanks Jus. I suspected the write speeds were due to caching, which is why I got confused, because I assumed that turning write caching off would eliminate it completely. I did this for the exact reason you assumed: because I don't have a BBU.

I'm still curious though if the read speeds are as they should be, or if the HD Tune Drive vs File benchmarks make sense.

While we're on BBUs, what causes the card to flush its cache? I am assuming a shutdown of the OS (and some other events). The reason I am asking is that if I had to choose between a BBU or a UPS that can send a shutdown command to the OS on power loss, it sounds like the UPS is the better option.

For the record, most of the data is read (media server primarily), and the data that is read/written is backed up to an eSATA drive in a nightly job. I realize this doesn't help for data corruption natively, as the files will just copy over corrupted. I realize I need to check new files after a power loss before they go into the nightly sync.

I may get a BBU and UPS, however this is a home project and the budget is limited for now, so I will have to wait. So I will probably have to do one before the other.

Thanks,

-Nuts
 
Crystal Disk Mark and HDBench are pretty useless for doing disk benchmarks, particularly on server hardware, as they don't issue overlapped I/O requests and don't issue multiple requests at a time. The request queue on your multi-drive system never reaches more than one, so the multiple disks aren't exercised concurrently, and elevator seeking or NCQ are never engaged in a meaningful way.

Your config seems fine, but I'm not sure why you've chosen such a small stripe size.
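
For what it's worth, getting a queue depth above 1 just takes overlapping requests -- e.g., several threads issuing random reads at once. A rough sketch in Python (the file path is a placeholder; use a test file much larger than RAM so the page cache doesn't dominate, and treat the numbers as relative only):

Code:
# Multi-threaded random reads so the request queue actually goes above 1.
import os, random, time
from concurrent.futures import ThreadPoolExecutor

PATH = "/mnt/array/testfile.bin"   # placeholder test file, much larger than RAM
BLOCK, THREADS, OPS = 4096, 8, 2000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)

def worker(ops):
    for _ in range(ops):
        offset = random.randrange(size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)            # positional read, safe across threads

start = time.time()
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    for _ in range(THREADS):
        pool.submit(worker, OPS)
elapsed = time.time() - start
os.close(fd)
print(f"{THREADS * OPS / elapsed:.0f} random {BLOCK}-byte reads/s at ~QD{THREADS}")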
 
mike-

In terms of the stripe size, 64k is the second-largest option, second only to 128k. I actually am considering changing it. For some reason it hit me right after the formatting was complete, and at that point I didn't want to go back through the multi-hour format process.

Thanks for the feedback... I feel a lot more comfortable about the setup.
 
Is there anyone in Melbourne Australia who would be willing to flash an HP Expander?

- whoops, wrong forum....
 
I currently have 12 x 1TB HDDs in RAID6 configuration with an Areca 1231ML card. I am running out of drive space so I bought 12 x 3TB Hitachis.

Is it possible to replace a drive at a time in my existing array and rebuild (12 times)? Will the end result give me 30TB of storage or the original 10TB?

Thanks.
 
@ unix_foo:

Yeah, it's doable; make a backup just in case, if you care about the data. After you replace the 12th drive the space will still be 10TB; then you have to do an OCE (online capacity expansion) and that will expand the array to 30TB.
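
The capacity side is just RAID6 arithmetic, (drives - 2) x drive size (plain Python, with the figures above):

Code:
# Usable RAID6 capacity before the swap and after the OCE completes.
def raid6_usable_tb(drives, drive_tb):
    return (drives - 2) * drive_tb

print(raid6_usable_tb(12, 1))   # 10 -- the original 12 x 1TB array
print(raid6_usable_tb(12, 3))   # 30 -- once every member is 3TB and OCE has run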
 
I currently have 12 x 1TB HDDs in RAID6 configuration with an Areca 1231ML card. I am running out of drive space so I bought 12 x 3TB Hitachis.

Is it possible to replace a drive at a time in my existing array and rebuild (12 times)? Will the end result give me 30TB of storage or the original 10TB?

Thanks.

I would have a complete backup in place before I did 12 rebuilds and a capacity expansion. You already spent $1300+ on new drives. I would pick up a few extra 3TB drives, backup your array to the single drives, take out all 12 1TB drives which becomes a second backup in effect, install the new drives in the array, initialize it once and then restore from the new 3TB single drives. 3 extra 3TB drives should do it unless the 10TB you have is completely full, and an extra $330 will save you a lot of grief in case something goes horribly wrong in the upgrade. It will also give you a few extra drives. You could then resell 2 of them (or, ahem, return them) and keep the third as a cold spare (which is a good thing to have on hand immediately for a rebuild when you need it in the future)

<=- Same answer as the other thread I just posted to lol. Too late.
 
I would have a complete backup in place before I did 12 rebuilds and a capacity expansion. You already spent $1300+ on new drives. I would pick up a few extra 3TB drives, backup your array to the single drives, take out all 12 1TB drives which becomes a second backup in effect, install the new drives in the array, initialize it once and then restore from the new 3TB single drives. 3 extra 3TB drives should do it unless the 10TB you have is completely full, and an extra $330 will save you a lot of grief in case something goes horribly wrong in the upgrade. It will also give you a few extra drives. You could then resell 2 of them (or, ahem, return them) and keep the third as a cold spare (which is a good thing to have on hand immediately for a rebuild when you need it in the future)

<=- Same answer as the other thread I just posted to lol. Too late.

Excellent. That is actually a very good point. I am cash-strapped after this upgrade, so I am afraid I cannot cough up for the extra 3 HDDs. I think I am gonna go the route you suggested, except adding another step of spreading my array data onto individual 1TBs after the initial backup to 3 x 3TB.

Thanks!
 
For those in the market, Areca announced a new revision of the 1880 line a few days ago (1882). Pretty much identical to the 1880 series, except for the fact that it's DDR3 now. I imagine performance is going to be nearly identical...you just get the benefit of less expensive RAM (and active cooling on the 1882ix models).
 