WHS Upgrade Plan - Feedback Needed

pirivan

Hi all,

As it’s been incredibly helpful in the past, I thought I would post here and see if I could get some opinions on an ‘upgrade’ I am planning for my WHS-based storage server. I’ve done a ton of reading on this forum to try to gather information and I’ve developed a number of questions (thanks specifically to Odditory, BENN0, and many others in the SAS Expanders/Areca threads for all the amazing posts). First I will lay out loosely what I have set up now, and then I will explain what I would like to do. Please let me know if I need to include additional information and I would be happy to fill in details as best I can.

So, without further ado, my setup now is more or less:


  1. OS: WHS – Duplication enabled on ALL the shares
  2. Case: Norco 4020
  3. AMD Athlon 64 X2 5050e Brisbane 2.6GHz 2 x 512KB L2 Cache Socket AM2 45W Dual-Core Processor
  4. 4GB of RAM (could be bumped to 8GB)
  5. Hard drives in the server:
    1. 1x Western Digital Caviar Black WD6401AALS 640GB (OS drive)
    2. 6x 1.5TB Seagate drives (ST31500341AS)
      1. I would need to look up the firmware version on each, but I know that as I purchased them I ran the Seagate firmware checker to see if an update was available (none had one at the time, but that's not to say updated firmware might not be available now)
    3. 4x 1TB Western Digital Caviar Green WD10EACS
    4. 2x HITACHI Deskstar 7K2000 HDS722020ALA330 (0F10311) 2TB 7200 RPM
  6. Random assortment of drives in 2 external 5-bay E-SATA enclosures
    1. 1x HITACHI Deskstar HDS722020ALA330 2TB 7200 RPM drive
    2. 1x SAMSUNG HD203WI 2TB 5400 RPM drive
    3. 4x SAMSUNG HD154UI 1.5TB drives
    4. 10TB total space to back up the 7TB of data I currently have
  7. ASUS M3N-HT Deluxe/HDMI AM2+/AM2 NVIDIA nForce 780a SLI HDMI ATX AMD Motherboard
    1. 3 x PCI Express x16 (dual x16 or triple x8) (2 in use by the AOC-SASLP-MV8 cards)
    2. 1 x PCI E x1 (this has something in it currently)
  8. AOC-SASLP-MV8 x2
  9. The Norco backplane is connected via onboard motherboard ports and the AOC-SASLP-MV8’s to support all 20 hot swap drive bays
My upgrade plan is as follows (I will not start executing this plan until Vail is released, though I will probably purchase hardware ahead of time):


  1. Purchase 1x HITACHI Deskstar HD32000 IDK/7K (0S00164) 2TB 7200 RPM 32MB Cache SATA 3.0Gb/s drive for the storage pool, and probably continue to purchase more of these as I expand the pool
  2. Purchase and upgrade the processor (1 of the 2 listed, I am leaning toward the X4 for lower power usage)
    1. AMD Phenom II X4 945 Deneb 3.0GHz 4 x 512KB L2 Cache 6MB L3 Cache Socket AM3 95W Quad-Core Desktop Processor – $139
    2. AMD Phenom II X6 1055T Thuban 2.8GHz 6 x 512KB L2 Cache 6MB L3 Cache Socket AM3 125W Six-Core Desktop Processor - $199
  3. Purchase an HP 24 Port SAS Expander so that I can get rid of the AOC-SASLP-MV8’s and stop using onboard SATA ports connected to the backplane
    1. I have concerns about whether this expander will work with my motherboard
  4. Purchase a hardware RAID controller that works with the HP 24 Port SAS Expander, connect it to the expander internally, and then connect the internal expander ports to the Norco backplane
    1. As I am using the expander I don’t want/need an expensive RAID card with many ports
    2. I would prefer that the RAID card has reasonable rebuild performance (24 hours would be nice instead of 4-5 days)
    3. I don’t want to spend much more than $350 if possible; I am happy to deal hunt on eBay if there are good deals to be had on the right part. I’d like to buy new, but I really don’t want to spend $600 to $1,000 on a RAID card for performance I don’t really need. I am streaming 1080p files (1 at a time generally), not running DB or web servers.
    4. So far I have considered Areca 1680 series cards, HP P212, HP P410/411 and Dell PERC 6/E cards but I can’t decide (more on this below)
  5. Wipe my current WHS install and install Server 2008 R2 with Hyper-V on the 640GB drive
    1. Licensing isn’t a concern here; I have an 08 R2 license
    2. Use the 640GB drive as the OS drive and to hold the Vail VM
  6. Set up multiple RAID5 volumes within 08 R2 and then present these to the Vail VM for the storage pool, NO duplication enabled in Vail
    1. RAID5 #1 – 6x 1.5TB drives
    2. RAID5 #2 – 4x 1TB drives
    3. RAID5 #3 – 3x 2TB drives (this array would get more drives added to it as time went on)
    4. I am not sure if it is worth going for hot spares or RAID6, more on this below (see the quick space math after this list)
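
For anyone skimming, here is the quick space math driving all of this (a rough sketch; decimal TB, drive counts as listed above, RAID5 usable space = (n - 1) x drive size, duplication stores every file twice):

    # Usable space: planned RAID5 arrays vs. WHS duplication on the same disks
    arrays = [(6, 1.5), (4, 1.0), (3, 2.0)]            # (drive count, TB per drive)
    raw = sum(n * size for n, size in arrays)          # 19.0 TB raw
    raid5 = sum((n - 1) * size for n, size in arrays)  # 14.5 TB usable
    duplication = raw / 2                              # 9.5 TB usable
    print(raw, raid5, duplication)

The same 19TB of raw disk yields roughly 14.5TB usable under the planned RAID5 layout versus about 9.5TB with duplication on everything, which is the whole motivation for what follows.
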
Basically, I am trying to create a migration path away from duplication as my current server’s drive-failure prevention method (outside of my backups). I really do like WHS and I LOVE how simple duplication is (frankly I was praying they would make a “simple” version of something like FlexRAID and bake it into Vail, but that’s another topic) and how simple recovery from disk failure is, but I am tired of allocating half my available disk space to duplication AND then buying additional backup disks. Unfortunately it sounds like the additional complication of hardware-based RAID is really the best alternative. Please also don’t suggest FlexRAID; while it SOUNDS like a great solution on paper, as far as I can tell it is literally just 1 guy developing it, and I just can’t put all my storage in the hands of 1 guy (he’s a really nice guy but still, too risky). Please also don’t suggest ZFS. It looks/sounds wonderful but I’m a Windows “admin”. It’s what I know and what I am comfortable troubleshooting; I don’t want to move all of my important data to a platform I have 0 familiarity with.

I have read the threads about the HP 24-Port SAS Expander and the Areca cards and have gone back and forth about RAID card choices a million times. I’m concerned about whether ANY of the RAID cards/the expander will work with my motherboard, whether they will work with each other, or whether they will work with my drives. People vacillate all over the place in those threads about which cards are good and which are not (in terms of both compatibility and performance). So, in all of your expert opinions, given that I don’t currently have a RAID card and am not married to one in particular, what’s my best bet given the requirements I outlined? Feel free to suggest something that I don’t outline below. I feel a lot of people were just trying to see if the RAID card they already HAD worked or not, and as I don’t currently have one I’d like to know what my best bet might be (given some of my parameters).

I was really tempted by the HP P212 or HP P400 series RAID cards, as their prices online via eBay or Amazon resellers are absolutely fantastic for what is a fairly recent, modern card ($270 or so). In fact I was certain I was going to go with one of these until I did some additional thread reading. I was gravely concerned by forum-goers saying that the rebuild performance is atrocious (5-6 days) and that motherboard compatibility can be a nightmare. It also sounds like the HP RAID management utility isn’t very good. One bonus is that I could use an HP card to flash the SAS expander firmware, which I suppose is nice.

I was also tempted by the Areca series, but it’s hard to discern which card actually works properly with the SAS expander; plus, most of the Areca cards that people seemed to like/be interested in are more in the $600-$1,000 range (i.e. recent cards) and include many more ports than I think I need. It also sounds like certain Areca cards (the 1680) might have severe performance issues with multiple RAID5 arrays (which was my plan) and spotty compatibility with the consumer drives we are all using. On the plus side, it sounds like they at least have decent support.

The only other card I have really considered so far is the Dell PERC 6/E, as it sounds like it might fit my price requirements, but again I am slightly concerned about its motherboard/SAS expander/drive compatibility (not sure what the rebuild performance is like either). It sounds like it does have better motherboard compatibility than the HPs, but I have no idea what the performance is like. I do have some experience with Dell RAID cards from work (all we deploy is Dell servers) and I suppose I am reasonably happy with the BIOS RAID configuration/OMSA web management, but I don’t think it’s great. Frankly, I can count on one hand the web-based management GUIs that I am truly happy with, but that’s another story.

Has anyone tried anything similar with a motherboard like mine? Do I need to purchase some other motherboard/processor combination that is more “RAID card”/expander friendly? I would REALLY rather not do this, as that pretty much turns the “upgrade” into a “complete rebuild”, thus sort of nullifying the point. But if that’s really the best/only way to go, I’d rather hear it before I start (even if it’s not what I want to hear!).

Do I need to NOT use some of my drives and purchase more new 2TB drives (Hitachi?)? I would prefer NOT to have to replace all the drives (i.e. the 1.5TB drives), but if that’s what it takes to ensure the entire project isn’t some kind of nightmare, so be it. It sounds like (if I have read correctly) I might have to use a utility to enable TLER on the 1TB WD10EACS drives, and potentially update the firmware on my ST31500341AS Seagate drives to prevent freezing in RAID (which affects certain firmware revisions).

Should I go for RAID6 (I’m worried about performance issues) or just do RAID5 with a hot spare? I’m also of the mindset that even RAID6/RAID5 + hot spare is a bit of overkill given that I do have backup drives; it’s not like I am running a high-uptime environment in my home. The main reason for wanting a hot spare or RAID6 is that it sounds like, depending on controller choice etc., I could be looking at some extremely long rebuild times based on array sizes.

A final question/option that I haven’t REALLY considered is not purchasing a RAID controller card or expander at all (sticking with the MV8’s/motherboard) and simply going for software RAID5 arrays in 08 R2 to present to Vail (I could in theory use the expander with the MV8’s as well and stop using the motherboard ports too). What will the performance be like here, atrocious? Would rebuild times be abysmal? Will a faster/additional-core processor (i.e. a 6-core proc) improve general RAID and rebuild performance significantly? It sounds like this might be the simplest/cheapest solution, and one that would avoid some of the SAS expander/RAID card/motherboard compatibility issues, but performance is a concern based on what I have read about software RAID (I know there have to be reasons why people avoid it). Pretty much no one around here discusses it, and I assume there are multiple good reasons for that.

Thanks to anyone who fully read the above post; I am sure it was a bit of a slog and I know I have a ton of questions in there. I tried to read as many relevant posts as possible (those 45-page threads are a long haul) but I think I only generated MORE questions by reading! It’s a great community around here with a lot of great information for a tech enthusiast on a budget like me (treadstone makes me jealous)! I really appreciate the time, and your feedback would be highly valued! Let me know if I need to include more detail and I will try to do so!
 
Subscribed. Fairly sure I'm gonna learn a lot from this thread.

Good luck man!
 
Dear OP

This is a fine engineering situation. I am not qualified to give any immediate suggestion but will keep tracking this thread.

You have a very complex setup. The following is thinking, not a suggestion, because I do not have experience with more than 4 drives for home use (business, yes).

My friends always say that for a complicated, critical system, the first thing to establish is the funding limit for the "total project", so that we can itemize and prioritize what can and cannot be done easily. I understand you said USD 350 for the RAID card, but you may need to spend on other things as well. Based on that figure, some members can give you very good suggestions specific to your request, or sometimes alternative solutions.

Sorry, my non-suggestion is not adding much for now. Cheers.
 
OP: Great post, well detailed. I'm installing new video cards on my main machine, so I am stuck at 800x600 during the driver download and did not read the whole thing.

First, on the RAID card: either get the Areca 1680i, x, or LP, or wait for the new cards to become widely available and get the 1880 series equivalents.

Second, on the CPU: if you want to stay AMD, I would suggest an Athlon II X2, X3, or X4. The Phenom II X4s use a lot of power and the 6-cores are not better.

Third, if you are going to do an overhaul of this magnitude, I can offer the experience that spending a few dollars more on the front end saves a lot of hassle on the back end.

Fourth, the R2 + Hyper-V thing works great. I actually just flashed (a destructive flash) the firmware on some SSDs and had another restore from backup go fine from my Hyper-V WHS.
 
First off, 2008 R2 with Hyper-V is amazing. That's how I have my environment at home and I wouldn't have it any different.

Stay away from the HP RAID cards; they will take days to rebuild.
I would get a 1680i, x, or LP. NOT the "ix": the ix cards have a built-in expander that causes stability issues with the HP SAS expander.

The only problem that I foresee is that you have 4 different sets of drives (OS, WD 1TBs, Seagate 1.5s, Hitachi 2s). So if you connect the OS drive to the Areca/HP then you are starting off with 4 RAID sets. This will cause performance issues on the Arecas.
Also, the WDs may not work with the Areca.

Which revision of the Norco do you have? (i.e. how many non-hot-swap HDD mounts does it have?)
Are you good with a Dremel?
 
@Danny Bui

Thanks for the words of encouragement; we'll see how things go! Hopefully some people with experience with the issues/questions I brought up will be able to chime in and enlighten us all. I am interested to see where I end up as well. I recall that when I built the server initially, I started out going one direction (Server 2008/FreeNAS on an Intel Core i7 platform) and ended up somewhere else (WHS and a low-powered AMD CPU) based on user feedback here!

@lightp2

Exactly. My experience with multiple drive/RAID systems is limited to business usage as well. In that scenario you pay A LOT more for hardware that is all built and tested together. You can be pretty darn sure that the Dell server, with the Dell RAID card, with the Dell tested/approved SAS drives, connecting to an MD1000 is all going to play nice together. Unfortunately as a regular person it's not really practical to throw that kind of money at my storage needs just so that I KNOW it will all work together (unfortunate trade off you have to make).

I see what you mean about the funding limit for the "overall" project. I guess I haven't exactly gotten there yet. My initial thinking was an investment of $250 on the SAS expander, $150 on the processor, $350 or so for the RAID controller card and $120 or so for another 2TB drive (to make a 2TB-drive RAID5 array). So, about $900 initially. Realistically, I'm not too concerned about drive cost; I'd be buying more of those at some point anyhow! However, even given that, for the right item/peace of mind I probably would be willing to spend more on the RAID controller card if people were really convinced that it was truly a good option. The only issue is that there isn't much of a consensus that "THIS" is the RAID card to get, or heck, even "THIS" is the brand to get. It's all over the place as to what works/what doesn't/what's crappy management/what's poor performance with which drives. It seems silly to spend a lot on a RAID controller that you then can't really even count on to work/be a good experience. Also, as I mentioned above, it sounds like the more you pay for RAID controllers, most of what you are getting is additional ports, which are unnecessary with a SAS expander.

The only reason I started to get a little price sensitive was reading the threads and potentially thinking that I would need to replace ALL the drives as well as get a new motherboard/CPU combo just to make the SAS expander/RAID card work. That's why I sort of suggested the possibility of going with software RAID at the end of my post; in case what I am trying to do is too much of a mess. Anyhow, thanks for the feedback, I will be interested to see what others have to say about my initial post!

@pjkenned

Thanks for the reply! I am going to keep taking a hard look at the Areca 1680 series, but they seem to be expensive due to their high port count for what I want to do with the SAS expander. Also it sounds like (based on the Areca thread) they might be a bit picky about drive type. At least their support/management utility sounds good.

I will take your CPU recommendation under advisement. Though, based on some Newegg filtering, the lowest wattage I can get is 95W if I want a quad core (which I do, as this will be a virtual host box now). So the AMD Phenom II X4 945 Deneb 3.0GHz AM3 95W Quad-Core is still looking like my best option.

I totally agree (unfortunately) with your assessment about spending more dollars on the front end. My initial WHS build turned out to be somewhat of a disaster: the "old" motherboard I was going to use didn't work right and I ended up buying an additional motherboard and CPU (the ones I have now) anyhow! That's sort of why I wanted to put out feelers on this forum and see if people basically say "Yeah, you could have a ton of trouble with that; you need to purchase X motherboard, X RAID card and X drive types to feel confident that it will work, that's the reality". I'd rather not have the file server with all my important data be a poorly working mess if possible (and that may mean spending more money up front).

Good to hear that Hyper-V + WHS works so well! I didn't go with that initially because I wasn't familiar enough with Hyper-V or virtualization. But we've deployed it quite a bit more at work recently and I feel a bit more at ease (not perfectly so, but quite a bit more).

@nitrobass24

It sounds like, if possible, 08 R2 with Hyper-V + WHS all the way for me! Good to know about the HP cards. It's extremely unfortunate given their price/relatively decent specs, but I guess like everything, you get what you pay for. Of the Areca cards, what is the "cheapest" one: i, x or LP? As for having 4 RAID sets, that's actually not my plan; the plan would be to have only 3. My thought was to leave the OS drive connected to the motherboard, unless there is some reason it really needs to be connected to the RAID card that I am not aware of.

As for the performance issues on the Arecas with multiple RAID sets, that's what I was afraid of. Are there other RAID cards that handle this better? Or is the reality that I just need to get a few more Hitachi 2TB drives and keep all the 1TB and 1.5TB drives out of there? It does sound like, as I have older WD 1TB drives, I can update their firmware with the TLER fix (which you can't do on the newer 1.5/2TB drives). Even if I did that, there is a part of me that would like to have multiple 2TB-drive RAID sets. The idea of (at some point) having a large 15-disk RAID5 set is a bit unsettling in terms of rebuild time. I'd rather have two smaller 2TB-drive RAID arrays or something, but it sounds like I would run into performance issues then.

As for the Norco revision, it's a 4020, but I don't know if there were slightly different revisions of it. I can say that I purchased it in March 2009. It only has one non-hot-swap HDD mount in it (that is where my 640GB WD Black OS hard drive is). As for being good with a Dremel (I don't even have one at home), I'd like to lie and say that I am, but sadly, no, I am not very handy with tools. Whenever a solution might involve significant case modding, my reaction is generally "throw some money at it instead of me significantly ruining something I already have". However, I am curious as to why you ask; I am not sure what solution you might be thinking of. I can always convince someone else I know who is a bit more handy to assist me if it's an appealing solution!

@all

Thanks for the replies so far, everyone. Anyone who pops onto this thread, please do read my initial post first; that's where all my real meaty "questions" and concerns are. I look forward to hearing more responses. Again, as a side note, I should mention what a great community this is. My girlfriend was reading over my shoulder and she remarked, "It's amazing how polite and coherent all the responses are; just real, intelligent people".
 
Hey have you heard of ZFS? It's a great way of storing your files, and... just kidding mate. :D

Let me add this though:
Please also don’t suggest ZFS. It looks/sounds wonderful but I’m a Windows “admin”. It’s what I know and what I am comfortable troubleshooting; I don’t want to move all of my important data to a platform I have 0 familiarity with.
Sounds like a reasonable argument to me, especially since you're already quite 'deep' into the Windows option.

But let me ask: how important is your data? Traditional hardware RAID5 is limited in the protection it offers your data. You can't rule out corruption over time, introduced by your RAM, your HDDs, your RAID, whatever.

As I understand it, you currently use WHS with duplication on all your files. Now you're moving to a single hardware RAID5 volume, or two as you discussed. That still leaves your data with a single point of failure; both the filesystem and the RAID engine/controller are single points of failure.

Now, first I was thinking "keep at least your most precious files backed up, aside from any redundancy, which is not the same as backup". But as you use duplication on all your current files, that leads me to believe that you deem all data to be equally precious.

Then I thought about your idea of two arrays; you could back up data by having it stored on both arrays. But if you value all data equally, that would mean you essentially have a cloned array of the main RAID5 array, whereas you would have liked to use the second array for storage.

Now I had a wilder idea:
  • use all your TLER/CCTL-capable disks in your new hardware RAID5 configuration; this makes the most sense and allows you to use a Windows solution
  • use a simple but dedicated PC that acts as backup, runs a non-Windows OS and uses all your non-TLER-capable disks (i.e. new 2TB disks you buy which do not support TLER/CCTL) in a software RAID configuration; i.e. no need for a second hardware card.

This may combine the best of both worlds:
  • allows you to use a regular Windows setup, just like you planned.
  • additionally, you have a full backup of your data on a second system you could access if your primary WHS/Vail system goes down for whatever reason, or when you want to restore corrupt/missing/accidentally deleted files.
  • allows you to use only TLER-capable disks on your Windows hardware RAID, while not running the risk of non-TLER-capable disks leading to headaches, split arrays and possibly even loss of (a lot of) data.
  • allows scheduled backups of the data, for example at night. It may also allow the system to stay off except at night, when it powers on automatically and shuts down after the backup is complete.
  • if you opt for ZFS on this second system, you would also have the benefit of checksums confirming the stored consistency of your files, and with the use of snapshots you can go back in time to restore a corrupt file. Imagine you came across a file on your Windows RAID5 that looked corrupted: you could then log in to your second system and manually copy back a historically older version of that file.
  • you wouldn't have to do any maintenance on this second system; just check from time to time that it's still working. So after the initial setup you would not have to spend time on it; it should do its work by itself, and the power-down can make power consumption/noise a non-issue.

The drawbacks of this option are that you have less storage space, or need more HDDs, as you would be keeping a backup, which essentially means you need double the raw storage space. It also requires you to spend time setting up the second system the way you want it, like scheduled incremental backups. It doesn't have to be FreeNAS or ZFS though; you could just make it a Windows PC. In that case an existing Windows PC in your home might suffice. The advantage of a backup beyond any RAID is huge! Backups are what keep your data alive; RAID alone may fail at that job. That's my biggest concern.

Please accept my apology if this advice is in any way inappropriate. But since you do run into the issue of what to do with the non-TLER-capable disks you buy now or already have, this sounded like a great way out: transforming your upgrade plan into a backed-up setup with pretty much everything unaltered except adding a second system into the equation.

If you like this idea at all, please shout. Else I'll be quiet; I don't want to spoil your thread. :D
 
@sub.mesa

As the resident "ZFS evangelist", that's an interesting post; you make a series of interesting points. But yes, it's not exactly the direction I want to go in. I find the platform fascinating, but it boils down to raw time: I don't have time/want to make time to learn a completely different platform well enough to feel comfortable deploying it for my important data. Now, if I worked with ZFS/Linux/Unix/Solaris 8 hours a day like I do with Windows, I'd be doing the opposite and not going with a Windows solution, for sure!

That being said, I think I should have made my initial post a bit clearer. While I have duplication enabled on ALL my shares, I am NOT using that as a backup:

Random assortment of drives in 2 external 5-bay E-SATA enclosures
  1. 1x HITACHI Deskstar HDS722020ALA330 2TB 7200 RPM drive
  2. 1x SAMSUNG HD203WI 2TB 5400 RPM drive
  3. 4x SAMSUNG HD154UI 1.5TB drives
  4. 10TB total space to back up the 7TB of data I currently have
I have 2 external E-SATA enclosures that back up all the data I currently have in the server. I store them off-site and then periodically bring them back to update the backup.

So, currently, I basically have WHS duplication for redundancy against drive failures, and my backup drives in case of catastrophe (fire, robbery, controller failure, etc.) or multiple drive failures. With my "new" plan I intend to continue doing this, except I will be using RAID as my on-site drive-failure resilience mechanism, as it is more space efficient than duplication (though it does involve much longer rebuild times).

Now, what you suggest is actually quite a good idea: a secondary server as a "hot" on-site backup using ZFS or whatever. Unfortunately, while this is a cool idea that I would like to implement someday, it's just not in the cards for me financially at this point. However, I may go with a similar idea to what you suggest and simply move all the non-TLER/CCTL hard drives to the external E-SATA enclosures as backup drives, to avoid any issues between them and the RAID card I purchase. I'm not happy to lose the 1.5TB and 1TB drives' storage space, but it may save me huge headaches in the long run if they have issues with RAID controller cards. At some point I could get really crazy and RAID5 the external enclosures to make my backup redundant, but I haven't gotten there.

I think at this point, if I were to have a local "hot" backup server, I would follow the thread here that discusses using the SAS expander to go to another Norco case + motherboard (to power the SAS expander) + drives rather than a full-on "backup" server. It would just be a dumb, local, external "array" for my current server that would serve as a hot backup.
 
Well, if you are worried about a 16-drive RAID6 set, then I would do RAID60 before two RAID6 sets, because you will get better performance and the same amount of redundancy.
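
(For what it's worth, the capacity and fault-tolerance arithmetic behind that claim, a rough sketch assuming 16x 2TB drives:)

    # Two separate 8-drive RAID6 sets vs. one RAID60 of two 8-drive spans
    drives, size_tb, spans = 16, 2.0, 2
    per_span = drives // spans                 # 8 drives per RAID6 span
    usable = spans * (per_span - 2) * size_tb  # 24.0 TB usable either way
    # Fault tolerance is also identical: up to 2 failed drives per span.
    # The difference is that RAID60 stripes I/O across both spans and
    # presents a single volume, hence the better performance.
    print(usable)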

The 1680x is usually the cheapest option, unless you can find a good deal.
 
@nitrobass24

Not a bad point, actually. With an Areca card, could I start initially with a RAID6 volume of 4 drives and then 'upgrade' it to RAID60 when I hit 8 drives? Any idea what the rebuild time would be on something like that (8x 2TB disks in RAID60 with an Areca 1680 series)? Were you me, would you go with RAID5 with the idea of moving to RAID50, or RAID6 with the idea of moving to RAID60?
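
For my own sanity I also did some back-of-envelope rebuild math (a rough sketch only; the sustained rebuild rates below are my guesses, not Areca figures, and real times depend on controller load and whether the array is in use during the rebuild):

    # Rebuilding one failed drive means re-writing its full capacity,
    # so time is roughly capacity / sustained rebuild rate
    capacity_tb = 2.0
    for rate_mb_s in (100, 50, 20, 5):  # optimistic to pessimistic guesses
        hours = capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600
        print(f"{rate_mb_s} MB/s -> {hours:.1f} hours")
    # 100 -> 5.6h, 50 -> 11.1h, 20 -> 27.8h, 5 -> 111.1h (~4.6 days)

If those numbers are in the right ballpark, anything that sustains even 50MB/s keeps a 2TB-drive rebuild well under my 24-hour target, and the 4-5 day horror stories imply sustained rates down around 5MB/s.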

It sounds like I should keep an eye open for Areca 1680x cards (open box, eBay, etc.). It's funny, there is an open-box 1680ix right now on the 'egg for cheaper than a 1680x; open box can be a sweet deal. Any idea if I should expect compatibility issues between my motherboard and the Areca 1680x/HP SAS expander?

In terms of the 1880 series, do we have any idea when exactly that is coming out, or pricing? I can't seem to find any exact information on either from vendors, forums, etc. I assume that means no one knows, but I thought I might ask. It sounds like they are pretty late getting those out the door. I also couldn't find any information on whether they handle multiple RAID arrays better or not. It looks like only a couple of people have them, for testing purposes only, so there are only pretty basic benchmarks out. My concern is that if I went with 2TB drives (Hitachis most likely) as my storage target, what happens when a 2.5 or 3.0TB drive becomes available/affordable? Will I need another RAID controller card because the performance of adding another array is so poor?

@anyone

I find it interesting that no one yet has mentioned the Dell PERC 6/E or the 08 R2 software RAID option (with a decent multi-core CPU), or thought I would need to change the motherboard. Are not many people using either the Dell PERC 6/E or software RAID? It does sound like at least with software RAID you don't have to worry about controller failure, though it sounds like the performance is horrible. Anyhow, anyone is free to chime in on what I have in the first post with different solutions. Thanks so far for the feedback! It's interesting how I'm already considering changes to the plan that I hadn't exactly considered (RAID60, Hitachi 2TB drives only, waiting for Areca 1880 cards, etc.). Even I am curious as to where I will end up.
 
You could try a different RAID card such as the LSI 9260 or something similar, but for me the out-of-band management is so clutch.

Bluefox brought a 9280, I think it was, over to my house and we messed with it for a while, but the management software was just a beating to use.

No one really knows for certain how the 1880s will pan out, since they are not available yet, but one can hope.

I plan on buying one when available. My 1680x will be for sale.
 
Yeah, it was a 9280. The LSI management software was a PITA, to put it lightly. I can't really recommend them after my experiences with it.
 
@nitrobass24

I think I am going to stay away from the LSI cards. Based on numerous threads here they really sound like they are just not worth the hassle for the value. Plus, I already tend to hate most RAID card management software period (it all looks like it was designed 10 years ago by an engineer who didn't care about design) so I can't really imagine what really BAD management software must be like.

I am leaning toward going with an Areca 1680x; I am sure the 1880 series will be pretty darn pricey when it launches anyhow. I am putting together a bit of a "revised" plan based on some of the opinions so far in the thread, but I'll wait to see if more people post responses to my initial queries, as I may have to adjust the plan further!
 
Okay, so after reading back over a few threads, I have decided that for now my plan is to go with an Areca 1680x or 1680lp, as they should have decent performance (except with multiple RAID arrays), have good management utilities, and are confirmed to work with the HP SAS Expander. I COULD wait for the 1880 series, but I have a feeling a new card like that will command more of a price premium than I want, and I am sure I won't need most of the performance benefits they will offer (unless they have better performance with multiple RAID6 arrays; that would be nice).

My only question is that the 1680lp seems like it would be preferable to the 1680x, as it has both internal AND external connectors. That way, to connect to the SAS expander inside my current Norco, I wouldn't need a cable running OUTSIDE the case, but I would still be able to connect the 1680lp to another RAID card in an external, secondary "array" someday. Am I correct in my thinking here, or is there another reason why the 1680x is actually the preferable card? Either way it seems like a fairly minor difference and I should probably go for whichever I can get the best open-box price on.

I think I am also going to standardize on the Hitachi 2TB drives and essentially plan NOT to include anything else in the array/storage server, to avoid any RAID/drive-related complications. I will most likely try to sell all/most of the 1TB WD Green and 1.5TB Seagate drives I have to recoup some of that unplanned cost. I can only fit 5 drives into each of my 2 external eSATA backup enclosures anyhow.

This isn't my whole revised plan (revised as of now) as I want to read through the SAS expander thread a second time and see if there are additional details I missed. I want to make sure I get all the APM/spindown time details down so that I don't run into any issues with the RAID card/drives. I also need to investigate firmware versions on my current Hitachi 2TB drives and make sure there isn't any reason I need to flash them. I will update the thread when I flesh out some of these details based on all the other forum posts!

Hopefully my motherboard doesn't turn out to be an issue, but it sounds like the only way to find out may just be to test it when I get all the equipment and start to put it together after the Vail release!
 
The reason most people end up with the 1680x is $$.

I found mine for $350; the next best thing was almost twice as much.
 
Alright, after some re-reading of important forum threads I more or less have my "plan" set up with what I believe SHOULD work without any major issues.

Purchases

  1. Purchase and upgrade the processor
    1. AMD Phenom II X4 945 Deneb 3.0GHz 4 x 512KB L2 Cache 6MB L3 Cache Socket AM3 95W Quad-Core Desktop Processor – $139
  2. HP 24 Port SAS Expander - $225
  3. Areca 1680x/lp/i, $350 or so used (assuming I can find one; hopefully one will pop up before the Vail release)
  4. Purchase appropriate cables to attach the SAS expander to the RAID card depending on the card purchased (8087 or 8088 cables)
  5. Purchase fan to attach to SAS expander heatsink (or RAID card if it has no fan)
    1. http://www.newegg.com/Product/Product.aspx?Item=N82E16835119049
  6. Purchase 4x 2TB hard drives for $129 each or so
    1. The HD32000 and the 7K2000 are the same drive (retail boxed vs. bare)
    2. These are reported to work fine in RAID5/6
  7. Sell 2x AOC-SASLP-MV8 cards, 4x 1TB WD Green WD10EACS, and 4x 1.5TB Samsung drives after the project is over to recoup costs
  8. Total Cost: ~$1,300 (may be able to take this down to $900 after selling hardware)

Upgrade Plan

  1. Put the new Hitachi drives into the server as soon as I receive them and test some transfers for a day or so to make sure none are DOA/bad
  2. Migrate all data to the external enclosure drives (including PC backups)
  3. Make sure backups to the external enclosures are current
  4. Disable duplication on all shares and then remove drives from the pool
    1. Remove all the 1.5TB drives from the pool
    2. I have 7TB or so of data; I should be able to copy it all to the 1.5TB Seagates
    3. Once all data is copied off the shares, delete all the shares and remove the 1TB and 2TB drives from the pool
    4. Take 2 1TB drives, put them in the external enclosure and move all the data off the 2TB Hitachi to them
  5. Update the motherboard BIOS to the latest version to make sure it supports the new Phenom II X4 proc
  6. Install the hardware RAID card, SAS expander, processor and remove the SM cards
  7. Wipe the 640GB WHS OS drive and install Server 2008 R2 with Hyper-V
    1. Make sure all drivers are installed and the external enclosures are accessible
  8. Update the Areca 1680x/lp/i RAID controller card to 1.48 firmware version if necessary
    1. It sounds like you have to install 4 files to update the firmware
  9. Update to latest hard drive firmware for 2TB Hitachi drives
    1. Latest as of April 2010 is 3EA (sounds like I may have to talk Hitachi into sending this to me if I decide to do this)
  10. Put 4x new 2TB Hitachi drives and 3x current Hitachi drives into the server
  11. Set up the RAID6 volume on the RAID controller with the 2TB Hitachis
    1. Select 64-bit LBA when setting up the card
    2. With the Areca controller, for proper array spindown on RAID arrays with Hitachi 2TB (7K2000) drives, the "Stagger Power On Control" value needs to be increased to 1.0, 1.5 or 2.0 seconds. You'll need to experiment to find the lowest setting: for testing, set "Time To Spin Down Idle HDD" to 1, wait a minute for the array to spin down, then access the array data and wait for it to spin back up. Watch the event log for any drive timeouts; if you see any, you'll need to hard-reboot the computer and try a higher stagger power on value (i.e. try 1.0, then 1.5, then 2.0 seconds).
    3. It sounds like the lowest Stagger Power On Control setting I could use for RAID6 with the Hitachis would be 2.0
    4. HDD Power Mgmt settings: 6.0, 2, 10, disabled. (nitrobass24, working settings)
    5. It sounds like it might be good to disable ALL the APM and power management functions so that these don’t cause issues (and I won’t be running the array all the time anyhow)
    6. RAID6 – 7x 2TB drives (~9TB total usable space; see the capacity math after this list)
  12. Install WHS 2010 as a VM, allocate it 2GB of RAM and just store the VM on the 640GB drive that hosts the 2008 R2 OS
  13. Present the RAID6 9TB volume to the WHS 2010 VM
  14. Put drives into external enclosures:
    1. 6x 1.5TB Seagate ST31500341AS
    2. 1x SAMSUNG HD203WI 2TB 5400 RPM drive
    3. 9.98TB total space to back up the 9TB RAID6 array
  15. Re-setup all the WHS shares with the same names and copy data from the external enclosures to the new 9TB volume
  16. Set up backup jobs to the external enclosures using SyncBack
    1. Present all external enclosure disks to the VM
  17. Back up the 08 R2 Hyper-V host to the external enclosures somehow (not sure how yet)
    1. Also back up the WHS 2010 VM; not sure yet exactly what I will use to do this
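
As a sanity check on the sizes in steps 11 and 14 (a rough sketch; drive sizes are nominal decimal TB, and Windows reports capacities in binary units):

    # Step 11: RAID6 keeps (n - 2) drives' worth of usable space
    n, size_tb = 7, 2.0
    usable_tb = (n - 2) * size_tb            # 10.0 decimal TB
    usable_tib = usable_tb * 1e12 / 2**40    # ~9.09 "TB" as Windows reports it
    # Step 14: backup side is 6x 1.5TB + 1x 2TB = 11.0 decimal TB,
    # or ~10.0 in binary units, roughly the quoted 9.98TB figure
    backup_tib = (6 * 1.5 + 2.0) * 1e12 / 2**40
    print(f"{usable_tib:.2f} / {backup_tib:.2f}")

So the ~9TB usable figure for the array and the 9.98TB figure for the enclosures are both consistent once parity overhead and binary reporting are accounted for.
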
At this point, realistically, all I am not positive about is whether the RAID card/SAS expander will play nice with my motherboard, whether I really need to bother updating the Hitachi drive firmware (doubtful, but not sure), and what precisely I should use for the APM/power management settings for the hard drives (set on the controller, NOT on the drives themselves). As I mentioned, I am considering disabling all APM and power management functions anyhow, as the server is only on for about 5-6 hours per day.

Hopefully I can start getting this all together when I find a 1680x/i/lp for a decent price (I decided not to wait for the 1880 series; I probably won't want to shell out for what they will retail for initially) AND when we get an actual release date for Vail. My guess is Vail will be around sometime fall/winter, so I have a little while to wait for a RAID card and decent prices on the 2TB Hitachi drives.

My only "future" concern for the build is that the 1680 cards seem to have poor performance with multiple RAID arrays so at some point I will either have to "risk" having a 20 drive RAID6 array or start new arrays (which would cause the performance issues). If 3TB (or whatever) drives come out at some point I would then also certainly have to setup additional arrays, causing the performance issue with multiple arrays. As a side note from a 'recovery' standpoint, will the configs for the RAID array get stored on my drives as well if I am using one of the 1680 series and the Hitachi 2TB's, in case the controller fails?
 
Unable to resist a good deal, I bought 4x 2TB Hitachi hard drives on Newegg, plus the Phenom II X4 processor and a fan to attach to the SAS expander, all of which are on the way. My plan is to install the upgraded processor in my current WHS install (to make sure it works with my motherboard) along with the drives and SAS expander. I am just going to do some write tests to the 2TB drives to make sure that none are DOA, and potentially get updated firmware for the drives. As for the SAS expander, I will just confirm the firmware version and that it is getting power from the PCI-E slot.

Then it's simply time to wait for a good deal on an Areca 1680x/lp/i, order appropriate 8087/8088 cables based on the card, and then wait for the Vail release! Hopefully all of this happens sooner rather than later; I'm already excited to start the project! I will try to update the thread as I roll through my plan in case anyone is curious, or reads it in the future and wonders how it turned out!
 
Small update/question. In an effort to prepare for the CPU upgrade I thought I would crack the case open. Much to my displeasure, I find that my Norco 4020 case is seemingly impossible to open now. I have removed the 4 screws that hold the top on, and no matter how much pressure I apply to the so-called "buttons/tabs" and try to slide it backward, it won't move. Any tips? I've removed it a number of times in building the system originally and didn't have nearly so many problems with it!

If anyone has a picture of the bottom of the case lid that would be handy. I can't tell how the lock mechanism is even supposed to work and I'm a bit afraid of forcing it TOO much with screwdrivers.

*EDIT*

In case anyone reads this post later and is having trouble opening their Norco 4020 case, this is more or less what I did:

"I was able to force it up with screw-drivers on the left side and then force up the little top "lip" in the front (until it popped up over the black 'top' portion) and then pop the right side up with more flat-head screw drivers. All-in-all I only scratched a bit of the black paint off in the front so it's not very noticeable. That really was a huge pain in the ass. What an atrocious little locking mechanism! I think I am just going to completely flatten the 'security' little metal tabs/flaps so that it can't "lock" anymore but so that I can still use the top."
 
So, in case anyone is interested, I have proceeded slightly with my plan. The motherboard BIOS is updated, and the new Phenom II X4 processor is installed and working nicely. The 4x 2TB Hitachi drives are in the current WHS server doing some write tests to make sure that none are DOA or have initial problems. I haven't updated the firmware on any of these drives, and while I haven't completely decided, I don't think I will unless I have issues when I actually get the RAID6 array set up.

I also have received my HP SAS expander though I have not yet installed it. I have a fan for it (http://www.newegg.com/Product/Product.aspx?Item=N82E16835119049) but I haven't decided the best way to attach the fan to the HP SAS Expander. Any ideas? I'm not TOO worried about it overheating but I thought it might not be a bad idea. Once I get the fan on I think I will throw it into the server to make sure it at least is accessible via device manager. I COULD hook it up to one of my AOC-SASLP-MV8 cards but I think that adds unnecessary complication when I want to migrate away from them anyhow. Mostly I just want to make sure my motherboard can get the necessary juice to the SAS expander.

I am tempted to buy an additional 4GB of RAM for the server, but DDR2 800MHz 2GB sticks seem to be holding their value pretty well at $100. 2GB will probably be enough for the WHS 2010 VM anyhow.

I still haven't found a good deal on a used 1680x/lp/i, but I am keeping my eyes peeled on eBay etc. for one. It looks like they are holding their value pretty well with the 1880 series not out yet! At least, with no word yet even of a WHS 2010 release date, I can continue to wait for a good deal on a 1680!
 
Maybe zip ties to attach the fan to the expander?

Glad to see that your issue with the case was sorted out.

Good luck man! Still interested dude.
 
Yeah, I had thought about zip-tying it, but I felt like there must be a more elegant solution. Someone mentioned that they were glad the fan came with screws, which I took to mean that they screwed it onto the card's heatsink somehow. But there don't appear to be any screw holes to use on the HP SAS Expander itself. However, I may just zip-tie it anyhow if there isn't really any other good way.

That case was a HUGE pain. That locking mechanism is so poorly designed. I'm glad I have disabled it so that it can never be in the way again! Norco support did let me know that they might have a different case top/locking mechanism on the new 4224 case but we'll see.
 
Sub-d.

I've learned a lot already :D

Well good, I'm glad someone else is finding it interesting! :)

A tiny update. I have put the HP SAS Expander in the server but it's hard to tell if it really is working or not as I don't have it hooked up to anything! The AOC-SASLP-MV8 cards are working great and I would really rather not mess with hooking it up to those.

I'm still trawling the eBay/used market for an Areca 1680x/lp/i, but still no luck. Either way, with the recent WHS 2010 beta release it's clear that it still has a ways to go. I am really hoping they completely fix some of the potential "data loss" bugs before it launches.

The only unfortunate part about waiting this long is that my storage has continued to expand, which means I need to purchase more hard drives than I intended. I need to increase my current storage pool, so I need 2 hard drives for that (1 for storage, 1 for duplication), and then another one for the RAID array I will move to when WHS 2010 releases. The good news is that I will end up with a fairly large RAID6 array thanks to the space savings from moving away from duplication as redundancy, and hopefully won't need to expand it for quite a while. I am keeping my eyes open for good deals on the Hitachi 2TB drives that are known to work well in RAID with the Areca cards.
 
I am learning new things as well and I am very interested in your build - subscribed! :)
 
Pirivan, any reason why you didn't mention Adaptec RAID controllers, especially since you have concerns about compatibility? Is it because of their prices or RAID management software? Just curious.
I have one running 8 drives in RAID6 at the moment, and was thinking of expanding my own storage to RAID60. I don't use WHS duplication either, and would rather not, as it would require a separate backup machine anyway, given the size of the storage and the purpose of backups.

I won't do anything before next year anyway, as I'm moving house, and would just as well wait for WHS 2.0 and the 3TB drives to be released, even though my storage is saturating. And maybe Adaptec will have SATA 3.0 RAID controllers too by that time, although that's not as important as a good working rig.

I'd rather avoid the Norco case: it's ugly, fits only 20-24 drives, is not very well built, and is made for racks, not homes. But I don't really see any alternative, so I might have to get a couple and cover them up, as building my own full-tower storage chassis seems out of the question.

In the meantime, I'm subbing too and hope you'll have no horror stories with your upgrade.
 
@pirivan There is no harm in hooking up the SASLP to the expander, as it will have no effect on how the OS sees the drives. Think of the SAS expander like a USB hub, but without drivers. Also, do not use the WHS beta; it will corrupt your data. It is in the middle of a complete Drive Extender and file system makeover, and until both are complete you simply cannot rely on its data integrity.

@Chimel The Adaptec RAID controllers don't work well with the SAS expanders, and IMO the Areca is a much better card.
 
@Xfinity Thanks! I will try to keep it updated as the project rolls along; it looks to be a long haul but I will get there eventually (Christmas? Who knows when WHS 2010 will be RTM!)

@Chimel As far as the Adaptec RAID controllers go, Nitrobass24 is exactly correct. I was looking for controllers based on pretty much 2 attributes. First, I wanted them to be compatible for SURE with the HP SAS Expander. Second, I wanted them to have reasonable rebuild performance. To begin with I was pretty much exclusively looking at the Areca and HP RAID cards, as I knew they worked with the SAS Expander (and you can use the HP RAID cards to flash the firmware on the SAS Expander if need be). However, the HP RAID cards have reportedly HORRIBLE rebuild times, whereas the Areca has great rebuild times (comparatively). Also, I understand that as far as RAID cards go, the Areca has pretty decent management utilities. Yeah, it's still pretty crummy and looks bad, but in the world of RAID cards it appears to be a bit ahead of the game.

The Areca cards really only have 2 major downsides. The first is price. They are expensive compared to most of the other options I was looking at. So, I am trying to pick up one used, hopefully when the 1880 series releases. The second major downside is performance with multiple RAID6 arrays which I have heard is not so good AT ALL. I was planning on multiple arrays initially, but since I have decided to go with an Areca card (a 1680x, I think, ideally) I am just going to have one RAID6 array of 2TB drives, with a maximum size of 20 drives. 40TB should be plenty for me to grow into for a couple of years at my current rate.

If I decide to go with 3TB drives at some point, I will just have to look for another RAID card, I suppose. Honestly, though, I think by the time 3TB drives are at a price point where I actually want to buy them, I will be willing to consider a wholesale replacement of the entire machine anyhow and move to a Norco 4224, a new motherboard with SATA 6Gb/s, (an 8-core proc?) etc.

As far as the Norco case goes, Chimel, I have a tendency to agree with you, though saying it ONLY fits 20-24 drives sounds a bit funny. For that price, there is literally nothing else that even comes close to fitting that many. Yeah, you can get converters to stuff that many drives into a large ATX case, but usually you will have to have some internal, non-hot-swappable drives, AND you will spend a lot on the desktop case plus a lot on the converters, which adds up to far more than the Norco case. However, I do concur that it is not very well built. I have heard that the backplane connectors are flimsy, I can relate to how cheap/lame the case top is, the hot-swap drive bay hinges are made of cheap plastic, and the power/reset buttons are flimsy plastic flaps that you press instead of actual hard buttons. All that being said, it's really still a great bargain for the amount of drive capacity you get, so I can forgive the 'cheapness' of it. Hopefully the 4224 will fix some of the 'flimsiness' concerns with some of the components.

Don't worry too much about needing a rack; I got a 12U or so decent-looking one that is mobile and actually looks alright. It has a glass front door and metal sides, etc. It sits in the office and doesn't look completely out of place. Just put it in the garage and run Ethernet out to it if you want it to be less of an eyesore (assuming you have a garage, which I do not).

@Nitrobass24

Thanks for the tip; very helpful as per usual. Maybe I will try hooking up the SASLP to the expander just to see how it goes. That way I can confirm that the SAS Expander works, retire/sell one SASLP, and connect everything on the backplane to the SAS expander (right now 4 ports on the backplane are connected to the motherboard, which I don't like). I just need to order an 8087-to-8087 cable and I should be ready to go!

As far as WHS 2010 goes, yeah I have no plans on using/deploying the beta and I won't touch the RTM (whenever that is) until it has been reported that the data corruption bug is squashed.

As just a side note/addition to the project, I decided to purchase some additional 2TB Hitachi drives: 2 to add to my current WHS, as it is running low on space, and another 2 for use either in the backup enclosures or in the RAID6 array. Normally I would never do this, but they were on sale for $89 apiece on Newegg as open-box hard drives, so I took the plunge. Basically my thinking was this: I already have 4 2TB drives sitting around, and I can add 2 of those to my current WHS to expand the space. Then, when I receive the open-box drives, I can check them on Hitachi's site to see where the warranty is at, AND throw them into the server and let them sit there doing nothing for 20 days or so to see if they fail. If their warranty is expired or they fail right away, I can always return them for a refund, and I'm not out much as I don't NEED them for anything now and won't be putting any data on them initially. It seemed like a reasonable plan to me, but we'll see!
 
I'll trust you guys on the Adaptec, and I've read enough in other threads about the Areca to know about its performance, but I am a bit sad about the Adaptec-HP incompatibility, given Adaptec's long-standing reputation in the sector.

Yeah, the Norco case is cheap at $300. Don't just let your new Hitachis idle; stress them with the manufacturer's or third-party tools for 48 hours. I used SeaTools for my drives; it was handy for returning drives that would otherwise have failed only upon reaching 95% used capacity, a year after the purchase. You got a pretty good deal on those!
 
The second major downside is performance with multiple RAID6 arrays which I have heard is not so good AT ALL.

Do you have any good links that you can share? I want to see some figures because I am sitting on a Dell Perc 6/i at the moment and I am thinking about upgrading to an Areca card.

Did you hook up the SASLP to the expander yet?
 
@Xfinity

Unfortunately, no, I don't have any particular performance benchmark links. I believe I read the anecdotal reports that the Areca cards don't work well with multiple RAID arrays a number of times in either the Areca Owners or HP SAS Expander threads (both are quite lengthy). I am fairly certain that a user or two in those threads quoted the performance numbers they were getting with multiple arrays, but I don't recall what they were. I am pretty sure they didn't post any benchmark screenshots or anything of that nature. Either way, for my purposes it was enough to know ahead of time so that I could simply plan around it. I decided I would just standardize on 2TB Hitachis for the array rather than chance the potential issues people were seeing with multiple arrays.

I have not yet hooked up the SASLP to the expander. Unfortunately I do not have the SFF-8087 cable I need to connect the HP SAS expander to the SASLP. I ordered one yesterday, but I don't expect it to be here until next week. I will report back with how that turns out once I receive the proper cable. :)
 
Well, as a small update, I now have 4 'open box' hard drives from Newegg. I checked the warranty online and they are all still good until 2013, which is fine by me. If they make it to 2013 and then die, I wouldn't even care about replacing them at that point; I am sure I would just want to buy a new/bigger one anyhow. In any case, I placed them in the server and I am trying to do some reliability testing, but I am having trouble finding a decent program oriented toward hard drive stress testing. I would love something that basically just writes to each of the 4 new drives (and deletes the data if the drive fills) for 48 hours or so. I could just manually copy data a number of times, but I'd prefer something better if anyone knows of a tool. As a side note, I did try SeaTools, and it looks like the Hitachi 2TB drives aren't supported (they didn't show up in the tool), and the Hitachi tools are all bootable-only (I would like something that runs from within the OS).
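
In the meantime I may end up just scripting it myself; something like this minimal sketch is really all I'm after (Python; the drive letters are hypothetical and I haven't tested this, so treat it as a starting point rather than a vetted burn-in tool):

    # Fill each drive with random data until it's full, delete, repeat
    # until the deadline; checking SMART counters (e.g. reallocated
    # sectors) before and after is the real pass/fail signal
    import os, time

    DRIVES = ["E:\\", "F:\\", "G:\\", "H:\\"]  # assumed letters for the 4 new drives
    CHUNK = 64 * 1024 * 1024                   # write in 64MB chunks
    deadline = time.time() + 48 * 3600         # run for roughly 48 hours

    while time.time() < deadline:
        for root in DRIVES:
            path = os.path.join(root, "burnin.bin")
            try:
                with open(path, "wb") as f:
                    while time.time() < deadline:
                        f.write(os.urandom(CHUNK))
            except OSError:
                pass  # disk full; real errors will also show in Event Viewer
            finally:
                if os.path.exists(path):
                    os.remove(path)  # free the space for the next pass

It writes to one drive at a time, so a threaded version would finish passes faster, but this keeps the sketch trivial.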

Another small update: I now have an external 8088 cable (I bought one just to have on hand for the future) but still no internal 8087 cable to connect the HP SAS Expander to the SASLP. Hopefully it will get here sometime this week so I can test connecting things up that way. I'm still watching out for a 1680x/lp/i; no luck yet, but one is bound to turn up used for a decent deal eventually. Worst-case scenario, I can always buy one new for a decent deal online ($550 or so), but I won't be quite as happy about it :). At least it looks like WHS 2010 is a ways out, so I have time to wait!
 

I recently bought a new 2TB Samsung drive for my WHS box, and I was able to use the Western Digital diagnostic tool to stress test it; it has a function that writes zeros to every sector on the drive. I also ran a scan with HD Tune.
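For what it's worth, the zero-fill pass is conceptually just streaming zeros over the raw device until it runs out of sectors. A rough Python sketch of the same idea follows; the PhysicalDrive number is a placeholder, and this wipes the disk, so double-check the number in Disk Management and run it as Administrator:

Code:
import os

# DANGER: this destroys everything on the target disk. The device number is a
# placeholder -- confirm it in Disk Management before running.
DEVICE = r"\\.\PhysicalDrive2"
CHUNK = 1024 * 1024   # 1 MiB of zeros per write; keeps writes sector-aligned

zeros = b"\x00" * CHUNK
written = 0
with open(DEVICE, "wb", buffering=0) as disk:
    try:
        while True:
            disk.write(zeros)
            written += CHUNK
    except OSError:
        pass  # end of device (or a real write error -- check the event log)
print("Zero-filled %d bytes" % written)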

Might I suggest that you build a separate machine, possibly from an old computer you have, to test out your drives? I had an old computer lying around that I bought an eSATA dock for, and I use it to test my drives before I actually place them in my WHS server.

Anyway, I will be following your build progress. The Samsung drive was a replacement for a drive that died in my WHS box; since that happened, I have realized that having some sort of redundancy is important, not to mention doing regular backups of the data.
 

Thanks for the suggestion, coolrunner84, I will try out WD's diagnostic tool. If I were so inclined I could connect the drives to one of my secondary computers, but I don't think that will be truly necessary; I am fine with placing them in the WHS server and testing them there, since as long as they aren't in the storage pool they can't mess anything up. In any case, good thinking on redundancy as well as real backups; I have had occasions where I was glad I had duplication enabled.

As a small, general update in case anyone runs into a similar issue, I hit some problems with the open-box Hitachi drives. Basically, I placed all 4 drives in the bottom row of the backplane in my Norco 4020 and tried to do a full format on each. Each drive would get to 100% of the format and then fail with this error:

The format did not complete successfully

If you looked in the event log, you would see an entry like this for each drive whose format failed:

Event Type: Information
Event Source: dmio
Event Category: None
Event ID: 30
Date: 8/24/2010
Time: 9:19:34 PM
User: N/A
Computer: SERVER-WHS
Description:
dmio: Harddisk2 write error at block 3907029167: status 0xc0000185

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 00 00 00 00 04 00 4a 00 ......J.
0008: 00 00 00 00 1e 00 05 40 .......@
0010: 00 00 00 00 00 00 00 00 ........
0018: 01 00 00 00 00 00 00 00 ........
0020: 00 00 00 00 00 00 00 00 ........

So, I immediately tried to download and run Hitachi's drive test tools, only to find out that they did NOT support the motherboard chipset in my WHS box and couldn't see any of the drives. So, I moved one of the drives into a secondary PC, booted, ran the quick test (passed), did a complete format (it still errored), and ran the complete test (passed). At this point I was at a loss, so I decided to RMA all 4 drives and submitted the RMA paperwork.

However, right before I sent out the drives, on a whim, I decided to put 4 OTHER Hitachi 2TB drives I had purchased a few weeks ago, drives I KNEW were working, into the BOTTOM row of the backplane. To my surprise, these drives would ALSO fail to format now that they were in the bottom row. So, I unpacked the 4 drives I was going to RMA, put them BACK into the server in the 3rd row and, shocker, they all formatted successfully. I have since tested them by essentially filling each of the 4 drives with data, with zero errors/issues. As far as I can tell, the drives are fine.

The crux of the issue appears to be that in my Norco the bottom 4 bays are connected to onboard ports on my motherboard (as I could only connect 16 ports with my 2 SASLP cards). So either those onboard ports are bad, or the bottom 4 ports on the backplane are bad, OR there is just some sort of compatibility issue between the two. It is a bit odd, though, that the drives are detected just fine during POST and in the OS when connected to the bottom 4 ports; I simply can't format them within Windows. At the very least I am glad that A) the drives aren't bad and B) this brought to light a potentially odd issue with the bottom 4 backplane ports connected to the onboard SATA ports before I actually needed to use those slots.
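One detail worth noting: the failing block number in the dmio event, 3907029167, is exactly the last LBA of a 2TB drive (3,907,029,168 sectors of 512 bytes = 2,000,398,934,016 bytes), which lines up with the format only dying at 100%. A quick sanity check of the arithmetic:

Code:
# dmio reported: "write error at block 3907029167"
SECTOR = 512                       # bytes per sector on these drives
TOTAL_SECTORS = 3907029168         # the standard 2TB capacity point
CAPACITY = TOTAL_SECTORS * SECTOR  # 2,000,398,934,016 bytes, i.e. "2TB"

failing_block = 3907029167
assert failing_block == TOTAL_SECTORS - 1  # the error is on the final sector

Whatever the root cause, that combination of onboard ports and the bottom backplane row apparently chokes on the very last sector of the disk.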

Either way, hopefully I will receive the 8087 cable this week and I can start using the HP SAS Expander with 1 SASLP and stop using the motherboard to connect to the backplane at all (the motherboard SATA ports will just be used for the OS drive and disc drive).
 
Well, as a small update, I did finally receive an SFF-8087 cable and attempted to connect my SASLP to the SAS expander with it, and the other ports on the SAS expander to the Norco 4020 backplane with the 8087-to-4x-SATA cables. Unfortunately, in the short time I messed with it I had absolutely no success: with everything connected through the expander, the SASLP saw 0 drives. I may need to play around with which port connects the SASLP to the SAS expander, or perhaps with settings in the SASLP BIOS, but I have to say I was disappointed that it didn't work right away.
 

I'm not working with an HP expander but with the Chenbro; still, the SASLP-MV8 card worked with the expander the first time, every time. There are no settings in the SASLP-MV8 BIOS that should have any effect on the expander. Do play around a bit with the ports, but don't rule out the possibility that you have a defective SAS expander.
 

Well, again, I have messed around with port placements a bit (moving around which port the 8087 cable to the SASLP was plugged into, and which port the 8087-to-4x-SATA was plugged into) and I still can't get any drives to show up in the SASLP BIOS. I have not tried every combination by any stretch of the imagination (I have 3 PCI-E slots on my motherboard and haven't tried all combinations of SASLP and SAS expander in the various slots), and I haven't messed with any SASLP BIOS settings or motherboard BIOS settings either.

As I am grasping at straws, I am considering a firmware update on the SASLP card(s). Mine are currently at FW version 3.1.0.15N, and it looks like 3.1.0.21 is out for the SASLP, but I can't seem to find any release notes for it, so I'm not sure it is worth applying once I figure out how to actually flash the card(s).

According to the bag my SAS expander came in, it is FW version 2.02; I am not sure whether that makes a big difference in terms of compatibility with the SASLP.

Watching the SAS expander while the system boots, I see the following activity lights: the lower 3 are lit, the 2 above those are off, and the top light is also on. On the SASLP that is connected to the SAS expander, the RDY1 and RDY2 lights are lit. As far as I can tell so far (having not tested all the ports), I only see the RDY1 and RDY2 lights lit on the SASLP if I am plugged into any port OTHER than Port 9 on the SAS expander; there aren't any lights on the SASLP at all if I plug into Port 9.

I am also going to order another 8087 cable in case there is some issue with the one I have, as that would be an easy fix. Beyond that, I guess I may have to look into RMAing the SAS expander to SynergyDustin, or into firmware updates for the SAS expander/SASLP.

EDIT
Based on this post from Odditory (quoted below) in the SAS expander thread, I am wondering if there is some issue with my motherboard:
"I noticed after I moved the expander #2 into the PCIEX16_3 slot that the top most LED started to blink, where as before it would just be solid on!"

I am not positive, but I don't think I have ever noticed the topmost LED on my SAS expander blinking. I have the following motherboard: http://www.newegg.com/Product/Product.aspx?Item=N82E16813131343
 
Do you have drives plugged directly into the SASLP-MV8 as well as plugging the cable into the SAS expander? That arrangement does not work. Both ports of the SASLP-MV8 need to be plugged into the same type of device (SAS or SATA). It will see your SATA drives, set itself into SATA mode, and then won't be able to communicate with the SAS expander.

If you haven't already, try disconnecting the other 8087 port on the SASLP-MV8 controller and see if it can see the expander.
 

Darn, I wish that were the case (that would have been a nice, easy solution), but no. Whenever I have been experimenting, ALL that has been connected is the 8087 from the SAS expander to the SASLP-MV8 and two 8087-to-4x-SATA breakout cables from the backplane to the SAS expander. Nothing else has been connected to the SASLP-MV8 cards.

EDIT

More and more I am starting to think it is an issue with my motherboard not supplying power properly to the SAS expander, based on this page of posts:

http://hardforum.com/showthread.php?t=1484614&page=21

John4200 and Treadstone have a motherboard similar to mine, except with 4 PCI-E slots. Of course, the slot that "worked" for them is the white PCI-E slot, which does NOT exist on the revision I have; yay. So if this is indeed the issue, then unless I can find a way to "fix" it (unlikely without modifying the SAS expander card, as I believe Treadstone did), I either need to get a new motherboard (not an appealing choice) OR sell the HP SAS expander and buy a Chenbro CK13601. My primary concern with the CK13601 is that I am not positive it would work with the Areca 1680x/i I am planning to buy.
 

Hard choices indeed!

In my experience it's better to buy hardware that you know for a fact works together. Sometimes the mantra "it could work" is fine, but in some cases it's better to know before you put something together, especially if it's a server ;). It's better for your wallet and your precious time! Just my $0.02.

Keep up the good work, pirivan, and keep us updated!
 

Unfortunately, there is a large part of me that agrees with your sentiment! It wouldn't be AS big a deal if I hadn't already purchased and installed the Phenom II X4 CPU (which is, of course, now just outside the 30-day Newegg return period). So if I want to move to another motherboard that I KNOW works with the HP SAS expander, I will likely have to move to a SuperMicro/server-style Intel-based board that I believe is mentioned as working in the SAS expander thread. That ALSO means it would be a bit more difficult to make the motherboard change PRIOR to moving to a RAID6/Server 2008 R2 environment hosting a WHS 2010 VM. I'm not sure how well WHS would tolerate swapping from an AMD to an Intel platform (I predict a BSOD, though swapping architectures like that HAS worked for me in the past with Windows XP).

I will have to go back through the HP SAS expander thread and weed out the specific models users listed as working. For example, this ASUS board (http://www.newegg.com/Product/Product.aspx?Item=N82E16813131228) is one that bluefox mentioned works for him, but again, it's a 'server' board that is Intel-based. Heck, just based on a quick search on Newegg it looks like there aren't any AMD 'server' boards that would accept the CPU I already purchased, so it REALLY looks like a move to Intel is my only choice if I want a 'server'-style motherboard.

I am not sure such a motherboard exists, but if I really do have to change motherboards to avoid a huge hassle, I will most likely be looking for a 'server'-style board (ASUS and SuperMicro look like the main choices on Newegg) that I can confirm works with the SAS expander, has dual NICs, supports a Core i5 (LGA 1156), has at least 1 PCI-E x1 slot, and ideally has 3-4 PCI-E slots (x16/x8/x4 would be great, or 2 x16 and 1 x4). Then I guess I will try my luck at selling off the AMD gear!
 