Build-Log: 100TB Home Media Server

Man I have been waiting a long time for this thread.

Looks great treadstone!

BTW is SES2 enabled and working with your Expander?

Thanks Nitro... I wish I could try it out, but as I mentioned before, I can't move this beast myself right now since I dislocated my shoulder a week and a half ago :(
 
I haven't really added it all up, but I figure somewhere around $16k+

I figure that my media server cost under $600 plus hard drives. Hard drives are about $150 for 2TB. 16 of these are $2400. About $3000 for 32TB of storage. No technical problems. No heat problems. No noise.

Duplicate it 3 times for $9K.

Sometimes a simple solution is much cheaper.
 
The motherboard is not AWARE that there is anything plugged into the black slots, as the config query at the beginning goes unanswered (which was my original problem). So all I did was modify the PWRGD signal that goes to the two black slots. The motherboard configuration hasn't changed, which means that the blue slots still get their x16 bandwidth.

As to the X58 motherboard, I wanted to use an i7 based processor and along with my above listed criteria for the slots/card distribution, there wasn't much to choose from.

I have the server at home already. It's been sitting in my garage for the last two weeks, and I haven't been able to move it into my basement and hook it up :(

Sorry, still trying to make sense of this. You're saying that by modifying the PWRGD signal for the two black slots, they now answer the config query, correct? If they're answering it, then that is telling the motherboard to send 8 lanes to that black slot. And if the black slot gets 8 lanes instead of none, then the blue slot paired with it only gets 8 lanes instead of the full 16.

You only have a total of 16 lanes to work from coming from the CPU with the way the P55 chipset works. So with this board, Asus used a nForce200 switch chip to make those 16 lanes look like 32 lanes. All 16 lanes go straight to the nForce200 chip which then sends out 32 lanes. They divide these 32 lanes amongst the 4 black and blue PCIe slots. If the black slots aren't occupied, the blue slots each get 16 lanes. If the black slots are occupied, they get 8 lanes and the blue slots get 8 lanes. The 4 lanes for the white slot come from the P55 chip.

So, if you've configured the black slots to always answer the query that they're occupied, then they'll always get 8 lanes, leaving the blue slots with only 8 lanes. Is that what you did with the modification? Best way to tell: run Everest; it'll tell you your lane config for each slot.

I want video of you moving this monster into your basement :eek:
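
(Side note: on a Linux live CD you can check the same thing without Everest. A minimal sketch, assuming lspci is installed and simply scraping the negotiated width from its verbose LnkSta lines:)

import re
import subprocess

# List negotiated PCIe link widths by scraping `lspci -vv` output
# (run as root for complete capability info). Yields (bus address, width).
def link_widths():
    out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout
    device = None
    for line in out.splitlines():
        if line and not line[0].isspace():
            device = line.split(" ", 1)[0]  # e.g. "03:00.0"
        m = re.search(r"LnkSta:.*?Width (x\d+)", line)
        if m:
            yield device, m.group(1)

for dev, width in link_widths():
    print(dev, width)

If the blue slots still negotiate x16 with the expander cards installed, the mod really is invisible to the motherboard.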
 
I read that article, then I read another by the same author. That guy is such a tool! And most likely flat out wrong. Yeesh.

LOL. Yeah if you read this article, it's like ... OMG ALL RAID systems are doomed :) Whatever will we do ????

I haven't really added it all up, but I figure somewhere around $16k+

I figure that my media server cost under $600 plus hard drives. Hard drives are about $150 for 2TB. 16 of these are $2400. About $3000 for 32TB of storage. No technical problems. No heat problems. No noise.

Duplicate it 3 times for $9K.

Sometimes a simple solution is much cheaper.

Of course I could have built this entire system for a lot less, but I didn't care about that. I wanted all drives in a single system. Sure, I could have used a bunch of Norco 4220 cases that are WAY cheaper, but where is the fun in that :)

Also, the cost of the HDDs has come down over the past two months; by now I could get the same setup for almost $3k less...

Sorry, still trying to make sense of this. You're saying that by modifying the PWRGD signal for the two black slots, they now answer the config query, correct? If they're answering it, then that is telling the motherboard to send 8 lanes to that black slot. And if the black slot gets 8 lanes instead of none, then the blue slot paired with it only gets 8 lanes instead of the full 16.

You only have a total of 16 lanes to work from coming from the CPU with the way the P55 chipset works. So with this board, Asus used a nForce200 switch chip to make those 16 lanes look like 32 lanes. All 16 lanes go straight to the nForce200 chip which then sends out 32 lanes. They divide these 32 lanes amongst the 4 black and blue PCIe slots. If the black slots aren't occupied, the blue slots each get 16 lanes. If the black slots are occupied, they get 8 lanes and the blue slots get 8 lanes. The 4 lanes for the white slot come from the P55 chip.

So, if you've configured the black slots to always answer the query that they're occupied, then they'll always get 8 lanes, leaving the blue slots with only 8 lanes. Is that what you did with the modification? Best way to tell: run Everest; it'll tell you your lane config for each slot.

I want video of you moving this monster into your basement :eek:

I don't think I made myself clear enough. The HP cards do NOT reply to any requests. There are NO lanes connected on the HP SAS expander cards; the only 'bus' that is actually connected is the SMBus. I needed to change the behavior of the PWRGD signal because the motherboard would set the black slot PWRGD signals low again right after its config query went unanswered, and that sent the HP SAS expander card right back into a reset state.

Again, the slot is used only for mechanical support and power. That's it! In other words, the card receives power and a reset signal from the black slot, and no PCIe lane change happens during the boot process. The blue slots still have their full x16 configuration. The motherboard isn't aware that there is anything plugged into those slots!

I do understand how the PCIe system works, how the lanes are distributed, and what the function of the nForce200 chips is. In fact, when I was looking for motherboards, I downloaded the Intel chipset datasheets to figure out which chipset would actually work for my particular setup, and then went looking at motherboards that utilized that chipset.

Sorry, I don't think there will be any videos of the server move... I will try and post some pictures once it is all set up and running.
 
As someone who's used Arecas before, I have to ask why you're not using Hitachi drives?

Also, if you plan on moving the chassis with the drives removed, invest in a label maker.

Those WD drives he bought *should* be okay, but I wasn't thrilled with the failure rates after owning just 8 of them briefly. The biggest factor that created issues with certain drives in the past was the onboard expander on Areca 1680ix and Adaptec 5-series cards greater than 8 ports, which he's been able to circumvent with the HP expander approach.

The RAID card doesn't care what order the drives are in, as long as they're all present. Just don't mix drives from two different arrays, obviously.
 
Nice, I didn't realize the slot would just provide power without switching lanes over with that mod, good find.

Good luck with moving it; I guess we'll settle for just some running pics :)
 
Nice, I didn't realize the slot would just provide power without switching lanes over with that mod, good find.

Good luck with moving it; I guess we'll settle for just some running pics :)

Yep, now you've got it. If the motherboard had had the same PWRGD signal behavior on the black slots as on the white slots, I would never have had to look into this. Which also means I would never have figured out what signals the card 'really' uses.

Anyway, it was a really simple modification and now everything is good :)
 
I have a Norco 4220 WHS build, but I seem to be running into a lot of problems with drives falling in and out of the RAID 6 array. I found out my WD 2TB WD20EADS drives have two different firmware versions. Drives with firmware version 01.00A01 seem to be holding fine, but the drives with firmware 04.05G04 will always drop from the array, and the Adaptec 52445 RAID controller alarm goes off on almost every reboot. For now, I have taken the drives with firmware 04.05G04 out of the chassis.

Is your array holding steady?
I think I will try the WDIDLE3 and WDTLER programs and see if it will give me some relief. If these don't work out, then I may get rid of the WD drives and go with Hitachi. I currently have 2 Hitachi and they seem to work fine.
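
(For what it's worth, sorting drives by firmware revision is quick with smartctl; a sketch, with placeholder device names:)

import subprocess

# Print the firmware revision smartctl reports for each drive, to spot
# mixed 01.00A01 / 04.05G04 batches. The device list is illustrative.
for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc"]:
    out = subprocess.run(["smartctl", "-i", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith("Firmware Version:"):
            print(dev, line.split(":", 1)[1].strip())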
 
Ok after understanding a bit more about URE with my post:
http://hardforum.com/showthread.php?t=1513474

The guy has a point about RAID 5, but he got RAID 6 all wrong. For RAID 6 to lose data during a rebuild, you need a second error in the same sector on another drive, which pushes the probability of failure way down.

I have come to the conclusion that you definitely need to use RAID 6.

If you use RAID 5, even with a URE rate of 1 in 10^15 bits, you are looking at an expected value of about 114 TiB read per bit error. This means that when a hard drive fails and you have to rebuild, the probability of hitting a bit error is about 79%. While it won't necessarily destroy your entire RAID, an error is an error and could be bad.

Looking at RAID 6, however, you get an expected value of 103,398 YB (yottabytes) per bit error, which effectively means never.
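
(If you want to redo the arithmetic, here's a back-of-the-envelope sketch; it assumes independent bit errors at the rated URE and an illustrative rebuild size, not any specific build in this thread:)

import math

URE = 1e-15              # rated unrecoverable read errors per bit read
bytes_read = 47 * 2e12   # illustrative RAID 5 rebuild: reading 47 x 2TB drives

bits_read = bytes_read * 8
p_ure = 1 - math.exp(-URE * bits_read)  # P(at least one URE during the rebuild)

print(f"expected data per URE: {1 / (URE * 8) / 1e12:.0f} TB ({1 / (URE * 8) / 2**40:.0f} TiB)")
print(f"P(URE during rebuild): {p_ure:.0%}")

The 114 figure above is this same quantity in binary units: 10^15 bits is about 125 TB decimal, or roughly 114 TiB. For RAID 6, a URE during the rebuild is only fatal if a second error lands in the same stripe, which is why the expected-data-per-failure number explodes the way it does.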
 
I have a Norco 4220 WHS build, but I seem to be running into a lot of problems with drives falling in and out of the RAID 6 array. I found out my WD 2TB WD20EADS drives have two different firmware versions. Drives with firmware version 01.00A01 seem to be holding fine, but the drives with firmware 04.05G04 will always drop from the array, and the Adaptec 52445 RAID controller alarm goes off on almost every reboot. For now, I have taken the drives with firmware 04.05G04 out of the chassis.

Is your array holding steady?
I think I will try the WDIDLE3 and WDTLER programs and see if it will give me some relief. If these don't work out, then I may get rid of the WD drives and go with Hitachi. I currently have 2 Hitachi and they seem to work fine.

Jumper them to SATA-1 (1.5Gb/s). This is a known issue/fix with WD Greens and Adaptec controllers. It is also a <$2 fix that will take a few minutes, versus spending hours and lots of money on a different enclosure.
 
Odditory and I were so close to both buying that case (literally the very one he has). :p
 
BTW @ treadstone et al. I had a similar issue with the HP SAS Expander in the Asus P6T7 WS Supercomputer. Funny thing was, the non-NF200 board I was using to test it on when the Asus was not working seemed to have a similar issue. My solution involved a Supermicro motherboard though. Let's just say... it was frustrating, but I'm happy now.
 
Awesome project, sir. I myself am getting into ripping my Blu-ray collection, but I'm nowhere near the scale you are undertaking >:p

Anywho, since you will be rockin' a RAID, whether it be RAID 5 or 6 (I recommend RAID 6 from previous experience), will you have spare hot-swap drives ready in the event your array is compromised?
 
Odditory and I were so close to both buying that case (literally the very one he has). :p

Sorry man... When I came across the case and saw what it was listed at, I figured I had to act on it before someone else would ;) ... guess I was right :D

BTW @ treadstone et al. I had a similar issue with the HP SAS Expander in the Asus P6T7 WS Supercomputer. Funny thing was, the non-NF200 board I was using to test it on when the Asus was not working seemed to have a similar issue. My solution involved a Supermicro motherboard though. Let's just say... it was frustrating, but I'm happy now.

At first I thought this might be related to the fact that they used the nForce200 chips, but I think (also based on what you just posted) it's just the way Asus implements the additional PCIe slots...
 
Different question for those of you who own a Supermicro motherboard:

Does SM support PUIS (Power-Up In Standby) via the BIOS on the onboard SATA ports, for HDDs that have that feature?
 
I have a Norco 4220 WHS build, but I seem to be running into a lot of problems with drives falling in and out of the RAID 6 array. I found out my WD 2TB WD20EADS drives have two different firmware versions. Drives with firmware version 01.00A01 seem to be holding fine, but the drives with firmware 04.05G04 will always drop from the array, and the Adaptec 52445 RAID controller alarm goes off on almost every reboot. For now, I have taken the drives with firmware 04.05G04 out of the chassis.

Is your array holding steady?
I think I will try the WDIDLE3 and WDTLER programs and see if it will give me some relief. If these don't work out, then I may get rid of the WD drives and go with Hitachi. I currently have 2 Hitachi and they seem to work fine.

I'm pretty sure the culprit is the onboard expander on the 52445. Unless there's a way to flash all your WD drives to the same version (not sure, I don't own WDs anymore), you can try the SATA-1 jumper like pjkenned mentioned, or my personal preference, which is getting Adaptec to swap your 52445 for one with the newer onboard expander chip. I sold Krobar my 52445; he ran into some issues with dropouts, and then Adaptec swapped his card for a newer rev. If I recall he was in the UK though. Hopefully they'll honor it in the U.S. too.
 
Different question for those of you who own a Supermicro motherboard:

Does SM support PUIS (Power-Up In Standby) via the BIOS on the onboard SATA ports, for HDDs that have that feature?

The two that I have used recently,

X8DAL-i
X8DTL-i

use the ICH10R chipset for the SATA ports, and there is no option for that.
 
The two that I have used recently,

X8DAL-i
X8DTL-i

use the ICH10R chipset for the SATA ports, and there is no option for that.

Thanks Nitro. The Asus MB I am using doesn't seem to support that either :(

Wonder which MBs actually support that feature...

The WD20EADS supports it if you jumper them, but the MB needs to support it too; otherwise the MB will not be able to identify/recognize the HDD during POST...
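
(On the Linux side, hdparm can query PUIS, and with -s even toggle it in the drive firmware instead of via the jumper; the man page marks -s as dangerous for exactly the reason above, since a BIOS/controller that never sends the spin-up command will stop seeing the drive. A query sketch, where the exact feature-line text is an assumption about hdparm's output format:)

import subprocess

# Check whether a drive advertises the Power-Up In Standby feature set
# in `hdparm -I` output (Linux, needs root).
def puis_supported(dev):
    out = subprocess.run(["hdparm", "-I", dev], capture_output=True, text=True).stdout
    return any("Power-Up In Standby" in line for line in out.splitlines())

print(puis_supported("/dev/sda"))  # device name is a placeholder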
 
I'm confused as to why/how you would use that feature.

Why would you want your OS drives to boot into standby?
 
Spinup power consumption.

I need to see how I can tweak the system to bring the total power consumption down.

I configured the ARC-1680i to do staggered spin-up and it works (at least most of the time), but at initial power-up it appears as if all drives are spinning up at the same time. I haven't had the chance to really play with it for the past two weeks, so this is all from memory.
There are two drives that are connected to the MB and 48 are connected via the HP SAS expander cards to the ARC-1680i.
With all 50 drives spinning up at once, the draw is in excess of 1100W!
It's not a problem, as the power supplies and the UPS have no issue handling that load, but if I can change the configuration to have the drives spin up on request, that would be better.
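
(The rough math, where only the >1100W total is the observed number and the per-drive figures are ballpark assumptions:)

drives = 50
spinup_w = 22   # ~1100 W observed / 50 drives, drawn mostly on the 12V rail
idle_w = 4      # typical 5400rpm "green" idle draw (assumption)

print(f"all drives spinning up at once: {drives * spinup_w} W")
print(f"all drives spun up and idle:    {drives * idle_w} W")

Staggered spin-up matters because the gap between those two numbers is what the PSU has to cover for just a few seconds at power-on.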
 
Right, but since 48 of your drives are on the Areca, wouldn't the Areca have to support it?
The mobo supporting it would only matter for 4 drives.
 
Actually 2 drives, and yes, you are correct. I was asking about the MB because I want those 2 drives to only spin up once the POST is done and it's time to load the OS.
Unfortunately, the ARC-1680i does NOT support PUIS (or at least I couldn't get it to work when the drives had the PUIS jumpers on).
 
Actually 2 drives, and yes, you are correct. I was asking about the MB because I want those 2 drives to only spin up once the POST is done and it's time to load the OS.
Unfortunately, the ARC-1680i does NOT support PUIS (or at least I couldn't get it to work when the drives had the PUIS jumpers on).

Areca and Adaptec don't support PUIS; I think Highpoint does.
 
This is a very impressive build, can't wait to see more!

lol some of the porn addicts in the forum must be really jealous right about now.
 
Wow, I just have to say I love this build.
It's sick.
Although I'm not envious of your credit card bill.
 
Nice build! Wish I had the dough to build something like that, still running stackers + 4in3's...
 
Nice. I'd hate having to move that thing for whatever reason.

Anyway, how much did you spend on the 52 drives?
 
100TB? Is that all? :D

Looking forward to reading through the build in detail when I get home...
 