PSU for a 20 HDD system?

Boomslang

Limp Gawd
Joined
Apr 28, 2007
Messages
451
Hello, I'm new here, so I apologize if this was touched upon in the past.

This summer I'm building a storage server that will contain 18 storage disks and 1 OS disk. I am looking for a PSU that will be able to power these drives in addition to an Athlon 64 X2 (Brisbane) at stock clocks, with 4 SATA controllers in the PCI slots. I would like to spend $200 or less. I have no preference as to whether I should use a single PSU, or one for the system and one for the HDDs. Combined cost should still be around $200.

The two options I've entertained so far:

1.) Dual PSUs: The FSP Green PS FSP300-60GLN ATX2.0 300W for the system + The SeaSonic M12 SS-600HM ATX12V / EPS12V 600W for the drives

2.) Single PSU: The PC Power & Cooling Silencer 750 Quad (Black) EPS12V 750W for the entire thing

I'm fairly certain I have enough wattage for the system, the only thing I'm unsure about is the amperage. Does 18A on quad 12V rails mean the same thing as 72A on a single 12V rail? I read through a bit of the info in the stickies, and I know I have enough amperage with either option to handle all of the drives spinning up at once (my controllers do not support staggered spinup; they are quite cheap). Also, both options use PSUs that have decent efficiency numbers, which I value. Active PFC is important.

To sum it up: Lots of hard drives need power. I have $200 to spend. I care about efficiency. Single PSU? Dual PSU? Amps distributed over rails, or all on one? Taking your thoughts. Thanks.
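
Here's the rough spin-up math I'm working from, for reference (assuming roughly 2.4A of 12V draw per drive at spin-up - a guess until I actually pick the drives):

Code:
# Back-of-the-envelope sketch of the rail question above. The per-drive
# spin-up draw is an assumption; check the chosen drive's datasheet.
DRIVES = 20
SPINUP_A = 2.4                       # assumed +12V spin-up draw per drive, in amps

total_spinup = DRIVES * SPINUP_A     # 48A if every drive spins up at once

# Single-rail unit: one limit covers the whole load.
single_rail_rating = 60              # hypothetical 60A single 12V rail
print(f"single rail: {total_spinup:.0f}A needed vs {single_rail_rating}A rated")

# Quad-rail unit: 4 x 18A adds up to 72A only if the load spreads evenly;
# if per-rail limits are enforced, one 18A rail carries only so many drives.
per_rail_limit = 18
drives_per_rail = int(per_rail_limit // SPINUP_A)   # about 7 drives per rail
print(f"quad rail: at most {drives_per_rail} drives per 18A rail during spin-up")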
 
First: I suggest a single PSU over two.
Second: If you do go with two, stick with single-rail units.
Third: I'd try a nice Zippy or Silverstone. They should have more than enough juice for 20 drives.
 
Get SATA cards that support staggered spin-up. After that, most 600W+ PSUs can handle the load - probably even the Corsair 620W.
 

Four higher-end SATA controllers would cost far more than a slightly heftier PSU...

To the OP: The single-PSU solution would probably be optimal, I should think. It may even be slightly overkill.

Just out of curiosity, how much space do you have with 18 drives? And better yet, what kind of porn are you into that would require that much space? :D

(Sorry, had to say it. :p)
 

Thanks, guys, for all suggestions so far.

I'd be using 500GB drives unless 750s drop in price by the summer. Price per gig on the entire system would be 33-34 cents, with a total raw capacity of 9TB, but since I'd be using RAID5, I'd see around 2/3 of that after format.
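
Roughly how I'm figuring the capacity and cost (the RAID5 array split below is just a guess; the real layout will depend on the controllers):

Code:
# Sketch of the capacity/cost math. The RAID5 array split is an assumption;
# each array gives up one drive's worth of space to parity.
DRIVE_GB = 500
DATA_DRIVES = 18

raw_gb = DRIVE_GB * DATA_DRIVES                  # 9000 GB raw -> the 9TB figure
arrays = 6                                       # e.g. six 3-drive RAID5 sets (guess)
usable_gb = (DATA_DRIVES - arrays) * DRIVE_GB    # 6000 GB, roughly 2/3 of raw

cost_per_gb = 0.335                              # 33-34 cents per gig for the build
print(raw_gb, usable_gb, round(raw_gb * cost_per_gb))   # total cost ~ $3000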

HD midget goat porn sure takes up a lot of space.
 
You're getting PCI slot SATA controllers? Hm..

Yes, because this will be serving SMB shares, so my bottleneck is my networking equipment, which roughly matches the bandwidth of the PCI bus. This way I can save more money. It would be nice to have some sweet disk I/O, but it won't matter once it hits my NIC.
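
The rough numbers behind that reasoning (theoretical peaks; real-world throughput will be lower on both sides):

Code:
# Theoretical peak bandwidth of the two bottlenecks being compared.
# A shared 32-bit/33MHz PCI bus never sustains its full peak in practice.
pci_bus_mb_s = 133        # MB/s, shared by every card on the bus
gige_mb_s = 1000 / 8      # 125 MB/s for gigabit Ethernet, before protocol overhead

print(f"PCI bus : ~{pci_bus_mb_s} MB/s shared")
print(f"GigE NIC: ~{gige_mb_s:.0f} MB/s")
# The two are within ~10% of each other, so faster controllers wouldn't
# raise what network clients actually see.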
 
I see nothing wrong with the PC Power & Cooling 750W you've pointed out in your original post. It's a nice, solid Seasonic-built single-rail design.

You'll find that a lot of people here recommend the Corsair HX620 or 520 - it seems the trendy thing to do lately. They're also good Seasonic designs with a similar platform.

I like any of the single rail designs for this type of stuff especially.
 
It looks like I'll be sticking with the PC P&C 750w PSU to power the whole system. Thank you for the reassurance.

I'm not getting Zippy because Newegg does not sell them, although I've heard great things and I regret this limitation.

I checked out the single-rail Silverstone and found it to be ten bucks cheaper than the PC P&C unit. The specs looked good, and it's apparently got modular cabling. However, it doesn't have enough connectors for me. I'll be using molex splitters so that I can power two SATA drives from each molex plug, and I'll be using each SATA power connector individually. With the Silverstone, I would be able to support 18 drives max with this method, leaving no power for the OS HDD. The PC P&C has enough plugs for my needs.
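
The connector math that rules it out, using the plug counts I found listed for the modular Silverstone (worth double-checking against the actual cable list):

Code:
# Connector arithmetic for the modular Silverstone: two drives per molex plug
# via a Y-splitter, one drive per SATA plug. Plug counts per the specs I found.
molex_plugs = 6
sata_plugs = 6

max_drives = molex_plugs * 2 + sata_plugs   # 18 drives - nothing left for the OS disk
print(max_drives)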

Once again, thank you all - I'm pretty confident now, but I'm willing to hear out any further suggestions.
 
Is your networking equipment gigabit?
 
I've been in a similar situation -

A pair of 3ware 9500S-12 cards, which are 12-port SATA controllers.

I could only fit 16 disks in my server case. It was one of the Cooler Master cases with three of the 4-in-3 drive modules in the 5.25" bays. I ended up running two power supplies, with 4-pin splitters feeding two SATA power connectors each.

It worked nicely for a while, until I decided it was a complete waste because of the PCI bandwidth. The arrays were set up as duplexed RAID 0, and writes would only reach about 80 MB/sec, with reads about the same. It was horrible when even my single OS disk, hooked to the onboard SATA connector, could match that read/write performance.

If I were to do it all over again, I would have made sure my board had a couple of PCI-X slots.

I might even consider selling my RAID cards if there's any interest.
 

I was planning on using cheap <$30 Rosewill 4-port SATA150 controllers with software RAID5, mostly because a friend uses the same controllers with a bunch of 250gig drives in his 1.5TB setup. He is experiencing fairly dismal speeds, but he also has the gigabit NIC on the same PCI bus as the SATA controllers. His hardware is much older and crappier than mine, and I'm sure that contributes in no small part. However, speed still isn't a priority - cost and storage space are. The server will not be in any sort of mission-critical environment, and I have enough patience to wait out the transfers.

If I were to get a PCI-X setup, I'd be paying more for the motherboard, much more for the SATA controllers, and I still wouldn't be able to get more than the real-world bandwidth of the gigabit NIC. Therefore, I'm content with what I've got here.

@SilenceGold - If you were curious, I'm using a Cooler Master Stacker case with four 3x 5.25" -> 5x 3.5" bay adapters, non-hotswappable. With the PC P&C unit, I'll be able to power 16 HDDs off the molex connectors (with splitters) and 6 more drives with the SATA power connectors. I'll probably wind up powering 14 drives off the molex and 6 drives off the SATA, with one molex spare for a CD-ROM drive that will not be kept in the machine except when installing the OS. I will power the fans using what's left over.
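
For the curious, the connector count I'm planning around (plug counts are my reading of the PC P&C cable list - I'll verify before ordering):

Code:
# Connector-count sketch for the PC P&C plan above. Native plug counts are my
# reading of the cable list; verify against the actual unit.
import math

native_molex = 8
native_sata = 6

drives_on_molex = 14
molex_used = math.ceil(drives_on_molex / 2)   # two drives per Y-splitter -> 7 plugs
drives_on_sata = 6                            # one drive per SATA plug

spare_molex = native_molex - molex_used       # 1 plug left for the temporary CD-ROM
print(drives_on_molex + drives_on_sata, "drives powered,", spare_molex, "molex spare")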
 


Unless you also had multiple aggregated gigabit links.
 
Yes, but since I have no PCI slots left, I'd have to move up to a PCI-E slot, and that's another $50. :(

Aren't the first gigabit links on the nForce boards connected through PCIe? I think one of the two gigabit links on my nForce4 Ultra-D is - the nVidia one is on PCIe, while the Marvell one is on PCI. Hell, even my board only has 3(?) PCI slots.
 

The motherboard I have picked out is here: http://www.newegg.com/product/product.asp?item=N82E16813138041

I'm not sure if the NIC rides the PCI bus for this mobo. I'll try and find out, and if anyone knows for certain, I'd be interested in hearing, but I'm still not sure it would influence the decision. It would just be good to know.
 
Also, what drives are you using, and do you have them yet?

I do not have anything yet, this is a summer build and is still in the planning stages. If I were to build right now, I'd be using Samsung Spinpoint T 500GB drives ( http://www.newegg.com/Product/Product.aspx?Item=N82E16822152052 ). I've used them in the past and I've been pretty happy with them, speeds aren't awesome but everything else is. They are $125, or 25 cents per gig. As far as bang for your buck goes, they are only trailing the 320GB drives, but I'd rather pay the 2 extra cents per gig and get the more spacious drives. The 750s are running about 33 cents per gig, which is a pretty dramatic step up.

If the summer brings more price cuts, I wouldn't mind getting larger drives at all.
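
The cost-per-gig comparison I'm working from (the 320GB price is back-figured from the "2 extra cents per gig" difference, so treat it as approximate):

Code:
# Cost-per-GB comparison from the prices above. The 320GB price is
# back-calculated from "2 extra cents per gig", so treat it as approximate.
drives = {
    "320GB": (320, 0.23 * 320),   # ~$74 (assumed)
    "500GB": (500, 125),          # Samsung Spinpoint T, as quoted
    "750GB": (750, 0.33 * 750),   # ~$248, from the ~33 cents/gig figure
}
for name, (gb, price) in drives.items():
    print(f"{name}: ${price:6.2f} -> {100 * price / gb:.1f} cents/GB")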
 
Well those HD501LJ drives each have a max spin-up current draw of 2.4 amps, so multiply that by 20 and you've got 24 amps (a peak of 288 watts) being used if all the drives spin up at once. If you've got a controller capable of doing a staggered power-up then you lessen that value a bit.

As far as normal operations of the drives go:

164 watts = all 20 drives at idle
250 watts = if all 20 drives did a seek operation at exactly the same time.

You have to add to that the draw of the other components in your system, but that's going to be quite a small addition when compared to all the drives. I would look at the single rail designs offered in this thread. The Corsair 620, the PC Power and Cooling 700w, or the 600w version, or if you're just rolling in the dough you could go for that $300+ Zippy/Emacs. :D
 

Thank you very much for the concrete numbers, that's very helpful - but you've got a math error: 2.4 amps x 20 != 24. That's 48 amps, if I'm not mistaken. Does that mean I should double your wattage calculations as well?

And that Corsair 620 is looking mighty good, especially at $50 less than the PC P&C after rebate. It's got all the plugs I need, but it's got only 6 amps to spare during spin-up. I'm guessing this will be sufficient, but I'm not positive. It is, however, not single rail, and it seems that the majority of recommendations are leaning towards single rail. Is there any real reason why I should consider the Corsair 620 over the PC P&C 750, aside from price?
 
Assuming 2.4A per drive on spin-up, that's 48A including some headroom. You should be fine with the Corsair or the Silverstone Olympia 650 (54A). Depending on the rest of your hardware, that is. Less if you have staggered spin-up.
 

The Corsair is definitely single rail. Read jonny's review.
 

Have now switched my top pick to the Silverstone Olympia 650. Thanks for the suggestion. $150, single rail @ 54A, 12 molex and 6 SATA. Sounds good to me!
 

http://www.corsairmicro.com/corsair/HX_power_supply.html

This is the one I was looking at - it claims triple 12V rails. I'll look for that review.

EDIT: I'm pretty sure the review says triple rails as well - here's the link I'm using: http://www.jonnyguru.com/review_details.php?id=32&page_num=2

Am I missing something?

EDIT 2: For the Silverstone unit, Newegg reports 4x triple molex connections, whereas Jonny and the manufacturer's page report 2x triple molex connections. 6 molex plugs may not be enough unless I drop a splitter or two on the SATA plugs. What's with the discrepancy in specs between these sources?
 



OMG I'm so embarrassed, I can't believe I did that. For some reason I kept thinking 10 drives when I did the spin-up calc even though you said (and I typed) 20. :D

I did the seek and idle calculations in a hurry but the stats on the drive say 10.6w per drive when seeking, 8.2w at idle.

So exactly 212w if all drives are seeking at once
Or 164w with all drives idling

Sorry about that... That significantly changes your spin-up power requirements, but the recommendations in this thread still hold true. I'd definitely go for the Corsair HX620 over the 520, although either will work for this specific application - it's nice to have some headroom. And of course I can't recommend the PC Power & Cooling 750W enough either - they're all fine units.
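
Putting the corrected figures in one place (the per-drive numbers are the HD501LJ specs quoted above):

Code:
# Corrected drive-power totals using the HD501LJ figures quoted above.
DRIVES = 20
SPINUP_A_12V = 2.4     # amps on +12V during spin-up
SEEK_W = 10.6          # watts per drive while seeking
IDLE_W = 8.2           # watts per drive at idle

spinup_amps = DRIVES * SPINUP_A_12V   # 48A (~576W on +12V) at power-on
seek_watts = DRIVES * SEEK_W          # 212W with every drive seeking at once
idle_watts = DRIVES * IDLE_W          # 164W with all drives idle
print(spinup_amps, seek_watts, idle_watts)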
 

Tip: Deadeyedata.com with the coupon code JONNYGURU will be cheaper for that Silverstone.
 

Looking at the Seasonic main PCB inside the Corsair PSU reveals only two rails, labeled 12V1 and 12V2. There is no third rail. This is illustrated well at Hardware Secrets. Although I cannot say whether these rails are or are not somehow electronically separated in the PSU's circuitry somewhere, I did find that there was no OCP (over-current protection, or "limiter") on either of these rails, as I was able to load any given connector up to 30-40A with no drop in voltage, system shutdown, etc.

So it is my opinion that we essentially have a single 12V rail PSU here. Certainly there is nothing wrong with this, given the problems high-end video cards have had getting enough power from a single 12V rail when the OCP is set to the typical 240VA limit. But we do lose the advantages of multiple rails, such as protection from damage to one rail from a short on another, and the simple "filtration" of noise introduced from one rail to another.

Bottom of the page.
 
@Bbq: Thanks mang! Appreciate the info.

@aznx: Ahh, interesting. I definitely missed this while skimming. That's very odd that the company would make false claims. Looks like the price quoted on Deadeyedata.com will cause me to stick with the Silverstone OP650 though - that price is so good.
 

It's not really a false claim. Though a single rail is pretty good on its own. :p
 
I have a very similar setup, only in a Lian Li case. I am using a Seasonic 700 and am very happy with it.
 
It's not a false claim: I remember Redbeard saying that when the labels and manuals were printed (long before the PSU was done), they were still on an older ATX spec, where the max per rail was 20A.
 


Virtual triple rails, but it's actually a single-rail PSU.
 