Ockie said: That's why I hate enthusiast PSUs, their actual output is so damn low. I'm expecting this 1kW to give me nothing less than 1kW of output. It's interesting because when you deal with server-grade PSUs they don't talk about peak power; peak power is just a perk or something.
As others have said, it is a true 1kW PSU. It should have an extremely long life; I don't see anything on the horizon coming near to making it obsolete. Talking about server PSUs and their true ratings is one of the reasons I recommended Zippy. They are super solid PSUs designed for the server market, where stability is #1.
That 680 of yours did have some great numbers too... but I guess it's just like riced out cars... all the stickers, all the show... no go
EnderW said:Don't know why you would have loose connectors, I've owned 2 PCP&C units and never had anything like that.
As far as the number of connectors, I'm surprised you didn't check the diagram of the harness on their website.
I noticed they didn't sleeve the very ends of some of the wires - that's kinda disappointing since they did it on my 510W.
trust_no1 said:^I think that's because he used extensions and splitters?
Anyway, I would contact PC P&C and talk to them about it, I'm sure they can try to solve those problems. At that price I would expect no less.
Ockie said: Pull down on your 24-pin ATX cable and you will see some of the pins pull out just enough to prevent the machine from starting.
Mine seems pretty solid.
mashie said:It is an enthusiast PSU designed and aimed for highly overclocked SLI/Crossfire systems, not for servers. Most people with need for 1000W+ in a server should use a 2+1 redundant PSU anyways.
mashie said:Still I don't consider any of them anything but enthusiast grade hardware. And as such I fully understand why they went for 6 SATA plugs in the PCP&C.
Majin said: So that 1kW beast puts out 1000 watts in normal usage and 1200 watts peak.
What is the draw from your wall socket?
Also, how does it feel to own the power draw of 10 X 100 watt light bulbs inside a computer?
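Majin's wall-socket question has a simple answer: a PSU's rated wattage is DC output, so the AC draw is the output divided by the unit's efficiency. A minimal sketch of the arithmetic, assuming a roughly 80% efficient unit (the efficiency figure is my assumption, not stated anywhere in the thread):

```python
def wall_draw_watts(dc_output_w, efficiency):
    """AC power pulled from the socket to deliver dc_output_w of DC output."""
    return dc_output_w / efficiency

# 1000 W continuous DC output at an assumed 80% efficiency
print(round(wall_draw_watts(1000, 0.80)))           # 1250 W from the wall
# current on a 120 V circuit at that draw
print(round(wall_draw_watts(1000, 0.80) / 120, 1))  # ~10.4 A
```

So a fully loaded 1kW unit would pull more like 1250 W from the wall, not 1000 W; at lighter loads the draw scales down accordingly.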
Devistater said: I'm so tempted to try and get a 150GB Raptor. The performance on those is nothing short of amazing, a huge leap from the 74GB series, which was a leap from the 36GB series.
Guspaz said: The thought of 21 raptors in RAID5 (for about 3.1TB of space) is blowing my mind. But with that sort of speed, the PCI bus just wouldn't be able to handle it. Even a PCIe 1x slot wouldn't be enough, that is only 150MB/s... One raptor can peak at 88MB/s (not even considering cache), and 21 of them in RAID... Even PCIe 16x might not be enough, if they even make PCIe 16x RAID controllers. You'd need to use PCI-X 2.0, which does 3.4GB/s, enough for even 21 raptors.
Where'd you come up with this? He has 20 500GB drives, not 20 raptors, but whatever, let's roll with it. Areca makes a 24-port PCI Express x8 card. x1 is 250 MB/s each way (theoretically), so x8 is 2 GB/s. PCI-X tops out at 266 MHz on a 64-bit bus IIRC, which is about 17 Gbit/s, or roughly 2.1 GB/s. There's a reason new buses come out: they're faster. x16 is twice as fast, of course.
CmaN3 said: He wasn't saying that he had 21 raptors; he was just fantasizing about the speed you would get from that.
unhappy_mage said: Where'd you come up with this? He has 20 500GB drives, not 20 raptors, but whatever, let's roll with it. Areca makes a 24-port PCI Express x8 card. x1 is 250 MB/s each way (theoretically), so x8 is 2 GB/s. PCI-X tops out at 266 MHz on a 64-bit bus IIRC, which is about 17 Gbit/s, or roughly 2.1 GB/s. There's a reason new buses come out: they're faster. x16 is twice as fast, of course.
Now, as to transfer rates on such a setup: you might get around 1.7 GB/s (88 MB/s x 20 = 1760 MB/s) reading from disk, but writes that are less than a stripe wide are just going to be painful; you go to a read-modify-write (2 reads + 2 writes) scenario.
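The bus arithmetic in the last few posts can be checked directly. A minimal sketch of the numbers being thrown around (the per-lane and per-drive figures are the ones quoted in the thread; the helper names are mine):

```python
PCIE1_LANE_MBPS = 250  # PCIe 1.x usable bandwidth per lane, each direction

def pcie_bandwidth_mbps(lanes):
    """Aggregate PCIe 1.x bandwidth for a slot with the given lane count."""
    return PCIE1_LANE_MBPS * lanes

def pcix_bandwidth_mbps(clock_mhz=266, bus_bits=64):
    """Peak PCI-X bandwidth: clock * bus width, converted from bits to bytes."""
    return clock_mhz * bus_bits / 8

def array_read_mbps(drives, per_drive_mbps=88):
    """Best-case sequential read: all drives streaming in parallel."""
    return drives * per_drive_mbps

print(pcie_bandwidth_mbps(8))   # 2000 MB/s for an x8 slot
print(pcix_bandwidth_mbps())    # 2128 MB/s for 266 MHz / 64-bit PCI-X
print(array_read_mbps(20))      # 1760 MB/s for 20 drives -- fits in x8
```

Both buses land around 2 GB/s, which is why the 20-drive best-case read of ~1.7 GB/s fits either way; in practice controller and RAID overhead would eat into those peaks.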
Ozone77 said:Ockie,
I have heard nothing but good things about PC P&C. I would call them up and see if you can trade your standard 1kW PSU for a 'custom-wired' one that has all the connectors and cable lengths that you need.
Oneos said:You had those parts laying around?!?!
No kidding, must be nice to have stuff like that just lying around.
kpolberg said: Galaxy 3.0 seems like an awesome setup, but I was wondering about the setup in the picture of Neptune. What rack cabinet is it? (Looks a lot like a Chieftec 4U or something similar.) And also, the rack cabinet below Neptune, is it a SCSI fibre backplane?
Also, what are those drive cabinets on Neptune called? I was kind of looking at a similar setup, but instead of Supermicro's drive cages, I was looking into the Chenbro SK-335; same thing, just a different design.
GLSauron said: How does the Stacker 810 compare to the original, in your opinion? Did you tear apart Galaxy 2.x to build 3.0, or did you leave it together?
If you pulled it apart, why did you decide to get the 810 instead of just updating the PSUs and running duals in the original?
So what's your total storage capacity at your place?
Ockie said: The 810 is great. Its build is just so much improved: stronger, with a steel top as opposed to aluminum. This is a big deal to me because, with their weight, the drive towers punch into the soft aluminum, which makes it look a bit weaker than it really is. Anyway, the 810 is IMO superior to the 101.
I have so much storage I don't even know; I'll have to count it some day. I just kept adding drives and machines whenever I needed more. I'm all good with machines now, since I've got some room to expand.