The correct response is that you want the entire rack of them.
That's exciting and a bit scary.
32 cores per 2U? That seems a little weak, IMO, for this advanced supercomputer.
Using a standard quad-socket, quad-core setup, you can attain the same core count in 2U (or 64 processor threads, if you will), and it's air-cooled right off the shelf.
Or am I wrong here? Obviously this is still a more ideal setup for research.
Using quad-core Xeons, you would have to have 8 sockets on the motherboard (8 × 4 = 32). I thought 4 sockets was the max right now with a vertical daughterboard, but I could be wrong.
And this thing has way more memory slots than anything off the shelf that I've seen.
Also, 2U is only about 3 inches thick. So they crammed a lot of shit into that chassis!
I was referring to two 1U units holding four quad-cores each, so you have 32 cores in the same (actually smaller, since the depth is less) package.
Where can you get 1U quad-cpu quad-cores? Or are you talking about blades?
DAAAAAAAAAAMMMMMMMMMMMNNNNNNNNNNNNNNN!!!!
I wish I understood about half of what you just said.
If you ever want to know how to build a turbine engine - we'll trade information, mmmKay?
Sweet! I have stress simulators available too!
Turbine powered car, mmmmmm... *drool*
The only issue with them is their dislike of frequent RPM changes and their poor fuel efficiency at idle (if new technology makes me wrong, let me know; that's from when I researched 'em in high school).
My dream car is a 4-rotor Wankel made with the latest materials and computer design, at about 1,200 hp, in a 4WD stripped-down Lotus Elise with a sequential 6-speed. If they design a CVT sometime soon that can handle 2,000 hp, though, I may have to modify my favorite daydream.
Very cool - though that thing probably generates a ton of heat.
These things cost more than your house per node.
The BTUs on these things are insane; a cabinet with integrated cooling is absolutely required, and you need ~10,000W of external cooling capacity on top of a minimum ~10,000W handled by each cabinet's own cooling unit. E.g., if you have enough A/C for a 20,000W heat load, you can support two cabinets, but it's iffy. Those are the estimating numbers, NOT the actual numbers. Actual numbers are around 18,000W per cabinet, combined. Assuming 80% efficiency on your 10kW of cabinet cooling gives you 8kW effective, hence ~10kW external per unit.
Now, where does the 18kW come from? 42 units in a rack. That's the raw number, not the actual. Actual is affected by cabling, storage, airflow, etcetera. The recommendation is 21.5kW of cooling per cabinet. You also need dedicated racks for switches, storage, and so on. So for a typical installation, you'll need >30kW of cooling capacity per cabinet. In addition to this, you will also need lots and lots of power cabling: each unit under load will peak around 15A, so you'll need your own substation before you even think about it - 630A @ 120VAC per cabinet, or 315A @ 220VAC. To put this in real perspective, my entire datacenter as a whole is only ~550A @ 120VAC including all switches and storage equipment. No, the dual-socket dual-board Xeon is NOT much better - the thermal characteristics are pretty close once you factor in your FB-DIMMs.
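For anyone trying to follow the arithmetic, here's a rough Python sketch of the per-cabinet budget. The wattage, efficiency, and amperage figures are just the rule-of-thumb estimates from this post, not vendor specs or measurements:

```python
# Rough per-cabinet power/cooling budget, using the estimates from this post.
# All figures are the post's rule-of-thumb numbers, not measured or vendor data.

UNITS_PER_CABINET = 42          # 1U nodes per rack
PEAK_AMPS_PER_UNIT = 15         # peak draw per node under load (post's estimate)
VOLTS = 120

# Power feed: 42 nodes * 15A each
total_amps_120v = UNITS_PER_CABINET * PEAK_AMPS_PER_UNIT       # 630A @ 120VAC
total_amps_220v = total_amps_120v / 2                          # ~315A @ 220VAC (post's approximation)

# Cooling: ~10kW handled by the cabinet's own cooling unit (at ~80% efficiency,
# so ~8kW effective), plus ~10kW of external room A/C = ~18kW actual per cabinet.
cabinet_cooling_rated_w = 10_000
cabinet_cooling_effective_w = cabinet_cooling_rated_w * 0.80   # 8,000W
external_cooling_w = 10_000
actual_thermal_w = cabinet_cooling_effective_w + external_cooling_w  # ~18,000W

print(f"Feed: {total_amps_120v}A @ 120VAC, ~{total_amps_220v:.0f}A @ 220VAC")
print(f"Thermal: ~{actual_thermal_w}W per cabinet (recommend 21.5kW+, >30kW with storage/switches)")
```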
Sure, watercooling's an option. If you can figure out the plumbing. I did, and found that it would cost more than twice as much as the cabinet.
I was more thinking along the lines of these units, $510 (or about $1,300 barebone) for the chassis with PSU... not bad at all.
LMAO. I didn't know you knew what most people's houses on here are valued at. Don't make assumptions; you know what they say....
550A @120v? That's a cute datacenter.
BTW, if you are talking about the thermal properties of the Supermicro unit, you are way off.
Except you have no way to actually link the distinct units, so you have two 8-core systems with severe bus limitations. You can HTX link two of the H8Q-series boards into a single 32-core 8-socket logical with 4 PCI-X buses. The winner is pretty clear there.
... that you have no clue about hardware and licensing costs, and I do? To actually build out a p575-based system, the per-node cost could buy me several small companies. Just the software licensing for a single p575 node can easily exceed $1M. You also have to factor in the cost of the frame, which is far from cheap, as it's an entire system and backplane in and of itself.
*shrug* No. It's efficient. That's excluding cooling and lighting, obviously, as well. There's well over 200 systems in that footprint, and that number is going up while amperage is going down.
Exactly why I said this is still a more ideal setup.
Like I said, careful with your assumptions. You also assume that you are the only one with experience on these systems. Keep in mind that I installed a 570... I just don't find them very interesting. Also, try fitting them into a Dell cabinet and dealing with IBM alone, especially since they bitch the entire way... it makes for a PITA experience that I hope never to repeat. Thankfully, we only have two of these systems (the other one is a much older one).
Anyway, you mentioned purely a node... you even bolded it. For example, a p570 node sets you back about $250k-ish and up depending on your configuration. But like you said, it's not the node that is expensive; it's the activation of certain features, the licensing, service costs, and the other plans/hardware they stick you with... they love to tease you with limited activation too (not to mention trying to sell you their overpriced cabinets).
It's cute. 200 systems.
I was disputing your numbers because you cited peak power of 15A per unit, or 630A (120V) with 42 machines. Well, these units have a 0.9-1kW PSU. If you do the math using the common power conversion (A × V = W), you are looking at 15A × 120V = 1,800W per unit - not possible. Now, if you use the max power output assuming 100% efficiency on a 1kW rating (also not possible), you are looking at (W / V = A) 1,000W / 120V = 8.33A, almost half of your estimate. Your cooling capacity per cabinet (assuming 42 units, 1P, 120V @ 100%) is accurate, though (approx. 32.256kW).
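Just to spell out that conversion, here's a quick Python sketch. The PSU rating and the claimed 15A peak are the figures from this exchange; everything else is basic Ohm's-law arithmetic:

```python
# Sanity-check the amperage claims against the PSU rating (figures from this thread).
VOLTS = 120
PSU_RATED_W = 1_000             # ~0.9-1kW PSU per 1U node (this post's figure)
CLAIMED_PEAK_AMPS = 15          # the peak draw claimed earlier in the thread

# Working backwards from the claimed 15A peak:
implied_watts = CLAIMED_PEAK_AMPS * VOLTS   # 1,800W per unit -- exceeds a 1kW PSU
# Working forwards from the PSU rating at 100% output (not physically realistic):
max_amps = PSU_RATED_W / VOLTS              # ~8.33A per unit

print(f"15A @ {VOLTS}V implies {implied_watts}W per node (vs. a {PSU_RATED_W}W PSU)")
print(f"{PSU_RATED_W}W / {VOLTS}V = {max_amps:.2f}A per node at full PSU output")
```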
Well, that's because you don't get to play with them. I'm not even going to ask why you installed it in a PoS Dell cabinet; we just threw out all ours. Threw out. Complete and utter trash. Everything's been replaced by Sun 1000-38's or IBM T42's. I'm working on replacing the 1000-38's with T42's, especially since the Sun PDUs are such utter crap. IBM by yourself isn't so bad, so long as you have a decent rep. Thankfully, I probably have the best in the country, from what I keep hearing.
And you never buy just one p575, since you have to buy the frame anyways.
Your numbers are correct by the book. HOWEVER, the book is wrong. First, the Supermicro power supplies have REALLY bad efficiency overall; second,
you're ignoring NEC/NEBS; and third, there's peak inrush versus typical draw. TYPICAL measured draw on these is around 10-12A, or ~80-60% efficiency. NEC/NEBS states that the electrical load per circuit should not exceed 80%, i.e. ~12A @ 120V, or 12A * 42 = 504A = EXACTLY 80% of 630A. So my numbers are absolutely correct and up to code - especially since these bastards do an inrush of ~14A typical, which would blow away a combined 504A (14A * 42 = 588A).
Now it makes sense, don't it?
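Laid out as a quick Python sketch, with the typical-draw and inrush figures being this post's measured estimates rather than anything off a spec sheet:

```python
# Why ~630A is the provisioning number despite ~8-12A typical draw per node.
# Figures (typical draw, inrush, unit count) are the post's measured estimates.

UNITS = 42
TYPICAL_AMPS = 12        # typical measured draw per node under load (upper end)
INRUSH_AMPS = 14         # typical inrush per node at power-up
PROVISIONED_AMPS = 630   # 15A * 42 nodes, the number being disputed

# NEC-style derating: continuous load should not exceed 80% of circuit capacity.
typical_total = TYPICAL_AMPS * UNITS               # 504A
derate_ratio = typical_total / PROVISIONED_AMPS    # 0.80 -- exactly 80% of 630A

# Inrush check: provisioning only 504A would be exceeded when the rack powers up.
inrush_total = INRUSH_AMPS * UNITS                 # 588A > 504A

print(f"Typical: {typical_total}A = {derate_ratio:.0%} of {PROVISIONED_AMPS}A provisioned")
print(f"Inrush:  {inrush_total}A, which would exceed a {typical_total}A circuit")
```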
Don't ask me either; I hate Dell cabinets too. I would much prefer APC or Great Lakes.
Got any of them Dell cabinets left? Since you guys seem to be scaling down, got any HVAC units sitting around?
Yep, those costs sound accurate. I was just commenting on the node cost specifically.
These SM units are HE (high-efficiency) units: 80-85%+.
If you count inrush, then I can agree.
<slightly off topic>
AreEss - Have you worked with or considered Superdomes? What's your opinion on them?