32-core 4.7 GHz Power6 system

The correct response is that you want the entire rack of them.
 
In 10 years or so something like that will be mainstream for desktop computing. :cool:
 
32 cores per 2U? That seems a little weak, imo, for this advanced supercomputer.


Using a standard quad x quad setup, you can attain the same core count in 2U's (or 64 processor threads, if you will), and it's on air, right off the shelf.

Or am I wrong here? Obviously this is still a more ideal setup for research.
 

Using quad-core Xeons, you would have to have 8 sockets on the motherboard (8 x 4 = 32). I thought 4 sockets was the max right now with a vertical daughterboard - but I could be wrong.

And this thing has way more memory slots than anything off the shelf that I've seen.

Also, 2U is only about 3.5 inches tall. So they crammed a lot of shit into that chassis!
 

I was referring to two 1U units holding 4 quad-cores each, thus giving you 32 cores in the same (actually smaller, since the depth is less) package.
 
I wonder if Crysis running at 2560 by 1600 would make that server cry. No GPU. Just using all those cores.

I would cut off my head and have it put in a jar as long as that jar sits on top of one of those Power6 systems! Futurama moment. :)
 
I was referring to two 1U units holding 4 quad-cores each, thus giving you 32 cores in the same (actually smaller, since the depth is less) package.

Where can you get 1U quad-cpu quad-cores? Or are you talking about blades?
 
*sigh* Ockie. Just.. no. Shush. Seriously.

Hi. My name is AreEss.
I have root on more than a few POWER6 based systems.
So, yes, I actually know what the hell I'm talking about.

Mainstream? Forget it. First, these things cost more than your house per node. Second, this is very old news for IBM customers - these were announced ages ago. Third, the p575's aren't that impressive, especially compared to SP-linked p595's, which are 64 cores and 2TB each.

I'm in the middle of a deployment of IBM gear. The p575's are limited per node and software-limited, to say nothing of the licensing migraines. I opted for the larger p570's, along with p520's, p505Q's, and JS22's. With AIX 6, I get to move all my web stuff onto JS22's and never have to deal with the crap that is Veritas ever again - HACMP now does hot stateful failover of entire LPARs and WPARs between hardware. I'm expecting by 2Q09 to be able to lose an entire BladeCenter of JS22's and drop the whole load onto a p570 without anybody ever noticing.

The whole beauty of P6 is that I need less hardware and do a lot more. I'm replacing over eight cabinets of Sun trash with fewer than four of IBM. I'm also eliminating better than half of our maintenance costs, tripling transaction capacity, and reducing cooling load by an entire A/C unit. If I get the budget, I can replace my p520's with fractional p570's (the p570 is an SP-linked multi-chassis system), but I don't need to. I'm replacing 24-core UltraSPARCs with p520's and getting huge performance gains. You don't even wanna know what the p505Q's do to the T2000's - there's a reason Sun gave up on Rock.

EDIT TIME!

Re: the Supermicro quad-socket 1U's.

Wrong box. The box you are looking for is:
http://www.supermicro.com/Aplus/system/1U/1041/AS-1041M-T2+B.cfm
4 x AMD 8000-series
The BTUs on these things are insane; a cabinet with integrated cooling is absolutely required, and it needs a capacity of ~10000W of thermal load, over and above a minimum of ~10000W per cabinet from the room cooling. e.g. if you have enough A/C for 20000W, you can support two cabinets, but it's iffy. Those are the estimating numbers, NOT the actual numbers. Actual numbers are around 18000W per cabinet, combined. Assuming 80% efficiency on your 10kW of cabinet cooling gives you 8kW, thus ~10kW of external cooling per cabinet.
Now, where does the 18kW come from? 42 in a rack. Those are the raw numbers, not the actual. Actual is affected by cabling, storage, airflow, etcetera. Recommendation is 21.5kW of cooling per cabinet. You also need dedicated racks for switches, storage, and so on. So for a typical installation, you'll need >30kW of cooling capacity per cabinet. In addition to this, you will also need lots and lots of power cabling - each unit under load will peak around 15A - so you'll need your own substation before you even think about it: 630A @ 120VAC per cabinet, or 315A @ 220VAC. To put this in real perspective, my entire datacenter as a whole is only ~550A @ 120VAC, including all switches and storage equipment. No, the dual-socket dual-board Xeon is NOT much better - the thermal characteristics are pretty close when you factor in the FB-DIMMs.
Sure, watercooling's an option - if you can figure out the plumbing. I did, and found that it would cost more than twice as much as the cabinet.
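
If you want to sanity-check the estimating math yourself, here's a rough back-of-the-envelope sketch in Python. The numbers in it are my estimating figures from above (15A peak per unit, 42 per rack, 10kW integrated cooling at ~80% effective), not measured data:

Code:
UNITS_PER_CABINET = 42      # 1U quad-socket nodes in a full rack
PEAK_AMPS_PER_UNIT = 15     # estimated peak draw per unit (A @ 120VAC)
VOLTS = 120
CABINET_COOLING_KW = 10.0   # integrated cabinet cooling capacity
COOLING_EFFICIENCY = 0.80   # assume only ~80% of that is effective
ACTUAL_THERMAL_KW = 18.0    # the ~18kW-per-cabinet "actual" figure above

peak_amps = UNITS_PER_CABINET * PEAK_AMPS_PER_UNIT                    # 630 A
peak_kw = peak_amps * VOLTS / 1000                                    # 75.6 kW worst-case electrical
effective_cabinet_cooling = CABINET_COOLING_KW * COOLING_EFFICIENCY   # 8 kW
external_cooling_kw = ACTUAL_THERMAL_KW - effective_cabinet_cooling   # ~10 kW of room A/C per cabinet

print(f"Peak draw per cabinet: {peak_amps} A @ {VOLTS} VAC ({peak_kw:.1f} kW)")
print(f"Effective in-cabinet cooling: {effective_cabinet_cooling:.1f} kW")
print(f"External (room) cooling needed: ~{external_cooling_kw:.1f} kW per cabinet")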
 
DAAAAAAAAAAMMMMMMMMMMMNNNNNNNNNNNNNNN!!!!

I wish I understood about half of what you just said. :eek:

If you ever want to know how to build a turbine engine - we'll trade information, mmmKay?
 
32 cores? nice

but does it run Doom? :eek:
 
DAAAAAAAAAAMMMMMMMMMMMNNNNNNNNNNNNNNN!!!!

I wish I understood about half of what you just said. :eek:

If you ever want to know how to build a turbine engine - we'll trade information, mmmKay?

Sweet! I have stress simulators available too!
Turbine powered car, mmmmmm... *drool* :D
 


The company I work for actually builds the T53/T55, which has been used in the UH-1 Hueys and Chinooks, among others, and also in the Navy's new LCACs. It's also been used in the Miss Budweiser racing boats. 1500 to over 3000 HP, depending on version and fuel system.

And the engine's main shaft is only 3 ft long. :D
 
The only issue with them is their dislike of constant RPM changes and their poor fuel efficiency at idle (if new technology makes me wrong, let me know; that's from when I researched them in high school).
My dream car is a 4-rotor Wankel made with the latest materials and computer design, at about 1200hp, in a 4WD stripped-down Lotus Elise with a sequential 6-speed. If they design a CVT that can handle 2000hp sometime soon, though, I may have to modify my favorite daydream :D
 
The problem with Wankel rotaries is that they are inefficient. The combustion chamber has so much surface area that it draws out a ton of heat, so the engine needs to burn more fuel. But they are powerful.
I have a Mazda RX-7 FD.
 
The only issue with them is their dislike of constant RPM changes and their poor fuel efficiency at idle (if new technology makes me wrong, let me know; that's from when I researched them in high school).
My dream car is a 4-rotor Wankel made with the latest materials and computer design, at about 1200hp, in a 4WD stripped-down Lotus Elise with a sequential 6-speed. If they design a CVT that can handle 2000hp sometime soon, though, I may have to modify my favorite daydream :D

Turbines are inefficient near sea level - just like Wankels. Wankels are more efficient when turbocharged, and so are turbines. That's the primary difference between the T53 and T55 - the T55 has a couple of extra compressor stages and produces almost twice the power of the T53.

And the RPM change thing - T55's turn around 12k at idle and around 22k at full honk. We produce an engine that runs at 60k at 100% and can hit 65k at full power.
The centripetal forces that you have to overcome just to go through idle-to-full-power cycles are just unthinkable.

The actual compression ratio of a turbine, or even a true jet, is on the order of 4-6 to 1, unlike a piston-driven engine. That's why they burn a lot of fuel. But the turbine makes a HUGE amount of power compared to its weight, and that's the attraction of turbine engines. And we won't get better efficiency until materials science gets us to temps around 3000F - 3500F; currently we are around 2250F for a high-performance turbine. They're built with lots of titanium and Inconel today; we'll need ceramics, carbon-carbon materials, and an improvement in coatings technology (platinum aluminide at present) to get better performance. But I'm hoping!!

Sorry for the thread jack...
 
The Wankel is awesome from an engineering standpoint, although it does have a thirst for both fuel and oil.

Love my RX8.

Had to chime in on the rotary love. ;)

al
 

Very cool - though that thing probably generates a ton of heat.

I was more thinking along the lines of these units, $510 (or about $1,300 barebone) for the chassis with PSU... not bad at all.


These things cost more than your house per node.

LMAO. I didn't know you knew what most people's houses on here are valued at. Don't make assumptions; you know what they say....

The BTUs on these things are insane; a cabinet with integrated cooling is absolutely required, and it needs a capacity of ~10000W of thermal load, over and above a minimum of ~10000W per cabinet from the room cooling. e.g. if you have enough A/C for 20000W, you can support two cabinets, but it's iffy. Those are the estimating numbers, NOT the actual numbers. Actual numbers are around 18000W per cabinet, combined. Assuming 80% efficiency on your 10kW of cabinet cooling gives you 8kW, thus ~10kW of external cooling per cabinet.
Now, where does the 18kW come from? 42 in a rack. Those are the raw numbers, not the actual. Actual is affected by cabling, storage, airflow, etcetera. Recommendation is 21.5kW of cooling per cabinet. You also need dedicated racks for switches, storage, and so on. So for a typical installation, you'll need >30kW of cooling capacity per cabinet. In addition to this, you will also need lots and lots of power cabling - each unit under load will peak around 15A - so you'll need your own substation before you even think about it: 630A @ 120VAC per cabinet, or 315A @ 220VAC. To put this in real perspective, my entire datacenter as a whole is only ~550A @ 120VAC, including all switches and storage equipment. No, the dual-socket dual-board Xeon is NOT much better - the thermal characteristics are pretty close when you factor in the FB-DIMMs.
Sure, watercooling's an option - if you can figure out the plumbing. I did, and found that it would cost more than twice as much as the cabinet.


550A @120v? That's a cute datacenter. :D


btw, if you are talking about thermal properties of the supermicro unit, you are way off. ;)
 
I was more thinking along the lines of these units, $510 (or about $1,300 barebone) for the chassis with PSU... not bad at all.

Except you have no way to actually link the distinct units, so you have two 8-core systems with severe bus limitations. You can HTX link two of the H8Q-series boards into a single 32-core 8-socket logical with 4 PCI-X buses. The winner is pretty clear there.

LMAO. I didn't know you knew what most people's houses on here are valued at. Don't make assumptions; you know what they say....

... that you have no clue about hardware and licensing costs, and I do? To actually build out a p575-based system, the per-node cost could buy me several small companies. Just the software licensing for a single p575 node can easily exceed $1M. You also have to factor in the cost of the frame, which is very not cheap, as it's an entire system and backplane in and of itself.

550A @120v? That's a cute datacenter. :D

*shrug* No. It's efficient. That's excluding cooling and lighting, obviously, as well. There's well over 200 systems in that footprint, and that number is going up while amperage is going down.

btw, if you are talking about thermal properties of the supermicro unit, you are way off. ;)

I'd love to see where you're getting your numbers. I'm basing 18kW on 42 units with 4x 2nd-gen 8000's @ 105W, 2x 15k RPM drives, and a full DIMM complement in one cabinet at peak utilization, which is where you generally should measure if you're expecting an average 60-80% load. Now, if you go to the 3rd-gen processors, yes, the numbers are dramatically different @ 55W per CPU. You also have to factor in target temperature, which I require to be 55C or below at 100% for all sockets, with PWM/VRM not to exceed 67C.
Not going to argue that the thermal load can go way down, but you're also not buying 42 units in a single cabinet for thermal efficiency; you're buying them for RAW POWER. (Insert Tim Allen grunting here.) TBH, I generally factor 1 processing cab + 1 interconnect cab @ 18kW total thermal. Bear in mind the interconnect cab will usually have 2-4 gigabit switches, usually some FC gear and disk, and InfiniBand edge gear, e.g. Voltaire Director ISR2004, ISR9096, or ISR9024's, or TopSpin^WCisco 7000D's, etcetera.
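
Just the CPU share of that math, for anyone following along at home (42 units x 4 sockets; the 105W and 55W TDPs are the figures quoted above, and everything else - drives, DIMMs, PSU/VRM losses - comes on top):

Code:
UNITS = 42
SOCKETS_PER_UNIT = 4

for gen, tdp_watts in (("2nd-gen 8000-series @ 105 W", 105),
                       ("3rd-gen 8000-series @ 55 W", 55)):
    cabinet_kw = UNITS * SOCKETS_PER_UNIT * tdp_watts / 1000
    print(f"{gen}: ~{cabinet_kw:.1f} kW of CPU heat per cabinet")

# ~17.6 kW vs ~9.2 kW -- drives, DIMMs, and PSU/VRM losses come on top,
# which is why the processors dominate the ~18 kW/cabinet planning number.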
 
Except you have no way to actually link the distinct units, so you have two 8-core systems with severe bus limitations. You can HTX link two of the H8Q-series boards into a single 32-core 8-socket logical with 4 PCI-X buses. The winner is pretty clear there.

Exactly why I said this is still a more ideal setup.

... that you have no clue about hardware and licensing costs, and I do? To actually build out a p575-based system, the per-node cost could buy me several small companies. Just the software licensing for a single p575 node can easily exceed $1M. You also have to factor in the cost of the frame, which is very not cheap, as it's an entire system and backplane in and of itself.

Like I said, careful with your assumptions. You also assume that you're the only one with experience on these systems. Keep in mind that I installed a 570... I just don't find them very interesting. Also, try fitting them into a Dell cabinet and dealing with IBM on your own, especially since they bitch the entire way... makes for a PITA experience that I hope never to repeat. Thankfully, we only have two of these systems (the other one is much older).

Anyways, you mentioned purely a node... you even bolded it. For example, a p570 node sets you back about $250k and up, depending on your configuration. But like you said, it's not the node that is expensive; it's the activation of certain features, the licensing, the service costs, and the other plans/hardware they stick you with... they love to tease you with limited activation too (not to mention trying to sell you their overpriced cabinets).

*shrug* No. It's efficient. That's excluding cooling and lighting, obviously, as well. There's well over 200 systems in that footprint, and that number is going up while amperage is going down.

It's cute. 200 systems. ;)



I was disputing your numbers because you mentioned peak power of 15 amps per unit, or 630A (120V) with 42 machines. These units have a 0.9-1kW PSU; if you do the math using the standard power conversion (A x V = W), you are looking at 15A x 120V = 1800W per unit - not possible. Now, if you use the max power output assuming 100% efficiency on a 1kW rating (also not possible), you are looking at (W / V = A) 1000W / 120V = 8.33A, almost half of your estimate. Your cooling capacity per cabinet (assuming 42 units, 1p, 120V @ 100%) is accurate, though (approx 32.256kW).
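
Spelled out, in case anyone wants to run it themselves (PSU rating and 120V feed as above; real draw obviously depends on efficiency and load):

Code:
VOLTS = 120
UNITS = 42

# The disputed figure: 15 A per unit implies 1800 W per unit.
claimed_amps_per_unit = 15
print(f"{claimed_amps_per_unit} A/unit -> {claimed_amps_per_unit * VOLTS} W per unit")   # 1800 W

# Ceiling from the PSU nameplate: 1 kW at 100% output and 100% efficiency.
psu_watts = 1000
amps_at_psu_limit = psu_watts / VOLTS
print(f"{psu_watts} W PSU at full output -> {amps_at_psu_limit:.2f} A per unit")         # 8.33 A
print(f"Whole cabinet at that rate -> {amps_at_psu_limit * UNITS:.0f} A")                # ~350 A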
 
I wonder what the liquid volume is? What happens when it springs a leak? :eek:

Anything holding/pumping/routing any liquid will eventually leak!
 
wow. I'd have hated to be the guy that had to set up the watercooling system on that thing.

Hope that thing never springs a leak!
 
Exactly why I said this is still a more ideal setup.

I missed that part. I blame the painkillers. :p

Like I said, careful with your assumptions. You also assume that you're the only one with experience on these systems. Keep in mind that I installed a 570... I just don't find them very interesting. Also, try fitting them into a Dell cabinet and dealing with IBM on your own, especially since they bitch the entire way... makes for a PITA experience that I hope never to repeat. Thankfully, we only have two of these systems (the other one is much older).

Well, that's because you don't get to play with them. I'm not even going to ask why you installed it in a PoS Dell cabinet; we just threw out all ours. Threw out. Complete and utter trash. Everything's been replaced by Sun 1000-38's or IBM T42's. I'm working on replacing the 1000-38's with T42's, especially since the Sun PDUs are such utter crap. IBM by yourself isn't so bad, so long as you have a decent rep. Thankfully, I probably have the best in the country, from what I keep hearing.

Anyways, you mentioned purely a node... you even bolded it. For example, a p570 node sets you back about $250k and up, depending on your configuration. But like you said, it's not the node that is expensive; it's the activation of certain features, the licensing, the service costs, and the other plans/hardware they stick you with... they love to tease you with limited activation too (not to mention trying to sell you their overpriced cabinets).

Actually, compared to everyone else, IBM cabs are reasonable for what you get. And then some - especially compared to HPaq and Sun. Sun most especially; don't even ask what a 1000-38 costs. But yeah, itemizing it: $250K hardware, $6K 3-yr SWMA (IIRC; may be a higher bracket). BUT! Then comes Oracle, who does NOT license per-socket any more, for "increased value." Each socket in a p575 is 6/12 (0.75 multi) CPUs for licensing, or 12/20 CPU licenses, per Oracle. Or so you'd THINK. Each QCM is 0.75 x 4, times the socket count. So you end up with a p575 at 16 cores being licensed as 12 CPUs.
http://www.oracle.com/corporate/press/2005_dec/multicoreupdate_dec2005.html
So, going by: http://www.awaretechnologies.com/oracleProducts/priceList.html
Enterprise Edition Base: $480,000 (12x $40,000)
RAC Addition (Req'd): $240,000 (12x $20,000)
Partitioning (Req'd): $120,000 (12x $10,000)
Developer Named: $5,000 (5x $1,000 - 5 developers in company)
Oracle Total: $845,000 List
Ding, there went $1M+ for a single node just for Oracle. BUT WAIT.
You also need HACMP, backup, etcetera. I'll tell you flat out, a database on a p575 is going to be in the terabyte range. Veritas list on NetBackup 6.5 is $18,000 'per virtual terabyte' (required to license per virt terabyte for features they're promising and still haven't delivered) per year. Or it's $15,000 and $3,000 maintenance per year. They still haven't figured out which it is. Either way, you have to license the 2TB of database times 7 days of backups or 14TB @ $252,000/yr (or $54,000/yr maintenance? Who knows, they don't.) Ouch much? I don't have the TSM licensing costs in front of me or memorized, but you'd need TSM EE plus Database plus For SysBack plus Disaster Recovery. Not cheap.
But wait, there's STILL more. This doesn't cover HACMP, additional software for Oracle on the workstation side, and so on and so forth. And you never buy just one p575, since you have to buy the frame anyways.
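
If you want to plug your own core counts into that, the math is just (cores x core factor), rounded up, times the per-license list price; a quick sketch using the 2005-era list prices linked above:

Code:
import math

cores = 16                 # one p575 node, as configured above
core_factor = 0.75         # Oracle's multi-core factor for these chips
licenses = math.ceil(cores * core_factor)   # 12 processor licenses

line_items = {
    "Enterprise Edition":       40_000 * licenses,   # $480,000
    "RAC (required)":           20_000 * licenses,   # $240,000
    "Partitioning (required)":  10_000 * licenses,   # $120,000
    "Developer Named (5 devs)":  1_000 * 5,          # $5,000
}

for item, cost in line_items.items():
    print(f"{item:<26} ${cost:>9,}")
print(f"{'Oracle total, list':<26} ${sum(line_items.values()):>9,}")   # $845,000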

It's cute. 200 systems. ;)

*shrug* I've worked on larger. I'd have killed back then to get 200 physical systems into 550A. Of course, back then there were also the 1600A -48VDC systems on top of that, but meh. These days, an install of the size I had back then would only be found at cRackSpace, land of equipment theft and people who think it's "if" not "when." (Yah, DR/BC is a big part of my job, and always has been.)

I was disputing your numbers because you mentioned peak power of 15 amps per unit, or 630A (120V) with 42 machines. These units have a 0.9-1kW PSU; if you do the math using the standard power conversion (A x V = W), you are looking at 15A x 120V = 1800W per unit - not possible. Now, if you use the max power output assuming 100% efficiency on a 1kW rating (also not possible), you are looking at (W / V = A) 1000W / 120V = 8.33A, almost half of your estimate. Your cooling capacity per cabinet (assuming 42 units, 1p, 120V @ 100%) is accurate, though (approx 32.256kW).

Your numbers are correct by the book. HOWEVER, the book is wrong. Firstly, the Supermicro power supplies have REALLY bad efficiency overall, second, you're ignoring NEC/NEBS, and third, peak inrush versus typical. TYPICAL on these measured, is around 10-12A or ~80-60% efficient. NEC/NEBS states that electrical load per circuit should not exceed 80%, or ~12A @ 120V or 12A * 42 = 504A = EXACTLY 80% of 630A. So my numbers are absolutely correct and up to code - especially since these bastards do an inrush of ~14A typical which would blow a 504A combined away. (14A * 42 = 588A)
Now it makes sense, don't it? :)
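
Here's that sizing argument as numbers, for the folks playing along (the typical and inrush amps are the measured figures above; the 80% rule is the NEC continuous-load limit):

Code:
UNITS = 42
CIRCUIT_BUDGET_AMPS = 630      # the 15 A/unit sizing from earlier
TYPICAL_AMPS = 12              # high end of the measured 10-12 A typical draw
INRUSH_AMPS = 14               # measured typical inrush per unit

continuous = TYPICAL_AMPS * UNITS    # 504 A
inrush = INRUSH_AMPS * UNITS         # 588 A

print(f"Continuous: {continuous} A = {continuous / CIRCUIT_BUDGET_AMPS:.0%} of {CIRCUIT_BUDGET_AMPS} A")
print(f"Inrush:     {inrush} A -> fits {CIRCUIT_BUDGET_AMPS} A, "
      f"but would blow a provision sized at {continuous} A")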
 
Well, that's because you don't get to play with them. I'm not even going to ask why you installed it in a PoS Dell cabinet; we just threw out all ours. Threw out. Complete and utter trash. Everything's been replaced by Sun 1000-38's or IBM T42's. I'm working on replacing the 1000-38's with T42's, especially since the Sun PDUs are such utter crap. IBM by yourself isn't so bad, so long as you have a decent rep. Thankfully, I probably have the best in the country, from what I keep hearing.

Don't ask me either; I hate Dell cabinets too. I would much prefer APC or Great Lakes.

Got any of them Dell cabinets left? Since you guys seem to be scaling down, got any HVAC units sitting around?


And you never buy just one p575, since you have to buy the frame anyways.

Yep, those costs sound accurate. I was just talking about the node specifically :)


Your numbers are correct by the book. HOWEVER, the book is wrong. Firstly, the Supermicro power supplies have REALLY bad efficiency overall, second,

These SM units are HE units. 80-85%+

you're ignoring NEC/NEBS, and third, peak inrush versus typical. TYPICAL on these measured, is around 10-12A or ~80-60% efficient. NEC/NEBS states that electrical load per circuit should not exceed 80%, or ~12A @ 120V or 12A * 42 = 504A = EXACTLY 80% of 630A. So my numbers are absolutely correct and up to code - especially since these bastards do an inrush of ~14A typical which would blow a 504A combined away. (14A * 42 = 588A)
Now it makes sense, don't it? :)

If you count inrush, then I can agree.
 
Don't ask me either; I hate Dell cabinets too. I would much prefer APC or Great Lakes.

Got any of them Dell cabinets left? Since you guys seem to be scaling down, got any HVAC units sitting around?

Not quite scaling down per se. Just rearranging(TM). I'm probably going to keep the same cabinet count and just move things around for the sake of redundancy and failsafes. Right now I have a failover and its target in the same cabinet in some instances, and I don't like that. I will waste 20U to have that safety.
Cooling is actually being upgraded. We're replacing all our ducting and going back to a properly redundant configuration. Right now, temperatures are outside what I want (or will accept), but there's no power or room for additional units.

Yep, those costs sound accurate. I was just talking about the node specifically :)

Yeah, it's budget time already. I don't budget ANYTHING without ALL the numbers. ;)

These SM units are HE units. 80-85%+

Not what I have. I also tend to ship better PSUs anyways, with more wattage. I don't like how close it is to the limit with the big CPUs and the upgraded cooling. I try to keep PSUs at 80-90% load.

If you count inrush, then I can agree.

Have to; if you don't, you will explode things (literally!) on any sort of power outage.
 
<slightly off topic>

AreEss - Have you worked with / did you consider Superdomes? What's your opinion on them?
 
Hey AreEss, do you need an assistant? I can work for you for free, if you teach me :D
 
<slightly off topic>

AreEss - Have you worked with / did you consider Superdomes? What's your opinion on them?

TOTALLY on topic, since those bastards are packing the sx2000 chipset. It's arguably one of the finest chipsets ever built. They're also the last, best PA-RISC, the PA8900. The PA8900 is a beast in its own right, and then some.
Unfortunately, HP went the Itanic route, and new Superdomes are nearly impossible to get with PA8900's. Instead, you have to "upgrade" to "Integrity" - which is just a sad, depressing joke.
 