Government unveils world's fastest computer

beowulf7

[H]F Junkie
Joined: Jun 30, 2005
Messages: 10,433
I figured many folks here would be interested in the fastest computer in the world (to date):

Government unveils world's fastest computer

WASHINGTON (AP) -- Scientists unveiled the world's fastest supercomputer on Monday, a $100 million machine that for the first time has performed 1,000 trillion calculations per second in a sustained exercise.

The technology breakthrough was accomplished by engineers from the Los Alamos National Laboratory and the IBM Corp. on a computer to be used primarily on nuclear weapons work, including simulating nuclear explosions.

The computer, named Roadrunner, is twice as fast as IBM's Blue Gene system at Lawrence Livermore National Laboratory, which itself is three times faster than any of the world's other supercomputers, according to IBM.

"The computer is a speed demon. It will allow us to solve tremendous problems," said Thomas D'Agostino, head of the National Nuclear Security Administration, which oversees nuclear weapons research and maintains the warhead stockpile.

But officials said the computer also could have a wide range of other applications in civilian engineering, medicine and science, from developing biofuels and designing more fuel efficient cars to finding drug therapies and providing services to the financial industry.

To put the computer's speed in perspective, if every one of the 6 billion people on earth used a hand-held computer and worked 24 hours a day it would take them 46 years to do what the Roadrunner computer can do in a single day.
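It's easy to sanity-check that comparison against the article's own figures; working backward, each of those 6 billion people would have to sustain roughly 10 calculations per second for the entire 46 years. A quick back-of-envelope in Python:

[code]
# Back-of-envelope check of the "6 billion people for 46 years" comparison.
SECONDS_PER_DAY = 24 * 3600
petaflop = 1e15                                        # sustained ops/sec

roadrunner_ops_per_day = petaflop * SECONDS_PER_DAY    # ~8.64e19 operations in one day

people = 6e9
years = 46
person_seconds = people * years * 365.25 * SECONDS_PER_DAY

# Rate each person would need to sustain: about 10 calculations per second
print(roadrunner_ops_per_day / person_seconds)         # ~9.9
[/code]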

IBM and Los Alamos engineers worked six years on the computer technology.

Some elements of the Roadrunner can be traced back to popular video games, said David Turek, vice president of IBM's supercomputing programs. In some ways, he said, it's "a very souped-up Sony PlayStation 3."

"We took the basic chip design (of a PlayStation) and advanced its capability," said Turek.

But the Roadrunner supercomputer is nothing like a video game.

The interconnecting system occupies 6,000 square feet with 57 miles of fiber optics and weighs 500,000 pounds. Although made from commercial parts, the computer consists of 6,948 dual-core computer chips and 12,960 cell engines, and it has 80 terabytes of memory.
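Dividing those figures out gives a rough feel for the layout; the 19,908-chip total is the same number that comes up later in this thread. A quick check using only the article's numbers:

[code]
# Roadrunner parts-count arithmetic, using only the figures from the article.
opterons = 6_948          # dual-core Opteron chips
cells    = 12_960         # Cell engines
memory_tb = 80

total_chips = opterons + cells
print(total_chips)                          # 19,908 chips total
print(opterons * 2)                         # 13,896 Opteron cores
print(memory_tb * 1024 / total_chips)       # ~4.1 GB of RAM per chip
[/code]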

The cost: $100 million.

Turek said the computer in a two-hour test on May 25 achieved a "petaflop" speed of sustained performance, something no other computer had ever done. It did so again in several real applications involving classified nuclear weapons work this past weekend.

"This is a huge and remarkable achievement," said Turek in a conference call with reporters.

A "flop" is an acronym meaning floating-point-operations per second.

One petaflop is 1,000 trillion operations per second. Only two years ago, there were no actual applications where a computer achieved 100 teraflops -- a tenth of Roadrunner's speed -- said Turek, noting that the tenfold advancement came over a relatively short time.

The Roadrunner computer, now housed at the IBM research laboratory in Poughkeepsie, New York, will be moved next month to the Los Alamos National Laboratory in New Mexico.

Along with other supercomputers, it will be key "to assure the safety and security of our (weapons) stockpile," said D'Agostino. With its extraordinary speed it will be able to simulate the performances of a warhead and help weapons scientists track warhead aging, he said.

But the computer -- and more so the technology that it represents -- marks a future for a wide range of other research and uses.

"The technology will be pronounced in its employment across industry in the years to come," predicted Turek, the IBM executive.

Michael Anastasio, director of the Los Alamos lab, said that for the first six months the computer will be used in unclassified work, including activities not related to the weapons program. After that, about three-fourths of the work will involve weapons and other classified government activities.

Anastasio said the computer, in its unclassified applications, is expected to be used not only by Los Alamos scientists but by others as well. He said there can be broad applications, such as helping to develop a vaccine for HIV, examining the chemistry involved in producing cellulosic ethanol, or understanding the origins of the universe.

And Turek said the computer represents still another breakthrough, particularly important in these days of expensive energy: It is an energy miser compared with other supercomputers, performing 376 million calculations for every watt of electricity used.
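Taken at face value, that efficiency figure also implies the machine's rough power draw while sustaining a petaflop:

[code]
# Implied power draw at a sustained petaflop, from the article's efficiency number.
flops_per_watt = 376e6
sustained_flops = 1e15

watts = sustained_flops / flops_per_watt
print(f"{watts / 1e6:.2f} MW")     # ~2.66 MW while crunching at 1 PF
[/code]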
 
The thing that stood out most to me is that 2 years ago, computers were just hitting 100 teraflops. 2 years later, a computer has now broken 1 petaflop. Forget Moore's Law of doubling transistor density (and therefore, roughly, speed) every 18-24 months. We're going up an order of magnitude every 2 years. :cool:

OK, so 2 years does not a trend make. :p

Will we see 1 exaflop soon (10^18)? :D
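For the curious, extrapolating naively from those two data points (and, as noted above, two points don't make a trend):

[code]
import math

# If performance really went up 10x every 2 years, the implied doubling time:
doubling_months = 24 / math.log2(10)
print(f"{doubling_months:.1f} months")     # ~7.2, versus Moore's 18-24

# Naive extrapolation: 1 PF (2008) to 1 EF is another 1000x at that pace
years_to_exaflop = 2 * math.log10(1000)    # 6 years
print(2008 + years_to_exaflop)             # ~2014, if the trend held
[/code]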
 
*facedesk*
There is absolutely zero POWER6 in this entire system.

Also, color me entirely unimpressed. Oh boy, they got some good programmers. It's all off-the-shelf junk with a hodgepodge InfiniBand interconnect, using the BRUTE FORCE method. e.g. "hey look what happens when I throw this REALLY REALLY big brick at the window!" Anybody half competent could put that hardware together in that configuration. (See also: copying the Cray XT3 and later, badly.)

EDIT: Oh, and to the "bah you just have sour grapes" folks - I've been shipping CellBE's. Yes, they're great specialized processors for certain tasks. No, I don't want LLNL. IBM has them through momentum, and can have them. I want nothing to do with LLNL. I do know that in proper Staged-Ring configuration, I can build just as fast. :p
 
*facedesk*
There is absolutely zero POWER6 in this entire system.

Also, color me entirely unimpressed. Oh boy, they got some good programmers. It's all off-the-shelf junk with a hodgepodge InfiniBand interconnect, using the BRUTE FORCE method. e.g. "hey look what happens when I throw this REALLY REALLY big brick at the window!" Anybody half competent could put that hardware together in that configuration. (See also: copying the Cray XT3 and later, badly.)

EDIT: Oh, and to the "bah you just have sour grapes" folks - I've been shipping CellBE's. Yes, they're great specialized processors for certain tasks. No, I don't want LLNL. IBM has them through momentum, and can have them. I want nothing to do with LLNL. I do know that in proper Staged-Ring configuration, I can build just as fast. :p

For the money, they could have gone high density at least... and at least gone a little more advanced in their data center. It looks like an office room with a raised floor basically.
 
For the money, they could have gone high density at least... and at least gone a little more advanced in their data center. It looks like an office room with a raised floor basically.

Actually, I have a little trouble hitting the density with the CellBE's because their thermal load just blows ass. (Not going to take it back PS3 fanbois. The CellBE is too damn hot. Worse than a Pentium 4.) But presuming they were based on Bladecenter H's, that's going to be 7 total (2x Opteron 2000, 4x CellBE 3.2GHz) per 9U. The problem is in their actual execution based on the photos.

See, the systems in the release photo aren't Bladecenters, or even Blades. Those are IBM x3655's, of which I have a handful or three. They have some bus problems compared to the stuff I ship, namely, not enough. They're also demonstrating their ignorance of proper InfiniBand cabling - Texas Univ. unveiled their stupid-oversize Sun-built InfiniBash switch only after months of nothing but cabling nightmares. IBM seems to have taken the opposite route, installing a large sub-core InfiniBand switch per cabinet to limit cables to 1-2m. In InfiniBand, cable distance has a relatively small impact on latency, but an incomprehensibly huge impact on stability and lifespan.

So, let's presume they have x3650's doing I/O linked by InfiniBand to Bladecenters with 14 QS21's for 28 CellBE's per. That's 3 x 28 = 84 CellBE / cabinet. If I use the other boards and take the 4 to 2 Opteron bandwidth hit, that gives me 42 Opteron 2000 + 42 CellBE per cabinet, or better overall density versus Roadrunner. Also bear in mind, I can get 42U of server into a 42U cabinet, but sacrifice 1 cabinet for every 9 for networking. So 9 compute, 1 network. 4-8 support cabinets are required for every 4 rows, or basically, 4 rows would need to be counted as 5. Figure each rack requires 12 square feet (unevenly distributed), that's 108sqft/row, 540sqft for 1344 Opterons, 1344 CellBE's or 0.200sqft/CPU. IBM did 19908 CPUs in 6,000sqft or 0.301sqft/CPU. They also do not mention cooling, which is obviously very proprietary information, but likely not included in the calculations. (It is not in mine; however, most of my cooling is absorption by chiller with heat evacuation as opposed to traditional forced air primary with heat recycling.)
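The two density figures at the end check out against the numbers quoted in this post and in the article:

[code]
# Square feet per CPU, using only the figures quoted above.
his_cpus = 1_344 + 1_344        # Opterons + CellBEs in the hypothetical layout
his_sqft = 540
print(his_sqft / his_cpus)      # ~0.201 sqft/CPU

rr_cpus  = 6_948 + 12_960       # Roadrunner's 19,908 chips
rr_sqft  = 6_000
print(rr_sqft / rr_cpus)        # ~0.301 sqft/CPU
[/code]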
 
Just saw this on wikipedia. If the fastest computer in the world can get by only using dual core CPUs, then I'll probably be fine without a quad, too!
 
Lol, pipe water in from the arctic, water cool that baby and overclock it to oblivion!

Why?

An IBM 7014-T42 with rear door heat exchanger using an off the pallet chiller can do 50K BTU/hr of heat absorption and removal. Not a typo; 50,000 BTU/hr - which is why I use them. A fully loaded cab from me can easily throw over 60,000BTU/hr. (Shut up, it's AMD's fault.) But that drops you to 10K BTU/hr removal by air or ~4 tons of A/C per cabinet. That's actually not a terrible number; two maxed out p570's, maxed out p550, maxed out p520 in a T42 by themselves require ~30,000BTU/hr on current models. Meaning you need zero tons air convection/conversion. (This does NOT mean you do not need air chilling; you DO. You have ambient to exchanger differentials, max ambient, etcetera. A heat exchanger is dependent on the air passed through it.)
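For reference, converting those BTU figures to electrical terms (using the standard 3.412 BTU/hr per watt; the split between the door exchanger and room air is the poster's own figure, not a measurement):

[code]
BTU_PER_WATT = 3.412              # 1 W of IT load dissipates ~3.412 BTU/hr

cab_load  = 60_000                # fully loaded cabinet, BTU/hr (his figure)
door_xchg = 50_000                # rear-door heat exchanger capacity, BTU/hr

print(cab_load / BTU_PER_WATT / 1000)                 # ~17.6 kW of heat per cabinet
print((cab_load - door_xchg) / BTU_PER_WATT / 1000)   # ~2.9 kW left for room air
[/code]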
 
80 terabytes huh....i need to get my ass in gear, gotta get 78 more to tie.....
 
who would want to use a 3655? IBM just announced their Quad Core offerings for the 3655 and 3755. IBM was/is the last to get them up. And they are not even available yet!

IBM loves their BladeCenter S way too much to care about rack mounts. They can't even share storage in the BCS yet. And they only allow RAID 0, 1, and 1E. Not even 5 (until they release the refreshed SAS module).

I would have gone with a few hundred 3950 M2's personally.
 
who would want to use a 3655? IBM just announced their Quad Core offerings for the 3655 and 3755. IBM was/is the last to get them up. And they are not even available yet!

IBM loves their BladeCenter S way too much to care about rack mounts. They can't even share storage in the BCS yet. And they only allow RAID 0, 1, and 1E. Not even 5 (until they release the refreshed SAS module).

I would have gone with a few hundred 3950 M2's personally.

IBM, unlike the rest, takes the time to do things right. Either way, those are 3655's based on their CPU disclosure. They may be current x3650 3.5's, but it's more likely they're x3655's. Either way, systems like this don't get designed and put up overnight.

Secondly, you are very, very, very wrong. So wrong it is not even remotely funny. Frankly, IBM doesn't care anywhere near that much about the BladeCenter S beyond it getting more people onto high margin blades. (Okay, and selling a literal boatload to a very large customer.) You're also DOING IT WRONG - the S disk module is NOT FOR MASS STORAGE, PERIOD! It is for putting in RAID1 OS mirrors equal to the number of blades, as people who have a clue about the S - e.g. the company that practically commissioned the design - do. (6 blades, 1 mezzanine per blade, 6x RAID1.) Again, the S was designed to go sit in some back room somewhere, with nothing but luddites for many miles. Thirdly, the SAS module is not a requirement for the S storage subassy last I checked, so it's irrelevant. If you want cheap mass storage, go buy a DS3200 or a DS3400 kit. (DISCLAIMER: I run some DS3200's. They're actually pretty nice.)

And considering the system's been in build phase for over nine months by my best estimate, I would love to borrow that time machine so I can switch in 3950M2's. Well, ignoring the fact that the 3950M2 has more than triple the power draw and double the heat output of the x3655.

STANDARD DISCLAIMER: Why yes, I do happen to have a wonderful relationship with my IBM Reseller and IBM Local Offices, and they did and do supply me with wonderful toys. :)
 
They're x3755s, which are populated with 4x dual-core 2.6GHz Opterons and 128GB of RAM per box. The x3755's are just managing the data flow to the Cells. I work for the company that assisted IBM on the buildout, because...well, let's just say they don't know how to rack their own shit. Our company also has LANL contract positions to run Roadrunner. If I had a DOE clearance maybe I'd get to see it. My clearance, however, is not DOE.
 
Ah, figured they would have kept with the x3655's since they're a more stable platform. Then again, IBM's definition of "stability" in some product lines is a synonym for "dead." To be blunt, the x3755's have NOT impressed me, primarily due to the use of the BCM5708. The chip is not very good.
IBM being incapable of racking things, that's not even remotely news. Only IBM would ship a pre-built rack with the 1U keyboard/monitor located at U13, then ask why you're not happy with their install work.

However, the use of the BCM5708 actually reveals a LOT more than you think it does. See, the BCM5708 tells us that they're using RDMA for I/O operations over MPI or MPICH via Gigabit Ethernet. (And the crowd goes "huh?!")
The BCM5708 is one of the newer 'converged' MAC+PHY designs from Broadcom, and has on-silicon TCP Offload, iSCSI Offload, and RDMA (Remote Direct Memory Access) support. Fast and functional RDMA is an absolute requirement of most modern off-the-shelf designs, and is actually a fundamental requirement of supercomputing dating back to Cray. RDMA is the way you access memory, storage, and networking directly on a system that is not on the local bus. The connection method doesn't matter much, and it can run MPI-encapsulated over GigE, hardware-assisted over InfiniBand, or really via any connection between two or more systems. I've actually experimented with RDMA over FireWire (ugly) and RDMA over SCSI (really, really confusing.)

See? Didn't need that Clearance after all. ;)
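For anyone wondering what "RDMA over MPI" looks like from the programmer's side, here is a minimal sketch of one-sided put/get using the mpi4py bindings; it illustrates the programming model only, not Roadrunner's actual software stack. Run it with something like mpirun -n 2 python rdma_sketch.py (the filename is arbitrary).

[code]
# One-sided ("RDMA-style") MPI: rank 0 writes straight into rank 1's memory
# without rank 1 ever posting a receive. Sketch only; the transport underneath
# (GigE with offload, InfiniBand, whatever) is invisible at this level.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank exposes a small buffer that remote ranks may access directly.
local = np.zeros(4, dtype='d')
win = MPI.Win.Create(local, comm=comm)

win.Fence()                            # open an access epoch
if rank == 0:
    payload = np.arange(4, dtype='d')
    win.Put(payload, 1)                # remote write into rank 1's window
win.Fence()                            # close the epoch; data is now visible

if rank == 1:
    print("rank 1 sees:", local)       # [0. 1. 2. 3.]

win.Free()
[/code]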
 
Ah, figured they would have kept with the x3655's since they're a more stable platform. Then again, IBM's definition of "stability" in some product lines is a synonym for "dead." To be blunt, the x3755's have NOT impressed me, primarily due to the use of the BCM5708. The chip is not very good.
IBM being incapable of racking things, that's not even remotely news. Only IBM would ship a pre-built rack with the 1U keyboard/monitor located at U13, then ask why you're not happy with their install work.

However, the use of the BCM5708 actually reveals a LOT more than you think it does. See, the BCM5708 tells us that they're using RDMA for I/O operations over MPI or MPICH via Gigabit Ethernet. (And the crowd goes "huh?!")
The BCM5708 is one of the newer 'converged' MAC+PHY designs from Broadcom, and has on-silicon TCP Offload, iSCSI Offload, and RDMA (Remote Direct Memory Access) support. Fast and functional RDMA is an absolute requirement of most modern off-the-shelf designs, and is actually a fundamental requirement of supercomputing dating back to Cray. RDMA is the way you access memory, storage, and networking directly on a system that is not on the local bus. The connection method doesn't matter much, and it can run MPI-encapsulated over GigE, hardware-assisted over InfiniBand, or really via any connection between two or more systems. I've actually experimented with RDMA over FireWire (ugly) and RDMA over SCSI (really, really confusing.)

See? Didn't need that Clearance after all. ;)

I am relatively new to the HPC environment. My prior experiences (10 years) have all been break/fix, consulting, network design for SMB and commercial clients. It's like learning computers all over again, which is both exciting, and frustrating. I've gotten quite a few certifications that I never thought about getting before coming to HPC space, and have more on the list to get.

That being said, I will agree with you that what IBM determines to be a stable platform is not necessarily the best decision. I was confused as to why they would not go with a denser solution for the data aggregation (than the x3755), such as BCH, but then I suppose they have such ridiculous bandwidth and interconnect requirements for each physical box that BCH would be limiting, as Road Runner moves from Phase 1 to 2, then to 3 in the end (we are currently in Phase 2).

We have quite a few (I think 10 or 11) x3755's in our own rack at work. I just RMA'd a bunch of parts including system planars, CPU/RAM planars, fans, CPU's, RAM, InfiniBand copper for optical media converters, air baffles, power supplies, etc. It was a stack of parts that comprised about 200lbs worth of gear, in 30 boxes. I can't say for sure why they decided to go with the x3755, but I would imagine it has to do with the internal CPU HTT bus, the x3755's expansion capabilities, and the roadmap to add support for quad-core Opterons, which I am to understand was just released, but is not yet available.

It's certainly an interesting solution. I believe phase 3 puts Road Runner at over 1.3 or 1.4 Petaflops.

I still wish my clearance was DOE. I'd just like to see it. To hear my coworkers describe the facility, it sounds amazing.
 
IBM, unlike the rest, takes the time to do things right. Either way, those are 3655's based on their CPU disclosure. They may be current x3650 3.5's, but it's more likely they're x3655's. Either way, systems like this don't get designed and put up overnight.

Secondly, you are very, very, very wrong. So wrong it is not even remotely funny. Frankly, IBM doesn't care anywhere near that much about the BladeCenter S beyond it getting more people onto high margin blades. (Okay, and selling a literal boatload to a very large customer.) You're also DOING IT WRONG - the S disk module is NOT FOR MASS STORAGE, PERIOD! It is for putting in RAID1 OS mirrors equal to the number of blades, as people who have a clue about the S - e.g. the company that practically commissioned the design - do. (6 blades, 1 mezzanine per blade, 6x RAID1.) Again, the S was designed to go sit in some back room somewhere, with nothing but luddites for many miles. Thirdly, the SAS module is not a requirement for the S storage subassy last I checked, so it's irrelevant. If you want cheap mass storage, go buy a DS3200 or a DS3400 kit. (DISCLAIMER: I run some DS3200's. They're actually pretty nice.)

And considering the system's been in build phase for over nine months by my best estimate, I would love to borrow that time machine so I can switch in 3950M2's. Well, ignoring the fact that the 3950M2 has more than triple the power draw and double the heat output of the x3655.

STANDARD DISCLAIMER: Why yes, I do happen to have a wonderful relationship with my IBM Reseller and IBM Local Offices, and they did and do supply me with wonderful toys. :)

Firstly, I am not wrong about the BladeCenter S. All I do is sit here and answer questions all day about when IBM is going to support RAID 5. The S stands for simple. I really wish IBM would have a complete product when they release it. The 3250 M2 simple-swap models didn't even support RAID when they were first announced, yet IBM said they did. Lots of mad customers.

The SAS Connectivity Module IS needed for the BladeCenter S. The SAS HBA in the blade connects to the SAS module, then the SAS module connects to the internal drives on the chassis. The SAS modules also enable the RAID 1. That is why IBM is going to release a new SAS module that supports RAID 5, for all the customers that wanted RAID 5 to begin with. If you don't have the SAS Connectivity Module (P/N 39Y9195), you won't be able to see the disks.

I agree that if you want an array with more options, going to a DS3xxx would be ideal. I never said the BCS was for mass storage. You're reading it wrong.
Plus, you're not connecting a DS3200 to ANY BladeCenter at this time, even though the SAS Connectivity Module has external SAS ports. IBM finally supports SAS tape attachment to the SAS module for backup. Before, there was no way to back up a BCS if you had internal storage, unless you did it over LAN with something like Tivoli. Sure, you can MAYBE attach a DS3200 to a BCS with a SAS module, but IBM is not supporting it. :)

The 3850 M2 would be just as good, seeing as how these were available in November of last year, with the ability to scale to a 3950 M2 with the Scale Expander Kit.
 
Firstly, I am not wrong about the BladeCenter S. All I do is sit here and answer questions all day about when IBM is going to support RAID 5. The S stands for simple. I really wish IBM would have a complete product when they release it. The 3250 M2 simple-swap models didn't even support RAID when they were first announced, yet IBM said they did. Lots of mad customers.

The SAS Connectivity Module IS needed for the BladeCenter S. The SAS HBA in the blade connects to the SAS module, then the SAS module connects to the internal drives on the chassis. The SAS modules also enable the RAID 1. That is why IBM is going to release a new SAS module that supports RAID 5, for all the customers that wanted RAID 5 to begin with. If you don't have the SAS Connectivity Module (P/N 39Y9195), you won't be able to see the disks.

I agree that if you want an array with more options, going to a DS3xxx would be ideal. I never said the BCS was for mass storage. You're reading it wrong.
Plus, you're not connecting a DS3200 to ANY BladeCenter at this time, even though the SAS Connectivity Module has external SAS ports. IBM finally supports SAS tape attachment to the SAS module for backup. Before, there was no way to back up a BCS if you had internal storage, unless you did it over LAN with something like Tivoli. Sure, you can MAYBE attach a DS3200 to a BCS with a SAS module, but IBM is not supporting it. :)

The 3850 M2 would be just as good, seeing as how these were available in November of last year, with the ability to scale to a 3950 M2 with the Scale Expander Kit.

Actually, the IBM press releases and online webinars were held in November. We delivered the first one in the Southwestern US, and it was the second delivery in all of the US. Received in Dec of 07, delivered 2nd week of January, 08.

Remember those webinars where they said the 3850M2 would come with the embedded ESX Hypervisor installed? Turns out they STILL don't have it working right. Furthermore, the Hypervisor comes on a 32GB USB key, which is serialized to the system planar, and has the licenses embedded in it. If the system planar goes (admittedly, not likely) then you have to get a new serialized key...from VMWare...which takes 3 days and is not covered in IBM's offered 24x7x3yr warranty, as it is not "their" part, it's VMWare's. On top of that, your performance of the VMWare hypervisor is going to be limited because it's all travelling on the USB bus. Our customer was initially pissed when they got ESX on CD instead of on the system when delivered. Then they found out all of these things, and went "Oh...well, I'm glad that's not what they gave us."

Oh, and on top of all of this, if you do any updates to the embedded version of ESX, it has to be installed on spinning disk, because the USB key is chopped into two 16GB partitions, and cannot accommodate allocating space for updates. This was done so that you can maintain two concurrent versions of ESX on the key, if you'd like. The drawback is that you do not have any space to install updates. Ridiculous implementation, if you ask me. Oh...and if you ever decide that you want to move your licensed copy of ESX 3 Embedded to another server...you fricking can't, because the license ties you to use it only on the USB key that is serialized to that system planar. Oh, and this all costs $500...but that's just for the key, not the license itself. Heh...we run a pretty Blue shop here at work, but there's some things that just make me want to kick people in the nuts.

Don't even get me started on the DS3k line and how they throw businesses over the fence and force them to use the CLI for cool features.

IBM's general mentality: "Thank you for your interest, and purchase. We will stand behind our products...if only to point fingers at someone else. Maybe it will be you, because you really don't have the budget to get anything cool for the kind of money that you're looking to spend. We'll take it, but we'll make you our bitch through licensing and support contracts. Have a nice day!"
 
I think I just contributed (pretty significantly, I might add) to this getting WAY off topic. Isn't this supposed to be about Road Runner?
 
wait what? How did we get into the Hypervisor?
Either way, the hypervisor is on the USB stick. Yeah great. woo...It's a waste of money, because it doesn't have ANY of the features the Enterprise has. Like VMotion, that everyone wants. Then if you want any COOL features you have to buy the upgrade, and that is 5k. Might as well get Enterprise on the non-Hypervisor model. And everyone should know you have to put your VM's on a spinning disk; you can't put them on the USB stick.

The USB stick just gets your foot in the door with VMWare; by no means is it a solution, IMO.
The IBM announcement said available Nov 07, with no expected ship date. I'm just an Engineer, I have no access to ship dates.

As far as the DS3k line, it's entry storage. By no means should it be fun or easy, right? Just go DS4700 or above for easy. haha


(Disclaimer) I love IBM. Though this has turned into a rant. haha
 
we got onto the whole VMWare 3 Embedded version the same way you got onto BCS...by going off topic.
 
Snippage of ESX rant.

I'm a HUGE ESX shop. I mean TRULY huge. Anyone and everyone who's already worked with ESX knew immediately that the Hypervisor thing was going to be preinstalled on internal SSDs. We also knew well in advance that VMWare would find some way to make it an absolute nightmare and then some, just like any encounter with their licensing. What do you expect from people who think FlexLM for Windows is a good idea?

Don't even get me started on the DS3k line and how they throw businesses over the fence and force them to use the CLI for cool features.

Uh, how about the point that the DS3k isn't for people who want "cool features," which is why it uses SM3 and not SM4? Also try upgrading your SM and Firmware, since they do handle the basics. If you're looking for enterprise features, why are you buying the lowest end array IBM sells? It's NOT designed for Enterprise use, or Enterprise USERS. It's designed for SMBs without storage experience, which is why the DS3400's are packaged and not typically sold individually. If you want the good stuff, step up to DS4k's.
DISCLAIMER: I have 4 DS3k's in production including a MegaRAID+EXP3k. These are used as multi-path (2 HBA + 2 Controller) direct-attach storage specifically where we do not have FC or do not need the performance, e.g. storing local user home drives at remote locations with no other equipment. We usually get them paired with x3550's and 3TB at <$10K.

IBM's general mentality: "Thank you for your interest, and purchase. We will stand behind our products...if only to point fingers at someone else. Maybe it will be you, because you really don't have the budget to get anything cool for the kind of money that you're looking to spend. We'll take it, but we'll make you our bitch through licensing and support contracts. Have a nice day!"

See above; IBM makes good products, if you're actually buying the right ones. It's not their fault or responsibility if you're demanding functionality and features that 99.999% of their customers just don't have a use or desire for, or if you don't understand that you're going to have to pay more for hardware designed to do them.
 
Getting back on topic, when will we see 2 PF (petaflops)? Is anyone else other than IBM currently working on a petaflop system?
 
Getting back on topic, when will we see 2 PF (petaflops)? Is anyone else other than IBM currently working on a petaflop system?

Gimme money. I'll have one for you in six months.
Guesstimating because I'm busy today, it'll only set you back a cool ($5Mx25) for the Distant Memory cabinets and another ($4.7Mx25) for the Nearline Cache.
 
Gimme money. I'll have one for you in six months.
Guesstimating because I'm busy today, it'll only set you back a cool ($5Mx25) for the Distant Memory cabinets and another ($4.7Mx25) for the Nearline Cache.

Check's in the mail. :p

It seems like there often is a pissing contest between supercomputer builders. As long as there's someone willing to pay top dollar (yen, etc.) for it, IBM, et al. will continue pushing out faster and faster systems. :cool:

I just worry that more focus will be placed on pure speed rather than on finding greener solutions, not that I'm a tree-hugging hippie or anything.
 
Check's in the mail. :p

It seems like there often is a pissing contest between supercomputer builders. As long as there's someone willing to pay top dollar (yen, etc.) for it, IBM, et al. will continue pushing out faster and faster systems. :cool:

I just worry that more focus will be placed on pure speed rather than on finding greener solutions, not that I'm a tree-hugging hippie or anything.

fast is green, more speed = less time = less power... not that this thing will ever sit idle... (quick.. someone backhack it and borg it.. ) WTF... why are there 19908 copies of F@H running... and how come i can't get within 3 feet of the door to this server room without my skin melting off...
 
Here's an article about this on NYTimes.com. Their last paragraph was pretty cool:
...

By breaking the petaflop barrier sooner than had been generally expected, the United States’ supercomputer industry has been able to sustain a pace of continuous performance increases, improving a thousandfold in processing power in 11 years. The next thousandfold goal is the exaflop, which is a quintillion calculations per second, followed by the zettaflop, the yottaflop and the xeraflop.

Like we'll see xeraflop during our lifetime. :rolleyes: :D

fast is green, more speed = less time = less power... not that this thing will ever sit idle... (quick.. someone backhack it and borg it.. ) WTF... why are there 19908 copies of F@H running... and how come i can't get within 3 feet of the door to this server room without my skin melting off...

Yes, since I'm sure this system would be running 24/7, faster wouldn't mean less use. With the cost of electricity going up like crazy (my per kW-hr. bill has doubled in the past year :eek: ), perhaps system makers will concentrate more on lowest cost per flop.
 
I believe it's 80TB ram, not storage. I don't think anyone here even has 80GB ram.

Ahem. Aggregate, I have well past that. My H80 alone has 16GB, and my E3500 has 12GB (IIRC, haven't powered on in ages.) My workstation has 8GB and my real workstation has 16GB. My primary multipurpose server has 64GB.

But if you really want, I could go get out some of the systems at work or that I've worked on... ;)
 
Ahem. Aggregate, I have well past that. My H80 alone has 16GB, and my E3500 has 12GB (IIRC, haven't powered on in ages.) My workstation has 8GB and my real workstation has 16GB. My primary multipurpose server has 64GB.

But if you really want, I could go get out some of the systems at work or that I've worked on... ;)

Well, aggregate doesn't really count unless you can combine all those systems into one massive one. :p
 
Ahem. Aggregate, I have well past that. My H80 alone has 16GB, and my E3500 has 12GB (IIRC, haven't powered on in ages.) My workstation has 8GB and my real workstation has 16GB. My primary multipurpose server has 64GB.

But if you really want, I could go get out some of the systems at work or that I've worked on... ;)

you're counting GB of memory, RoadRunner has 80TB of RAM, so no, you don't have more than that. Also, your RAM count is not an aggregate, as all systems are not interconnected at the system chipset level through a scalable link connection.

Granted, you have a lot more RAM in your systems than I have in mine at home :D
 
you're counting GB of memory, RoadRunner has 80TB of RAM, so no, you don't have more than that. Also, your RAM count is not an aggregate, as all systems are not interconnected at the system chipset level through a scalable link connection.

Granted, you have a lot more RAM in your systems than I have in mine at home :D

No, he said 80GB, not 80TB. I can build out to 80TB with cash - actually, well past it, since a single memory cab is now up to 5.3TB. If we really want to get picky, fine.

I have 8 systems at work with 96GB of memory. Sun E2900's. I have a couple of smaller boxes with 64GB each.

If we want to get really picky, how about one of my remote systems - I own it, it's just not in my house, because it requires a pair of dedicated 15A circuits by itself. It's one of my old 8 socket Opterons, before Iwill got borged in a buyout spree, with 128GB of DDR2-400. No, I didn't buy the majority of the memory - only 32GB of it. The rest is trade-ins and "hey, since you aren't using this anymore."

:p
 
you are referring to roarsoar, and I am referring to those of us quoting the article. I just figured that out.
 