Do you think a 1 Yottaflop supercomputer will ever happen?

aphexcoil

Limp Gawd
Joined
Jan 4, 2011
Messages
322
1 Yottaflop would be equivalent to one million exaflops. The fastest supercomputers today are around 20 petaflops or .02 exaflops.

Do you think we will ever see a 1 Yottaflop supercomputer? If so, by what date? What type of simulations could be run on a supercomputer that powerful?
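For scale, a quick back-of-envelope in Python (nothing fancy, just the ~20 petaflop figure above and plain unit conversion):

import math

# Rough scale of the gap: 1 yottaflop vs. today's ~20 petaflop machines
PETA, EXA, YOTTA = 1e15, 1e18, 1e24

today  = 20 * PETA        # roughly the fastest machines today
target = 1 * YOTTA

print(f"1 yottaflop = {YOTTA / EXA:,.0f} exaflops")
print(f"Gap: {target / today:.1e}x, or ~{math.log2(target / today):.0f} doublings away")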
 
If computing power grows as quickly tomorrow as it did yesterday, then yes, I think it's possible, but it will probably take a long time. If it happens in my lifetime, I'll be on my deathbed (I'm 20 now).
 
Define "supercomputer".

The F@H network could be considered a supercomputer, and how much does that handle?
 
Define "supercomputer".

The F@H network could be considered a supercomputer, and how much does that handle?



On November 10, 2011, Folding@home's performance exceeded six native petaFLOPS with the equivalent of nearly eight x86 petaFLOPS.[101][112]


According to the last Wikipedia update. Pretty damn fast, but even now supercomputers are finally able to surpass that, pushing toward the exascale. Without GPUs I'd say we'd never see a supercomputer come close to an exaflop, much less a yottaflop, any time in the near future. GPUs have really changed the game in how we think about computing; how long will that last? Depends.

Over the last ten years GPUs and CPUs have gotten extremely powerful, at the expense of putting out massive heat and sucking insane amounts of power. Only in the last few years has the industry tried to get that under control. Unless massive changes come about that protect these nano-sized materials from heat damage and let them run on a trickle of charge, I don't see a yottaflop supercomputer in my lifetime. Things are already in the works (have been for a decade) that could re-birth the industry, but changes will need to be made one way or another.

Personally I think stacking is going to be one way to keep the exponential growth going, but power draw still needs to be addressed, as does heat. Once silicon finally dies we may see some very interesting things with the potential for silicon-level growth. However, as of today that remains to be seen, and until then we are near the end of the road, only able to predict the next 10 years and nothing further with the materials at hand. People are even doubting Intel's ability to prolong silicon and shrink to 10, 7, and 5nm like their roadmaps suggest they can.
 
Exponential growth is a powerful thing.

If we assume processing power scales at about the same rate as circuit complexity,
Moore's law says ~6 generations to hit 1 exaflop (from your numbers, 0.02 x 2^6 ≈ 1.3). About 9 - 12 years.
Another 20 generations after that to hit your target, or another 30 - 40 years.

More problematic than time are the fundamental laws of physics.
To follow Moore's law for that long, the basic feature size would need to shrink to around 3 picometers (22nm today / sqrt(2)^26). A single silicon atom is around 250 picometers in diameter. You see the problem.
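Same estimate in a few lines of Python, if anyone wants to poke at the assumptions (0.02 exaflops today, a 22nm process, density doubling each generation):

import math

# Doublings needed, starting from ~0.02 exaflops (the OP's number)
to_exa   = math.log2(1   / 0.02)    # ~5.6  -> call it 6 generations
to_yotta = math.log2(1e6 / 0.02)    # ~25.6 -> ~26 generations total

# Each density doubling shrinks features by sqrt(2); extrapolate from 22nm
feature_pm = 22e3 / math.sqrt(2) ** to_yotta
print(f"{to_exa:.1f} doublings to 1 exaflop, {to_yotta:.1f} to 1 yottaflop")
print(f"Implied feature size: {feature_pm:.1f} pm vs. ~250 pm for a silicon atom")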

To hit those kinds of speeds, you're either consuming millions of times more power than today's biggest supercomputers, or processing in a way as different from today's CPUs as an abacus would be.
 
What are you gonna compute at a yottaflop? Crack your SO's password?
 
What are you gonna compute at a yottaflop?
Weather predictions, medical predictions, physics simulations, etc. The same stuff modern supercomputers are used for, just to a higher degree of accuracy or at a faster speed.
 
Well sure. Samsung will achieve that with the Galaxy S4178392. Then Apple will sue them for having Google Maps based nav on it. :p


It's possible. I'd say 25-40 years after new microprocessor materials and architectural innovations are implemented. You know, we'll see a single-socket processor the size of the ball in a ballpoint pen doing it at full load while drawing only a couple of milliwatts.
 
I truly don't think so.

By the time we have individual nodes powerful enough to make this a reality, we will likely have outgrown current speed-rating standards. I find it very unlikely that quantum computers, or whatever surpasses the current digital Turing machine, will be measured in x-flops, or that a speed rating with that unit will be of any actual use.
 
The biggest issue we have is not Moore's Law but the laws of physics.

Sure, the next processors from Intel will be 14nm and later 10nm, but how much smaller can you go before you run into issues with leakage and so on? We'll soon hit diminishing returns, where shrinking further no longer buys us more performance and efficiency. We're already starting to see that with current processors.

I honestly don't believe silicon is going to get us to 1 YFLOP (yottaflop) performance. And I don't believe graphene-based processors, as mentioned in a previous thread in the Intel forum, will do that either. Researchers working on graphene processors have already stated that it's only good for analog, not digital, signals. So that looks like another dead end.

Our only hope right now is finding another semiconductive material that gives us the following:
  • Best power efficiency
  • Small process size
  • No leakage
  • High floating point performance
Our only other option is quantum processors, if they can be made small enough and mass-produced at the same scale as silicon-based processors are today. Until then, we are stuck with silicon, exploring more exotic semiconductive materials to try to squeeze more performance and power efficiency out of them.
 
The biggest issue we have is not Moore's Law but the laws of physics.
fo' reals.

One billion petaflops should be achievable, if unnecessary. I believe classes of "hard problems" will be solved in cleverer ways than brute force. Quantum computing, or some kind of massively parallel method (like, but not necessarily, DNA computing) for problems that can be expressed inside those systems, are a couple of alternatives, along with new computing methods that have not even been invented yet.
 
LOL, I wonder what [H] will be like in 35 years' time, the kids of Steve and Kyle taking over their old man's spot. :D


Intel Forums 35 years from now:

My quantum CPU overheats -- but only in certain universes?

Quantum Core II leaked performance benchmarks --- only a 7 billion percent gain??

Hey guys, check this out! Found an old Skylake Intel processor in my dad's basement! Look at all these pins! 10 nanometer gates?? WOW.

My wife found my new VR program and just deleted 50 exabytes of my data. Any way to restore this?
 
Researchers working on graphene processors have already stated that it's only good for analog, not digital, signals. So that looks like another dead end.

This sounds perfect for memristors. We've only scratched the surface with this new technology. It looks as though memristors can simulate neurons far better than any other passive component.

We really shouldn't throw graphene research away so quickly. What about CPUs that can reprogram themselves at the hardware level to optimize for specific tasks? Think about how much faster a CPU would be at one application if it were designed specifically for that application. I don't think 10x speed improvements would be out of the question.
 
The biggest issue we have is not Moore's Law but the laws of physics.

Sure, the next processors from Intel will be 14nm and later 10nm, but how much smaller can you go before you run into issues with leakage and so on? We'll soon hit diminishing returns, where shrinking further no longer buys us more performance and efficiency.

Current chips are at around 2 billion transistors. If Moore's law holds up for the near future, we should have 'chips' with around 100 billion transistors in the 2020s. That's human-brain-level processing power. It's obviously a very different computer architecture, but I'm talking in terms of switching elements. If nature can jam 100,000,000,000 switching elements into a 3-pound blob of gray mush (that runs on 400 calories a day!) via natural selection, it should be possible to engineer something far more powerful.
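Rough extrapolation of that (a sketch assuming ~2 billion transistors now and a doubling roughly every two years):

import math

# How long until a ~2-billion-transistor chip reaches ~100 billion transistors,
# if counts keep doubling every couple of years?
doublings = math.log2(100e9 / 2e9)        # ~5.6
years     = doublings * 2                 # assume ~2 years per doubling
print(f"~{doublings:.1f} doublings, roughly {years:.0f} years out")   # early-to-mid 2020s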

We aren't running into physical limitations yet, we're running into our own creative limits, our ability to take advantage of this incredibly powerful hardware to design the next step up.
 
This sounds perfect for memristors. We've only scratched the surface with this new technology. It looks as though memristors can simulate neurons far better than any other passive component.

We really shouldn't throw graphene research away so quickly. What about CPUs that can reprogram themselves at the hardware level to optimize for specific tasks? Think about how much faster a CPU would be at one application if it were designed specifically for that application. I don't think 10x speed improvements would be out of the question.

Current chips are at around 2 billion transistors. If Moore's law holds up for the near future, we should have 'chips' with around 100 billion transistors in the 2020s. That's human-brain-level processing power. It's obviously a very different computer architecture, but I'm talking in terms of switching elements. If nature can jam 100,000,000,000 switching elements into a 3-pound blob of gray mush (that runs on 400 calories a day!) via natural selection, it should be possible to engineer something far more powerful.

We aren't running into physical limitations yet, we're running into our own creative limits, our ability to take advantage of this incredibly powerful hardware to design the next step up.

I've always believed that human-like processing is probably the holy grail of computing. It is crazy to think that millions of years of evolution gave us a brain with tens of billions of neurons communicating with each other via ions, yet we haven't figured out how to make processors as small and as powerful as the human brain without running into leakage at those tiny scales.

We'll probably get there someday, and it'll most likely take a combination of newer materials and technologies, such as memristors and optical interconnects combined on a single die. Until then, like "DFB" said above, it's a creative limit at the moment. The laws of physics will unfortunately be the roadblock when we can't shrink the processor any further. Instead of just shrinking the processor, we should probably rethink how it's designed, using newer materials and microprocessor technology.

The ultimate goal, I still think, once they can be figured out, is most definitely quantum processors.
 
I think we're all going to finally realize that 640K of memory was enough and go back to early DOS computing.
 
We aren't running into physical limitations yet, we're running into our own creative limits, our ability to take advantage of this incredibly powerful hardware to design the next step up.

Well, I wouldn't say that...

[chart: processor vs. memory performance over time, from Hennessy and Patterson]


Off the Hennessy and Patterson book, which I assume just about every computer engineer owns, lol.
 
Well, I wouldn't say that...

[chart: processor vs. memory performance over time, from Hennessy and Patterson]


Off the Hennessy and Patterson book, which I assume just about every computer engineer owns, lol.

CPU performance did not level off in 2005. The chart is referring to the CPU - RAM performance gap as part of the discussion about memory hierarchy design.

More recently, high-end processors have moved to multiple cores, further increasing the bandwidth requirements versus single cores. In fact, the aggregate peak bandwidth essentially grows as the numbers of cores grows. A modern high-end processor such as the Intel Core i7 can generate two data memory references per core each clock cycle; with four cores and a 3.2 GHz clock rate, the i7 can generate a peak of 25.6 billion 64-bit data memory references per second, in addition to a peak instruction demand of about 12.8 billion 128-bit instruction references; this is a total peak bandwidth of 409.6 GB/sec! This incredible bandwidth is achieved by multiporting and pipelining the caches; by the use of multiple levels of caches, using separate first- and sometimes second-level caches per core; and by using a separate instruction and data cache at the first level. In contrast, the peak bandwidth to DRAM main memory is only 6% of this (25 GB/sec).
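The 409.6 GB/sec figure checks out if you redo the arithmetic with the core count and clock the book quotes:

# Reproducing the book's peak-bandwidth arithmetic for its Core i7 example
cores, clock_hz = 4, 3.2e9

data_refs  = cores * clock_hz * 2        # two 64-bit data references per core per clock
instr_refs = cores * clock_hz            # one 128-bit instruction reference per core per clock

peak_gb = (data_refs * 8 + instr_refs * 16) / 1e9
print(f"Peak cache bandwidth: {peak_gb:.1f} GB/sec")                  # 409.6
print(f"25 GB/sec of DRAM bandwidth is {25 / peak_gb:.0%} of that")   # ~6%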

CPU transistor counts are still increasing exponentially. At the current rate we won't surpass all biological computers until the mid-2020s, and there's presumably room for improvement beyond that.
 
Intel appears confident that they can manage 5nm gates, which means we'll probably be good until around 2020. I'd imagine a 1 Yottaflop supercomputer might be possible with 1nm chips, but the heat dissipation / power requirements are going to be a large hurdle to get over.

We already have GPUs capable of multiple teraflops, and I'm sure a general-purpose CPU rated at one teraflop will probably hit the mainstream around 2015/2016.

It will probably be technologically feasible around 2045-2050 at the current rate of CPU evolution. Hopefully we see an exaflop supercomputer by 2020.

Probably one of the largest problems in designing supercomputers today is the massive amount of energy they require and handling all the heat generated by hundreds of thousands of CPUs.
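To put rough numbers on the power side (a back-of-envelope sketch, assuming something like 2 GFLOPS per watt, about what the most power-efficient machines manage today):

# Power draw of a 1-yottaflop machine at ~2 GFLOPS/W efficiency
yottaflops  = 1e24
flops_per_w = 2e9

watts = yottaflops / flops_per_w
print(f"{watts:.1e} W, i.e. about {watts / 1e12:.0f} TW")
# -> 5e14 W: on the order of 100x the world's installed electrical generating capacity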
 
CPU performance did not level off in 2005. The chart is referring to the CPU - RAM performance gap as part of the discussion about memory hierarchy design.
Yeah, I made sure to keep that line under the graph when I cropped it. But when scaling, does it matter how fast your CPU can crunch numbers if its choke point is memory bandwidth? Unless the program takes advantage of multiple threads, sitting around on a cache miss will screw you over no matter how fast you can crunch numbers that aren't there yet.

CPU transistor counts are still increasing exponentially. At the current rate we won't surpass all biological computers until the mid-2020s, and there's presumably room for improvement beyond that.

But most of that increased transistor count seems to be going towards on-die GPU development.
 
Last night, for some reason, this thread kept me awake and got me thinking. We have quantum limits on the shrinking process, not to mention physical limits on how much smaller we can even go. Do we really think we'll see that much improvement in the near to far future from shrinking things smaller and smaller (if it's even possible)?

Say we go from 10nm to 1nm to 100pm to 10pm, etc. Granted it's even possible (doubtful) and the equipment allows it (not possible now, maybe in the future), would we really see massive gains? I'm thinking there has to be a hard limit where the gains from shrinking just aren't realized anymore. The last time I can remember a process shrink having a crucial effect was probably with the P4 and the Xbox 360, but for the most part clock speeds were scaled back or kept the same to keep power and heat lower. Little improvement was actually realized except keeping things in a steady state. Heat and power are two huge concerns that aren't really helping keep things going faster, because we're just packing so much more into the same space to compensate. I guess that's a plus, or maybe I'm missing the bigger picture.

The only thing I can think of is some seriously radical rethink of microprocessors in the future to keep the momentum going. I long for the days when a high-end CPU never crosses 50W of power draw again and heat can be dissipated without a huge fan and heatsink you have to measure to make sure it fits in the case. Maybe that's just fantasy now, but I'm dying to see what comes next when silicon dies, as we know it eventually will. Not knowing the next step is just killing me as a tech geek, lol. Feel free to correct any misconceptions.
 
If we're talking about a supercomputer with thousands of chips working together, as the title asks, I guess it would be possible within the next 50-100 years.

If it's about a single chip, it would be a bit harder, like everyone above me said.
 
Yeah, I made sure to keep that line under the graph when I cropped it. But when scaling, does it matter how fast your CPU can crunch numbers if its choke point is memory bandwidth? Unless the program takes advantage of multiple threads, sitting around on a cache miss will screw you over no matter how fast you can crunch numbers that aren't there yet.

Well, look at Intel's Extreme platform. Going to high-speed, high-bandwidth quad-channel memory doesn't really do much for most users, and it significantly increases the TDP. Like I said, we're not running into a physical limitation that can't be overcome. It's an economic and architecture-related problem.

But most of that increased transistor count seems to be going towards on-die GPU development.

In the consumer space... Regardless, what does that have to do with what I said? We're still a long ways off from maximizing transistor counts.
 
Well, look at Intel's Extreme platform. Going to high-speed, high-bandwidth quad-channel memory doesn't really do much for most users, and it significantly increases the TDP. Like I said, we're not running into a physical limitation that can't be overcome. It's an economic and architecture-related problem.



In the consumer space... Regardless, what does that have to do with what I said? We're still a long ways off from maximizing transistor counts.

So we're going to extrapolate transistor count as the reason we'll reach yottaflops? And it's just an economic and architecture problem to reach it, and the fact that memory improvements cannot catch up plays no role?

Ok.
 
So we're going to extrapolate transistor count as the reason we'll reach yottaflops?

As opposed to what?

All I said was that we're still a long ways off from building computers with the transistor density, efficiency, and capability of brains produced by natural selection. We still have a way to go before we run into actual limits imposed by physics.

And it's just an economic and architecture problem to reach it, and the fact that memory improvements cannot catch up plays no role?
What makes you assume that memory improvements cannot catch up? I said that the gap between CPU and memory performance is the result of economics and architecture choices, not that it's not important.

The full quote from the book you mentioned claims:
...the i7 can generate a peak of 25.6 billion 64-bit data memory references per second, in addition to a peak instruction demand of about 12.8 billion 128-bit instruction references; this is a total peak bandwidth of 409.6 GB/sec! This incredible bandwidth is achieved by multiporting and pipelining the caches; by the use of multiple levels of caches, using separate first- and sometimes second-level caches per core; and by using a separate instruction and data cache at the first level. In contrast, the peak bandwidth to DRAM main memory is only 6% of this (25 GB/sec).

Intel's current quad-channel memory controllers are capable of peak bandwidth in excess of 40 GB/sec. Significant architecture changes may be needed to provide the needed memory bandwidth as CPU speed continues to increase, but you haven't shown anything that suggests such improvements are physically impossible.
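For what it's worth, that quad-channel figure is simple arithmetic (assuming a DDR3-1600 configuration, typical for the current Extreme platform):

# Peak theoretical bandwidth of quad-channel DDR3-1600
channels       = 4
transfers_s    = 1600e6     # DDR3-1600: 1600 MT/s per channel
bytes_per_xfer = 8          # 64-bit channel width

peak_gb = channels * transfers_s * bytes_per_xfer / 1e9
print(f"{peak_gb:.1f} GB/sec peak")    # 51.2 -- comfortably over 40 GB/sec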
 