NVIDIA CEO: Moore’s Law Is Dead, GPUs Will Replace CPUs

Megalith

Moore's Law is no longer relevant, as GPUs are advancing at a much faster pace than CPUs: that was the message from NVIDIA CEO Jensen Huang, who stated at the GPU Technology Conference (GTC) 2017 in Beijing, China that graphics processors are on track to replace CPUs.

...he said that while CPU transistor counts have grown at a pace of 50 percent annually, performance has only jumped by about 10 percent. The other reason Huang says Moore's Law is dead is that it can't keep pace with advancements in GPU design. Huang talked about GPUs growing in computational capacity over the years, and how they're better suited for advancements in artificial intelligence. Huang also said that GPUs will replace CPUs soon, adding that at this point, designers can hardly work out more advanced parallel instruction architectures for CPUs.
 
Intel is something like 60% of the GPU market.

Pretty sure nVidia is getting a little horny over something they have no control over.

I mean, they're hardly dominant. Plus, software has severely stagnated; more efficient, less powerful processors seem more and more capable in this slow-moving environment. As SomeoneElse said: the post-PC era has begun.

Honestly, a seriously powerful phone with seriously powerful graphics that you can use anywhere with any monitor and keyboard or what have you is the future. Some day.

The happiest ones will be hackers of course, all those connections...


A new epoch doesn't start without a little sacrifice.
We do kind of need a software and architecture reboot. It'll never happen, though; could you imagine what that would cost a company like Amazon, with all of their big iron?
 
Correct me if I'm wrong, but wouldn't this technically turn the GPU into the CPU, since it would be the focal point of the system? It would fill the exact role of both the CPU and the GPU. I thought the GPU was discrete, for graphics only? Maybe I'm just stuck in the old-school train of thought.
 
Correct me if I'm wrong, but wouldn't this technically turn the GPU into the CPU, since it would be the focal point of the system? It would fill the exact role of both the CPU and the GPU. I thought the GPU was discrete, for graphics only? Maybe I'm just stuck in the old-school train of thought.
GPGPU, or "General Purpose GPU," has been around since at least 2006. NVIDIA implements this through a framework called CUDA, while AMD uses OpenCL. Some of the fastest systems on the Top 500 list are now powered entirely by GPUs, and it's the way forward for exascale. HPC nodes using Volta are already being deployed.
 
GPGPU, or "General Purpose GPU," has been around since at least 2006. NVIDIA implements this through a framework called CUDA, while AMD uses OpenCL. Some of the fastest systems on the Top 500 list are now powered entirely by GPUs, and it's the way forward for exascale. HPC nodes using Volta are already being deployed.
Ah, OK, so it's not really a graphics processing unit anymore... they sort of truncated the acronym...
 
I don't think it is far-fetched. We are already finally seeing over 2 GHz clocks on the GPU. Get them up to 3-4 GHz and add the hardware for faster general-purpose integer operations... then you have a chip that will run your applications well and has a monster 30+ teraflop FPU on it.

Would be cool to have Windows ported to it. It could come in a large package like Threadripper. I would imagine it would have HBM stacks and DIMMs as a cache if you need more RAM than what is built into the chip.
 
Ah, OK, so it's not really a graphics processing unit anymore... they sort of truncated the acronym...
Technically they are because they still contain the hardware that is optimized for processing things like geometry and shaders. But GPGPU as a paradigm leverages the extreme parallelization inherent in GPU design to crunch a high volume of data at once. The hardware used in HPC applications is more geared toward computational tasks than rendering, though.
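To make that concrete, here's the textbook "hello world" of GPGPU: a minimal, illustrative CUDA sketch (my own toy code, not from any post here), where one tiny kernel body is launched across a million threads and each thread handles a single array element. That data-parallel shape is exactly what the hardware is built around.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles exactly one element: the classic data-parallel pattern.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                       // ~1M elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);                // unified memory keeps the demo short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;    // enough blocks to cover all n elements
    vector_add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                 // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Build it with nvcc and it should print 3.0; the point is just how the work is expressed as thousands of identical, independent operations, not the arithmetic itself.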
 
GPUs have been general-purpose processors for a while. Look at compute, deep learning, mining.

He's not boasting; he's dead on. They may not necessarily replace CPUs altogether in the short term, but there's an undeniable transition happening.
 
GPUs and CPUs both crunch numbers and move data to achieve a result. As long as the end user gets the desired result, what difference does it really make? One is just more specialized than the other.
 
Said the day after he loses out to Intel as Tesla's head unit supplier... ironic.

Either he's being
A) Myopic and prideful or
B) He's just trying to boost his company's stock even more.


Either way, I seriously doubt his claims. GPUs and CPUs serve two very distinct purposes, and not everything will be AI-based. His credibility has lowered in my eyes.

It will be interesting to see where Shintai sits on this
 
A new epoch doesn't start without a little sacrifice.
That's what they were saying in the 90's about RISC CPUs like Alpha, PPC, and MIPS. Turned out well for them, didn't it? /s

(ARM is the lucky one, however, as they made a shift to focus on low power embedded devices [once Acorn Computer tanked] which paid off huge in the end for the upcoming smartphone boom...)
 
CPUs also interface with system RAM and other core hardware components. You will always have a "CPU" of sorts, unless you completely redesign PC architecture and have the GPU handle those functions. I think obviously GPUs handle certain functions better than a general purpose CPU but replacing it entirely? Yeah, not seeing that happen anytime soon.
 
GPUs and CPUs both crunch numbers and move data to achieve a result. As long as the end user gets the desired result, what difference does it really make? One is just more specialized than the other.

Yes, but how they crunch numbers is what's critical. GPUs specialize in vector math with massive parallelization. CPUs can do a lot more, even if their vector math isn't as strong. Case in point: I wouldn't want a GPU to compile my code, even though compilation can also be parallelized. And all those parallel cores aren't going to help with single-threaded tasks, which still make up the majority of code.
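To illustrate that point with a toy sketch of my own (nothing from this thread): a loop whose iterations are independent maps straight onto thousands of GPU threads, while a loop where every step depends on the previous result has to run one step at a time no matter how many cores you throw at it.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Independent iterations: trivially parallel, ideal GPU work.
__global__ void square_all(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * x[i];
}

// A serial recurrence: every step needs the previous result, so extra
// cores don't help. This is the shape of a lot of "ordinary" CPU code.
float serial_recurrence(const float* x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = acc * 0.5f + x[i];   // depends on acc from the prior iteration
    return acc;
}

int main() {
    const int n = 1 << 16;
    float* x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    square_all<<<(n + 255) / 256, 256>>>(x, n);             // GPU-friendly part
    cudaDeviceSynchronize();

    printf("recurrence = %f\n", serial_recurrence(x, n));   // CPU-bound part
    cudaFree(x);
    return 0;
}
```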

Intel has been bragging "20% faster than the previous generation" for the last couple of generations now, but in real-world applications we saw minimal improvements in IPC. All those improvements were on the iGPU side, which is essentially a vector processor similar to a regular GPU, and that's why we never saw the gains except in very limited applications.

From everything I'm reading, AI will take a very important role in the future, but looking at the numbers, traditional CPUs will still likely dominate as general workhorses.
 
Supercomputers are being built out of GPUs (reference the Cray Titan @ https://www.olcf.ornl.gov/). The CPUs in them are really for nothing more than coordinating hundreds / thousands of GPUs doing the real work.
 
Supercomputers are being built out of GPUs (reference the Cray Titan @ https://www.olcf.ornl.gov/). The CPUs in them are really for nothing more than coordinating hundreds / thousands of GPUs doing the real work.

Yes, but those supercomputers have very specific applications that can benefit from many cores doing vector math. It's not like you are going to run your streaming device, word processor, server, or browser on them.

Think of it this way:

AI: Very specific focused application which is strong in vector processing/matrix ops/MIMD and can be broken down into parallel task - GPUs win here.
Most apps: Require broad instruction set support which mostly run linearly on a single core. - CPUs win here

As I learned long ago, right tool for the right job.
 
CPUs also interface with system RAM and other core hardware components. You will always have a "CPU" of sorts, unless you completely redesign PC architecture and have the GPU handle those functions. I think obviously GPUs handle certain functions better than a general purpose CPU but replacing it entirely? Yeah, not seeing that happen anytime soon.

I agree...I think we will see more and more shift to using things like CUDA for processing within the OS, but I don't see the CPU going away completely any time soon. It may be relegated to doing light workload items, but I don't think it will disappear.
 
That's what they were saying in the 90's about RISC CPUs like Alpha, PPC, and MIPS.

Well, in those days, those CPUs actually were faster than anything Intel had, so they worked as expected.

Sadly, x86 and Windows were already entrenched on the desktop, and it was impossible to dislodge them.

Remember, they also lacked standards, like standard sockets, motherboards, etc., which maybe would've helped their spread within the market.

But all that is speculation at this point.
 
Yeah, it is indeed kind of stupid to spout such nonsense, but hey, JHH is gonna do what he is gonna do.

Heterogeneous computing is the way forward: a stout CPU taking care of integer work, branching, single-threaded and non-vectorized math, and a strong GPU taking care of everything else that can be parallelized. Going with just one or the other will always leave you half-assed.

Supercomputers are being deployed based on GPUs exactly because the type of problems they are meant to tackle are massively parallel... For energy problems or fluid dynamics, you pretty much divide your study area into N nodes, each node gets M "simple" equations that describe the rules by which it behaves, and the matrix generated from those is what you actually solve. (This is a very, VERY simplified take on a usual example.)
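A hedged toy sketch of what that looks like, under the same simplification (my own example, not anyone's production code): a 1-D, Jacobi-style diffusion sweep where every grid node applies the same small formula to its neighbours, so all N nodes can be updated in parallel at each step.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One Jacobi-style relaxation sweep: every interior node is replaced by the
// average of its neighbours. Each node's update is independent, so the whole
// grid advances in parallel, which is the pattern GPU supercomputers exploit.
__global__ void jacobi_step(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = 0.5f * (in[i - 1] + in[i + 1]);
}

int main() {
    const int n = 1 << 14;
    float *a, *b;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    for (int i = 0; i < n; ++i) a[i] = (i == n / 2) ? 100.0f : 0.0f;  // a hot spot
    b[0] = b[n - 1] = 0.0f;                                           // fixed boundaries

    for (int step = 0; step < 1000; ++step) {          // repeat the sweep
        jacobi_step<<<(n + 255) / 256, 256>>>(a, b, n);
        cudaDeviceSynchronize();
        float* tmp = a; a = b; b = tmp;                // swap buffers
    }
    printf("centre value after diffusion: %f\n", a[n / 2]);
    cudaFree(a); cudaFree(b);
    return 0;
}
```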
 
Said the day after he loses out to Intel as Tesla's head unit supplier... ironic.

Either he's being
A) Myopic and prideful or
B) He's just trying to boost his company's stock even more.


Either way, I seriously doubt his claims. GPUs and CPUs serve two very distinct purposes, and not everything will be AI-based. His credibility has lowered in my eyes.

It will be interesting to see where Shintai sits on this

I don't know, what if they've designed a "wrapper" of some kind to turn all of those tiny cores into 4 big cores. That would be interesting!
 
As long as single-threaded performance is important, GPUs won't be replacing CPUs. The lines are becoming very blurry as modern GPUs become more like CPUs, but CPUs are optimized for running a small set of threads really fast, while GPUs are optimized for running a huge number of threads really fast. If what you are trying to do only uses a few threads, CPUs will be much faster because they are designed specifically for that workload.
 
GPGPU, or "General Purpose GPU," has been around since at least 2006. NVIDIA implements this through a framework called CUDA, while AMD uses OpenCL. Some of the fastest systems on the Top 500 list are now powered entirely by GPUs, and it's the way forward for exascale. HPC nodes using Volta are already being deployed.

BULLSHIT! There isn't a single system on the Top 500 list powered entirely by GPUs. In fact, the closest thing to a "GPU" that powers an entire system is a manycore processor like Intel's KNL. Neither NVIDIA nor AMD has a GPU sufficient to power a Top 500 system on its own; both require strong CPUs to actually get any work done, because there is no such thing as GPGPU in reality. They just aren't general purpose. CPUs, on the other hand, are.
 
GPUs won't replace CPUs, because it would make GPUs snail slow.

The only way to replace them would be to give up on all the code the GPU handles badly. But that can't be done currently, and perhaps never will be.

GPUs scale because they run parallel tasks by nature. It's like giving CPUs something like tile-based rendering on 500 Atom/ARM cores and claiming it's the future.
 
Yep, he isn't wrong. CPUs more and more will be used to run the OS and basic frameworks while all the real heavy lifting is done by GPU compute. I mean, that is already pretty much the way it is anyway. If we are talking pro-grade video / photo / VFX work, all the heavy compute work in every popular software package is already leaning on GPU compute. GPUs are the math coprocessors of the day.

What is most surprising to me, anyway, is that Intel hasn't really responded. The change isn't exactly news to anyone who hasn't been asleep the last 10 years. That Intel hasn't gotten heavy into the GPGPU design game is surprising. AMD saw it coming, and it's a big part of the reason they picked up ATI... I would bet if Intel could go back they would have outbid AMD for ATI. $5.5B 10 years back was a steal for AMD.
 
I don't think it is far-fetched. We are already finally seeing over 2 GHz clocks on the GPU. Get them up to 3-4 GHz and add the hardware for faster general-purpose integer operations... then you have a chip that will run your applications well and has a monster 30+ teraflop FPU on it.

Would be cool to have Windows ported to it. It could come in a large package like Threadripper. I would imagine it would have HBM stacks and DIMMs as a cache if you need more RAM than what is built into the chip.

The only reason GPUs have the performance they have is that they ignore general-purpose operations completely. Basically, GPUs only work when you want to do the same exact thing hundreds of thousands to tens of millions of times in parallel.
 
Yep, he isn't wrong. CPUs more and more will be used to run the OS and basic frameworks while all the real heavy lifting is done by GPU compute. I mean, that is already pretty much the way it is anyway. If we are talking pro-grade video / photo / VFX work, all the heavy compute work in every popular software package is already leaning on GPU compute. GPUs are the math coprocessors of the day.

What is most surprising to me, anyway, is that Intel hasn't really responded. The change isn't exactly news to anyone who hasn't been asleep the last 10 years. That Intel hasn't gotten heavy into the GPGPU design game is surprising. AMD saw it coming, and it's a big part of the reason they picked up ATI... I would bet if Intel could go back they would have outbid AMD for ATI. $5.5B 10 years back was a steal for AMD.

Xeon Phis are selling like hotcakes for a reason ;)
 
Yep, he isn't wrong. CPUs more and more will be used to run the OS and basic frameworks while all the real heavy lifting is done by GPU compute. I mean, that is already pretty much the way it is anyway. If we are talking pro-grade video / photo / VFX work, all the heavy compute work in every popular software package is already leaning on GPU compute. GPUs are the math coprocessors of the day.

What is most surprising to me, anyway, is that Intel hasn't really responded. The change isn't exactly news to anyone who hasn't been asleep the last 10 years. That Intel hasn't gotten heavy into the GPGPU design game is surprising. AMD saw it coming, and it's a big part of the reason they picked up ATI... I would bet if Intel could go back they would have outbid AMD for ATI. $5.5B 10 years back was a steal for AMD.

As long as that real heavy lifting can be massively parallelized, which is still only possible with very specialized kinds of work.
 
From everything I'm reading, AI will take a very important role in the future, but looking at the numbers, traditional CPUs will still likely dominate as general workhorses.

Nor is it clear that GPUs have a future in AI. As it turns out, if all you want to do is matrix multiplies, it's much better just to design dedicated matrix multipliers and leave out all that GPU stuff. And beyond that, if you want to do AI, there are even better methods on the horizon than matrix multiplies that also leave out all that GPU stuff.
 
As long as that real heavy lifting can be massively parallelized, which is still only possible with very specialized kinds of work.

I don't disagree; I view GPUs as math coprocessors. A huge boon to the right type of work... but not always a pure win.

Having said that, though... the types of work that generally push CPUs happen to be the types of work that can be parallelized. I mean, for games we are all well aware that a beefier GPU is better than a 10% faster CPU. For video / photo / VFX / sound design, more and more pro software users are seeing major boosts from GPU compute.

I don't expect CPUs to be replaced, that is silly of course... but the need for faster and faster single-core performance is, IMO anyway, waning.

It looks to me like we are heading into a nice golden age of performance computing on the cheap... the core race should give us all 8+ cores, and all the GPU work being done for AI and such should lead to a good amount of inexpensive GPU compute power. Really, we are there for the most part now... you can put together powerful and inexpensive (relatively) workstation machines these days thanks to the combo of CPU and GPU.

Of course it's a combo situation... more and more software is leaning on that combo and seeing gains. So although Moore's Law may seem to be slowing in terms of how much more compute power we are getting from a CPU, looking at the whole picture, the performance gains are still there. I mean, we are at a point now where a bunch of Photoshop features won't even work without a GPU. Today you can take a 7-8 year old Intel processor, combo it up with a top-end NVIDIA card, and Photoshop will be much faster than on a brand new Intel / Intel machine.
 
Xeon Phis are selling like hotcakes for a reason ;)

Good point; perhaps Intel simply looked at things in a different way. Still, I wonder what Intel could have done with ATI. Perhaps they could have shaved some R&D time... then again, perhaps not. I mean, AMD should have gotten a shot in the arm from it, and they do seem to be running behind in many ways. Speculation at best, I guess.
 
Are we going to start using bottlecaps now?

As such, it is correct. Sooner or later everything will be cloud-based. The desktop hasn't got long left, and workstations are pretty much dead already; it has all moved to mobility, even gaming on laptops. Laptops, 2-in-1s, tablets, phones, etc., with the heavy lifting done elsewhere, aka the cloud.
 