The End of Moore's Law Could Cause Performance Parity Between AMD and NVIDIA

cageymaru

Coreteks on YouTube has released a video where he discusses Moore's Law and how it pertains to AMD and NVIDIA. He touches on how NVIDIA will stay ahead of AMD by convincing developers to use features such as real-time ray tracing, upscaling, and DLSS. Then he discusses how NVIDIA could pioneer future tech such as GPU-accelerated AI and other machine learning techniques to push realism in games once silicon speed differences between the manufacturers end. These ML packages will be developed by its steadily growing team of in-house scientists and researchers.

With Moore's Law coming to an end, will AMD and NVidia GPUs reach parity in performance in the near future? If so, how will NVidia maintain its market lead? In this video I analyse the last 10 years as well as the next decade in the GPU space, and how hardware will change gameplay itself.
 
Moore's law only applies to the current generation of computing. We're on the fifth generation, soon to be on the sixth. It's an S-curve, just as vacuum tubes were before transistors, and transistors were before microchips. When the next generation comes along, the exponential S-curve growth will start again and a new "Moore's law" will apply.

 
Moore's law ended long ago, but you can always add more cores and produce more heat to get more performance. That will upset much of the industry, since extra cores usually go underutilized, and it's not something you can do with tablets and smartphones.
 
Phones already have 8 cores. So thinking that parallelism is a PC thing doesn't reflect reality.

Further, GPU performance has been about adding cores vs MHz for well over a decade.

Unlike CPUs, GPUs and their programming APIs are specifically designed to scale to the available number of cores whenever possible, so there is no issue of programmers having to manage that parallel programming logic themselves as they do on a normal CPU.
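To illustrate what that scaling looks like in practice, here is a minimal CUDA-style sketch (my own example, not any vendor's actual scheduler): the programmer expresses work per element with a grid-stride loop, and the hardware spreads the blocks across however many SMs the particular card happens to have.

```cuda
#include <cuda_runtime.h>

// Each thread handles one or more elements via a grid-stride loop, so the
// same kernel fills a GPU with 10 SMs or 80 SMs without the programmer ever
// managing the core count explicitly.
__global__ void scale(float* data, float factor, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x) {
        data[i] *= factor;
    }
}

int main() {
    const int n = 1 << 20;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    // Launch enough blocks to cover the buffer; the driver and hardware
    // decide how those blocks map onto the available SMs.
    scale<<<256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```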

So long as the things that set NVIDIA apart are not freely licensed as industry standards, they will remain gimmick features. And, being a Linux user, NVIDIA's lack of a decent open source driver means their performance doesn't matter to me or impact my buying decisions at all.
 
Going way off track here (to a point), I apologize, but it does fit within the overall scope of this article.

There is the very real problem that the more they shrink, the more they have to do to combat "dark silicon," because features are getting far too close to one another. They run into the problem that they simply cannot activate the entire core (or all cores) at the same time without hitting TDP limits. GPUs tend to do a little better in this regard than CPUs, but the problem still exists for transistors in general.

They want/need transistors to do all the fancy stuff, but those transistors interfere with each other as well; there is only so much the engineers can do.

At ~8 nm, the estimated "penalty" is that ~50% of the transistors cannot be active at once, which raises the question of whether it is even worth adding more and more when you cannot effectively use them.
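A rough back-of-the-envelope for where numbers like that come from (illustrative figures only, not anyone's process data): each shrink roughly doubles transistor count, but once voltage stops scaling the energy per switch improves by much less than 2x, so at a fixed power budget the usable fraction shrinks every node.

```latex
% Illustrative: 2x transistors per node, but only ~1.4x better energy/switch
\text{active fraction per node} \approx \frac{1.4}{2} = 0.7
% compounded over two nodes: 0.7^2 \approx 0.5, i.e. roughly half the chip dark
```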

They can "add" features such as low clock core/ram (Nv did this not that long ago LOL) to help mitigate potential performance problems, however, frequency/voltage scaling issues have become a very real problem to try and overcome without resorting to exotic cooling type/methods.

Moore's Law:
Transistor count doubles every two years (now more like every 2.5 years), which lines up much better with GPUs than with CPUs. It is not "dead," but doubling the transistor count no longer gives nearly the performance it used to. The dark silicon problem is very much part of this.
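To put the cadence change in perspective, here is the straight compounding of the figures above (simple arithmetic, nothing more):

```latex
N(t) = N_0 \cdot 2^{t/T}
% over 10 years: T = 2\,\text{yr} \Rightarrow 2^{5} = 32\times,
%                T = 2.5\,\text{yr} \Rightarrow 2^{4} = 16\times
% i.e. half the transistor growth from a seemingly small slip in cadence
```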

A law is a law because it is observed as such (e.g., the law of momentum). When the law is no longer observed, or something else removes it from being "real," that is different; but as it stands today, this "law" is still observed as a matter of course. Keep in mind that when Mr. Moore first made the observation it was based on a yearly cycle; then it became two years for quite a long time, and now it is every 2.5 years.

Quantum computing may not have the same issues, but the vast majority of things we currently do (BIOS, OS, software/hardware) are not at all designed for it. That will likely become a reality in the not-so-distant future, as the various makers see that the cost is well worth the effort: a fraction of the financial, power, and cooling cost, with magnitudes more performance given back in return.

---------------------------------------------

Will be "great" when they can stop "emulating" things in software and go back to doing it all in hardware while still reducing current power and temperature "costs" Nv has gotten great at forcing via their proprietary tech to do as much as possible via software layers (driver side) where AMD has unfortunately not figured out a way to maintain the hardware layer and reduce power consumption... time will tell who eventually "wins".

Maybe at some point they will put aside their differences and team up, where the pro market gets certain GPUs and the consumer side gets others, all with pretty much the same "features." This proprietary nonsense hinders more than it helps, IMO.

-------------------------------------

Different analogy: a four-banger engine racing on high-speed tracks will only get so far because it doesn't have the "monster" engine required, and an eight-banger can only do so much when it is so hungry for fuel that it causes thermal issues or loses "nimbleness" in the tight corners. Neither is obviously the "best." From the pure performance side of the equation, hardware is easiest to optimize for raw performance; but from the efficiency side of the coin, software layers done intelligently can "overcome" performance deficits (to a point) while reducing power needs.

Odd how AMD has not figured out how to shut down various hardware sections when they are not in use and only power them up, via clock gating, when they are needed; or, for that matter, why NV did not "go back to the books" to do this: put the fat back in but turn off what is not needed to keep power and temps in check. Maybe Intel will do exactly that and beat them both by having it all in hardware but only powering up what is needed, when it is needed, so it can be very lean at next to no power and become a veritable flamethrower when fully powered up ^.^
 
Phones already have 8 cores. So thinking that parallelism is a PC thing doesn't reflect reality.

Further, GPU performance has been about adding cores vs MHz for well over a decade.

Unlike CPUs, GPUs and their programming APIs are specifically designed to scale to the available number of cores whenever possible, so there is no issue of programmers having to manage that parallel programming logic themselves as they do on a normal CPU.

So long as the things that set NVIDIA apart are not freely licensed as industry standards, they will remain gimmick features. And, being a Linux user, NVIDIA's lack of a decent open source driver means their performance doesn't matter to me or impact my buying decisions at all.

The reason GPUs are about adding cores is that graphics-related tasks are easily parallelized: you can split the work across as many "cores" as you want, while many tasks traditionally handled by CPUs, such as pathfinding, just can't be broken down and processed beyond one thread.
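A small kernel makes the contrast concrete (a toy sketch of my own, not from any engine): every pixel below depends only on its own coordinates, so the work splits across thousands of threads for free, whereas something like A* pathfinding expands each node based on the node chosen in the previous step and resists that kind of splitting.

```cuda
#include <cuda_runtime.h>

// Toy per-pixel kernel: each output value depends only on its own (x, y),
// so millions of pixels can be computed completely independently -- add
// more cores and the same code simply finishes sooner.
__global__ void shade(float* out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        out[y * width + x] = (float)(x ^ y) / 255.0f;  // arbitrary pattern
    }
}
```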
 
Phones already have 8 cores. So thinking that parallelism is a PC thing doesn't reflect reality.

Parallelism, in regard to computing in general, affects anything with a processor of any type, IMO; we can mince words, I suppose.

The problem is that most of these phones "saying" they have 8 cores is often marketing BS, in a different fashion than when AMD claimed a higher core count for their FX chips than they actually had.

A "full fat" core or "X cores at one speed and X cores at slower speed" is the same thing in my books.
8 cores would mean every single one of those cores are identical in every fashion. Most (if not all) phones have X cores that are certain amount of cache certain speed and the other ones are different size cache and speed
(i.e big.LITTLE)

Hyperthreading is a different ball game; thankfully AMD was not a douche this time (from what I have seen) by claiming their 8-core was a 16-core because of hyperthreading :D
 
Does he realize the ray tracing extensions (DXR) are part of DX12 and can be openly supported by AMD?

It will be an interesting choice to make:

Drive toward VR optimization
Drive toward traditional raster optimization
Drive toward ray tracing
Drive toward AI
Drive toward heterogeneous memory apps

Each is a unique path for transistor allotment. You can't focus on all paths, therefore you can't cater to all markets.
 
Transistor density is obviously closing in on the ceiling fast, but GPUs, being so parallel, won't be affected as much as CPUs, which still have to (and always will have to) run a bunch of single-threaded processes.

I think the real winner in the GPU race at that point won't be the one with the most gimmicky stuff like DLSS, but the one who does the best job "gluing" chips together à la Infinity Fabric.
 
If they had the same base silicon for the cards and said, "This card is optimized for VR and gaming, this card is optimized for raster, ray tracing, and AI, and this card is optimized for AI and hetero memory apps"...

It... oh wait, short of VR they already do that. Never mind.
 
The article is talking about GPUs, not CPUs. My comment was in response to the prior commenter, who referenced phones as an example of where extra cores couldn't be added, implying that they don't have the same number of cores you can find in top-end PC configurations, which I pointed out was a false statement. 8-core ARM CPU configurations are not uncommon. They will often be heterogeneous, but it is hardly "marketing BS" to say they have 8 cores. And those numbers are completely separate from the GPU core count, which is always much larger than a CPU core count.

We're not talking about CPUs here, or the coding issues that come with them.

And if you want to rant about semantics, about who is misleading whom regarding core counts and multi-threading, you'll have a hard time making a case there. What defines a CPU "core" is not so clear once you strip away the physical object you can hold in your hand. Does a CPU core consist of an integer unit and a floating-point unit? Internally they are independent, and for a long time CPUs didn't even come with floating-point execution units; they were coprocessors at best. The floating-point hardware in the FX chips is what was being shared between cores, and that is what you are implying AMD used to mislead consumers. At the same time, Intel was marketing Hyper-Threading, which at the time was orders of magnitude further from SMP than what AMD was doing. So you'd have AMD market their chips the same way as Intel's even though they weren't doing anything close to the same thing?

There is always going to be some fuzziness when trying to market technologically complex features to the general public. Even more so when that public has been fed marketing BS terms their entire lives and has no idea how the things they buy work, while at the same time you're competing against another company that measures and defines things differently than you do.

So yes, hyperthreading is a different ballgame compared to multi-core, but there's a big difference between misleading consumers and technology no longer fitting established definitions that were made before that technology existed.

Even today, multi-core CPUs still share many internal resources that a true multi-chip SMP setup would not. Are both Intel and AMD pulling marketing BS on the public? No. Are they playing marketing games when they talk about their 7 nm / 10 nm die tech? Not really, because a fully proper description can't be reduced to one number; the feature sizes aren't uniform, and focusing on one number doesn't give you an overall picture. Should they describe it by the thinnest feature anywhere on the wafer? What if some transistors are done at 7 nm and some are larger? The tech is too complex to market, so the marketing verbiage from every company producing it can be analyzed and torn apart this way. But anyone who actually cares should know enough to tell the difference and not be thrown off by generalizations that can't be described to adequate satisfaction without completely explaining the ins and outs of CPU hardware design.
 
If they had the same base silicon for the cards and said, "This card is optimized for VR and gaming, this card is optimized for raster, ray tracing, and AI, and this card is optimized for AI and hetero memory apps"...

It... oh wait, short of VR they already do that. Never mind.

AMD does video processing well with heterogeneous memory access (large data sets).
NVIDIA does AI well with Tensor cores.
NVIDIA is focusing on RTX instead of rasterizing, which may become their Achilles' heel if they start losing on raw speed.

The big question mark is RTX's future and how it performs... Remember how focused they were on PhysX, and look where that went? Then the focus was on 3D monitors... again, it fizzled.

Maybe AMD will have an answer to DX12 RTX, maybe not. Maybe the tech isn't ready for prime time. Maybe people aren't willing to shell out for RTX due to poor performance. If that's the case (and that is a BIG IF), then AMD might build something that is just as fast at rasterizing as NVIDIA's fastest offering (again, a big IF). Despite the cards being out of stock, people seem to be taking issue with the additional cost associated with that big unknown.
 
The reason GPUs are about adding cores is that graphics-related tasks are easily parallelized: you can split the work across as many "cores" as you want, while many tasks traditionally handled by CPUs, such as pathfinding, just can't be broken down and processed beyond one thread.

The reason is the same for both; it just lends itself much more to the type of operations GPUs were doing at the time than to CPUs.

The reason we add more cores instead of frequency is heat. We can't cool such a tiny surface area down fast enough, and these circuits see massive temperature fluctuations and the associated mechanical stress. Further, as density increases, we lose physical access to the circuits that need cooling, and all the fun quantum effects that begin to impact circuit function at such tiny sizes (yay, quantum electron tunneling) change with temperature. Since there's been no great breakthrough on the room-temperature superconductor front, and no real breakthrough in any other method of cooling at the microscopic level, the only real option is parallelization.

We'll need to see breakthroughs in optical computing or microscopic cooling tech before we see a return to the frequency push. Any other increase in frequency for CPUs or GPUs at the 7 nm / 10 nm level will come at the expense of parallel performance (turning other cores off to boost single-core frequency).
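The usual first-order arithmetic behind that trade-off (the textbook dynamic-power relation, not anyone's vendor data):

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^2 \, f
% pushing f higher usually means raising V too, so power climbs roughly with
% f^3 near the top of the curve; doubling cores at fixed V and f costs ~2x
% power for ~2x throughput (when the workload scales), hence wide over fast
```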

The article was pretty much saying something along those lines: that manufacturers (of GPUs, and even CPUs) will begin to reach parity in hardware tech. Where I split from the article, though, is that it seems to suggest software will be the deciding factor between manufacturers at that point, while I think it completely glosses over the importance of how the limits of hardware tech are implemented rather than just how they are utilized. In other words, Intel and AMD may come out with CPUs with the exact same instruction set and the same lithography tech, but have two completely different performing products. The same would be true for NVIDIA and AMD. In that sense, the hardware between manufacturers will never reach parity; they'll just be closer than they currently are.

Not that software won't be important as well; I just don't see it being a shoo-in for success. I buy my hardware based on what works best with the software that's currently released, not on software that may be written in the future. Though I'm not in the same boat as Windows users, since for me that statement covers not only optional games and software but the required hardware drivers as well. So NVIDIA can put out every new buzzword graphics tech they want, but it won't lead me to choose them over AMD so long as their software is limited to whatever hardware driver (and the associated kernel you have to link it against) they've decided to support. Superior hardware won't even factor into that decision; if I have to wait for a closed-source driver, it's a no-go. People like me don't determine the market, though; people like those on this forum don't even determine the market. It's the people who don't even know who makes their chipsets that determine the market, and they'll buy whatever is shoved down their throats, something determined by business dealings that almost never have anything to do with better tech or performance. So the debate over parity between manufacturers is kind of moot. It will be, and has been, a business contest that leverages far more than just the product they're selling.
 
My understanding is that around Maxwell, NVIDIA deployed some advanced internal hardware compression that helped reduce the data that needed crunching. If AMD has something along those lines, I've never heard it referenced.

If picture quality is ~98% but performance is +30%, I think it's a good deal.
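For what it's worth, the feature usually pointed to is delta color compression, and as far as I know it is lossless on the blocks it manages to compress, so it's a bandwidth saving rather than a quality trade. A toy sketch of the general idea (my own illustration of the concept, nothing like the real fixed-rate hardware format): store one anchor value per block plus small per-pixel deltas, and fall back to raw storage when the deltas don't fit.

```cuda
// Toy delta compression for an 8-pixel run of one color channel.
// Conceptual only: real GPU delta color compression is a fixed-rate
// hardware scheme over 2D tiles, not this byte-level format.
#include <cstdint>
#include <cstdio>

// Pack 8 bytes as [anchor][eight 4-bit signed deltas] = 5 bytes.
// Returns true if the deltas fit; otherwise the caller stores raw pixels.
bool compress8(const uint8_t in[8], uint8_t out[5]) {
    out[0] = in[0];                                    // anchor value
    for (int i = 0; i < 8; ++i) {
        int delta = (int)in[i] - (int)in[0];
        if (delta < -8 || delta > 7) return false;     // delta too large, bail
        uint8_t nibble = (uint8_t)(delta & 0xF);
        if (i % 2 == 0) out[1 + i / 2]  = nibble;      // low nibble
        else            out[1 + i / 2] |= nibble << 4; // high nibble
    }
    return true;
}

int main() {
    uint8_t pixels[8] = {100, 101, 99, 100, 102, 100, 98, 100};
    uint8_t packed[5];
    printf("compressible: %s\n", compress8(pixels, packed) ? "yes" : "no");
    return 0;
}
```

The same principle scales up: smooth gradients and flat areas compress well, noisy content falls back to raw storage, and the win shows up as reduced memory bandwidth.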
 
Isn't there a significant amount of IP owned by both AMD and NVIDIA that would prevent them from ever having very similar designs?
 
My understanding is that around Maxwell, NVIDIA deployed some advanced internal hardware compression that helped reduce the data that needed crunching. If AMD has something along those lines, I've never heard it referenced.

If picture quality is ~98% but performance is +30%, I think it's a good deal.

This isn't really something new in GPUs. It's been a thing since consumer GPUs have existed, starting with the first Voodoo, and it's how the Voodoo was able to deliver the graphics quality it did with what little compute it actually had.
 
Moore's Law & Order GPU > Moore's Law & Order CPU

better cast, more drama, "fresh" cinematography, etc...
 
There's a bunch of smart folks in this thread. I've read every word. I like to think I'm a smart guy but after reading what I have to this point, I am humbled.
 
So, without reading anything, this is implying that AMD's only chance to catch NVIDIA at this point is for both of them to reach the point where the silicon is the literal bottleneck in their processes.
 
So, without reading anything, this is implying that AMD's only chance to catch NVIDIA at this point is for both of them to reach the point where the silicon is the literal bottleneck in their processes.

It sounds like the video thinks the only way for AMD to compete is to wait for that to become the default, but then it insinuates that NVIDIA will have every developer in their pocket and somehow lock AMD out.
 
AMD does video processing well with heterogeneous memory access (large data sets).
NVIDIA does AI well with Tensor cores.
NVIDIA is focusing on RTX instead of rasterizing, which may become their Achilles' heel if they start losing on raw speed.

The big question mark is RTX's future and how it performs... Remember how focused they were on PhysX, and look where that went? Then the focus was on 3D monitors... again, it fizzled.

Maybe AMD will have an answer to DX12 RTX, maybe not. Maybe the tech isn't ready for prime time. Maybe people aren't willing to shell out for RTX due to poor performance. If that's the case (and that is a BIG IF), then AMD might build something that is just as fast at rasterizing as NVIDIA's fastest offering (again, a big IF). Despite the cards being out of stock, people seem to be taking issue with the additional cost associated with that big unknown.
NVIDIA is focusing on ray tracing for one reason: performance balance. The reality is that more rasterization performance is useless if no CPU in the world can keep up with it. Just look at the RTX 2080 Ti itself: even at 1440p, the likes of an 8700K at 4.8 GHz becomes the bottleneck. So they need to introduce something that hits performance really hard to justify the existence of high-end GPUs, not just now but in the future. Back in 2010, when even mid-range GPUs were becoming powerful enough to handle 1080p, what did AMD do? They pushed Eyefinity so it looked like high-end GPUs were still needed.
 
My understanding is that around Maxwell, NVIDIA deployed some advanced internal hardware compression that helped reduce the data that needed crunching. If AMD has something along those lines, I've never heard it referenced.

If picture quality is ~98% but performance is +30%, I think it's a good deal.

NVIDIA has started to leverage a tile-based design. It isn't solely about compression, but about removing unnecessary work from the pipeline.

https://www.realworldtech.com/tile-based-rasterization-nvidia-gpus/
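A rough sketch of the binning idea behind a tiled approach (purely a conceptual illustration on the host side, not how the hardware actually implements it): triangles are first sorted into screen-space tiles, and each tile is then rasterized with its chunk of the framebuffer kept in on-chip storage, so work and memory traffic outside a triangle's tiles are never generated.

```cuda
// Conceptual tile binning (host-side code in a .cu file; illustration only).
#include <algorithm>
#include <vector>

struct Tri { float x0, y0, x1, y1, x2, y2; };   // screen-space vertices

constexpr int TILE = 16;  // assumed 16x16-pixel tiles for this sketch

// For each triangle, append its index to every tile its bounding box touches.
std::vector<std::vector<int>> binTriangles(const std::vector<Tri>& tris,
                                           int width, int height) {
    const int tilesX = (width  + TILE - 1) / TILE;
    const int tilesY = (height + TILE - 1) / TILE;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri& t = tris[i];
        // Clamp the triangle's bounding box to the tile grid.
        int minTx = std::max(0, (int)std::min({t.x0, t.x1, t.x2}) / TILE);
        int maxTx = std::min(tilesX - 1, (int)std::max({t.x0, t.x1, t.x2}) / TILE);
        int minTy = std::max(0, (int)std::min({t.y0, t.y1, t.y2}) / TILE);
        int maxTy = std::min(tilesY - 1, (int)std::max({t.y0, t.y1, t.y2}) / TILE);
        for (int ty = minTy; ty <= maxTy; ++ty)
            for (int tx = minTx; tx <= maxTx; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;  // each bin is later shaded with its tile kept on-chip
}
```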
 
Nvidia focussing on Ray Tracing for one reason: performance balance. The reality is more rasterization performance is useless if there is no CPU in the world can keep up with such fast performance. Just look at RTX2080Ti itself. Even at 1440p the likes of 8700K at 4.8Ghz becoming the bottleneck factor. So they need to introduce something that can hit performance real hard to justify the existence of high end GPU. not just now but also in the future. Back in 2010 when even mid range GPU starts becoming powerful enough to handle 1080p what did AMD do? They push eyefinity so it looks like high end GPU is still needed.

Maybe. Five years ago a 7970 was all you needed for 1080p gaming at 60 fps and max settings. That isn't true anymore for the same settings, as games have become more complex. So a fast raster engine today would be merely an acceptable one tomorrow. VR is also growing in complexity by leaps and bounds, as is the resolution. So that's another reason to pour into raster improvements.

Add to this the fact that NVIDIA's RTX is rumored to give you maybe 30-60 fps at 1080p. That screams quick obsolescence. Do you think RTX will be useful on next-gen games that use ray tracing? That means you'll need to upgrade in two years, and $600/year is a lot to swallow. I paid $300 for my 7970 and it's over five years old now. While it's getting very long in the tooth, that breaks down to $60/year, an order of magnitude cheaper.

Others have said it, and I agree: the 2080 Ti/2080/2070 RTX cards are an interim solution only. The fact that they released a Ti model so early suggests that 7 nm will come quicker than expected.
 
Moore's law, which said transistor count in a given area doubles every two years, has been dead for a while now. It started slowing around 2005.

And there is a power wall that has already been hit. Dennard scaling held that power density stays roughly constant as transistors shrink; it broke down because leakage current and threshold voltage do not scale with transistor size, so power density now rises with each shrink. That is what has kept transistor switching frequencies around 4 GHz since about 2006.
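The classic scaling relations behind that, in their simplified textbook form:

```latex
% Ideal Dennard scaling by factor k: dimensions, voltage, capacitance ~ 1/k,
% frequency ~ k, so power per transistor scales as
P \propto C V^2 f \;\rightarrow\; \tfrac{1}{k}\cdot\tfrac{1}{k^2}\cdot k = \tfrac{1}{k^2}
% Area also shrinks by 1/k^2, so power density stays constant -- until V and
% the threshold voltage stop scaling, at which point density outruns the
% power savings and the clock wall appears.
```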

So, another guy with a crystal ball? Performance parity? For that you would need IPC parity (and identical transistor counts?), which isn't really an apples-to-apples comparison, since the designs differ in the logic that makes up the circuits...
 