AMD to reveal next-gen ‘Nvidia killer’ graphics card at CES?

Which Nvidia card will AMD be killing in 2020? The over-a-year-old 2080 Ti, or some as-yet-unknown Ampere flagship?

My guess is neither and we’re just back in the hype stage of AMD disappointment syndrome.
Maybe another, pricier Nvidia disappointment syndrome will occur again. Folks in general, among other things, were very disappointed in Turing - not to mention being even more disappointed that AMD did not have a good product to go up against it until lately. AMD's 57xx line was a good counter, the 55xx -> meh -> basically, pick the best deal between the two.

https://www.wepc.com/news/amd-nvidia-killer-navi-23/

More recent news on the Nvidia killer card
 
For the last 10 years, the only company to release a card that is significantly faster than the previous top-end solution (from any company) is Nvidia.

AMD doesn't shoot for the stars, they aim for what's already on the market, and often fall short.
 
For the last 10 years, the only company to release a card that is significantly faster than the previous top-end solution (from any company) is Nvidia.

AMD doesn't shoot for the stars, they aim for what's already on the market, and often fall short.
Not at all -> the severely underclocked 7970, when it came out, pretty much decimated the GTX 580 in a number of games; only SLI or dual-chip solutions like the GTX 590 could beat it in general.
The GTX 680 (the next generation for Nvidia) was quickly countered by upping the clocks, and over time it too was decimated by the 7970. That was late 2011/early 2012 - so it's no ground for reasoning and has zero bearing on what AMD or Nvidia will release next.
 
For the last 10 years, the only company to release a card that is significantly faster than the previous top-end solution (from any company) is Nvidia.

AMD doesn't shoot for the stars, they aim for what's already on the market, and often fall short.

But conditions are now much more favorable for AMD releasing such a card, than just about any time since the 7970. AMD are in a better position both financially and technologically.

I really expect AMD to release a big Navi Chip with at least 64 CUs, which should make a decent run at the 2080Ti.

But it is also possible they go for it this time, and bring out an 80 CU monster that spanks the 2080Ti.

Who wouldn't love it, if that happened?

It's completely possible this time.
 
But conditions are now much more favorable for AMD releasing such a card, than just about any other time in the last 10 years. AMD are in a better position both financially and technologically.

I really expect AMD to release a big Navi Chip with at least 64 CUs, which should make a decent run at the 2080Ti.

But it is also possible they go for it this time, and bring out an 80 CU monster that spanks the 2080Ti.

Who wouldn't love it, if that happened?

It's completely possible this time.
I don't see a 64 CU or 80 CU part, due to power, unless they get the efficiency better. 7nm+ should help, RDNA2 hopefully will too, and then HBM2E for the higher-CU versions. A combination, or all of the above, may make a very decent, competitive card. HBM2E costs and availability should be much better than in the past, packaging also, but I still think it will carry a premium cost, plus add in the more advanced node. Frankly, I think a 60-64 CU version with GDDR6 will beat a 2080 Ti, with a higher-end HBM2E version with 64 CUs for HPC, professional use, and a Titan equivalent, which can be repositioned to compete against Ampere if needed. Pricing could go a couple of ways:
  • Beat the 2080 Ti and price it at around $899, meaning AMD has also moved its pricing higher
  • Make it their first uber card and price it between the Ti and the Titan
 
I don't see a 64 CU or 80 CU part, due to power, unless they get the efficiency better. 7nm+ should help, RDNA2 hopefully will too, and then HBM2E for the higher-CU versions. A combination, or all of the above, may make a very decent, competitive card. HBM2E costs and availability should be much better than in the past, packaging also, but I still think it will carry a premium cost, plus add in the more advanced node. Frankly, I think a 60-64 CU version with GDDR6 will beat a 2080 Ti, with a higher-end HBM2E version with 64 CUs for HPC, professional use, and a Titan equivalent, which can be repositioned to compete against Ampere if needed. Pricing could go a couple of ways:
  • Beat the 2080 Ti and price it at around $899, meaning AMD has also moved its pricing higher
  • Make it their first uber card and price it between the Ti and the Titan

Problem with HBM is it puts additional heat right next to an already hot die... I hope they stick with GDDR6.
 
Problem with HBM is it puts additional heat right next to an already hot die... I hope they stick with GDDR6.
I always thought it was the other way around, with the GPU warming up the HBM. Power-wise, HBM2E is rather small, as in around 5 W, roughly half that of GDDR6.

https://www.eeweb.com/profile/schar...-memory-ready-for-ai-primetime-hbm2e-vs-gddr6

For sure I think both AMD and Nvidia will have HBM2E on their AI cards. For gaming cards, maybe AMD. Maybe the question is whether AMD will have a decent cooler? A liquid-cooled edition? The Radeon Vega 64 LC edition effectively kept both the GPU and HBM2 cool even when pumping out a ridiculous 400+ watts.
 
I always thought it was the other way around, with the GPU warming up the HBM. Power-wise, HBM2E is rather small, as in around 5 W, roughly half that of GDDR6.

https://www.eeweb.com/profile/schar...-memory-ready-for-ai-primetime-hbm2e-vs-gddr6

For sure I think both AMD and Nvidia will have HBM2E on their AI cards. For gaming cards, maybe AMD. Maybe the question is whether AMD will have a decent cooler? A liquid-cooled edition? The Radeon Vega 64 LC edition effectively kept both the GPU and HBM2 cool even when pumping out a ridiculous 400+ watts.

That makes more sense, 5W isn’t a lot.

I do know it caused issues with the Radeon VII with height differences between the die and HBM. I am sure that can be engineered around... I am just not as sold on it as some people.
 
That makes more sense, 5W isn’t a lot.

I do know it caused issues with the Radeon VII with height differences between the die and HBM. I am sure that can be engineered around... I am just not as sold on it as some people.
Yes, it was kinda cheap on AMD's part to rely on a thermal pad for the height differences; the original Vega tolerances were right on. Looks like it was a cost-cutting decision on AMD's end.
 
RDNA is basically at parity in performance, power use and price in the tiers it competes at

They're at least one node ahead; Navi is the second 7nm GPU release, so they've even had time to reconcile any issues that could lead to undue inefficiencies. And yet they've only made it into striking distance with Nvidia's aging parts.

no matter how many NVDA shares you have.

This is trolling.

Now, those automatically assuming Nvidia will just blow AMD away may need to check if they are delusional. Until the pedal hits the metal, as in released, tested products, it is just a guessing game. AMD has been setting a very rapid pace, and companies doing that usually do it with all their products - wait and see, while of course debates based on scant second-hand reports go on endlessly.

Nvidia builds the largest and most complex dies in the industry, generation after generation. AMD builds large, complex GPU dies that disappoint, generation after generation.

Any assumption at this point is a guess, but it's clear that at very best, AMD might approach parity. Might.

Not at all -> the severely underclocked 7970, when it came out, pretty much decimated the GTX 580 in a number of games; only SLI or dual-chip solutions like the GTX 590 could beat it in general.
The GTX 680 (the next generation for Nvidia) was quickly countered by upping the clocks, and over time it too was decimated by the 7970. That was late 2011/early 2012 - so it's no ground for reasoning and has zero bearing on what AMD or Nvidia will release next.

The HD7970 should have 'decimated' the GTX580, as it came well after the GTX580. The HD6970 competed directly with the GTX580, and was in turn 'decimated' by the Nvidia part. Yes, I was there, and owned parts from both companies.

When Nvidia released the GTX680, AMD's "high-end" became a joke; massive, loud, hot, for lower performance. It was the last time anyone took AMD's GPUs above the entry level seriously, and AMD has yet to prove that they can compete at process and architecture parity. They remain two years behind.
 
The HD7970 should have 'decimated' the GTX580, as it came well after the GTX580. The HD6970 competed directly with the GTX580, and was in turn 'decimated' by the Nvidia part. Yes, I was there, and owned parts from both companies.

When Nvidia released the GTX680, AMD's "high-end" became a joke; massive, loud, hot, for lower performance. It was the last time anyone took AMD's GPUs above the entry level seriously, and AMD has yet to prove that they can compete at process and architecture parity. They remain two years behind.
Hawaii XT was the last serious contender in the enthusiast space from AMD, and it was killed by the Litecoin craze.
 
Yes, it was kinda cheap on AMD's part to rely on a thermal pad for the height differences; the original Vega tolerances were right on. Looks like it was a cost-cutting decision on AMD's end.

Do we have any information on Nvidia having this problem as well?

It should be a straightforward engineering solution, and hopefully, with respect to HBM, the bill-of-materials costs related to interposer manufacturing and the package-assembly problems will be minimized.

Problem with HBM is it puts additional heat right next to an already hot die... I hope they stick with GDDR6.

As noted above it's really not that bad. Think of using HBM similarly to how AMD is using chiplets for Zen 2, where the arrangement makes cooling even easier. Given that the power consumption and cooling requirements of RAM in general are quite low, with HBM the surface area for heat transfer is significantly enlarged as a byproduct of the design, and a byproduct that would be very difficult to replicate with traditional RAM connected through the PCB.

The only real disadvantage of HBM for graphics (and most compute)* applications is the cost of assembly, and that's directly related to defect rates at the various stages of production and assembly. Get defect rates sufficiently knocked down and unit cost would approach parity with traditional memory configurations.


[*HBM differs by using many more memory bus connections through a silicon interposer than are possible / feasible to run through a PCB, but data rates are within the same order of magnitude as clockspeeds are reduced; this has the benefit of providing greater bandwidth in a smaller package as well as reducing power usage and easing heat removal, but it also has the effect of increasing latency; this isn't an issue for most applications, but for compute workloads that do need low latency, GPU cache memory must be able to assist, or performance will suffer a bit]
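
To put rough numbers on that wide-and-slow versus narrow-and-fast tradeoff, here's a quick back-of-the-envelope sketch in Python. The bus widths and per-pin data rates are illustrative ballpark assumptions, not the specs of any particular card:

```python
# Back-of-the-envelope comparison of a GDDR6-style setup (narrow bus through
# the PCB, very fast per pin) vs an HBM2-style setup (very wide bus through a
# silicon interposer, slower per pin). All figures are illustrative
# assumptions, not the specs of any particular card.

def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

gddr6 = peak_bandwidth_gbs(bus_width_bits=256, gbps_per_pin=14.0)   # e.g. 8 chips x 32-bit
hbm2 = peak_bandwidth_gbs(bus_width_bits=4096, gbps_per_pin=2.0)    # e.g. 4 stacks x 1024-bit

print(f"GDDR6, 256-bit @ 14 Gbps/pin : ~{gddr6:.0f} GB/s")   # ~448 GB/s
print(f"HBM2, 4096-bit @ 2 Gbps/pin  : ~{hbm2:.0f} GB/s")    # ~1024 GB/s
# Wide-and-slow wins on bandwidth and power per bit, at the cost of
# interposer/assembly expense and a little extra latency.
```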
 
Hawaii XT was the last serious contender in the enthusiast space from AMD, and it was killed by the Litecoin craze.

Agreed, but it wasn't taken very seriously by the community because it followed the same track AMD had been on with respect to being hotter and louder, which Nvidia had broken from with the GTX680 before. People were cutting holes in the exhaust grate to try and get the thing to cool down!
 
Agreed, but it wasn't taken very seriously by the community because it followed the same track AMD had been on with respect to being hotter and louder, which Nvidia had broken from with the GTX680 before. People were cutting holes in the exhaust grate to try and get the thing to cool down!

The 290X sold like hotcakes; no one gave a crap about the thermals because it performed. Nvidia eventually came up with an answer for it, and it sold well due to the fact that miners were buying all the 290X cards. The 290X was a great card for me, but I bought at launch; many waited a bit or wanted non-blower cards, and well, they had a hard time getting one due to miner demand. Nvidia cards didn't have that same issue, as they didn't mine very well at all.
 
The 290X sold like hotcakes; no one gave a crap about the thermals because it performed. Nvidia eventually came up with an answer for it, and it sold well due to the fact that miners were buying all the 290X cards. The 290X was a great card for me, but I bought at launch; many waited a bit or wanted non-blower cards, and well, they had a hard time getting one due to miner demand. Nvidia cards didn't have that same issue, as they didn't mine very well at all.

The 290x was definitely a card that competed. Pretty popular too.
 
So if AMD is not using Samsung, for whatever reason, that makes Nvidia look a little bit smarter. My GUESS is AMD will also use Samsung.
TSMC
Taiwan...
Nothing to do with China, except some of the early 3000-series Zen 2s were assembled there. Now it's not done in China. notarat has two 3900Xs, one earlier than the other; the earlier one was made in CN and the other is not... AMD is already on to it ;)
 
Not at all -> the severely underclocked 7970, when it came out, pretty much decimated the GTX 580 in a number of games; only SLI or dual-chip solutions like the GTX 590 could beat it in general.
The GTX 680 (the next generation for Nvidia) was quickly countered by upping the clocks, and over time it too was decimated by the 7970. That was late 2011/early 2012 - so it's no ground for reasoning and has zero bearing on what AMD or Nvidia will release next.

My ref 7970 did 1.3GHz out of the box. If they'd had more samples like that, Nvidia could have been in for some real asspain. It was only barely slower than my DCUII 290X... and the 290X at launch beat even the Titan when that was released, but NO, AMD DIDN'T BEAT NVIDIA FOR 50 YEARS GUYS, lmao, because 6 months doesn't count apparently.
 
My ref 7970 did 1.3GHz out of the box. If they'd had more samples like that, Nvidia could have been in for some real asspain. It was only barely slower than my DCUII 290X... and the 290X at launch beat even the Titan when that was released, but NO, AMD DIDN'T BEAT NVIDIA FOR 50 YEARS GUYS, lmao, because 6 months doesn't count apparently.

If it had been released with something other than a blower cooler, it would have been received a bit better. Given the overall package, I’d never buy that card.
 
This is trolling.
They remain two years behind.

You said it yourself.

5700 is at performance parity; it's not two years behind unless your NVDA shares are blinding you and you can't read a graph. So stop trolling, Mr. 'x years behind'. Doesn't matter what node they use; the end result is they have a card that's the same price or less, it's just as fast, and it uses about the same power.
Sure they need better memory compression but it doesn't hurt the 5700 series.
 
If it had been released with something other than a blower cooler, it would have been received a bit better. Given the overall package, I’d never buy that card.
Yeah, agreed, that blower was loud as all fuck. But it was the first vapour-chamber design and did okay at stock clocks.

In winter...
lol.
It got me to 1.3GHz with headphones on, and for that I'm grateful. Never bothered shimming the Accelero III onto it.
 
You said it yourself.

5700 is at performance parity; it's not two years behind unless your NVDA shares are blinding you and you can't read a graph. So stop trolling, Mr. 'x years behind'. Doesn't matter what node they use; the end result is they have a card that's the same price or less, it's just as fast, and it uses about the same power.
Sure they need better memory compression but it doesn't hurt the 5700 series.

I think the concern here is the rest of the posts in this thread that say "AMD is going to have a competitive card BECAUSE OF THE NODE" and then a few posts later we get "WHO CARES ABOUT THE NODE, THEY'RE COMPETITIVE".

That's the issue. AMD is a node ahead and still can't compete against the top end. Which is pretty much the premise of 90% of this conversation.
 
I think the concern here is the rest of the posts in this thread that say "AMD is going to have a competitive card BECAUSE OF THE NODE" and then a few posts later we get "WHO CARES ABOUT THE NODE, THEY'RE COMPETITIVE".

That's the issue. AMD is a node ahead and still can't compete against the top end. Which is pretty much the premise of 90% of this conversation.
They have not released a design to compete at the high end; it has nothing to do with whether they can, as they clearly can, shown by the 5700s. BUT it's probably not cost-effective due to yields on a newer process - if they had released Big Navi, they definitely could have competed. But would it have been profitable? Or worth it for that tiny few percent of the market at best? Nope. So it makes sense they went for the midrange/performance bracket.
And I have never said they will have an NVIDIA killer... nor that it's because of the node. If they'd run standard GCN on this node, some shitty Polaris rehash #9, Navi would have been Vega level at best. Which is what I was expecting them to do.

That they might be able to get close to the Ti is about all I've said; maybe a hair ahead depending on what you do, and it'll likely use similar power. And of course Ampere will whisk that advantage away.
AMD is still a year or two away from competing at the high end at the same time, and for that they will almost certainly need an MCM solution out before Nvidia.

So just because others are saying that doesn't lump me in with them. They are competitive from a price/perf/power perspective; that's all I care about at this point. If we're talking nodes, which I think is a silly argument, you could use the same argument vs Intel. Does AMD suddenly suck and not have the fastest CPUs in the world because they are a node ahead? Do they have to wait for everyone to catch up on node before we can compare them?
Use what you bought.

Nvidia is still top dog and will be for a while, till the financial incentive is there for AMD to do it. At this point, throwing high-defect-rate chips at a 1-2% chunk of the market for the sake of some halo bullshit market wank is not a sound financial decision for a small GPU division. I'm happy to see them compete where it makes sense.
I also don't think Nvidia will have a smooth ride through the next node either. 7nm isn't a wonder node, power density is a problem with it.
 
I also don't think Nvidia will have a smooth ride through the next node either. 7nm isn't a wonder node, power density is a problem with it.
Not sure why it wouldn’t be smooth. Ever since the debacle on 40nm Nvidia has relied more on architecture than on manufacturing to drive performance gains. They’re more conservative now and spend more time evaluating a new node before bringing it to market.

Ampere is likely making an appearance in the second half of 2020 at which point both Samsung and TSMC 7nm will be quite mature.

Navi’s relatively high power draw is clearly an architecture issue and not due to any fault of 7nm manufacturing.
 
They have not released a design to compete at the high end

They haven't since they bought ATi.

You bring up cards where AMD released their competitor a year after the last Nvidia release -- and for a few months, their card was competitive with Nvidia's last generation. While also being hotter and louder, because that's how AMD rolls.

Now you're trying to compare AMD with a significant node advantage not only not showing performance leadership but also not showing efficiency leadership either.

Jumping a node ahead in order to reach parity with your competitor's older products isn't winning, let alone leading. It's barely keeping up.


And make no mistake (fuck I'm getting tired of repeating this to The Faithful): I'm all for AMD paying more than lip service to the gaming community and kicking Nvidia's teeth in every once in a while like ATi used to do.

I just haven't seen it, and no one here that's honest with themselves would support a claim that AMD's about to do what they've chosen not to do since they bought ATi.
 
They have not released a design to compete at the high end; it has nothing to do with whether they can, as they clearly can, shown by the 5700s. BUT it's probably not cost-effective due to yields on a newer process - if they had released Big Navi, they definitely could have competed. But would it have been profitable? Or worth it for that tiny few percent of the market at best? Nope. So it makes sense they went for the midrange/performance bracket.
And I have never said they will have an NVIDIA killer... nor that it's because of the node. If they'd run standard GCN on this node, some shitty Polaris rehash #9, Navi would have been Vega level at best. Which is what I was expecting them to do.

That they might be able to get close to the Ti is about all I've said; maybe a hair ahead depending on what you do, and it'll likely use similar power. And of course Ampere will whisk that advantage away.
AMD is still a year or two away from competing at the high end at the same time, and for that they will almost certainly need an MCM solution out before Nvidia.

So just because others are saying that doesn't lump me in with them. They are competitive from a price/perf/power perspective; that's all I care about at this point. If we're talking nodes, which I think is a silly argument, you could use the same argument vs Intel. Does AMD suddenly suck and not have the fastest CPUs in the world because they are a node ahead? Do they have to wait for everyone to catch up on node before we can compare them?
Use what you bought.

Nvidia is still top dog and will be for a while, till the financial incentive is there for AMD to do it. At this point, throwing high-defect-rate chips at a 1-2% chunk of the market for the sake of some halo bullshit market wank is not a sound financial decision for a small GPU division. I'm happy to see them compete where it makes sense.
I also don't think Nvidia will have a smooth ride through the next node either. 7nm isn't a wonder node, power density is a problem with it.

Correct me if I missed something or misinterpreted something, BUT..

You're saying you do not care what node they use, as long as they are putting out the same performance, correct?

I see your point, I really do, but if both companies are putting out new cards nearly simultaneously each cycle, and you KNOW the new Nvidia card will be ahead of it for nearly the first year, even if that Nvidia card is expected to be released 2 months after the AMD card, why wouldn't you just wait for the better card?

Second question: you regularly state that HW RT is useless on the Nvidia cards. Do you mean in wasted die size or in another metric? (I think this was you, anyway).
 
I would say, sorta, the 5700/5700 XT kicked some teeth and nudged Nvidia to reduce their card prices some. Go team RED! :D

Something to think about: the world's next fastest supercomputer will use AMD CPUs and GPUs; I guess Nvidia, Intel, IBM . . . couldn't muster the top spot on that one. AMD has the ability, and 2020 should be very interesting from both AMD and Nvidia. Jan 6 is right around the corner for CES in Las Vegas, so before long we will see if AMD kicks some Nvidia ASS or vice versa. Maybe Nvidia will release their next big update on the Shield during CES ;)

Some keep mentioning that 16nm is the same as 12nm -> if that is the case, maybe it was kinda stupid for Nvidia to pay more for it -> in reality it has higher density and power savings, so it's really not the same; maybe not a big boost, but it is better.

With Vega 10 on GF 14nm, AMD went to Vega 20 on TSMC 7nm, which had roughly 6% more transistors (for more FP64, a 4096-bit memory controller, etc.); Vega 20's die only shrank to 68% of Vega 10's size, with the same number of shaders for the full chip. I am pretty sure TSMC 12nm is better than the original GF 14nm. Please think about that and consider:
  • If TU102 (the Titan RTX and 2080 Ti GPU) at 754mm^2 were just shrunk down and roughly followed that ratio, it would be 754mm^2 x 0.68 ≈ 513mm^2 -> a very large chip for 7nm; what kind of yields would that have?
  • To get that 30+% performance increase from the 471mm^2 GP102 die (1080 Ti, Titan Xp) to TU102, plus RT: transistor count went up 1.58x and die size 1.6x for the 2080 Ti GPU over the 1080 Ti GPU
    • It was a huge transistor increase and more shaders -> architecture-wise I am not sure it really improved much over Pascal, besides supporting the Vulkan and DX12 APIs much better, and of course the infamous RT, which has 6 games after over a year, maybe a few more now
  • MY POINT, OPINION:
    • Nvidia will have to increase transistor count once again to improve performance meaningfully
    • Going to 7nm, or even 7nm+ / 7nm EUV, and adding even more transistors will make for a very huge chip, as in over 600mm^2 -> I don't see that happening anytime soon; Nvidia blew everything they had maxing out transistor count and die size on TU102. A meaningful upgrade chip, beyond power savings and a minor bump, would come at a huge cost to produce. Something has to give.
    • Either Nvidia will be very late next year once yields get better, or Nvidia does something very radical such as using multiple chips (very cool! it would be funny to see Nvidia beat AMD to multiple chips)
    • I would expect Nvidia to produce their mid-size 7nm chip with good performance for gaming and keep any larger GPUs for the professional, very high-end market for a while, something they have done several times before
So where does that leave AMD? AMD's GPUs are comparably much smaller and have room to grow: Navi 10, with 2560 shaders, is 251mm^2. The increased density potential of 7nm+, plus RT, should keep it under 425mm^2 for a 4096-shader version and around 500mm^2 for the uber 80 CU version. Maybe I am missing something; no problem, please point it out - just some thoughts, which can be improved upon with good input.
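
For what it's worth, the die-size arithmetic above is easy to sanity-check. The sketch below just applies the 0.68 Vega 10 -> Vega 20 shrink ratio and linear scaling by shader count from the Navi 10 figures quoted above; that's a rough assumption which ignores that I/O and memory controllers don't shrink like logic does:

```python
# Rough die-size extrapolations using only the figures quoted above.
# Linear scaling by shader count is an assumption; pads, memory controllers
# and other I/O do not shrink the way logic does, so treat these as floors.

VEGA_SHRINK = 0.68                  # Vega 10 (14nm) -> Vega 20 (7nm) area ratio

# Hypothetical straight shrink of TU102 (2080 Ti / Titan RTX die) to 7nm
tu102_mm2 = 754
print(f"TU102 shrunk by the Vega ratio: ~{tu102_mm2 * VEGA_SHRINK:.0f} mm^2")   # ~513 mm^2

# Scaling Navi 10 (2560 shaders, 251 mm^2 on 7nm) up by shader count alone
navi10_mm2, navi10_shaders = 251, 2560
for shaders in (4096, 5120):        # 64 CU and 80 CU versions
    est = navi10_mm2 * shaders / navi10_shaders
    print(f"{shaders}-shader Navi: ~{est:.0f} mm^2 before 7nm+ density gains or any RT hardware")
# -> ~402 mm^2 and ~502 mm^2, in line with the <425 / ~500 mm^2 guesses above
```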
 
They haven't since they bought ATi.

You bring up cards where AMD released their competitor a year after the last Nvidia release -- and for a few months, their card was competitive with Nvidia's last generation. While also being hotter and louder, because that's how AMD rolls.

Now you're trying to compare AMD with a significant node advantage not only not showing performance leadership but also not showing efficiency leadership either.

Jumping a node ahead in order to reach parity with your competitor's older products isn't winning, let alone leading. It's barely keeping up.


And make no mistake (fuck I'm getting tired of repeating this to The Faithful): I'm all for AMD paying more than lip service to the gaming community and kicking Nvidia's teeth in every once in a while like ATi used to do.

I just haven't seen it, and no one here that's honest with themselves would support a claim that AMD's about to do what they've chosen not to do since they bought ATi.
AMD is not the only one providing cards; I had a 290X ASUS CU II, quiet, and it beat the snot out of a 780 during its time of service. Constantly ridiculing the 290X's bad cooler while ignoring all the cards that had great cooling gets old, my friend. Yes, AMD's reference version sucked ass; the rest kicked ass.

Nvidia's well-defined wins over AMD came with Maxwell and Pascal, taking the mobile and OEM markets in a big way with Maxwell and then Pascal, followed by Turing.

Turing maxed out die size and transistor count, with features that are poorly implemented or perform badly, a lot of marketing BS, initial hardware failures, and then a stab with the worst price increase in history; that makes Ampere rather questionable for me. I really think Nvidia is the one in a tight spot now, depending on whether AMD can deliver or not.

As for Ampere, if it were ready, engineering samples out, etc., I would have expected some form of indication, starting with drivers and so on. AMD seems to have some leaks dealing with RDNA2. Still, I have no idea if AMD or Nvidia will be releasing any new card in the first half of next year, but if I had to bet on it, it would be AMD first.
 
AMD is not the only one providing cards; I had a 290X ASUS CU II, quiet, and it beat the snot out of a 780 during its time of service. Constantly ridiculing the 290X's bad cooler while ignoring all the cards that had great cooling gets old, my friend. Yes, AMD's reference version sucked ass; the rest kicked ass.

Nvidia's well-defined wins over AMD came with Maxwell and Pascal, taking the mobile and OEM markets in a big way with Maxwell and then Pascal, followed by Turing.

Turing maxed out die size and transistor count, with features that are poorly implemented or perform badly, a lot of marketing BS, initial hardware failures, and then a stab with the worst price increase in history; that makes Ampere rather questionable for me. I really think Nvidia is the one in a tight spot now, depending on whether AMD can deliver or not.

As for Ampere, if it were ready, engineering samples out, etc., I would have expected some form of indication, starting with drivers and so on. AMD seems to have some leaks dealing with RDNA2. Still, I have no idea if AMD or Nvidia will be releasing any new card in the first half of next year, but if I had to bet on it, it would be AMD first.

This is going way back, but I believe the non-reference coolers took way too long. Also, from a marketing perspective, the damage was already done... the general public associated the 290X with hot, loud, 95C beasts. The vast majority of people go with what they hear, and once the initial impression is set it's really hard to change.
 
They haven't since they bought ATi.

You bring up cards where AMD released their competitor a year after the last Nvidia release -- and for a few months, their card was competitive with Nvidia's last generation.

You are making yourself look biased with comments like this. Are you claiming AMD releasing a faster part than NVidia only counts if they release it simultaneously?

HD7970 was faster than NVidia's best at the time. I think a great many people would be quite happy if Big Navi is faster than the 2080Ti, even if it is only for a few months, while you just think it doesn't count.


Now you're trying to compare AMD with a significant node advantage not only not showing performance leadership but also not showing efficiency leadership either.
Jumping a node ahead in order to reach parity with your competitor's older products isn't winning, let alone leading. It's barely keeping up.

A node shift isn't a miracle. It looks like most of AMD's improvement is in its new architecture, not the new node, but yes, some comes from that. If it were only the node, we wouldn't have seen power consumption like this for Radeon 7:
[chart: peak power consumption comparison]


Radeon 7 jumped to the new node, but power issues continued to plague it. So it seems that Navi architecture is in a large way responsible for AMD getting power under control.
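
Since Radeon 7 and Navi 10 are both TSMC 7nm parts, a crude perf-per-watt comparison makes the architecture point. The board-power and relative-performance numbers below are approximate launch-review ballpark figures, used only for illustration:

```python
# Crude perf/W comparison of two AMD parts on the same TSMC 7nm node:
# Radeon VII (Vega 20, GCN) vs RX 5700 XT (Navi 10, RDNA). Board power and
# the relative performance index are approximate launch-review ballparks.

vega20_perf, vega20_power = 1.00, 300    # Radeon VII as the baseline
navi10_perf, navi10_power = 0.95, 225    # 5700 XT: roughly 5% slower, ~75 W less

ratio = (navi10_perf / navi10_power) / (vega20_perf / vega20_power)
print(f"RDNA perf/W vs Vega 20 on the same node: ~{ratio:.2f}x")   # ~1.27x
# Same foundry, same node, ~25% less power for similar performance:
# most of that gap has to come from the architecture, not from 7nm itself.
```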
 
Some keep mentioning that 16nm is the same as 12nm -> if that is the case, maybe it was kinda stupid for Nvidia to pay more for it -> in reality it has higher density and power savings, so it's really not the same; maybe not a big boost, but it is better.

"12nm" is the same density, but a more refined stepping. Back in the old days, Process names didn't get a + or a new node name, when the density didn't change. But you could see that later steppings had improvement in lower power, and better overclocking. NVidias 12nm is a very refined, high yield (lower cost), slightly better power and clocking version on the 16nm process (same density). It was the best choice for NVidia, especially since they were coming to market sooner than AMD's 7nm parts. It probably would have cost NVidia a lot more money to deliver it's RTX 20 parts on 7nm than on 12nm, and perhaps even performance (clock speed) advantages over less mature 7nm. Look at how Intel 14nm out clocks everyone, that's a very mature process benefit.

OTOH, 7nm TSMC was the best choice for AMD. Coming to market later, when 7nm was a bit more mature, and likely offered performance advantages over GF 14nm/12nm.

Both AMD and NVidia are run by sharp people these days, and both made the best process choice for their particular situation.


  • MY POINT, OPINION:
    • Nvidia will have to increase transistor count once again to improve performance meaningfully
    • Going to 7nm, or even 7nm+ / 7nm EUV, and adding even more transistors will make for a very huge chip, as in over 600mm^2 -> I don't see that happening anytime soon; Nvidia blew everything they had maxing out transistor count and die size on TU102. A meaningful upgrade chip, beyond power savings and a minor bump, would come at a huge cost to produce. Something has to give.
    • Either Nvidia will be very late next year once yields get better, or Nvidia does something very radical such as using multiple chips (very cool! it would be funny to see Nvidia beat AMD to multiple chips)
    • I would expect Nvidia to produce their mid-size 7nm chip with good performance for gaming and keep any larger GPUs for the professional, very high-end market for a while, something they have done several times before
So where does that leave AMD? AMD's GPUs are comparably much smaller and have room to grow: Navi 10, with 2560 shaders, is 251mm^2. The increased density potential of 7nm+, plus RT, should keep it under 425mm^2 for a 4096-shader version and around 500mm^2 for the uber 80 CU version. Maybe I am missing something; no problem, please point it out - just some thoughts, which can be improved upon with good input.


I get closer to 470 mm^2 on a 2080Ti die shrink, but I think you have the AMD number about right.

Though you seem to be a tad more optimistic on AMD performance than warranted, when assuming that the 64 CU (4096 shader) version would beat a 2080Ti.

2070 Super and 5700 XT are both 2560 Shader/Core parts, and NVidia has the slight lead.

2080 Ti is a 4352 shader/core part, vs 4096 Navi shader/Cores, so 2080 Ti should come out ahead, unless AMD gets some favorable extra boosts.

Now the 2080Ti part doesn't seem to be gaining the full benefit of all those shaders as you would expect; clock speed is probably kept lower to keep heat and power in check, and memory BW is relatively low compared to the rest of the Turing lineup.

It seems AMD might likewise have to rein in clock speed to keep power in check, and we don't know what their memory solution looks like. A 64 CU part may trade blows with a 2080 Ti if there are more performance tweaks for RDNA2/7nm+, but I would not expect a significant lead.

OTOH, 80 CU is 5120 shaders at about 500mm^2; that would be a clear and significant lead.

When it comes to NVidia's 7nm, I expect they will have the lead again, so the window is small for Big Navi leading, though a win is a win in my books.

NVidia sticking with 12nm longer gave them breathing room to work with both TSMC and Samsung, and they will choose the best of competing processes for their flagship parts. IMO, Samsung is NOT a lock for flagship Ampere. The best process will win.
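
Just to illustrate the shader-count gap I'm talking about, here's the naive ratio math; it assumes performance scales linearly with shaders x clock, which it never quite does:

```python
# Naive scaling sketch: performance ~ shaders x clock. Real scaling is
# sub-linear and depends heavily on memory bandwidth, so treat these ratios
# as rough upper bounds rather than predictions.

tu102_shaders = 4352    # 2080 Ti
navi_64cu = 4096        # hypothetical 64 CU RDNA2 part
navi_80cu = 5120        # hypothetical 80 CU RDNA2 part

r64 = tu102_shaders / navi_64cu     # ~1.06
r80 = navi_80cu / tu102_shaders     # ~1.18

print(f"2080 Ti has {r64:.2f}x the shaders of a 64 CU part; "
      f"the 64 CU part needs ~{(r64 - 1) * 100:.0f}% higher clocks just to match on paper")
print(f"An 80 CU part has {r80:.2f}x the shaders of a 2080 Ti; "
      f"~{(r80 - 1) * 100:.0f}% more raw throughput at equal clocks")
```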
 
"12nm" is the same density, but a more refined stepping. Back in the old days, Process names didn't get a + or a new node name, when the density didn't change. But you could see that later steppings had improvement in lower power, and better overclocking. NVidias 12nm is a very refined, high yield (lower cost), slightly better power and clocking version on the 16nm process (same density). It was the best choice for NVidia, especially since they were coming to market sooner than AMD's 7nm parts. It probably would have cost NVidia a lot more money to deliver it's RTX 20 parts on 7nm than on 12nm, and perhaps even performance (clock speed) advantages over less mature 7nm. Look at how Intel 14nm out clocks everyone, that's a very mature process benefit.

OTOH, 7nm TSMC was the best choice for AMD. Coming to market later, when 7nm was a bit more mature, and likely offered performance advantages over GF 14nm/12nm.

Both AMD and NVidia are run by sharp people these days, and both made the best process choice for their particular situation.





I get closer to 470 mm^2 on a 2080Ti die shrink, but I think you have the AMD number about right.

Though you seem to be a tad more optimistic on AMD performance than warranted, when assuming that the 64 CU (4096 shader) version would beat a 2080Ti.

2070 Super and 5700 XT are both 2560 Shader/Core parts, and NVidia has the slight lead.

2080 Ti is a 4352 shader/core part, vs 4096 Navi shader/Cores, so 2080 Ti should come out ahead, unless AMD gets some favorable extra boosts.

Now the 2080Ti part doesn't seem to be gaining the full benefit of all those shaders as you would expect; clock speed is probably kept lower to keep heat and power in check, and memory BW is relatively low compared to the rest of the Turing lineup.

It seems AMD might likewise have to rein in clock speed to keep power in check, and we don't know what their memory solution looks like. A 64 CU part may trade blows with a 2080 Ti if there are more performance tweaks for RDNA2/7nm+, but I would not expect a significant lead.

OTOH, 80 CU is 5120 shaders at about 500mm^2; that would be a clear and significant lead.

When it comes to NVidia's 7nm, I expect they will have the lead again, so the window is small for Big Navi leading, though a win is a win in my books.

NVidia sticking with 12nm longer gave them breathing room to work with both TSMC and Samsung, and they will choose the best of competing processes for their flagship parts. IMO, Samsung is NOT a lock for flagship Ampere. The best process will win.

Great unbiased post.
 
"12nm" is the same density, but a more refined stepping. Back in the old days, Process names didn't get a + or a new node name, when the density didn't change. But you could see that later steppings had improvement in lower power, and better overclocking. NVidias 12nm is a very refined, high yield (lower cost), slightly better power and clocking version on the 16nm process (same density). It was the best choice for NVidia, especially since they were coming to market sooner than AMD's 7nm parts. It probably would have cost NVidia a lot more money to deliver it's RTX 20 parts on 7nm than on 12nm, and perhaps even performance (clock speed) advantages over less mature 7nm. Look at how Intel 14nm out clocks everyone, that's a very mature process benefit.

OTOH, 7nm TSMC was the best choice for AMD. Coming to market later, when 7nm was a bit more mature, and likely offered performance advantages over GF 14nm/12nm.

Both AMD and NVidia are run by sharp people these days, and both made the best process choice for their particular situation.





I get closer to 470 mm^2 on a 2080Ti die shrink, but I think you have the AMD number about right.

Though you seem to be a tad more optimistic on AMD performance than warranted, when assuming that the 64 CU (4096 shader) version would beat a 2080Ti.

2070 Super and 5700 XT are both 2560 Shader/Core parts, and NVidia has the slight lead.

2080 Ti is a 4352 shader/core part, vs 4096 Navi shader/Cores, so 2080 Ti should come out ahead, unless AMD gets some favorable extra boosts.

Now the 2080Ti part doesn't seem to be gaining the full benefit of all those shaders as you would expect; clock speed is probably kept lower to keep heat and power in check, and memory BW is relatively low compared to the rest of the Turing lineup.

It seems AMD might likewise have to rein in clock speed to keep power in check, and we don't know what their memory solution looks like. A 64 CU part may trade blows with a 2080 Ti if there are more performance tweaks for RDNA2/7nm+, but I would not expect a significant lead.

OTOH, 80 CU is 5120 shaders at about 500mm^2; that would be a clear and significant lead.

When it comes to NVidia's 7nm, I expect they will have the lead again, so the window is small for Big Navi leading, though a win is a win in my books.

NVidia sticking with 12nm longer gave them breathing room to work with both TSMC and Samsung, and they will choose the best of competing processes for their flagship parts. IMO, Samsung is NOT a lock for flagship Ampere. The best process will win.
Thanks for input and good points.

One aspect I did not mention, which propelled Nvidia to the stratosphere with Pascal, is clock speed. It's an ace in the hole if Nvidia can once again achieve higher clocks while keeping power reasonable. Higher clocks mean better performance for the same number of transistors. One obvious way is multiple chips; the other is what Nvidia used to do, running the shaders at 2x / double speed, which would be rather complex now with the number of shaders and the added latency. But having the RT cores run at a much higher speed than the other parts of the chip crosses my mind.

As for the 2560 vs 2560 comparison, as in the 5700 XT and 2070 Super: the die size of the Super is 545mm^2 versus the 251mm^2 of the 5700 XT. The Super would have a much better heat-transfer rate, and keeping it cooler allows higher clocks; circuits can be better separated for noise and heat; higher voltage is one possible advantage; and since it is a cut-down chip from a 2080 / 2080 Super, add in additional cache benefits from the added size.
In other words, not a cut-and-dried comparison, but a very good one nonetheless.

The rather small Navi 10 chip is doing quite well overall; AMD was very successful in getting 7nm working right for it. Look at it this way: shrink TU104 down to the same size or close - would it be able to maintain clock speeds? Faster? Slower? How proficient is Nvidia at this node? Anyway, I expect Ampere to add transistors mainly to have a faster product, and there appears to be a potential size wall for doing that. If they have some speed magic, or really do come up with a great multiple-chip solution, then that would be great!
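
To make the heat-transfer point a bit more concrete, here's a rough power-density comparison; the board-power figures are approximate, not all of that power is dissipated in the die itself, and hotspots are ignored, so treat it as a sketch:

```python
# Rough power-density comparison (watts per mm^2 of die). Board power is an
# approximation and not all of it ends up in the GPU die, and hotspots are
# ignored, so this is only a sketch of the cooling argument.

chips = {
    # name: (die area in mm^2, approximate typical board power in W)
    "Navi 10 (RX 5700 XT)":   (251, 225),
    "TU104 (RTX 2070 Super)": (545, 215),
}

for name, (area_mm2, power_w) in chips.items():
    print(f"{name}: ~{power_w / area_mm2:.2f} W/mm^2")
# Navi 10 packs a similar amount of heat into well under half the area,
# which is part of why the big Turing dies are comparatively easy to cool.
```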
 
This is going way back, but I believe the non-reference coolers took way too long. Also, from a marketing perspective, the damage was already done... the general public associated the 290X with hot, loud, 95C beasts. The vast majority of people go with what they hear, and once the initial impression is set it's really hard to change.
Of course, to remove all that heat :D, AMD cards were very long indeed! Who cares, by the way, what people think? Really? People had options, and the better-performing one was the 290X; even today you can use it with good gaming results, around RX 580 speeds. Take the 680, 780, and a host of other Nvidia cards from that era and they are junk today that don't play well with Vulkan or DX12.
 
Of course, to remove all that heat :D, AMD cards were very long indeed! Who cares, by the way, what people think? Really? People had options, and the better-performing one was the 290X; even today you can use it with good gaming results, around RX 580 speeds. Take the 680, 780, and a host of other Nvidia cards from that era and they are junk today that don't play well with Vulkan or DX12.

I can’t tell if you misunderstood me or are making a joke. I was saying the non-reference coolers took too long to become available. They should have launched with them off the bat, and I think AMD would have fared much better.

I personally don’t care what people think, but releasing a very poor-performing blower for the 290X was their “shotgun to the foot”, which AMD likes to do for every launch, tainting the whole line.
 
You are making yourself look biased with comments like this. Are you claiming AMD releasing a faster part than NVidia only counts if they release it simultaneously?

Nope, just that lining up generation to generation, AMD has remained behind overall; even when they've briefly leaped forward into a margin of error in performance, they've had significantly worse overall products in terms of power draw, heat, and noise -- and up until Polaris, condemnable driver support.

For those AMD stock owners / The Faithful, showing AMD support when AMD's GPU game has been at their best meant earplugs and another 100w of heat in the room. At least until third-party solutions arrived, and then it was 125w of extra heat in the room, because they were more efficient at getting the heat away from the GPU ;).

HD7970 was faster than NVidia's best at the time.

Yes, it released against the aging GTX580. If there is one moment that AMD appeared to pull ahead, that was it. I generally don't count that point as again, third-party coolers were slow to come, drivers were ass, and the GTX580 was old and mostly competed with the HD6970, and the GTX580's replacement just embarrassed AMD. I.e., by the time you'd actually want to buy the AMD GPU, better things were on the horizon.

I think a great many people would be quite happy if Big Navi if faster 2080Ti, even if it is only for few months, while you just think it doesn't count.

I'd count it if AMD were a few months after Nvidia. But for nearly all of the 2080Ti's retail life, it will be the top GPU by a wide margin. I'm not going to split hairs on timetables: AMD just getting something competitive out the door would be a win for them period, but if Big Navi doesn't exceed the 2080Ti by 20% across the board and rock the hell out of RT, it's going to get stomped back to irrelevance almost immediately by Ampere.

And that's if they beat Nvidia to market. Let The Faithful throw their Pocket-Protector Luau on the forums, I'm not attacking their self-esteem, but just catching up for a few months really isn't providing real competition to Nvidia.

A node shift isn't a miracle. It looks like most of AMD's improvement is in its new architecture, not the new node, but yes, some comes from that. If it were only the node, we wouldn't have seen power consumption like this for Radeon 7:

We should be clear that yes, AMD did update the architecture, but also that Navi is not a compute-spec'd part either. And as power consumption optimization on GPUs is something that AMD has literally never had success at, we should expect the Radeon VII to pull more power for the same amount of work as it is definitely a more substantial product.

So it seems that Navi architecture is in a large way responsible for AMD getting power under control.

...except that it's round two for them on 7nm, and they're still only reaching parity with their competitor's "12nm" part. There's a reason AMD doesn't get the mobile GPU design wins.
 
I can’t tell if you misunderstood me or are making a joke. I was saying the non-reference coolers took too long to become available. They should have launched with them off the bat, and I think AMD would have fared much better.

I personally don’t care what people think, but releasing a very poor-performing blower for the 290X was their “shotgun to the foot”, which AMD likes to do for every launch, tainting the whole line.
:D means kidding or joking. Yes, you're right, the coolers were long, and some folks had issues fitting a non-reference cooler in their cases -> which in fact I did; I had to remove a hard-drive cage for the 290X Asus CU II to fit. Still a whopping long-term-viable card in the end. Most here don't keep cards that long, but many actually do. I usually pass cards down from one system to the next myself, so that 3-5 years of long-term usage comes into play for me.
 
Take the 680, 780, and a host of other Nvidia cards from that era and they are junk today that don't play well with Vulkan or DX12.

Care to back that one up?

I game on a variety of systems that range from Intel integrated up through generations of AMD and Nvidia GPUs, and well, I don't really see a disadvantage here. Older cards are slower, water is wet. Lower the settings and game on.
 
:D means kidding or joking. Yes, you're right, the coolers were long, and some folks had issues fitting a non-reference cooler in their cases -> which in fact I did; I had to remove a hard-drive cage for the 290X Asus CU II to fit. Still a whopping long-term-viable card in the end. Most here don't keep cards that long, but many actually do. I usually pass cards down from one system to the next myself, so that 3-5 years of long-term usage comes into play for me.

On a humorous note, I gifted one of my HD6950s to my brother -- it had the stock blower, so the drive cage on his case had to be 'adjusted' with a hammer before it would fit ;)
 
Nope, just that lining up generation to generation, AMD has remained behind overall; even when they've briefly leaped forward into a margin of error in performance, they've had significantly worse overall products in terms of power draw, heat, and noise -- and up until Polaris, condemnable driver support.

For those AMD stock owners / The Faithful, showing AMD support when AMD's GPU game has been at their best meant earplugs and another 100w of heat in the room. At least until third-party solutions arrived, and then it was 125w of extra heat in the room, because they were more efficient at getting the heat away from the GPU ;).



Yes, it released against the aging GTX580. If there is one moment that AMD appeared to pull ahead, that was it. I generally don't count that point as again, third-party coolers were slow to come, drivers were ass, and the GTX580 was old and mostly competed with the HD6970, and the GTX580's replacement just embarrassed AMD. I.e., by the time you'd actually want to buy the AMD GPU, better things were on the horizon.



I'd count it if AMD were a few months after Nvidia. But for nearly all of the 2080Ti's retail life, it will be the top GPU by a wide margin. I'm not going to split hairs on timetables: AMD just getting something competitive out the door would be a win for them period, but if Big Navi doesn't exceed the 2080Ti by 20% across the board and rock the hell out of RT, it's going to get stomped back to irrelevance almost immediately by Ampere.

And that's if they beat Nvidia to market. Let The Faithful throw their Pocket-Protector Luau on the forums, I'm not attacking their self-esteem, but just catching up for a few months really isn't providing real competition to Nvidia.



We should be clear that yes, AMD did update the architecture, but also that Navi is not a compute-spec'd part either. And as power consumption optimization on GPUs is something that AMD has literally never had success at, we should expect the Radeon VII to pull more power for the same amount of work as it is definitely a more substantial product.



...except that it's round two for them on 7nm, and they're still only reaching parity with their competitor's "12nm" part. There's a reason AMD doesn't get the mobile GPU design wins.
I would say AMD was behind for the Maxwell and Pascal generations; prior to that they were leapfrogging each other. Fermi was a very hot chip; it's not as if Nvidia hasn't had some big losers over the years. Maxwell DX12 and Vulkan performance is pathetic, and while Pascal improved things, it is not stellar either. Turing DX12 and Vulkan performance is outstanding and maybe even better than AMD's in general. What comes next may not reflect previous standings; AMD looks to be on a roll. Nvidia? Turing came with a lot of baggage, mainly huge price hikes.
 