AMD to reveal next-gen ‘Nvidia killer’ graphics card at CES?

Maxwell's DX12 and Vulkan performance is pathetic, and while Pascal improved things, it is not stellar either.

This is what I'd like to see some numbers for; not to refute your statement, but rather, to provide a grounding point of sorts.

My biggest issue with Vulkan and DX12 in general is that there are very few excellent implementations of either. Many / most implementations are slower than DX11. EA / DICE / Frostbite, I'm looking at you.

So we'd want to look at very good implementations, first, and then find a comparison with GPUs from both vendors in comparable classes, but with the latest drivers.


And I think that this ask is both a bit of a reach in terms of feasibility and also applicability, as the potential tests will likely run slowly on all subjects. But if the data is available, it'd be interesting to see, right?

What comes next may not reflect previous standings: AMD looks to be on a roll. Nvidia? Turing came with a lot of baggage, mainly huge price hikes.

That's the thing: AMD doesn't really look to be on a roll in the GPU space. Navi doesn't look any better now than Polaris did at release, and it even had power / BIOS / cooler problems of its own.

Nvidia, meanwhile, has efficient parts on an older node, all the way up to the largest and most complex dies ever made, and not only leads in performance in traditional raster rendering but also leads in RT support.

Yeah, Nvidia's technology push came with a cost -- but that's not at all unprecedented when pushing the envelope, and I'm glad they did it, even as every complaint about pricing represents fewer sales for them.

They moved the market forward while competing only with themselves.
 
Care to back that one up?

I game on a variety of systems that range from Intel integrated up through generations of AMD and Nvidia GPUs, and well, I don't really see a disadvantage here. Older cards are slower, water is wet. Lower the settings and game on.
Sad to hear that.

Why, of course. Backing that up with a specific DX12 test using an Intel 9900K: I could not find any results for the 780 Ti paired with a 9900K, so I went with a 7700K; just look at the graphics score in that test. This better reflects the current state of those cards in modern titles. I took the second-best result for each; you can search for this particular DX12 test yourself as well.

[Attached screenshot: 290x780andTi.png]
 
This is what I'd like to see some numbers for; not to refute your statement, but rather, to provide a grounding point of sorts.

My biggest issue with Vulkan and DX12 in general is that there are very few excellent implementations of either. Many / most implementations are slower than DX11. EA / DICE / Frostbite, I'm looking at you.

So we'd want to look at very good implementations, first, and then find a comparison with GPUs from both vendors in comparable classes, but with the latest drivers.


And I think that this ask is both a bit of a reach in terms of feasibility and also applicability, as the potential tests will likely run slowly on all subjects. But if the data is available, it'd be interesting to see, right?



That's the thing: AMD doesn't really look to be on a roll in the GPU space. Navi doesn't look any better now than Polaris did at release, and it even had power / BIOS / cooler problems of its own.

Nvidia, meanwhile, has efficient parts on an older node, all the way up to the largest and most complex dies ever made, and not only leads in performance in traditional raster rendering but also leads in RT support.

Yeah, Nvidia's technology push came with a cost -- but that's not at all unprecedented when pushing the envelope, and I'm glad they did it, even as every complaint about pricing represents fewer sales for them.

They moved the market forward while competing only with themselves.
Actually, I do believe that appearance-wise you are correct about Navi and Turing performance. The 55xx line is an utter wash given AMD's node advantage. The 5700 XT gives better perf/$, while Turing has some unique features which some may find very useful, and not just for gaming. This time around, though, AMD also has another node update and RDNA2 to go with it; it's not as if RDNA is it for the next two years. Hence the discussion: I think Nvidia is in a tighter spot than AMD at this stage. Which one will be better will most likely come down to user needs and wants.
 
The 5700 XT gives better perf/$, while Turing has some unique features which some may find very useful, and not just for gaming.

I'd definitely find RT useful if it were available in games I'm actually playing at the moment, but I'd also be using at least a 2080, and well, AMD doesn't compete there. They're also behind on perf/watt progress.

This time around, though, AMD also has another node update and RDNA2 to go with it; it's not as if RDNA is it for the next two years.

AMD has a history of not really progressing on perf / watt. "5nm" or "7nm+" are both refinements of the same thing at TSMC, so unless AMD does something they've never done, RDNA2 is mostly going to be about adding RT.

Hence the discussion: I think Nvidia is in a tighter spot than AMD at this stage. Which one will be better will most likely come down to user needs and wants.

Nvidia has the performance advantage, efficiency advantage, clear technology lead -- that's 'tighter' to you? Remember that Nvidia also gets their pick of the best fabrication processes available on the market. Even just a shrunk Turing would make for an impossible target for AMD to hit, and historically, Nvidia has brought significant performance improvements with each architecture too.
 
I'd definitely find RT useful if it were available in games I'm actually playing at the moment, but I'd also be using at least a 2080, and well, AMD doesn't compete there. They're also behind on perf/watt progress.



AMD has a history of not really progressing on perf / watt. "5nm" or "7nm+" are both refinements of the same thing at TSMC, so unless AMD does something they've never done, RDNA2 is mostly going to be about adding RT.



Nvidia has the performance advantage, efficiency advantage, clear technology lead -- that's 'tighter' to you? Remember that Nvidia also gets their pick of the best fabrication processes available on the market. Even just a shrunk Turing would make for an impossible target for AMD to hit, and historically, Nvidia has brought significant performance improvements with each architecture too.
Well, that is it: we do not know any of this at this stage. Nvidia has not done anything on 7nm as far as we know; I would expect Nvidia to use maybe 6nm TSMC, and they would still be looking at very large chips to exceed Turing performance in a meaningful way. That is the issue: large chips on the smaller nodes. That is mainly where I have the big question mark for Nvidia with Ampere. As for RDNA2, yeah, maybe it is just adding RT, but most likely there are efficiency improvements that lift performance as well. AMD had a hard time with RDNA and it was delayed; I am confident they learned and applied those lessons to RDNA2. Still, I really don't know, but I look forward to both AMD's and Nvidia's next generations. I'll probably end up with both anyway.
 
There is also another aspect with AMD: Sony and Microsoft helped the development of RDNA2, so this is more than just AMD. Multi-billion-dollar companies whose game consoles will be in use for the next 5 years or so have a lot at stake here. So the R&D budget is much higher than just AMD's, and while the work is specific to Sony and Microsoft, it should carry over to RDNA2. If the new game consoles become the standard that developers target, then AMD GPUs should be very applicable for gaming for a very long time.
 
There is also another aspect with AMD: Sony and Microsoft helped the development of RDNA2, so this is more than just AMD. Multi-billion-dollar companies whose game consoles will be in use for the next 5 years or so have a lot at stake here. So the R&D budget is much higher than just AMD's, and while the work is specific to Sony and Microsoft, it should carry over to RDNA2. If the new game consoles become the standard that developers target, then AMD GPUs should be very applicable for gaming for a very long time.

As much as we've wanted to correlate tech from GPU manufacturers being used in consoles to how their discrete parts do, this has yet to ever play out. If anything, Sony and Microsoft's contributions will represent customizations that depart from desktop API conventions and generally won't be that useful.

And that's if AMD ports those customizations over to their discrete parts. Remember that performance is dependent on all rendering stages, and when moving to discrete parts, there are plenty of variables that muck things up. There may be no point in keeping features that won't get used, something AMD has a long history of doing, and ATi before them.

Nvidia has not done anything on 7nm as far as we know

I highly doubt they'd let anything they've done escape NDA. I'd also highly doubt that they haven't already done test runs to prepare for mass production. What we know is what both of these companies have done in the past up till today, and really, there's very little evidence showing that we should expect a departure from a linear forecast, i.e., Nvidia will do on 7nm with Ampere what they've done on previous nodes, and AMD won't produce a top-end part that competes. AMD might produce a 2080Ti competitor, but like the HD7970 vs. GTX580 example brought up above, that AMD GPU would be released waiting for Nvidia's next generation in everyone's mind.
 
Yes, it released against the aging GTX580. If there is one moment that AMD appeared to pull ahead, that was it. I generally don't count that point because, again, third-party coolers were slow to come, drivers were ass, the GTX580 was old and mostly competed with the HD6970, and the GTX580's replacement just embarrassed AMD. I.e., by the time you'd actually want to buy the AMD GPU, better things were on the horizon.

I wouldn't say the GTX 680 embarrassed the 7970. AMD was able to raise the clock speed and keep up:
https://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680

The Radeon HD 7970 GHz Edition isn’t quite fast enough to outright win, but it is unquestionably fast enough to tie the GeForce GTX 680 as the fastest single-GPU video card in the world today. With that said there’s a lot of data to go through, so let’s dive in.

Now if Big Navi did this well, that would be huge. Imagine if Big Navi beats the 2080Ti, then 3080Ti temporarily retakes the crown, then a tweaked Big Navi (Call it Big Navi Super ;) ) returns and ties the 3080Ti. That would be VERY damn good, and nothing to crap on like you are doing.
 
I wouldn't say the GTX 680 embarrassed the 7970. AMD was able to raise the clock speed and keep up:
https://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680

The Radeon HD 7970 GHz Edition isn’t quite fast enough to outright win, but it is unquestionably fast enough to tie the GeForce GTX 680 as the fastest single-GPU video card in the world today. With that said there’s a lot of data to go through, so let’s dive in.

Now if Big Navi did this well, that would be huge. Imagine if Big Navi beats the 2080Ti, then 3080Ti temporarily retakes the crown, then a tweaked Big Navi (Call it Big Navi Super ;) ) returns and ties the 3080Ti. That would be VERY damn good, and nothing to crap on like you are doing.

Same delusional hope since Tahiti. So adorable.
 
I wouldn't say the GTX 680 embarrassed the 7970. AMD was able to raise the clock speed and keep up:

...with a massive increase in power draw, noise, and heat, over the GTX680 ;). That was probably the worst the disparity has been in terms of performance per watt between the two companies' top consumer products. The HD7970 looked absolutely archaic when cranked to the gills just to provide competition to a part using Nvidia's mid-range die.

Now if Big Navi did this well, that would be huge. Imagine if Big Navi beats the 2080Ti, then 3080Ti temporarily retakes the crown, then a tweaked Big Navi (Call it Big Navi Super ;) ) returns and ties the 3080Ti. That would be VERY damn good, and nothing to crap on like you are doing.

I regularly maintain that AMD's lack of GPU leadership isn't because they don't have the technology or skill, but because they choose not to take the chances that could get them there, given the costs of failure. AMD has been playing it safe.

And why do you assume that a tweaked Big Navi would be 'the end of it'? Nvidia always leaves room for refinements in yield, and almost always releases a refresh with more cores enabled per tier and with higher clockspeeds to boot.

Finally, I'm not crapping on anything -- history is what it is. AMD is absolutely capable of producing real GPU competition if they're willing to take the necessary chances in my opinion. They just never have.
 
Same delusional hope since Tahiti. So adorable.

Did you miss the "if" qualifiers? I am not saying it will do that, just pointing out that the 7970 was very competitive, held the lead for a few months, and was able to hold its own against the GTX 680 with a few tweaks.

And why do you assume that a tweaked Big Navi would be 'the end of it'? Nvidia always leaves room for refinements in yield, and almost always releases a refresh with more cores enabled per tier and with higher clockspeeds to boot.

What assumption? Did you also miss the "if"?
 
As much as we've wanted to correlate tech from GPU manufacturers being used in consoles to how their discrete parts do, this has yet to ever play out. If anything, Sony and Microsoft's contributions will represent customizations that depart from desktop API conventions and generally won't be that useful.

And that's if AMD ports those customizations over to their discrete parts. Remember that performance is dependent on all rendering stages, and when moving to discrete parts, there are plenty of variables that muck things up. There may be no point in keeping features that won't get used, something AMD has a long history of doing, and ATi before them.



I highly doubt they'd let anything they've done escape NDA. I'd also highly doubt that they haven't already done test runs to prepare for mass production. What we know is what both of these companies have done in the past up till today, and really, there's very little evidence showing that we should expect a departure from a linear forecast, i.e., Nvidia will do on 7nm with Ampere what they've done on previous nodes, and AMD won't produce a top-end part that competes. AMD might produce a 2080Ti competitor, but like the HD7970 vs. GTX580 example brought up above, that AMD GPU would be released waiting for Nvidia's next generation in everyone's mind.
Nah, it will pretty much be the same with minor tweaks, at least for the Xbox. Microsoft wants both the console and the PC to sell games with. RDNA2 will have the most applicable tech gained from those two companies, plus developers would rather have a more consistent platform to develop for, which gives Microsoft an incentive to combine their console and Windows gaming further to attract more developers, while Sony may have to offer more monetary incentives for exclusives. Console uniqueness nowadays is more in the software for Microsoft and Sony and in any exclusives, which for Microsoft means Xbox/Windows.
 
Good grief, Kepler is crap compared to Tahiti; you can actually play modern games with the 7970 (HD), good luck with a 680. Good on day 1 does not mean good overall.
 
...with a massive increase in power draw, noise, and heat, over the GTX680 ;). That was probably the worst the disparity has been in terms of performance per watt between the two companies' top consumer products. The HD7970 looked absolutely archaic when cranked to the gills just to provide competition to a part using Nvidia's mid-range die.



I regularly maintain that AMD's lack of GPU leadership isn't because they don't have the technology or skill, but because they choose not to take the chances that could get them there, given the costs of failure. AMD has been playing it safe.

And why do you assume that a tweaked Big Navi would be 'the end of it'? Nvidia always leaves room for refinements in yield, and almost always releases a refresh with more cores enabled per tier and with higher clockspeeds to boot.

Finally, I'm not crapping on anything -- history is what it is. AMD is absolutely capable of producing real GPU competition if they're willing to take the necessary chances in my opinion. They just never have.
BS!
https://www.guru3d.com/articles-pages/radeon-hd-7970-ghz-edition-review,8.html
Massive power draw? You may need to do a little research; your memory (like mine) falters. 680: 173w, 7970: 195w, 7970 GHz Edition: 228w. The performance of the 7970 GHz over time made the 680 look like crap.
 
Nah, it will pretty much be the same with minor tweaks, at least for the Xbox. Microsoft wants both the console and the PC to sell games with. RDNA2 will have the most applicable tech gained from those two companies, plus developers would rather have a more consistent platform to develop for, which gives Microsoft an incentive to combine their console and Windows gaming further to attract more developers, while Sony may have to offer more monetary incentives for exclusives. Console uniqueness nowadays is more in the software for Microsoft and Sony and in any exclusives, which for Microsoft means Xbox/Windows.

It's possible, sure; the question is whether it will be useful once desktop porting, desktop operating systems, desktop APIs, and desktop drivers come into play. AMD may leave the features in place for a variety of reasons, chief among them likely being the cost of a redesign, but again, unless support is built in throughout the stack, and that is a huge 'if', there will be no advantage. At best, we can hope that there is no disadvantage.

Good grief, Kepler is crap compared to Tahiti; you can actually play modern games with the 7970 (HD), good luck with a 680. Good on day 1 does not mean good overall.

You keep repeating this point without providing sources for the claimed delta.

I'm actually interested to see myself, pretty sure my brother is still rocking an old GTX670 of mine somewhere.
 
Same delusional hope since Tahiti. So adorable.
Nothing delusional about it; this is a speculation thread with discussion. Tahiti spanks Kepler today. The delusional one may be someone else, as in: look in the mirror.
 
It's possible, sure; the question is whether it will be useful once desktop porting, desktop operating systems, desktop APIs, and desktop drivers come into play. AMD may leave the features in place for a variety of reasons, chief among them likely being the cost of a redesign, but again, unless support is built in throughout the stack, and that is a huge 'if', there will be no advantage. At best, we can hope that there is no disadvantage.



You keep repeating this point without providing sources for the claimed delta.

I'm actually interested to see myself, pretty sure my brother is still rocking an old GTX670 of mine somewhere.
You can do your own research. Tahiti is first-generation GCN and it aged extremely well over time, while Kepler was good for a narrow window of time. Really, at this point it has nothing to do with this discussion other than a false attempt to suggest that the next generation has anything to do with the previous old generation. A 'Nvidia killer' card is coming, which I doubt, but it may be rather good.
 
You can do your own research

You're making the claim. Do you retract it?

Tahiti is first-generation GCN and it aged extremely well over time, while Kepler was good for a narrow window of time. Really, at this point it has nothing to do with this discussion other than a false attempt to suggest that the next generation has anything to do with the previous old generation.

And aside from worshiping the ground that Lisa Su walks on / hollowly trying to do your part to pump up confidence in AMD for stock purposes or perhaps just for your own self-esteem, what proof do you have that AMD will suddenly do better than they ever have, and that Nvidia will also do much worse?

A 'Nvidia killer' card is coming, which I doubt, but it may be rather good.

I'm with you on Big Navi 'maybe' being rather good. I hope it's even better than that. But I cannot support for myself an argument that they'd exceed Nvidia's next offering.
 
You're making the claim. Do you retract it?



And aside from worshiping the ground that Lisa Su walks on / hollowly trying to do your part to pump up confidence in AMD for stock purposes or perhaps just for your own self-esteem, what proof do you have that AMD will suddenly do better than they ever have, and that Nvidia will also do much worse?



I'm with you on Big Navi 'maybe' being rather good. I hope it's even better than that. But I cannot support for myself an argument that they'd exceed Nvidia's next offering.



R9 280X = HD 7970 GHz Edition

It beat it in the majority of titles and has similar power consumption numbers to the 680. HWU concludes that "performance per watt is very good".

Seems your memory is tinted in a shade of green ;)
 


R9 280X = HD 7970 GHz Edition

It beat it in the majority of titles and has similar power consumption numbers to the 680. HWU concludes that "performance per watt is very good".

Seems your memory is tinted in a shade of green ;)

Yes, a number of more recent reviews are readily available to separate fact from fiction. The numbers slant even further toward AMD if Vulkan and DX12 titles are looked at.

In any case, this thread is about the next generation from AMD and comparing it to what Nvidia can possibly do. I think AMD has more info to go by; Nvidia's Ampere is basically a black box at this point in time. Will Nvidia succeed on 7nm or not? How large will Nvidia's chips need to be if efficiency is similar to the Turing architecture? So I don't know, but it does look good for us, at least for the AMD products coming. Not sure about Nvidia.
 
In any case, this thread is about the next generation from AMD and comparing it to what Nvidia can possibly do. I think AMD has more info to go by; Nvidia's Ampere is basically a black box at this point in time. Will Nvidia succeed on 7nm or not? How large will Nvidia's chips need to be if efficiency is similar to the Turing architecture? So I don't know, but it does look good for us, at least for the AMD products coming. Not sure about Nvidia.

It all comes down to how big AMD is willing to go.

I think it's extremely likely NVidia will launch a >500mm^2 7nm part for Ampere Titan/3080Ti.

But will AMD go >500mm^2? Then it will be a crown-taking beast, out in front until NVidia launches the Ampere Titan/3080Ti, which could be quite a ways in the future.

OTOH, if Big Navi is under 400mm^2, that is going to be disappointing, probably falling behind the 2080Ti.

Between 400 and 500, there is lots of room for an interesting product.

Really looking forward to that CES announcement, hopefully there are some solid details.
 
It all comes down to how big AMD is willing to go.

I think it's extremely likely NVidia will launch a >500mm^2 7nm part for Ampere Titan/3080Ti.

But will AMD go >500mm^2? Then it will be a crown-taking beast, out in front until NVidia launches the Ampere Titan/3080Ti, which could be quite a ways in the future.

OTOH, if Big Navi is under 400mm^2, that is going to be disappointing, probably falling behind the 2080Ti.

Between 400 and 500, there is lots of room for an interesting product.

Really looking forward to that CES announcement, hopefully there are some solid details.
Pretty much agree, if it falls out that way. I am having a hard time finding an overwhelming incentive for AMD to go large for gaming; for AI, HPC, etc., I think they have to if they want to remain relevant. AMD's win right now is broader in my opinion, as in profits per GPU sold rather than a halo card. For example, a 2080Ti challenger at less than $800: most would just buy the lower-cost one unless Ampere comes in beating it, which at that price range I doubt Nvidia could. Nvidia will probably have an over-the-top, greater-than-$1000 card - good for them - it had better be much more worth it than the 2080 Ti (my opinion). I can see the next key battleground being RT vs. RT performance :) like the previous DX12 vs. DX12.

CES, looking forward to it as well and hopefully not another letdown.
 
The 290x was definitely a card that competed. Pretty popular too.


Not on price. Miners made sure of that. AMD didn't get it anywhere near MSRP until a year later, and then it had the GTX 970 to compete with.

AMD lost money on every R9 290 they sold at $250, even though it was still slower than the 970. AMD learned a little something there with memory compression (and still took two more years to add it to their whole lineup).

AMD's Navi parts are between Maxwell and Pascal on compression. Good, but not Turing level. And if they want to own the high end, they will need some expensive memory bandwidth to power it. The best I can see them doing is RTX 2080 Super performance on a 384-bit bus.

They could also use HBM2, but 4 stacks means the thing would cost $800, and have no RTX. They're not dealing with the poor perf/watt of Polaris, so they could easily handle the added power consumption of 384-bit GDDR6.

But it's going to consume at least 300w to do this. The full 5700 XT is already a 220w card.
 
Two stacks of HBM2E on a 2048-bit bus -> 832GB/s; that should be enough even without any improvement in compression. It leaves the option of 16GB and 8GB cards for pricing, but even 48GB with two stacks is possible. I expect AMD to up the memory on the high end to either 12GB for GDDR6 or 16GB for HBM2E.
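For reference, here is a quick back-of-the-envelope bandwidth sketch; the 3.25 Gbps HBM2E and 14 Gbps GDDR6 pin speeds are assumed round numbers for illustration, not confirmed specs for any upcoming card:

```python
# Peak theoretical memory bandwidth: bus width (bits) x pin speed (Gbps) / 8 bits per byte.
def bandwidth_gb_s(bus_width_bits: int, pin_speed_gbps: float) -> float:
    return bus_width_bits * pin_speed_gbps / 8

# Two HBM2E stacks = 2 x 1024-bit = 2048-bit bus; ~3.25 Gbps per pin assumed.
print(bandwidth_gb_s(2 * 1024, 3.25))  # 832.0 GB/s

# The 384-bit GDDR6 alternative mentioned above, at an assumed 14 Gbps per pin.
print(bandwidth_gb_s(384, 14.0))       # 672.0 GB/s
```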

Hynix, Samsung and Gloflo are all producing HBM2E -> not sure for whom though.

https://www.tomshardware.com/news/samsung-hmb2e-12-layer-3d-stack-memory,40569.html
 
I would say, sorta, the 5700/5700 XT kicked some teeth and nudged Nvidia to reduce their card prices some. Go team RED! :D

Something to think about: the next world's-fastest supercomputer will use AMD CPUs and GPUs; I guess Nvidia, Intel, IBM . . . couldn't muster the top spot on that one. AMD has the ability, and 2020 should be very interesting from both AMD and Nvidia. Jan 6 is right around the corner for CES in Las Vegas, so we won't have to wait long to see if AMD kicks some Nvidia ASS or vice versa. Maybe Nvidia will release their next big update to the Shield during CES ;)

Some keep mentioning that 16nm is the same as 12nm; if that were the case, it would have been kind of stupid for Nvidia to pay more for it. The reality is that 12nm has higher density and power savings, so it is really not the same. Maybe not a big boost, but it is better.

Going from Vega 10 on 14nm GF to Vega 20 on TSMC 7nm, which had roughly 6% more transistors for extra FP64, a 4096-bit memory controller, etc., the die only shrank to 68% of Vega 10's size, with the same number of shaders for the full chip. I am pretty sure 12nm TSMC is better than the original 14nm GF. Please think about that and consider:
  • If TU102 (the Titan RTX and 2080 Ti GPU) at 754mm^2 were just shrunk down and roughly followed that ratio, it would be 754mm^2 x 0.68 ≈ 513mm^2 -> a very large chip for 7nm; what kind of yields would that have on 7nm?
  • To get that 30+% performance increase from the GP102 471mm^2 die (1080 Ti, Titan Xp) to TU102, plus RT, transistor count went up 1.58x and die size went up 1.6x for the 2080 Ti GPU over the 1080 Ti GPU
    • It was a very large transistor increase with more shaders -> architecture-wise I am not sure it really improved much over Pascal, besides supporting the Vulkan and DX12 APIs much better and of course the infamous RT, which has 6 games after over a year, maybe a few more now
  • MY POINT, OPINION:
    • Nvidia will have to increase transistor count once again to improve performance meaningfully
    • Going to 7nm, or even 7nm+ or 7nm EUV, and adding even more transistors will make for a very large chip, as in over 600mm^2 -> I don't see that happening anytime soon; Nvidia blew everything they had maxing out transistors and die size on TU102. A meaningful upgrade would be more than a power saving and a minor bump, and it would come at a huge cost to produce. Something has to give.
    • Either Nvidia will be very late next year, once yields get better, or Nvidia does something very radical such as using multiple chips (very cool! it would be funny if Nvidia beat AMD to using multiple chips)
    • I would expect Nvidia to produce their mid-size 7nm chip with good performance for gaming and keep any larger GPUs for the professional, very-high-end market for a while, something they have done several times before
So where does that leave AMD? AMD's GPUs are comparably much smaller and have room to grow: Navi 10 with 2560 shaders is 251mm^2. With the increased density potential of 7nm+, plus RT, that should keep a 4096-shader version under 425mm^2 and an uber 80-CU version around 500mm^2. Maybe I am missing something; no problem, please point it out - just some thoughts which can be improved upon with good input.
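To make the arithmetic above explicit, here is a minimal sketch; the 0.68 shrink ratio and the linear area-per-shader scaling are loose assumptions taken from this post, not measurements of any unreleased chip:

```python
# Rough die-size arithmetic from the post above. The 0.68 shrink ratio
# (observed Vega 10 14nm -> Vega 20 7nm) and linear scaling of area with
# shader count are simplifying assumptions, not measurements.

SHRINK_RATIO = 0.68  # Vega 10 -> Vega 20 area ratio cited above

# Hypothetical TU102 shrink: 754 mm^2 Turing die ported to 7nm as-is.
tu102_7nm = 754 * SHRINK_RATIO
print(f"TU102 shrunk to 7nm: ~{tu102_7nm:.0f} mm^2")    # ~513 mm^2

# Turing vs Pascal big-die growth (GP102 471 mm^2 -> TU102 754 mm^2).
print(f"GP102 -> TU102 area growth: {754 / 471:.2f}x")  # ~1.60x

# Navi scaling: Navi 10 is 251 mm^2 with 2560 shaders; scale area linearly.
navi10_mm2, navi10_shaders = 251, 2560
for shaders in (4096, 5120):  # 64 CU and 80 CU configurations
    est = navi10_mm2 * shaders / navi10_shaders
    print(f"{shaders}-shader Navi (no RT, same node): ~{est:.0f} mm^2")
# Prints ~402 mm^2 and ~502 mm^2; RT hardware and 7nm+ density would shift these.
```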


The problem is that you're assuming Ampere will simply be Turing shrunk down, but that simply won't be the case. One thing everyone is forgetting is that with DX12 finally taking off, and likely to be a huge factor with the Xbox Series X's imminent 2020 release, AMD will likely catch up to NVIDIA in a lot of games and maybe even beat it. In the past, with DX9/10/11, NVIDIA had their awesome driver team helping propel them ahead of AMD, but now that won't be possible anymore. With the PS5 and Series X being AMD-based machines, we'll probably start seeing AMD reap the benefits of it on the PC side. Honestly, if I were NVIDIA, I'd be very worried for the future. Whoever comes up with a superior MCM solution for GPUs will inevitably win the GPU wars, but nobody should count Intel out either; those guys have a vested interest in the GPU market now and will likely pour billions into it.
 
The problem is that you're assuming Ampere will simply be Turing shrunk down, but that simply won't be the case. One thing everyone is forgetting is that with DX12 finally taking off, and likely to be a huge factor with the Xbox Series X's imminent 2020 release, AMD will likely catch up to NVIDIA in a lot of games and maybe even beat it. In the past, with DX9/10/11, NVIDIA had their awesome driver team helping propel them ahead of AMD, but now that won't be possible anymore. With the PS5 and Series X being AMD-based machines, we'll probably start seeing AMD reap the benefits of it on the PC side. Honestly, if I were NVIDIA, I'd be very worried for the future. Whoever comes up with a superior MCM solution for GPUs will inevitably win the GPU wars, but nobody should count Intel out either; those guys have a vested interest in the GPU market now and will likely pour billions into it.
Not sure what Ampere will be; if efficiency is about the same as Turing, I think Nvidia will have issues with size, yield, etc. I would love to see MCM, but it does not appear that will be in Ampere, so no idea what magic Nvidia will deliver, if any. As for the PS5 and Xbox Series X, RT may be rather interesting, if it is used, in how much is used and how it compares to Nvidia's solutions. Will there be a battle of RT methods? Will some RT effects work on one vendor's hardware and others elsewhere? I hope not. Jan 6 could be very interesting, or, like last year, no RDNA cards but Vega II. RDNA2? Hopefully AMD can give us some interesting and good news.
 
One thing everyone is forgetting is that with DX12 finally taking off, and likely to be a huge factor with the Xbox Series X's imminent 2020 release, AMD will likely catch up to NVIDIA in a lot of games and maybe even beat it.

DX12 on consoles is not DX12 on the desktop, and almost no one does DX12 well on the desktop.

If this were a thing, we'd have already seen it, as DX12 was introduced to bring down API overhead, at the cost of significant developer investment.
 
DX12 on consoles is not DX12 on the desktop, and almost no one does DX12 well on the desktop.

If this were a thing, we'd have already seen it, as DX12 was introduced to bring down API overhead, at the cost of significant developer investment.
I would have thought the major game engines would have optimized DX12 by now; it seems like Vulkan is being better optimized than DX12 in those engines. Maybe, even with only two major GPU players, there are sufficient differences between GPUs, even from the same company, to make it hard to fully exploit the hardware due to the many differences in capability. On console it would be easier, but then again those too are getting minor and major updates. So is DX12 falling short of its promise of better hardware-level access? Or are developers still tied to DX11 enough, with nonexistent or poorly performing DX12 paths, to prevent full investment?
 
So is DX12 falling short of its promise of better hardware-level access?

Eh, not really -- DX12 is extremely similar to Vulkan in its approach to being a low-overhead graphics API, and that's to be expected given the goal of both technologies as well as their common origins.

Or are developers still tied to DX11 enough, with nonexistent or poorly performing DX12 paths, to prevent full investment?

This is more likely. "DX12" on consoles isn't DX12 on the desktop, as nice as that would be. It makes sense, as the console API solidified long before the desktop side, and on the desktop more provisions must be made to support a much wider array of hardware.

That means that developers likely look at developing for a DX12 PC target as like developing for a dozen different console baselines in parallel, whereas DX11 still presents more or less a single target. Thus, DX11 is the easy way out, as it's already done and done well.

And most likely, nearly all of those developers that are doing DX12 titles just aren't putting in the effort to provide the broader set of hardware optimizations that the API demands.


I would have thought the major game engines would have optimized DX12 by now; it seems like Vulkan is being better optimized than DX12 in those engines.

Vulkan is a bit of a separate thing in terms of developer investment, in ways that I can't really say I understand. To me, it seems like the old split between developers that focused on OpenGL versus DX; but today, it's more like they're focusing solely on Vulkan.

What I will say, because I absolutely agree with you here, is that a much larger percentage of titles using Vulkan use it well when compared to the percentage of DX12 titles that use DX12 well, and I see this not just cross-vendor but also cross-platform, where Vulkan titles are making a very nice showing on Linux too.
 
I really expect AMD to release a big Navi Chip with at least 64 CUs, which should make a decent run at the 2080Ti.

I wonder how high they could push the power draw on a halo-tier product. What's the most watts we've ever seen sucked down by a dual-GPU monstrosity?
 
Correct me if I miss something, or misinterpreted something BUT..

You're saying you do not care what node they use, as long as they are putting out the same performance, correct?

I see your point, I really do, but if both companies are putting out new cards nearly simultaneously each cycle, and you KNOW the new Nvidia card will be ahead of it for the first nearly year, even if that Nvidia card is expected to be released 2 months after the AMD card, why wouldn't you just wait for the better card?

Second question: you regularly state that HW RT is useless on the Nvidia cards. Do you mean in wasted die size or in another metric? (I think this was you, anyway).

1st point: the 1650/5700 came out at very similar times, unless you want to compare to a +$100 70-class Nvidia card? I don't get what you are trying to say here.

HW RT is useless on the lower-end cards by the sheer metric that most don't consider 1080p/60 acceptable in this day and age, especially for the type of games it has been integrated with (almost all FPS). Sure, I slog away at 60Hz for now, but it's not optimal for those types of titles.
Next gen, yes, we might see something a bit more useful. Don't get me wrong, RT is great, but 60Hz RT isn't exciting for me when looking to upgrade my rig. I personally wouldn't give a crap, as I'm waiting for TVs to hit 120Hz before upgrading screen/GPU, but the absolute lack of titles currently (they have been forever Soon™ for the last year) is not really tempting me.

Snowdog has said everything and more than I need to say for everything else.
 
I wonder how high they could push the power draw on a halo-tier product. What's the most watts we've ever seen sucked down by a dual-GPU monstrosity?

The biggest issue is going to be cooling; you get 150w per 8-pin GPU power cable and we've seen up to three of those, so that's 450w before you count the 75w from the slot.

Getting that amount from a PSU isn't a challenge. Getting that amount of heat away from the GPU(s) and out of the enclosure? Actually also not a problem, but there will be mounting limitations involved.
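For what it's worth, a tiny sketch of that connector math (just restating the 150 W per 8-pin and 75 W slot figures above; nothing here is specific to any actual card):

```python
# Spec-limited board power from PCIe power delivery, per the figures cited above.
SLOT_W = 75        # PCIe x16 slot
EIGHT_PIN_W = 150  # per 8-pin PCIe power connector

def board_power_budget(num_8pin: int) -> int:
    """Total power budget in watts for a given 8-pin connector count plus the slot."""
    return SLOT_W + num_8pin * EIGHT_PIN_W

for n in (2, 3):
    print(f"{n}x 8-pin + slot: {board_power_budget(n)} W")
# 2x -> 375 W, 3x -> 525 W; the 450 W figure above is the three connectors alone.
```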
 
The biggest issue is going to be cooling; you get 150w per 8-pin GPU power cable and we've seen up to three of those, so that's 450w before you count the 75w from the slot.

Getting that amount from a PSU isn't a challenge. Getting that amount of heat away from the GPU(s) and out of the enclosure? Actually also not a problem, but there will be mounting limitations involved.
As a reference, the Vega 64 Liquid Cooled edition could go past 450w and its cooler was very effective at removing the heat. AMD has shown a willingness to go above 300w, though in that case it did not gain much.
 
NVidia sticking with 12nm longer gave them breathing room to work with both TSMC and Samsung, and they will choose the best of competing processes for their flagship parts. IMO, Samsung is NOT a lock for flagship Ampere. The best process will win.

Just a followup where I said Samsung is not a lock and the best of TSMC/Samsung will win. It looks like it's TSMC again, finally killing all the wacky theories that NVidia was shut out of TSMC.

Recent Interview:
https://finance.technews.tw/2019/12/20/nvidia-7nm-for-tsmc/
translate:
http://translate.google.com/translate?u=https://finance.technews.tw/2019/12/20/nvidia-7nm-for-tsmc/&hl=en&langpair=auto|en&tbb=1&ie=UTF-8
"South Korean media have reported that South Korean NVIDIA executives said that the company's next-generation GPU graphics chip will be transferred to South Korean Samsung and will be built using 7nm process technology with extreme ultraviolet lithography (EUV) technology. . NVIDIA founder Huang Renxun clarified in a media interview a few days ago that in the future, most orders for 7-nanometer process products will still be produced by TSMC, and Samsung will only receive a small number of orders."
 
NVIDIA founder Huang Renxun

That's one hell of a transliteration :playful:

And it looks like the real situation was the same as we thought -- Nvidia is using both manufacturers, probably trying to slowly ramp up production at Samsung, though their low production likely is reflective of a more careful strategy. Samsung may not be ready for the chart-topping behemoths that Nvidia is fond of producing.
 
Just a followup where I said Samsung is not a lock and the best of TSMC/Samsung will win. It looks like it's TSMC again, finally killing all the wacky theories that NVidia was shut out of TSMC.

Recent Interview:
https://finance.technews.tw/2019/12/20/nvidia-7nm-for-tsmc/
translate:
http://translate.google.com/translate?u=https://finance.technews.tw/2019/12/20/nvidia-7nm-for-tsmc/&hl=en&langpair=auto|en&tbb=1&ie=UTF-8
"South Korean media have reported that South Korean NVIDIA executives said that the company's next-generation GPU graphics chip will be transferred to South Korean Samsung and will be built using 7nm process technology with extreme ultraviolet lithography (EUV) technology. . NVIDIA founder Huang Renxun clarified in a media interview a few days ago that in the future, most orders for 7-nanometer process products will still be produced by TSMC, and Samsung will only receive a small number of orders."
Good to know; it makes me wonder what those small orders from Samsung will be. Big chips or small? And the many orders from TSMC: 7nm+ or 6nm? TSMC had better open up more foundries fast! Except, don't those take a rather long time to build, more like years?
 
Good to know; it makes me wonder what those small orders from Samsung will be. Big chips or small? And the many orders from TSMC: 7nm+ or 6nm? TSMC had better open up more foundries fast! Except, don't those take a rather long time to build, more like years?

I expect it will be a fairly low end part. Flagship parts will remain TSMC, because they can likely wring more performance from TSMC. On the low end where it doesn't matter they can throw Samsung a bone.

Is this an accurate article about Samsung and the 5500 XT? If so, then AMD is using both the TSMC and Samsung 7nm processes successfully.

https://www.fudzilla.com/news/graphics/49991-amd-went-to-samsung-for-rx-5500-xt-navi-14-gpu

I see no evidence that FudZ is right. It's just aimless click-baiting IMO.
 
I am getting the impression that if Big Navi is announced at CES, it won't be shipping for quite a while. Pretty much every recent AMD release has been telegraphed by appearances in some of the online benchmark tools.

We have seen zilch on Big Navi, which leads me to believe its launch isn't anytime soon.

Long-lead pre-announcing works for AMD, because they don't have a product competing in the upper tier to stall, so they can attempt to stall sales of NVidia 2080-2080Ti parts by talking up something coming in the future...
 