HWUB/Techspot adds more games, 5700XT falls further behind 2070 Super.

Gamer X

didn't you say 5700xt was getting faster with each driver release? :ROFLMAO::ROFLMAO::ROFLMAO:

You obviously don't know what you're talking about. Let me present you with some cold hard FACTS not nvidiot bias: AMD engineers have designed a revolutionary RDNA architecture that will not only get faster than 2080 Ti over time but also cure cancer. Dr. Lisa Su oversaw the development of RDNA and it's soon going to power the PS 5 and next Xbox. Once those consoles come online, RDNA will network itself and come to life and solve world hunger and bring about world peace--that's how amazing those Shanghai, China gpu engineers are! All of them were trained by Dr. Lisa Su who never exaggerates or lies.

I don't post often but when I do, people listen.
 
You have to understand that Navi is less efficient at 7nm than NVIDIA is at 12nm.
Not according to that hitman data you posted. Same performance in that title according to your article yet it uses less power than the 2060.
 

You seem to have missed the two NVIDIA cards above it that are not only faster, but use less power than the AMD 5700XT does. The only reason why the AMD cards are priced the way they are is because AMD wouldn't sell any of them if they didn't slash prices. I'm assuming that price is the reason you're ignoring the other, faster video cards.

When AMD built those, they didn't intend to barely compete with NVIDIA's mid-range that's already a year old.
 
You make a ton of assumptions about how well Navi will scale. The fact of the matter is...
-snip-

o_O ^^
The fact of the matter is there is more than one way to scale performance...!



Dan, it seems you went the long way... to look past this simple fact. <--

It is known that GCN was bound, but RDNA is not... it is 100% scalable in many directions (smartphones incoming...) You are dismissing the FACT that AMD can afford to offer twice as many CORES as Nvidia (like AMD is doing to Intel), so do understand that all those pseudo concerns you facetiously thought out^ have already been discussed on other forums, elsewhere. And it is obvious that AMD's "big-navi" GPUs do NOT have to reach higher clocks to gain more performance..!

Secondly, how does the 5700xt blow the Vega64 out of the water when Navi10 has only 2,560 SPs (Stream Processors) and Vega10 has 4,096...? :vulcan: -Or- how does the 5700xt beat the Radeon VII in some games, even though they are both at 7nm and one is much larger than the other, with the R7 having 3,840 SPs? Do you see where this is going..? AMD can afford to pack more CUs and SPs into their chips and sell them to us cheaper than Nvidia. The RDNA uArch is much more efficient than the GCN uArch in the Radeon VII. So even if it doesn't reach the R7's exact clocks, it doesn't need to if RDNA is free from the 4,096 limit. That is simple math.

5800 performance will come in many ways, but it will have more expensive GDDR6 VRAM and more bandwidth. And knowing (for a fact) that Navi10 is bandwidth starved, faster VRAM will also make a scaled-up RDNA that much quicker.




Also, why are you showing me charts when you were discussing "SPECS"..? Just look at the die size of Vega10 vs Vega20 vs Navi10.
 

Radeon VII appears to be a cut down chip from the pro line. The die size and transistor count listed can’t be used since some of it is inactivated. It was basically a place for dies with defects to go.

Also, directly comparing die size for different fabs doesn't make sense unless you take into account that 7nm is around twice as costly as 12nm per unit area. Transistor count would make way more sense, but this has been discussed...
 
I wouldn't bother wasting my time arguing with him; he has shown many times he has a clear bias and will say whatever he wants to fit his own little narratives, whether outright lying or presenting speculation as fact.
 


the FACT of the matter here is that rx5800 will consume less power than the 2080ti and be able to move electrons at superluminal velocities thanks to 7nm.....
You would see this if you looked beyond your Nvidia bias.

We should just respond with these sorts of posts.
 
The only reason why the AMD cards are priced the way they are is because AMD wouldn't sell any of them if they didn't slash prices. I'm assuming that price is the reason you're ignoring the other, faster video cards.

What? Didn't you get the memo? AMD tricked nvidia into pricing the Super cards at $399/$499, simply because AMD loves gamers and is willing to lose up to $100 per card :D:D:rolleyes::rolleyes:. Not because it would have lost any price/performance advantage. Just ask AMD. :rolleyes::rolleyes:
 
Lol, you're telling me it will render an image on screen before the GPU does the rendering?? Holy shit, so not only will the RX5800 run my games at Uber 4K at 240Hz, it will also break causality. Imagine the power savings on that!
 
RDNA's efficiency compared to GCN isn't in dispute. What the hell does that have to do with anything? No one is denying that RDNA is better than the outgoing GCN stuff. You bring up all this crap about RDNA vs. the old GCN stuff but completely gloss over the fact that NVIDIA's midrange offerings are faster and more efficient at 12nm than RDNA / AMD is at 7nm today. You also seem to think AMD can just add more CUs to the GPU and scale it perfectly. The data I showed proves this is factually incorrect. The 5700XT has 4 CUs (256 stream processors) more than the 5700. It also has slightly higher clocks and offers very little additional performance over the 5700, but consumes 80W more power. This isn't linear scaling. The additional CUs do very little in this case and the clocks aren't raised all that much, but the power consumption increase is massive to say the least. This tells us that AMD has to add a lot of CUs and still potentially increase clocks to challenge NVIDIA. That's going to come at a large cost in the form of power consumption.

And I don't know where you get the idea that AMD can afford to offer twice as many streaming processors as NVIDIA does CUDA cores. You do realize that would amount to around 9,000 streaming processors? That's nearly 4x what the 5700XT has. The RTX 2080 Ti has 4352 CUDA cores. I'm not sure where you get your math from, but it's wrong. You've also glossed over the fact that AMD doesn't have to beat Turing. Even if it did, that's a very short-lived victory. AMD has to beat the upcoming 7nm Ampere. NVIDIA has done amazing things in terms of efficiency at 28nm, 12nm etc. At 7nm, we could see an increase in efficiency and performance that AMD may not be able to challenge for years to come.
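
For what it's worth, the arithmetic here is easy to check; a quick sketch using the published shader counts quoted in this thread:

```python
# Published shader counts for the cards discussed above.
rtx_2080_ti_cuda = 4352
rx_5700xt_sps = 2560

# "Twice as many cores as NVIDIA" measured against the 2080 Ti would mean:
doubled = 2 * rtx_2080_ti_cuda           # 8704, i.e. "around 9,000"

# ...which is how many times the 5700 XT's actual shader count:
ratio = doubled / rx_5700xt_sps
print(doubled, round(ratio, 1))          # 8704 3.4
```

So "nearly 4x" is really about 3.4x, but the order of magnitude of the claim checks out either way.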
 


If RDNA efficiency is not in dispute, then why are you unable to compare it to Radeon 7..?

Secondly, "efficiency" and "more performance per card" are two different metrics you keep mixing up. If the RTX2070 SUPER were 7nm, it would still be larger than even Vega64. You are not understanding that it is TURING that doesn't scale well. And you are trying to compare TURING's RTX2070 SUPER to a super small (half its size) 251mm^2 RDNA chip.

It doesn't matter what Nvidia offers as a "mid-range" solution; that is how they are marketing (i.e., selling their GPUs), and not based on economy of scale or chip and wafer size, etc.





Size for size, even Turing at 7nm does not equal RDNA's performance. As such, even a bigger Navi will still be more efficient than the Radeon VII; we all know this as a fact. Why don't you understand this simple fact...? If little Navi outperforms the Radeon VII, then a Navi10 the size of the Radeon VII will be much faster. (!!)

And what is more, RDNA scaled up to R7 (7nm) size is roughly 40% more than Navi10, about 64 CUs. And even if AMD went full-monty with a massive 80 CU chip, it would still be smaller than the RTX2070 Super shrunk to 7nm. These are facts. It is obvious you don't understand how more CUs and less frequency = more gaming performance. GPUs do not operate like CPUs. GPUs are highly parallel, and you can have 80 CUs running at 1,300MHz that outperform a GPU with 64 CUs at 1,600MHz. Downclocking yields incredible efficiencies for AMD hardware. (i.e., see bitcoin mining.)
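
As a sanity check on that specific example, here is shader count × clock as a crude throughput proxy (a big simplification: memory bandwidth, front-end limits and occupancy are all ignored):

```python
# Crude ALU-throughput proxy: stream processors x clock (MHz).
# RDNA has 64 SPs per CU; the clocks are the hypothetical figures above.
wide_and_slow = 80 * 64 * 1300    # 80 CUs at 1300 MHz
narrow_and_fast = 64 * 64 * 1600  # 64 CUs at 1600 MHz

print(wide_and_slow, narrow_and_fast)             # 6656000 6553600
print(round(wide_and_slow / narrow_and_fast, 3))  # 1.016
```

On paper the two configurations are only ~1.6% apart, so the real win from going wide and slow is power efficiency, not raw throughput.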

As such, Turing's size & inefficiency can't compete with RDNA's scaling. Do you honestly think Nvidia is going to come out with a 250mm^2 chip that is faster than the RTX2080 Super when they move to 7nm in a year's time...??





Lastly, yes, AMD can afford to give gamers more CORES because AMD is not constrained by chip size; they are using 7nm and have economy of scale on their side. Meaning that AMD can sell a more powerful chip cheaper than Nvidia can.

Navi10 with 40 CUs is 251mm^2 and has 2,560 SPs... <--- look at that
-and-
Navi21 with 60 CUs would be about 315mm^2 and have 3,840 SPs.
Navi23 with 80 CUs would be about 380mm^2 and have 5,120 SPs.

NaviXX with 120 CUs would be about 445mm^2 and have 7,680 SPs.


All are doable under RDNA.
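
The SP counts in that list follow directly from RDNA's 64 SPs per CU, but the die sizes do not follow from straight linear scaling of Navi10. A naive sketch (treating the entire 251mm^2 as scaling with CU count, which overestimates since I/O and memory controllers don't grow with the shader array) gives noticeably larger figures, so take the sizes above as speculation:

```python
SPS_PER_CU = 64  # RDNA: 64 stream processors per compute unit

def sps(cus):
    return cus * SPS_PER_CU

# Naive upper-bound area extrapolation from Navi10 (40 CUs, 251 mm^2),
# pretending the whole die scales with CU count.
area_per_cu = 251 / 40
for cus in (60, 80, 120):
    print(cus, sps(cus), round(cus * area_per_cu, 1))
# 60 3840 376.5
# 80 5120 502.0
# 120 7680 753.0
```

The gap between 376/502/753 and the quoted 315/380/445 is the size of the fixed, non-scaling part of the die those projections implicitly assume.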
 

Those are NOT facts, that's speculation.
 
Yes, adults are using facts to speculate.
You can try to present your facts, but don't dismiss that we know what Navi10 is and does in games... and we know what Vega20 is and does in games, and both at 7nm.
 

What do you expect the performance and wattage to be for Navi21, Navi23, and NaviXX?
 
So it IS speculation after all :D:D:rolleyes::rolleyes:
 
If RDNA efficiency is not in dispute, then why are you unable to compare it to Radeon 7..?

Yes, RDNA is faster than GCN. We know that. Any commentary beyond that is meaningless. This thread is literally about the 5700XT falling behind the 2070 Super in more and more games. All the other stuff you're talking about is irrelevant and off topic.

Secondly, "efficiency" and "more performance per card" are two different metrics you keep mixing up. If the RTX2070 SUPER were 7nm, it would still be larger than even Vega64. You are not understanding that it is TURING that doesn't scale well. And you are trying to compare TURING's RTX2070 SUPER to a super small (half its size) 251mm^2 RDNA chip.

It doesn't matter what Nvidia offers as a "mid-range" solution; that is how they are marketing (i.e., selling their GPUs), and not based on economy of scale or chip and wafer size, etc.


Size for size, even Turing at 7nm does not equal RDNA's performance. As such, even a bigger Navi will still be more efficient than the Radeon VII; we all know this as a fact. Why don't you understand this simple fact...? If little Navi outperforms the Radeon VII, then a Navi10 the size of the Radeon VII will be much faster. (!!)

And what is more, RDNA scaled up to R7 (7nm) size is roughly 40% more than Navi10, about 64 CUs. And even if AMD went full-monty with a massive 80 CU chip, it would still be smaller than the RTX2070 Super shrunk to 7nm. These are facts. It is obvious you don't understand how more CUs and less frequency = more gaming performance. GPUs do not operate like CPUs. GPUs are highly parallel, and you can have 80 CUs running at 1,300MHz that outperform a GPU with 64 CUs at 1,600MHz. Downclocking yields incredible efficiencies for AMD hardware. (i.e., see bitcoin mining.)

No it isn't. Not only is the NVIDIA architecture more efficient, but it's also faster. At the end of the day, performance in games is all that matters. No one gives two squirts of piss that the Turing GPUs are physically larger than RDNA. This is something only you give a damn about.

As such, Turing's size & inefficiency can't compete with RDNA's scaling. Do you honestly think Nvidia is going to come out with a 250mm^2 chip that is faster than the RTX2080 Super when they move to 7nm in a year's time...?





Lastly, yes, AMD can afford to give gamers more CORES because AMD is not constrained by chip size; they are using 7nm and have economy of scale on their side. Meaning that AMD can sell a more powerful chip cheaper than Nvidia can.

Navi10 with 40 CUs is 251mm^2 and has 2,560 SPs... <--- look at that
-and-
Navi21 with 60 CUs would be about 315mm^2 and have 3,840 SPs.
Navi23 with 80 CUs would be about 380mm^2 and have 5,120 SPs.

NaviXX with 120 CUs would be about 445mm^2 and have 7,680 SPs.


All are doable under RDNA.

Turing is generally more efficient in terms of performance per watt. No one gives two shits about the die size. But even so, adding 4 CUs and minimal clock speed makes the 5700XT a pig compared to the 5700. You can't just keep adding CUs and not have to deal with power and heat. What the hell are you smoking? At the end of the day, performance in games is all that matters. No one gives two squirts of piss that the Turing GPUs are physically larger than RDNA-based GPUs. Lastly, you have no idea what Ampere will bring to the table. We have no information about it other than it's a new architecture at 7nm.
 
Lastly, yes, AMD can afford to give gamers more CORES because AMD is not constrained by chip size; they are using 7nm and have economy of scale on their side. Meaning that AMD can sell a more powerful chip cheaper than Nvidia can.

Navi10 with 40 CUs is 251mm^2 and has 2,560 SPs... <--- look at that

This kind of thing is what is known as arguing in bad faith.

At one point it might have been simple ignorance, but you have been corrected enough, with actual facts, that you should know what you are saying is NOT true.

The chip size argument only works when you are on the same process. Then if one fully active die is significantly larger than another fully active die, for the same performance, then that is indicative of an actual advantage.

But that is not the case here. As has been pointed out to you multiple times, the 7nm process costs nearly double the 14nm class that NVidia is using. AMD has been shouting this from the rooftops. Most recently in Lisa Su's keynote at Hot Chips 31:
https://www.anandtech.com/show/1476...ay-1-dr-lisa-su-ceo-of-amd-live-blog-145pm-pt
04:54PM EDT - Cost per transistor increases with the newest processes
04:54PM EDT - 7nm is 4x the cost of 45nm


Note how Lisa Su points out that the exact die size you are harping on costs ~4X what it did at 45nm. From that same AMD graph she is standing in front of, you can see it's about ~2X what it does for a 14nm-class die of the same size.

Now the math may be hard for you to follow, but if it costs about twice as much as the process NVidia is on, that means NVidia on the cheaper process can build a die twice as large for the same price.

Or look at another thing she said above: "Cost per transistor increases with the newest processes".

At best you are looking at about even costs for the transistors used to make these GPUs.
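
Putting rough numbers on that, a back-of-the-envelope sketch (die sizes are public; the ~2x cost-per-area figure is an approximate reading of the AMD slide discussed above, not an exact number):

```python
# Approximate cost-per-mm^2 ratio of 7nm vs the 12/14nm class, per the
# AMD Hot Chips slide referenced above. Treat as rough, not exact.
COST_RATIO_7NM = 2.0

navi10_mm2 = 251   # RX 5700 XT die
tu106_mm2 = 445    # RTX 2070 die, 12nm class

# Express Navi10's area in "12nm-class cost-equivalent" mm^2:
navi10_equiv = navi10_mm2 * COST_RATIO_7NM
print(navi10_equiv, tu106_mm2)  # 502.0 445
```

Cost-normalized, the "small" 251mm^2 die lands at roughly the same silicon cost as the 445mm^2 one, which is the whole point.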

Your small die argument is completely groundless.

Continuing to use it, only makes you look disingenuous at best.

Look at the transistor counts for a reasonable point of comparison. For that, the best point of comparison for fully active dies, is 5700XT vs RTX 2070. It only shows more bad faith when you try to use the 2070 Super die, as it's a partially disabled die.

5700XT and 2070 perform similarly and have similar transistor counts. So there is no significant advantage for either side, and both will face similar issues scaling larger, so it's nothing but a fantasy that AMD will have some kind of massive scaling advantage.

If you want to be taken seriously, stop arguing in bad faith, with arguments that have been refuted many times over already.
 
Yes, RDNA is faster than GCN. We know that.

Any commentary beyond that is meaningless. This thread is literally about the 5700XT falling behind the 2070 Super in more and more games. All the other stuff you're talking about is irrelevant and off topic.


Turing is generally more efficient in terms of performance per watt. <------ lol

No one gives two shits about the die size. But even so, adding 4 CUs and minimal clock speed makes the 5700XT a pig compared to the 5700. You can't just keep adding CUs and not have to deal with power and heat. What the hell are you smoking? At the end of the day, performance in games is all that matters. No one gives two squirts of piss that the Turing GPUs are physically larger than RDNA-based GPUs. Lastly, you have no idea what Ampere will bring to the table. We have no information about it other than it's a new architecture at 7nm.


Nobody cares about performance per watt... they care about performance per dollar...! (That is how humans shop.)

Unless you are populating a data farm & worried about a $1m electric bill every month, don't pretend to use a metric that does not matter to GAMERS. I suspect that is why you are trying to steer focus away from die size.... because die size is directly related to price and performance.

And if you want to compare performance between Turing and RDNA, it is better to use the 2070 and the 5700xt. But I will give you the "SPECS" and a breakdown of the facts.

Turing TU-106 is 445mm^2:
  • with 1920 Cores = 2060
  • with 2176 Cores = 2060 Super
  • with 2304 Cores = 2070

Turing TU-104 is 545mm^2:
  • with 2560 Cores = 2070 SUPER
  • with 2944 Cores = 2080
  • with 3072 Cores = 2080 SUPER

Turing TU-102 is 754mm^2:
  • with 4352 Cores = 2080ti

RDNA Navi10 is 251mm^2: (ed: for reference)
  • with 2560 SPs = RX 5700xt
  • with 2304 SPs = RX 5700
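
Side note on those lists: shader density per mm^2 makes the node gap explicit. A quick sketch from the figures above (with the caveat that most of the roughly 2x density gap is the 12nm-to-7nm process, not the architecture):

```python
# Die area (mm^2) and top shader count for the dies listed above.
dies = {
    "TU106 @ 12nm (RTX 2070)": (445, 2304),
    "TU104 @ 12nm (RTX 2080 SUPER)": (545, 3072),
    "Navi10 @ 7nm (RX 5700 XT)": (251, 2560),
}
for name, (mm2, cores) in dies.items():
    print(f"{name}: {cores / mm2:.1f} shaders/mm^2")
# TU106 @ 12nm (RTX 2070): 5.2 shaders/mm^2
# TU104 @ 12nm (RTX 2080 SUPER): 5.6 shaders/mm^2
# Navi10 @ 7nm (RX 5700 XT): 10.2 shaders/mm^2
```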

When you compare the full Turing TU-106 to Navi10 you have a near direct comparison. That is the RTX2070 vs 5700XT... but Jensen knows this is too obvious, so he dropped the 2070 and came out with SUPERs and discontinued the full TU-106.
(Why?)



Also, going from 12nm to 7nm with a 445mm^2 TU-106 chip doesn't reduce the chip to half its size.... the TU-106 reduced would still be at least a 360mm^2 chip. Compare the performance of Turing's TU-106 to Navi10 @ 251mm^2...

Nvidia had to release the RTX2070 SUPER to stay competitive in the marketplace. That is an $800 chip burnt down to being a $499 chip @ 545mm^2.... and Nvidia is losing money selling such a massive chip for so little... and only came out with the SUPERs to save face and mindshare.



All of this proves that RDNA is much better at games than Turing. It also means that RDNA scaled up will still be much better at games than anything Nvidia has, or can offer... and Nvidia is going to have to design a whole new gaming architecture to compete with RDNA chips. Nvidia is about a year away from releasing their new gaming architecture... but Turing is EOL and dead.

So why consider it..?
 

Yeah, I ignored him for quite a while, hoping others would do the same, but when his nonsense quote bled through the ignore filter on Dan's quote last night, I thought I would make it more clear.

He is clearly trolling when he keeps repeating a groundless "small die" argument that has been refuted (more than once) with the actual facts.
 

Yeah, it’s just hard for me to leave complete nonsense unaddressed.

He never addresses the actual post and usually attacks the poster.... or at least the community at large.

Like, we can say the 251mm^2 chip costs almost as much as a 2080 die, if not as much, and nVidia has a 60% gross margin so they are obviously not hurting; but he'd just ignore it.

He also says who cares about power, but the NaviXX chip he predicted would be an ~800-watt board at stock. Totally reasonable. What would that be? 6 or 7 8-pin PCIe power connectors? I would totally buy an 800-watt GPU that is RT capable, don't get me wrong, but no one will make that.
 
Don't forget that all of that^ is only in defense of what I stated was obvious: that RDNA and Navi10 is a better architecture and has longer legs than the EOL Turing cards. And many took "offense" to that... and now have to come and admit it was a fact, and they are just interrupting threads to hate on me. Which is what you call threadcrapping. I think some of you have come to respect me, because you know I am not a fanboy of anything and just buy whatever. But I also won't be strong-armed by viral farms 30k posts strong. Obvious is more than obvious... I'll expose all the cheerleaders.

You can't fight logic... and Jensen can't afford to keep up his Virals…




I think you will learn to understand, that RDNA is premium Gamer's stuff.

It is going into the new two-tier Xbox, and the new PlayStation, and on Samsung phones.... but that isn't the point; it is what these CEOs who have seen RDNA say afterwards... That is what caught my attention. So I began to read and educate myself. If I can do it, so can you. So why are some of you unwilling to admit, or learn...? Because you are the dolts who are being organized as cheerleaders. I doubt any of you would meet up for drinks, or an interview...

Fake News can only be stomached so much. I swear, coming in from the outside, some of you are like a bunch of kids mocking logic... doing nothing but stirring up every thread because online rebellion is so progressive & hip. Or is it just that they don't take others' facts, or use logic themselves, because they are being paid to be perpetually dumb?



I come here to build a new rig and start discussions and, lo and behold, I run into biased a-holes who do nothing but mock people and pretend that buying and discussing hardware is about teams, or cheerleading.
 
You obviously don't know what you're talking about. Let me present you with some cold hard FACTS not nvidiot bias: AMD engineers have designed a revolutionary RDNA architecture that will not only get faster than 2080 Ti over time but also cure cancer. Dr. Lisa Su oversaw the development of RDNA and it's soon going to power the PS 5 and next Xbox. Once those consoles come online, RDNA will network itself and come to life and solve world hunger and bring about world peace--that's how amazing those Shanghai, China gpu engineers are! All of them were trained by Dr. Lisa Su who never exaggerates or lies.

I don't post often but when I do, people listen.

o brother.
 
 
Yeah, I ignored him for quite a while, hoping others would do the same, but when his nonsense quote bled through the ignore filter on Dan's quote last night I thought I would make it more clear.

He is clearly trolling when he keeps repeating a groundless "small die" argument that has been refuted (more than once) with the actual facts.


Those are the facts. (y)

You can no longer keep saying that Turing is better, when we have just established that Turing TU-106, NORMALIZED to 7nm, is still MUCH LARGER than Navi 10's small size...! Why shun facts? I was scaling Turing down to RDNA's node size to prove a point of logic and to dismiss others' claims about superiority, and to show how things actually scale.

Likewise, if you scale RDNA up to Turing's die size, you have so much die space to play with that you can easily make an RDNA chip bigger than 251mm^2, seeing that AMD's other 7nm GPU (Mi60/Mi50) was 331mm^2.




Many people's argument here (not in this thread, but within these forums) is that they don't understand how AMD can make a big-navi chip... they are confused about how it can be done and use mocking tones for giggles.

These same people can't refute math & logic, so they attack the messenger.
 
Those are the facts. (y)

-snip-

Those are the facts. (y)

You can no longer keep saying that navi is better, when we have just established that Navi NORMALIZED to 7nm, is still MUCH LARGER than turing's small size....! Why shun from facts? I was scaling navi up to Turing's node size to prove a point of logic and to dismiss other's claims about superiority. And how things actually scale.

Likewise, if you scale turing down to navi die size, you have so much die space to play with, that you can easily make a bigger turing chip than 251mm^2. Seeing that AMD's other 7nm GPU (Mi60/Mi50) was 331mm^2.




Many People's argument here is (not this thread, but those within these forums) was that they don't understand how AMD can make a big-navi chip... they are confused about how it can be done and use mocking tones for giggles.

These same people can't refute math & logic, so they attack the messenger.
 
I literally laughed out loud. Thanks for that. The lack of self awareness is unreal.

You are not aware of who I am; I am. So laugh all you want... you are wrong!

There are post histories, my friend, with date stamps and timelines, etc. I can easily expose many "lifers" here with endless single-track (single-point) minds... so narrow & shallow they cannot be real humans.
 
You are not aware of who I am; I am. So laugh all you want... you are wrong!

There are post histories, my friend, with date stamps and timelines, etc. I can easily expose many "lifers" here with endless single-track (single-point) minds... so narrow & shallow they cannot be real humans.

Serious question: why do you sound like an AI chatbot?
 
Very hard to tell the difference between trolling and mental illness. Or is trolling a mental illness?

Oh it's definitely trolling and definitely intentional. Why else would he rave about the importance of math and facts while wilfully and consistently ignoring the cost of 7nm transistors when discussing big Navi? Or its already-high power consumption. The entire forum is being played for his entertainment.
 
Those are the facts. (y)

You can no longer keep saying that Turing is better, when we have just established that Turing TU-106, NORMALIZED to 7nm, is still MUCH LARGER than Navi 10's small size...! Why shun facts? I was scaling Turing down to RDNA's node size to prove a point of logic and to dismiss others' claims about superiority, and to show how things actually scale.

I know math isn't your strong suit (nor logic, nor self awareness, nor reality awareness, nor ...).

But TU-106 is 10.8 billion Transistors, Navi 10 is 10.3 Billion Transistors.

TU-106 on the same process as Navi would be (10.8/10.3)*251 = 263mm^2.

263 is NOT much larger than 251. It's a little larger. Only about 5% larger.

Also you have to consider that NVidia is including Ray Tracing HW in its die, which AMD will need to include in bigger Navi, and that probably uses more than 5% of the die; so in reality, once AMD includes Ray Tracing hardware, its minuscule advantage will turn into a minuscule disadvantage.

By the numbers, AMD has no advantage at all.

RDNA is a great step forward for AMD; it erases the deficit they were in with GCN chips, but it does not pass NVidia, it merely catches up to parity.
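For anyone who wants to check that arithmetic, here is the normalization as a few lines of Python (the transistor counts and Navi 10's die area are the figures quoted in the post above):

```python
# Scale TU-106's area by the transistor-count ratio, i.e. ask how big
# TU-106 would be if it packed transistors as densely as Navi 10 does.
TU106_TRANSISTORS = 10.8e9    # Turing TU-106 (TSMC 12nm)
NAVI10_TRANSISTORS = 10.3e9   # Navi 10 (TSMC 7nm)
NAVI10_AREA_MM2 = 251         # Navi 10 die area

tu106_at_7nm = (TU106_TRANSISTORS / NAVI10_TRANSISTORS) * NAVI10_AREA_MM2
size_delta_pct = 100 * (tu106_at_7nm / NAVI10_AREA_MM2 - 1)
print(round(tu106_at_7nm), round(size_delta_pct))  # 263 mm^2, ~5% larger
```

This assumes transistor density transfers one-for-one between the designs, which is the same assumption the post makes; different block mixes (SRAM vs. logic) would move the number somewhat.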
 
Oh it's definitely trolling and definitely intentional. Why else would he rave about the importance of math and facts while wilfully and consistently ignoring the cost of 7nm transistors when discussing big Navi? Or it's already high power consumption. The entire forum is being played for his entertainment.

I am willing to discuss any of that... have you tried to rebut me...?



Dr Lisa Su signed the 7nm wafer contract with TSMC over 4 years ago, at a cost estimated at over $1B. (A ballsy move, and she stands proud of it now.) That is the cost of moving to 7nm. After you are up and running (which AMD has been for a few years now), wafers are wafers... so there is no extra cost or premium on 7nm wafers. AMD has had over 20 7nm products already...

New process nodes usually have bad yields, making the initial move to a new node very costly. That is bad for the company, but not a big deal for consumers, because ROI (return on investment) is spread out over a time span, not recouped on the first day a chip gets etched.


It is now years later, and AMD & TSMC are doing really well with 7nm; they were on the forefront working with each other since the very beginning of their 7nm adventure. 7nm yields are better than expected and TSMC's 7nm node is maturing well, with TSMC transitioning to a new process soon.

A GPU's cost comes down to one thing: the more chips per wafer, the lower the cost... (thus size of chip = cost)




And AMD can beat up on Nvidia using economies of scale, like they are currently doing to Intel... by offering more for less.
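To put the "more chips per wafer = lower cost" point in rough numbers, here is a minimal sketch using the textbook Poisson yield model. The wafer prices, die candidate counts, and defect density below are placeholder assumptions for illustration only (none of these figures are public); they only show the mechanism — a bigger die gets fewer candidates per wafer AND a lower yield, so cost per good die climbs faster than area.

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield model: expected fraction of defect-free dies."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

def cost_per_good_die(wafer_cost: float, candidates: int,
                      die_area_mm2: float, d0_per_cm2: float = 0.1) -> float:
    """Wafer cost spread over the dies that actually work."""
    return wafer_cost / (candidates * poisson_yield(die_area_mm2, d0_per_cm2))

# Placeholder numbers, purely illustrative:
small_die = cost_per_good_die(10_000, 240, 251)  # hypothetical 7nm wafer price
large_die = cost_per_good_die(6_000, 130, 445)   # hypothetical 12nm wafer price
print(round(small_die, 2), round(large_die, 2))
```

Even with the 7nm wafer assumed to cost far more here, the smaller die can come out cheaper per good chip; change the placeholder prices and defect density and the conclusion can flip, which is exactly why this stays a sketch.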
 
A GPU's cost comes down to one thing: the more chips per wafer, the lower the cost... (thus size of chip = cost)

So you’re just ignoring the much higher cost of a 7nm wafer vs 12/14/16nm? That’s convenient.

And AMD can beat up on Nvidia using economies of scale, like they are currently doing to Intel... by offering more for less.

You really should try to understand concepts before using them in an argument. How can AMD beat up on Intel using economies of scale when they have a much smaller share of the market? Do you know how economies of scale work?
 
Those are the facts. (y)

-snip-

We haven't established anything, except that you try to pass your best wishes, desires, rumors and speculations off as "facts".
 
I know math isn't your strong suit (nor logic, nor self awareness, nor reality awareness, nor ...).

But TU-106 is 10.8 billion Transistors, Navi 10 is 10.3 Billion Transistors.

TU-106 on the same process as Navi would be (10.8/10.3)*251 = 263mm^2.

263 is NOT much larger than 251. It's a little larger. Only about 5% larger.

-snip-

:unsure:
No my friend, we are talking about what happens when Nvidia transitions from 12nm to 7nm.

And theoretically, if Nvidia's Turing were already at the 7nm node instead of 12nm... for illustrative purposes, to show how big it would be. It seems you do not understand area scaling; that is why you have no idea of the simple math involved.



Look at this die shot of a 14nm-to-7nm shrink:
Important to notice that even though this is 14nm to 7nm, the die is not half the size. See?

dice[1].png




Now with that^ understanding: Nvidia isn't at 14nm, they are at 12nm, so their shrink to 7nm EUV will not be as big as the one you see here. Thus, going to 7nm won't shrink their dies that much.
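The "node names overstate the shrink" point can be checked with the two actual chips in this thread instead of a die shot. TU-106's 445mm^2 die area is my addition here (only Navi 10's 251mm^2 appears above); the transistor counts are the ones quoted earlier. The achieved 12nm-to-7nm density gain works out to roughly 1.7x, a linear shrink of about 0.77 per side — nowhere near the 7/12 ≈ 0.58 the node names would suggest:

```python
import math

# Achieved transistor density of the two real chips under discussion.
tu106_density = 10.8e9 / 445    # TU-106, TSMC 12nm  (~24.3 MTr/mm^2)
navi10_density = 10.3e9 / 251   # Navi 10, TSMC 7nm  (~41.0 MTr/mm^2)

density_gain = navi10_density / tu106_density   # area shrinks by 1/this
linear_shrink = math.sqrt(1 / density_gain)     # per-side shrink factor
print(round(density_gain, 2), round(linear_shrink, 2))  # 1.69, 0.77
```

Marketing density figures for the bare processes are higher on both sides; using shipping GPUs keeps the comparison apples-to-apples.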
 
So you’re just ignoring the much higher cost of a 7nm wafer vs 12/14/16nm? That’s convenient.
You really should try to understand concepts before using them in an argument. How can AMD beat up on Intel using economies of scale when they have a much smaller share of the market? Do you know how economies of scale work?

Ignoring?

I just explained that you do not know wtf you are talking about: the cost is already baked into the contract she signed, not into the GPU you buy, and there is no extra 7nm wafer cost.

You can try to refute me, but all your research will lead back to the facts I already presented. If AMD gets 300 GPUs per wafer and Nvidia (because their GPUs are much bigger) gets only 134 GPUs per wafer, and wafers cost the same, which company has more product to sell..?


That is how economies of scale work. Did any of you attend college?
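The 300-vs-134 figures can be sanity-checked with the standard first-order dies-per-wafer approximation (a 300mm wafer is assumed; TU-106's 445mm^2 die area is my assumption, since only Navi 10's 251mm^2 is given above; yield is ignored):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic estimate: wafer area over die area, minus an edge-loss
    term for the partial dies wasted around the wafer's rim."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

print(dies_per_wafer(300, 251))  # Navi 10-sized die -> 239 candidates
print(dies_per_wafer(300, 445))  # TU-106-sized die  -> 127 candidates
```

So the exact 300/134 numbers in the post look optimistic, but the roughly 2:1 ratio in favor of the smaller die holds up.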
 
Last edited:
I am willing to discuss any of that... have you tried to rebut me...?

-snip-

A GPU's cost comes down to one thing: the more chips per wafer, the lower the cost... (thus size of chip = cost)

And AMD can beat up on Nvidia using economies of scale, like they are currently doing to Intel... by offering more for less.

We don't know how much Navi costs to produce, nor Turing (ok, surely you do). We do know a 7nm wafer is almost twice the cost of 14nm. Does that make Navi significantly cheaper than Turing? It's anyone's guess (ok, your guess).
 