HWUB/Techspot adds more games, 5700XT falls further behind 2070 Super.

Ignoring?

I just explained that you do not know wtf you are talking about: the cost is already baked into the contract she signed, not into the GPU you buy. And there is no 7nm wafer cost.

You can try to refute me, but all your research will lead back to the facts I already presented. If AMD gets 300 GPUs per wafer and Nvidia only gets 134 GPUs per wafer... and the wafers cost the same... who has more GPUs to sell?

That is how economy of scale works. Did any of you attend college?

Amazing. You heard it here first, folks. There is no 7nm wafer cost.
 
:unsure:
No, my friend, we are talking about what happens when Nvidia transitions from 12nm to 7nm.

And theoretically, if Nvidia's Turing were already at a 7nm node size instead of a 12nm node... this is for illustrative purposes, to show how big it would be. It seems you do not understand area scaling, which is why you have no idea of the simple math involved.



Look at this die shot of a 14nm to 7nm shrink:
It is important to notice that even though this is 14nm to 7nm, the die is not half the size. See?

View attachment 184004



Now with that^ understanding: Nvidia isn't at 14nm, they are at 12nm. So their shrink to 7nm EUV is not going to be as big of a shrink as you see here. Thus, going to 7nm won't shrink their dies all that much.
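To make the area-scaling point concrete, here is a minimal back-of-envelope sketch in Python. The only real number in it is TU106's published ~445 mm^2 die size on 12nm; the scaling factors are illustrative assumptions (node names are marketing labels, and SRAM/analog/I/O shrink far less than logic), not foundry figures.

```python
# Back-of-envelope area-scaling sketch. The scaling factors below are
# illustrative assumptions, NOT published foundry numbers: node names
# ("12nm", "7nm") are marketing labels, and real shrink depends on the
# design mix (SRAM, analog and I/O shrink far less than logic).

TU106_AREA_MM2 = 445.0  # published TU106 die size on TSMC 12nm

# Assumed fraction of the 12nm area that would remain after a port to 7nm.
# 0.5 would be a "perfect" halving; real-world ports land well above that.
assumed_scaling = {
    "optimistic (mostly logic)": 0.55,
    "middle of the road": 0.65,
    "pessimistic (lots of SRAM/IO)": 0.75,
}

for label, factor in assumed_scaling.items():
    print(f"{label:30s}: ~{TU106_AREA_MM2 * factor:.0f} mm^2")
```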



So that bunch of random nonsense has convinced you that Nvidia transistors will magically be bigger than AMD transistors on the same process?

:rolleyes:

I don't have words...

Do we really need this nonsense disrupting the community?
 
Ignoring?

I just explained that you do not know wtf you are talking about: the cost is already baked into the contract she signed, not into the GPU you buy. And there is no 7nm wafer cost.

You can try to refute me, but all your research will lead back to the facts I already presented. If AMD gets 300 GPUs per wafer and Nvidia (because their GPUs are much bigger) gets only 134 GPUs per wafer, and the wafers cost the same, which company has more product to sell?


That is how economy of scale works. Did any of you attend college?

That’s not actually economy of scale....
 
We don't know (OK, surely you do) how much Navi costs to produce, nor Turing. We do know 7nm is almost twice the cost of 14nm. Does that make Navi significantly cheaper than Turing? It's anyone's guess (OK, your guess).

It was YOUR problem.
The rest of the world already knows the relative costs of moving to a new node... it's been done throughout semiconductor history. But now that you do know... you can't pretend not to understand anymore.
 
So that bunch of random nonsense has convinced you that Nvidia transistors will magically be bigger than AMD transistors on the same process?
I don't have words...
Do we really need this nonsense disrupting the community?

It seems math is magic to you?

You are only putting words into your own mouth, because nobody said any such thing. It is really amazing that I am giving 5th-grade lessons on PERCENTAGES.
 
It seems math is magic to you?

You are only putting words into your own mouth, because nobody said any such thing. It is really amazing that I am giving 5th-grade lessons on PERCENTAGES.

Please explain your "logic" on how Nvidia's ~10 billion transistor part (TU106) would be significantly larger than AMD's ~10 billion transistor part on the same 7nm process.

Equal transistor counts on the same process mean equal size.

A couple of pictures of AMD dies doesn't change simple math facts.
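For what it's worth, here is a minimal sketch of the "equal transistors on the same process ≈ equal area" argument, using the commonly quoted Navi 10 figures (10.3B transistors, 251 mm^2 on TSMC N7) and TU106's quoted ~10.8B transistors. Treating Navi 10's achieved density as a stand-in for a hypothetical 7nm TU106 is an assumption; real density varies with cache ratio, clock targets, and I/O.

```python
# Minimal sketch of the "equal transistors on the same process ~ equal area"
# argument. Treating Navi 10's achieved density as a stand-in for a
# hypothetical 7nm TU106 is an assumption: real density varies with the
# cache ratio, clock targets, and I/O of each design.

NAVI10_TRANSISTORS = 10.3e9   # commonly quoted figure
NAVI10_AREA_MM2 = 251.0       # commonly quoted die size (TSMC N7)
TU106_TRANSISTORS = 10.8e9    # commonly quoted figure (today a TSMC 12nm part)

navi10_density = NAVI10_TRANSISTORS / NAVI10_AREA_MM2  # transistors per mm^2
hypothetical_tu106_7nm_area = TU106_TRANSISTORS / navi10_density

print(f"Navi 10 density:              {navi10_density / 1e6:.1f} MTr/mm^2")
print(f"Hypothetical 7nm TU106 area:  ~{hypothetical_tu106_7nm_area:.0f} mm^2")
```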
 
I have an idea... why don't you guys try to refute me?

Go ahead, do some research and find out how much die space a 7nm Nvidia Turing GPU would need. Try to combat/debate/argue my facts with your own counter-facts, instead of attacking the messenger.

Because I laid out a whole bunch of facts you are not trying to refute, so it must mean I am winning the debate... and the cheerleaders are left name-calling.
 
Are you asking this for real? Or do you not understand that it takes more transistors for Nvidia than for AMD? That was the point of showing you (Someone) that Navi10 is smaller than Vega20 (Radeon VII), has fewer transistors and 84mm^2 less die space... yet Navi10 does more in games. (How is that possible?)

dERP!

I was just illustrating to you how Turing suffers from what people call transistor bloat, with millions of transistors not meant for gaming... do you understand now?

Why are you using the Radeon VII’s full die size when it’s a cut-down card?
 
Apparently not.

Here is what is known as wafer yield.
Imagine the die on the left as the big TU102 and the one on the right as Navi10. (Obviously those aren't exact, but close enough that you will understand how the size of the GPU matters.)

Because AMD pays per wafer... not per chip. And so does Nvidia.


[Image: wafer die yield model, 10/20/40 mm dies]

The yellow dies are bad chips; the grey ones are good chips.

Do you now understand how economy of scale comes into play here? AMD can mass-produce their Navi10 & Navi20 chips at a scale that would be mass market, not niche.
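As a rough sketch of why die size drives good-die count (and therefore cost per chip) on a fixed-price wafer, here is the standard first-order dies-per-wafer approximation plus a simple Poisson yield model. The die sizes are the published Navi10 (~251 mm^2) and TU102 (~754 mm^2) figures; the $10,000 wafer price and 0.2 defects/cm^2 are made-up illustrative numbers, since real 7nm wafer pricing and yields are not public.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """First-order approximation; ignores scribe lines and edge exclusion."""
    radius = wafer_diameter_mm / 2.0
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_cm2):
    """Simple Poisson yield model; the defect density is an assumption."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

WAFER_COST = 10_000.0  # assumed, illustrative only; real 7nm pricing is under NDA
D0 = 0.2               # assumed defects per cm^2, illustrative only

for name, area in [("small die (Navi10, ~251 mm^2)", 251.0),
                   ("big die (TU102, ~754 mm^2)", 754.0)]:
    gross = dies_per_wafer(area)
    good = gross * yield_fraction(area, D0)
    print(f"{name}: ~{gross:.0f} candidates, ~{good:.0f} good, "
          f"~${WAFER_COST / good:.0f} per good die")
```

With these assumed numbers the small die ends up costing several times less per good chip, which is the shape of the argument here, even if the exact figures are guesses.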
 
Here is what is known as wafer yield. Imagine the die on the left as the big TU102 and the one on the right as Navi10. (Obviously those aren't exact, but close enough that you will understand how the size of the GPU matters.)

Because AMD pays per wafer... not per chip. And so does Nvidia.


View attachment 184011


Do you now understand how economy of scale comes into play here? AMD can mass-produce their Navi10 & Navi20 chips at a scale that would be mass market, not niche.

That’s not what economy of scale means. That’s just yield.
 
Here is what is known as wafer yield.
Imagine the die on the left as the big TU102 and the one on the right as Navi10. (Obviously those aren't exact, but close enough that you will understand how the size of the GPU matters.)

Because AMD pays per wafer... not per chip. And so does Nvidia.


View attachment 184011


Do you now understand how economy of scale comes into play here? AMD can mass-produce their Navi10 & Navi20 chips at a scale that would be mass market, not niche.

Sorry, I guess your sarcasm detector is broken. As others have told you already, you don’t understand what economies of scale means. And it has nothing to do with chips per (expensive 7nm) wafer.

It’s a very intriguing combination of excessive arrogance and bewildering ignorance.
 
The prices of GPUs these days are fucking depressing.


With the dGPU market shrinking each quarter, these companies have to maintain their high margins to satisfy investors, so prices will probably continue to creep up. Now with 3 players crowding the space, someone will inevitably get squeezed out, and I'm guessing that will be AMD, as they don't have the money to compete with either NVIDIA or Intel. In the future (3+ years from now), I can see the dGPU market looking something like this: NVIDIA (55%), INTEL (40%), AMD (5%).

Techspot has a really nice article about Intel's upcoming GPU and how they can scale even the existing Gen 11 architecture up to compete at 2080 Ti specs: https://www.techspot.com/article/1898-intel-xe-graphics-preview/

If Intel uses EMIB, they will already have a leg up on both NVIDIA and AMD:

"In April Intel confirmed to Anandtech that they intended to use EMIB to support their GPUs soon, so that is something to look forward to."

"May 1, 2019: Jim Jeffers, senior principal engineer and director of the rendering and visualization team, announces Xe's ray tracing capabilities at FMX19. In addition, Intel has continued to hire talent away from the competition."


From Anandtech: https://www.anandtech.com/show/14211/intels-interconnected-future-chipslets-emib-foveros

"Xe will range from integrated graphics all the way up to enterprise compute acceleration, covering through the consumer graphics and gaming markets as well.



Intel stated at the time that the Xe range will be built on two different architectures, one of which is called Arctic Sound, and the other has not yet been made public. The goal is to create a platform for Xe relating the hardware, the software, the drivers, the platform, and the APIs all into a single mission, which Intel calls 'The Odyssey'. Introducing EMIB and Foveros technologies as part of the Xe strategy seems to be very much part of Intel's plan, and it will be interesting to see how it develops."


If anyone should be shitting their pants, it's Dr. Lisa Su and her very late and unimpressive Chinese-designed RDNA. If we want to talk about economy of scale, Intel has their own fabs, and we know their in-house 10nm will be used for CPUs, FPGAs, GPUs, etc., so they will be able to pump out higher volume and lower prices than both NVIDIA and AMD while maintaining higher margins than AMD. We also know NVIDIA will have Ampere ready in 2020, which at 7nm will likely surpass the current 2080 Ti easily by at least 30% if not more, so AMD will be in Intel's crosshairs if Intel decides to push out anything remotely like what's speculated above.

If the 5800/5900 do not have ray tracing and can't match/exceed the 2080 Ti in performance, AMD will be dead in the water in 2020. Intel will have a dGPU for AIBs, and they'll stick Xe in every desktop/notebook they can, and where will that leave AMD? Dead. Hell, even NVIDIA is in trouble in the notebook market because Intel will be selling a full ecosystem to manufacturers. RDNA isn't even a factor; it's an architecture that should've been released in 2016, and at 7nm it isn't impressive one bit.
 
IIRC the move from 14nm to 7nm gave the Radeon VII about 25% more performance at the same power.

So let's assume just a die shrink of Turing would give it a 20 to 30% performance increase (taking into account that Turing is already very efficient on 12nm).

That alone makes me pretty excited about Ampere, since it's designed for 7nm. I think Nvidia could at least repeat (more likely substantially exceed) the 30+% increase that the 2080 Ti has over the 1080 Ti.
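Purely as a what-if, here is how those assumed gains compound if you stack a node-shrink uplift on top of an architectural uplift. Every percentage below is a guess pulled from this discussion, not a measurement or anything Nvidia has claimed.

```python
# What-if: stacking an assumed node-shrink uplift on an assumed architecture
# uplift. Every percentage here is a guess from this thread, not a measurement
# or anything Nvidia has claimed.

assumed_node_gains = (0.20, 0.30)  # from the 12nm -> 7nm shrink alone
assumed_arch_gains = (0.10, 0.25)  # extra uplift from a new architecture

for node in assumed_node_gains:
    for arch in assumed_arch_gains:
        total = (1 + node) * (1 + arch) - 1
        print(f"node +{node:.0%}, arch +{arch:.0%} -> ~+{total:.0%} total")
```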
 
IIRC the move from 14nm to 7nm gave the Radeon VII about 25% more performance at the same power.

So let's assume just a die shrink of Turing would give it a 20 to 30% performance increase (taking into account that Turing is already very efficient on 12nm).

That alone makes me pretty excited about Ampere, since it's designed for 7nm. I think Nvidia could at least repeat (more likely substantially exceed) the 30+% increase that the 2080 Ti has over the 1080 Ti.

I don't even think Ampere will be a die shrink of Turing, but something new and more efficient. So if you take into account a new, more efficient architecture + 7nm, I can see it hitting 40%+ over the 2080 Ti at the top end. AMD is completely fucked in 2020, and I'm going to laugh when the viral cheerleader on this forum quietly disappears around that time. What I find even more hilarious reading over this thread is that a certain someone seems to think TSMC has some special relationship with AMD for 7nm. If anything, AMD HAD to go to 7nm just to compete with Intel and NVIDIA, who are using older and cheaper processes, and now that TSMC has refined 7nm, NVIDIA can take advantage of it for their GPUs. Intel will be coming with an in-house 10nm which should be as good as or better than TSMC's 7nm, so that leaves AMD as a huge target.
 
IIRC the move from 14nm to 7nm gave the Radeon VII about 25% more performance at the same power.

So let's assume just a die shrink of Turing would give it a 20 to 30% performance increase (taking into account that Turing is already very efficient on 12nm).

That alone makes me pretty excited about Ampere, since it's designed for 7nm. I think Nvidia could at least repeat (more likely substantially exceed) the 30+% increase that the 2080 Ti has over the 1080 Ti.

I would argue the majority of the performance gain in the VII came from increased memory bandwidth, not the 7nm shrink. Vega also clocked well and could hit 1700+ MHz, but performance did not scale with core clock; it scaled with bandwidth. The VII also has 4 fewer CUs than Vega 64, further evidence that bandwidth was the primary contributor to the VII's performance increase.

I don't even think Ampere will be a die shrink of Turing, but something new and more efficient. So if you take into account a new, more efficient architecture + 7nm, I can see it hitting 40%+ over the 2080 Ti at the top end. AMD is completely fucked in 2020, and I'm going to laugh when the viral cheerleader on this forum quietly disappears around that time. What I find even more hilarious reading over this thread is that a certain someone seems to think TSMC has some special relationship with AMD for 7nm. If anything, AMD HAD to go to 7nm just to compete with Intel and NVIDIA, who are using older and cheaper processes, and now that TSMC has refined 7nm, NVIDIA can take advantage of it for their GPUs. Intel will be coming with an in-house 10nm which should be as good as or better than TSMC's 7nm, so that leaves AMD as a huge target.

You're guessing Ampere will be a huge step over Turing.

TSMC will likely have 7nm EUV by the time Intel actually has functional 10nm.

I will also argue that TSMC producing AMD's chips is a huge reason why they are currently competitive in the manner they are; GlobalFoundries was trash.

At the risk of sounding like gamerx (shoot me please), Navi likely has a lot of room left on the table with an increase in memory bandwidth/CUs. I think it would be silly to assume that Navi can't hit 2080 Ti performance. A further refined Navi will compete with Ampere/Intel stuff fine, IMHO.
 
I don't even think Ampere will be a die shrink of Turing, but something new and more efficient. So if you take into account a new, more efficient architecture + 7nm, I can see it hitting 40%+ over the 2080 Ti at the top end. AMD is completely fucked in 2020, and I'm going to laugh when the viral cheerleader on this forum quietly disappears around that time. What I find even more hilarious reading over this thread is that a certain someone seems to think TSMC has some special relationship with AMD for 7nm. If anything, AMD HAD to go to 7nm just to compete with Intel and NVIDIA, who are using older and cheaper processes, and now that TSMC has refined 7nm, NVIDIA can take advantage of it for their GPUs. Intel will be coming with an in-house 10nm which should be as good as or better than TSMC's 7nm, so that leaves AMD as a huge target.

I don't think we should count any chickens before they hatch, not from Intel/Nvidia, nor AMD.

Ampere could mainly be a process bump, just like Pascal was. Intel is a complete unknown until they deliver something. Big Navi has unknown parameters and an unknown date, and will need RT HW, so it will pay some form of RT tax.
 
I don't think we should count any chickens before they hatch, not from Intel/Nvidia, nor AMD.

Ampere could mainly be a process bump, just like Pascal was. Intel is a complete unknown until they deliver something. Big Navi has unknown parameters and an unknown date, and will need RT HW, so it will pay some form of RT tax.

Nvidia would LOVE for Ampere to get the performance gains Pascal got over Maxwell.
 
I don't think we should count any chickens before they hatch, not from Intel/Nvidia, nor AMD.

Ampere could mainly be a process bump, just like Pascal was. Intel is a complete unknown until they deliver something. Big Navi has unknown parameters and an unknown date, and will need RT HW, so it will pay some form of RT tax.

I don't think the 5800/5900 will have RT, and that will leave AMD at a large disadvantage. Furthermore, if they only hit 2080 Ti performance in 2020 with Ampere out, they will be behind a generation again and will need to compete with low prices and margins. I think Ampere will be a sizeable jump over Turing given it will be at 7nm + a likely refined/new architecture. We got a 30-40% gain over the 1080 Ti with the 2080 Ti, so I don't think this is a stretch at all. With regards to Intel, they are coming out with some really impressive technologies in 2020, so I see them being competitive right out of the gate. They're still keeping a lot of information secret, and what they've let out so far with Foveros and EMIB seems very promising, not to mention Xe will have RT, so AMD is absolutely required to have it in 2020.

I would argue the majority of the performance gain in the VII came from increased memory bandwidth, not the 7nm shrink. Vega also clocked well and could hit 1700+ MHz, but performance did not scale with core clock; it scaled with bandwidth. The VII also has 4 fewer CUs than Vega 64, further evidence that bandwidth was the primary contributor to the VII's performance increase.

You're guessing Ampere will be a huge step over Turing.

TSMC will likely have 7nm EUV by the time Intel actually has functional 10nm.

I will also argue that TSMC producing AMD's chips is a huge reason why they are currently competitive in the manner they are; GlobalFoundries was trash.

At the risk of sounding like gamerx (shoot me please), Navi likely has a lot of room left on the table with an increase in memory bandwidth/CUs. I think it would be silly to assume that Navi can't hit 2080 Ti performance. A further refined Navi will compete with Ampere/Intel stuff fine, IMHO.

Yeah, as I said above, I think Ampere will be a pretty large jump over Turing. With Intel's first-gen Xe, even if they don't match NVIDIA's top end, they will certainly compete with AMD in the midrange notebook/desktop market, and given their marketing $$ + in-house fabs, they will be super competitive, which is why I say 2020+ is going to be a dark time for AMD.

If I seem like I'm cheerleading Intel, it's because I'm genuinely excited about what they will be doing in the market from late 2020 on. They will be shaking up the CPU/GPU market IMO, and if they deliver with Foveros/EMIB tech for consumer GPUs in the future (whether it's first-gen Xe or later), it could rattle both AMD and NVIDIA if they don't have their own MCMs ready. We know Intel has been siphoning engineering talent from both AMD and NVIDIA, so I can't wait to see what they bring us in these next few years--I predict the GPU market will look completely different than it does today. I'll certainly be buying Intel stock pretty soon with the bet they will see a $10+ jump in 2020-2021.
 
Irrelevant drivel, conjecture and speculation.

No one gives a shit about die size. As long as a GPU comes to us at a reasonable price and we can deal with the heat and power consumption, no one cares. What we do care about is price and performance. AMD is way behind on performance and only moves units based on price at certain price points. I don't see that changing any time soon, but go ahead and keep living in your fantasy world.

Even I've grown tired of this thread and I'll normally go round and round with people like you for hours on end for my own entertainment.
 
I don't think we should count any chickens before they hatch, not from Intel/Nvidia, nor AMD.

Ampere could mainly be a process bump, just like Pascal was. Intel is a complete unknown until they deliver something. Big Navi has unknown parameters and an unknown date, and will need RT HW, so it will pay some form of RT tax.

Yeah, Nvidia may have something up their sleeve, but we have to wait and see. RDNA addressed a lot of GCN's shortcomings, and now perf/flop is in the same ballpark as Turing.

If AMD manages to get power consumption under control it could be a very competitive generation. Of course there’s still the wildcard of what AMD will do with DXR. Fun times ahead.
 
Just some push-back reaction to certain over-the-top AMD fanboys around here.



A couple of months of Game Pass is negligible. You get 3 months for $5 right now, so the monetary value is $5. Control alone is brand new and selling for $60. I never mentioned Wolfenstein: Youngblood, because it looks kind of lame. But Control seems like a really good game.




Sure. I just think the 2060S is an overlooked comparison to the 5700 XT. For the same price, you get similar performance, a richer feature set, and at least one good current pack-in game, and ultimately I am a value buyer. The 2060S looks like more for the money to me.

You are not push-back, you are at the forefront. :) No big deal; your claims will change nothing, since AMD is now doing better than Nvidia.
 
I wish I was, after reading this thread.

LOL! Yeah, AMD has a solid product with a solid future built on that architecture and good pricing, and it makes a difference. The fact is, I have an RX 5700 and it is working great as a 1440p card on my FreeSync 1440p 144Hz monitor. I also saved at least $250 over what it would have cost for a 2070S, and I am better off for it.
 
LOL! Yeah, AMD has a solid product with a solid future built on that architecture and good pricing, and it makes a difference. The fact is, I have an RX 5700 and it is working great as a 1440p card on my FreeSync 1440p 144Hz monitor. I also saved at least $250 over what it would have cost for a 2070S, and I am better off for it.

No one said the 5700 isn’t a good product; it’s just not the second coming of Jesus.

Did you... read the thread?
 
No one said the 5700 isn’t a good product; it’s just not the second coming of Jesus.

Did you... read the thread?

No need to read the thread, and I did not say it was the second coming either. However, the OP insinuates that the 5700 is not a good product, but hey... $250 saved in my wallet and a card that is not far off from a 2070S either.
 
No need to read the thread, and I did not say it was the second coming either. However, the OP insinuates that the 5700 is not a good product, but hey... $250 saved in my wallet and a card that is not far off from a 2070S either.

It seems a tad disingenuous to compare the 5700 (not XT) pricing to the 2070S, when even the 2060S outperforms it, and is obviously the more appropriate comparison.

What other questionable details are you hiding to manufacture that $250 price difference?
 
It seems a tad disingenuous to compare the 5700 (not XT) pricing to the 2070S, when even the 2060S outperforms it, and is obviously the more appropriate comparison.

What other questionable details are you hiding to manufacture that $250 price difference?

It is not disingenuous at all, and there is no manufactured price difference. I paid $250 or so less than a 2070S would usually cost, and it is not far off from the 2070S, which is the subject of this thread. Just because you do not like my assessment does not mean I am wrong with said assessment.
 
It is not disingenuous at all, and there is no manufactured price difference. I paid $250 or so less than a 2070S would usually cost, and it is not far off from the 2070S, which is the subject of this thread. Just because you do not like my assessment does not mean I am wrong with said assessment.

The 5700 is about 25% behind a 2070S. That’s like me comparing a 5700 to a 1660, saying the performance isn’t far off and you can save $100-150.
 