HWUB/Techspot adds more games, 5700XT falls further behind 2070 Super.

It is not disingenuous at all, and there is no manufactured price difference. I paid $250 or so less than a 2070S would usually cost, and it is not far off from the 2070S, which is the subject of this thread. Just because you do not like my assessment does not mean the assessment is wrong.

The only reason you wouldn't compare it to the card closest in performance is that you are purposefully trying to mislead.

I guess you somehow got the 5700 for $250, which is a fantastic deal. At $250, I would certainly recommend the 5700 over the 1660 Ti, which is the only Nvidia card in that price range, though I doubt many could replicate that pricing.
 
The only reason you wouldn't compare it to the card closest in performance is that you are purposefully trying to mislead.

I guess you somehow got the 5700 for $250, which is a fantastic deal. At $250, I would certainly recommend the 5700 over the 1660 Ti, which is the only Nvidia card in that price range, though I doubt many could replicate that pricing.

No, I said $250 less than a 2070S; try to keep up. :)
 
A 2070S is $500. You claimed to get a 5700, a much slower card, for $250 less. That would mean you supposedly paid $250. Basic math. Do try to keep up!

The 2070S is typically around $530, and I paid $284 for my barely different 5700, so you do the math. Oh, and I took a look at the Guru3D review and, as I thought, there is very little difference, especially at the 1440p resolution I play at. Keep trying, though, and nice play. At most, the difference between the 5700 and the 2070S was about 7 fps on average at 1440p, which is what I game at. Therefore, it is not a much slower card. I did not claim anything; I stated a fact, Jack.
 
The 2070S is typically around $530, and I paid $284 for my barely different 5700, so you do the math. Oh, and I took a look at the Guru3D review and, as I thought, there is very little difference, especially at the 1440p resolution I play at. Keep trying, though, and nice play. At most, the difference between the 5700 and the 2070S was about 7 fps on average at 1440p, which is what I game at. Therefore, it is not a much slower card. I did not claim anything; I stated a fact, Jack.
Cool and all, but the 2070S is $500. Nice try spinning. P.S. The 1% frame times are awful on your 5700 too.
 
The 2070S is typically around $530, and I paid $284 for my barely different 5700, so you do the math. Oh, and I took a look at the Guru3D review and, as I thought, there is very little difference, especially at the 1440p resolution I play at. Keep trying, though, and nice play. At most, the difference between the 5700 and the 2070S was about 7 fps on average at 1440p, which is what I game at. Therefore, it is not a much slower card. I did not claim anything; I stated a fact, Jack.

Even $284 is a great deal for the 5700.

You are comparing a special deal you got (open box?) against the average price, and not even for the actual competing product, but for one tier up from it...

Have you been taking lessons from GamerX on how to make tilted comparisons?
 
Who cares about an average when you can just choose to get a $499 reference card, which has a nice cooler on Nvidia's side nowadays? Your one-off special doesn't make any sense to debate, since it isn't replicable. And the right product to compare to is the 2060S, not the 2070S.

Then why are you debating it, when this thread is about the 2070S? But hey......LOL! :D $250 pocketed; I call that a plus in my book. :) Oh, and the 5700 is typically faster than the 2060S, so there is that, if you really need to go there. I care, and most other folks do as well.
 
Then why are you debating it, when this thread is about the 2070S? But hey......LOL! :D $250 pocketed; I call that a plus in my book. :) Oh, and the 5700 is typically faster than the 2060S, so there is that, if you really need to go there. I care, and most other folks do as well.
Wrong. You're doing a great GamerX imitation though!
 
They go to the same school.

So, I see an AMD postulation thread has turned into finger-pointing from the envy team. Oh well, at least I paid for my hardware with my own hard-earned money. Oh, and my 5700 reference model is only a few frames per second off from a 2070S for a lot less money.
 
So, I see an AMD postulation thread has turned into finger-pointing from the envy team. Oh well, at least I paid for my hardware with my own hard-earned money. Oh, and my 5700 reference model is only a few frames per second off from a 2070S for a lot less money.

It's even fewer frames per second away from a 2060S, which is the fair comparison to make.

Your unwillingness to make the fair comparison, comparing your open-box deal to average prices no less, is obviously a tilted, biased comparison.

Tilted comparisons don't make your case; they just make you look biased.
 
It's even fewer frames per second away from a 2060S, which is the fair comparison to make.

Your unwillingness to make the fair comparison, comparing your open-box deal to average prices no less, is obviously a tilted, biased comparison.

Tilted comparisons don't make your case; they just make you look biased.

Actually, it is a few frames per second ahead of the 2060S, but then again, you might want to look at the video you posted in the original link; it says 2070S.
 
Actually, it is a few frames per second ahead of the 2060S, but then again, you might want to look at the video you posted in the original link; it says 2070S.

Sure it's ahead if you cherry-pick the games. Averaged across many games, the 5700 is behind the 2060S.
https://tpucdn.com/review/amd-radeon-rx-5700/images/relative-performance_2560-1440.png

5700 100%
2060S 108% (8% faster)
2070S 127% (27% faster)
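
To make those percentages concrete, here is a trivial sketch converting them into frame rates. The 60 fps baseline is an arbitrary assumption; only the ratios come from the TPU chart above:

```python
# Convert relative-performance percentages into frame rates.
# The 60 fps baseline is assumed for illustration; the ratios
# are the ones from the chart linked above.

baseline_fps = 60.0
relative = {"5700": 1.00, "2060S": 1.08, "2070S": 1.27}

for card, ratio in relative.items():
    print(f"{card}: {baseline_fps * ratio:.0f} fps")

# 5700: 60 fps, 2060S: 65 fps, 2070S: 76 fps
```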


Sure, the video says 2070S. It also says 5700 XT. If you step down one tier on one side, it makes sense to do the same on the other.
 
Actually, it is a few frames per second ahead of the 2060S, but then again, you might want to look at the video you posted in the original link; it says 2070S.

The 5700 is great for the money, especially the aftermarket ones. It's not faster than the 2060S, though. For a few dollars less you get a few fps less.

Cheaper cards are usually better value than more expensive ones, so no surprise there. Enjoy your new card.
 
The 5700 is great for the money, especially the aftermarket ones. It's not faster than the 2060S, though. For a few dollars less you get a few fps less.

Cheaper cards are usually better value than more expensive ones, so no surprise there. Enjoy your new card.

I checked; the games I play are faster than on a 2060S and only a few frames per second off from a 2070S. Then factor in the cost and a really good FreeSync monitor, and it is golden.
 
I checked; the games I play are faster than on a 2060S and only a few frames per second off from a 2070S. Then factor in the cost and a really good FreeSync monitor, and it is golden.

No one is saying you made the wrong choice.

Sticking strictly to price points, even the regular-priced $350 5700 wins its price range for me. Obviously, if you get it for less than that, I consider it an even bigger win.

I just don't consider it realistic competition for the 2070S.
 
That’s not what economy of scale means. That’s just yield.

FFS I know this.

I was using that "wafer yield illustration" to illuminate how GPU dies are priced, and exactly how GPU die size is factored into pricing, and thus how AMD can use the advantage of 7nm and smaller chips to saturate the market and tilt the economic scale in their favor. Exactly like they are doing to Intel: overwhelming them with tons and tons of small, inexpensive chips.

And yes, more chips per wafer is the exact definition of economy of scale: a proportionate saving in costs gained by an increased level of production.



No one gives a shit about die size. As long as a GPU comes to us at a reasonable price and we can deal with the heat and power consumption, no one cares. What we do care about is price and performance. AMD is way behind on performance and only moves units based on cost at certain price points. I don't see that changing any time soon, but go ahead and keep living in your fantasy world.

Even I've grown tired of this thread, and I normally go round and round with people like you for hours on end for my own entertainment.


I never said anyone cared about chip size.

I had to explain how die size determines cost and educate a bunch of people who seemed not to know (but they do now), so we are all the better for it and do not have to have that discussion anymore. RDNA is more efficient at games than GCN and Turing (it takes less and does more with it). Navi 10 was never meant to compete with the RTX 2070 Super; it was a Polaris replacement, but after tape-out, RDNA was so good that AMD saw it punching above its intended weight, past the Vega 64 and AMD's own Radeon VII. It is no wonder AMD said RDNA is the future of gaming, because GCN in games is horrible compared to RDNA.


So the question is: is the Super worth $100 more?
Or is it worth holding on to that extra $100 until Black Friday and getting a 5800 series?
 
And yes, more chips per wafer is the exact definition of economy of scale

No. You really should stop repeating incorrect claims about things you don't understand. Chips per wafer affects the marginal cost of production. It has nothing to do with the fixed costs that are amortized by economies of scale.

In addition, the marginal cost of production at 7nm is not currently an advantage, given the high cost of 7nm wafers. All of this has already been repeated to you ad nauseam, so feel free to continue your crusade of misinformation.
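
To make that distinction concrete, here is a toy cost model. All the figures (fixed spend, per-chip cost) are illustrative assumptions, not actual AMD or TSMC numbers; the point is only that chips-per-wafer moves the marginal term, while economies of scale come from the fixed term shrinking with volume:

```python
# Toy model: economies of scale = fixed costs amortized over volume.
# Chips-per-wafer only changes MARGINAL_COST below.
# All figures are illustrative assumptions.

FIXED_COSTS = 300e6    # assumed design/mask/tape-out spend, $
MARGINAL_COST = 60.0   # assumed per-chip wafer + test + packaging cost, $

def unit_cost(volume):
    """Average cost per chip at a given lifetime production volume."""
    return FIXED_COSTS / volume + MARGINAL_COST

for volume in (1e6, 5e6, 20e6):
    print(f"{volume:>12,.0f} chips -> ${unit_cost(volume):7.2f} per chip")

# 1,000,000 chips -> $360.00; 20,000,000 chips -> $75.00.
# The fixed-cost share shrinks with volume; the $60 marginal cost does not.
```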

a proportionate saving in costs gained by an increased level of production.

Yes, this is the correct definition of economies of scale. Now bonus points for this next question.

Which companies currently produce more CPU and GPU chips than AMD and therefore enjoy greater economies of scale?
 
FFS I know this.

I was using that "wafer yield illustration" to illuminate how GPU dies are priced, and exactly how GPU die size is factored into pricing, and thus how AMD can use the advantage of 7nm and smaller chips to saturate the market and tilt the economic scale in their favor. Exactly like they are doing to Intel: overwhelming them with tons and tons of small, inexpensive chips.

Still totally ignoring the increased wafer cost at 7nm that everyone has pointed out to you repeatedly.
https://hardforum.com/threads/hwub-...ind-2070-super.1985908/page-2#post-1044315797

Note the writing on the slide. "Cost per yielded mm^2 for a 250mm"

Yielded chip cost essentially doubled at 7nm. So even if the die were half the size, you wouldn't be ahead on cost per chip. And news flash: Navi 10 isn't half the size of TU106.

Your argument is completely groundless. Yet you continually repeat it, after it has been thoroughly debunked, multiple times, by multiple people.

And yes, more chips per wafer is the exact definition of economy of scale: a proportionate saving in costs gained by an increased level of production.

Just NO. It has nothing to do with chips per wafer. It has everything to do with producing many more products than your competition, to better amortize fixed costs.
 
Still totally ignoring the increased wafer cost at 7nm that everyone has pointed out to you repeatedly.
https://hardforum.com/threads/hwub-...ind-2070-super.1985908/page-2#post-1044315797

Note the writing on the slide. "Cost per yielded mm^2 for a 250mm"

Yielded chip cost essentially doubled at 7nm. So even if the die were half the size, you wouldn't be ahead on cost per chip. And news flash: Navi 10 isn't half the size of TU106.

Your argument is completely groundless. Yet you continually repeat it, after it has been thoroughly debunked, multiple times, by multiple people.



Just NO. It has nothing to do with chips per wafer. It has everything to do with producing many more products than your competition, to better amortize fixed costs.

The last part has nothing at all to do with wafer and 7nm manufacturing costs. It is also logical that you will get more out of a 7nm wafer than a 14nm wafer; otherwise, by the math mentioned here, the cost would be 4x that of 14nm chips. Basic math would indicate practically no increase in cost for the same transistor count per wafer.
 
The last part has nothing at all to do with wafer and 7nm manufacturing costs. It is also logical that you will get more out of a 7nm wafer than a 14nm wafer; otherwise, by the math mentioned here, the cost would be 4x that of 14nm chips. Basic math would indicate practically no increase in cost for the same transistor count per wafer.

Snowdog is specifically addressing GamerX's incorrect statements about economies of scale. Economies of scale is basically your non-recurring costs and capital being spread out over a vast number of sales. Those costs are usually somewhat fixed regardless of the quantity of product (unless you hit breakpoints where you need an additional production line, etc.). What GamerX is talking about is a recurring cost reduction, which has nearly nothing to do with economies of scale. He's also exaggerating the yield hit of the die size differences.

You are right, though: it's about the same cost per transistor and double the cost per mm^2 of wafer. GamerX ignores both the double-per-mm^2 part of 7nm and the same-per-transistor part; he pretends it's the same cost per mm^2.
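
A quick sketch of that "same per transistor, double per mm^2" arithmetic. The wafer-cost and density figures below are rough placeholder assumptions (not foundry data), chosen so that 7nm doubles both wafer cost and transistor density:

```python
# "Same per transistor, double per mm^2": assumed numbers where
# 7nm doubles both the wafer cost and the transistor density.

WAFER_AREA_MM2 = 70_000                          # approx. usable area, 300mm wafer
wafer_cost = {"14nm": 4_000.0, "7nm": 8_000.0}   # assumed $/wafer
density = {"14nm": 25e6, "7nm": 50e6}            # assumed transistors/mm^2

for node in ("14nm", "7nm"):
    per_mm2 = wafer_cost[node] / WAFER_AREA_MM2
    per_bn_transistors = per_mm2 / density[node] * 1e9
    print(f"{node}: ${per_mm2:.3f}/mm^2, ${per_bn_transistors:.2f} per billion transistors")

# 14nm: $0.057/mm^2, $2.29/Btr; 7nm: $0.114/mm^2, $2.29/Btr --
# cost per mm^2 doubles while cost per transistor stays flat.
```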

For years I was the technical lead who would input most of the variables into the pricing of products that ranged from 5k to hundreds of thousands of units per year, so I have thought about these things before...

A general vent: a lot of the posts here remind me of a manager I had who used all the key catchphrases but had no idea what they meant. It was annoying.
 
No. You really should stop repeating incorrect claims about things you don't understand. Chips per wafer affects the marginal cost of production. It has nothing to do with the fixed costs that are amortized by economies of scale.

In addition, the marginal cost of production at 7nm is not currently an advantage, given the high cost of 7nm wafers. All of this has already been repeated to you ad nauseam, so feel free to continue your crusade of misinformation.

Yes, this is the correct definition of economies of scale. Now bonus points for this next question.

Which companies currently produce more CPU and GPU chips than AMD and therefore enjoy greater economies of scale?




You are not doing your due diligence and are the one spreading mythical absurdities. You do not realize that the "fixed cost" of the 7nm node process has diminished substantially with every tape-out. TSMC's 7nm node is not new anymore.

7nm++ and 7nm EUVL are coming next year, with 5nm coming in 24 months' time. AMD has quite a bit of 7nm in production (what is it, 9 chips now at 7nm?) and is the world's leader in this technology. Rome, Zen, Radeon, etc. are all on 7nm.


Nvidia..? Don't confuse experts at 7nm, which AMD is, with experts at 12nm, which Nvidia is. Nvidia has not even taped out a 7nm chip yet...!

It will be EXPENSIVE for them; it is no longer expensive for AMD. As a matter of fact, Nvidia just signed a 7nm EUV deal with Samsung, almost 4 years after Dr Su signed her exclusive 7nm contract with TSMC. So again, TSMC's 7nm process is well up and running, yields are great, and the cost of running the process has gone down significantly, just like the cost of producing 14nm/12nm declined once that process matured after its first years. This is expected, often noted in ROI, and discussed on Wall St. Nothing new.

So, do understand this FACT: TSMC's 7nm process is no longer expensive to run, as the 7nm node is TSMC's new base and they have plenty of capacity. 7nm wafers are TSMC's norm. AMD already ate the premium, in the billions, for moving to 7nm. That, again, is the cost and expense of a new node, not evidence that the 7nm node process is twice as expensive, as you keep falsely claiming.

The fixed cost = the cost of transitioning, while the process gets cheaper and cheaper and cheaper... don't argue with me; argue with history and facts. Also, as the node process matures, so do the yields. (Also, what AMD learns with Navi 10 will be applied to Navi 20.)




Lastly, both TSMC and AMD have made public their great strides at 7nm. Both have mentioned their continued partnership and are aggressively moving forward to the 7nm++, 7nm EUVL, and 5nm node processes. TSMC has plenty of 7nm capacity and a contract with AMD.

Dr Su is going to be relentless, and using AMD's advantage of small dies, she will use economy of scale to undercut the GPU market.






* Edit: Going to start a new thread to continue this scaling discussion, since many here don't understand why smaller chips cost less than bigger chips.

 
You are not doing your due diligence and are the one spreading mythical absurdities. You do not realize that the "fixed cost" of the 7nm node process has diminished substantially with every tape-out. TSMC's 7nm node is not new anymore.

7nm++ and 7nm EUVL are coming next year, with 5nm coming in 24 months' time. AMD has quite a bit of 7nm in production (what is it, 9 chips now at 7nm?) and is the world's leader in this technology. Rome, Zen, Radeon, etc. are all on 7nm.


Nvidia..? Don't confuse experts at 7nm, which AMD is, with experts at 12nm, which Nvidia is. Nvidia has not even taped out a 7nm chip yet...!

It will be EXPENSIVE for them; it is no longer expensive for AMD. As a matter of fact, Nvidia just signed a 7nm EUV deal with Samsung, almost 4 years after Dr Su signed her exclusive 7nm contract with TSMC. So again, TSMC's 7nm process is well up and running, yields are great, and the cost of running the process has gone down significantly, just like the cost of producing 14nm/12nm declined once that process matured after its first years. This is expected, often noted in ROI, and discussed on Wall St. Nothing new.

So, do understand this FACT: TSMC's 7nm process is no longer expensive to run, as the 7nm node is TSMC's new base and they have plenty of capacity. 7nm wafers are TSMC's norm. AMD already ate the premium, in the billions, for moving to 7nm. That, again, is the cost and expense of a new node, not evidence that the 7nm node process is twice as expensive, as you keep falsely claiming.

The fixed cost = the cost of transitioning, while the process gets cheaper and cheaper and cheaper... don't argue with me; argue with history and facts. Also, as the node process matures, so do the yields. (Also, what AMD learns with Navi 10 will be applied to Navi 20.)




Lastly, both TSMC and AMD have made public their great strides at 7nm. Both have mentioned their continued partnership and are aggressively moving forward to the 7nm++, 7nm EUVL, and 5nm node processes. TSMC has plenty of 7nm capacity and a contract with AMD.

Dr Su is going to be relentless, and using AMD's advantage of small dies, she will use economy of scale to undercut the GPU market.






* Edit: Going to start a new thread to continue this scaling discussion, since many here don't understand why smaller chips cost less than bigger chips.


You need links to back that up. You've been shown slides from AMD themselves showing that it is more costly. The vast majority of what you say isn't how any of this actually works.

Let's pretend any of what you say is correct. Why isn't the 5700 XT half the price of the 2070S, with AMD wiping out the market at 80% gross margin?
 
Still totally ignoring the increased wafer cost at 7nm that everyone has pointed out to you repeatedly.
https://hardforum.com/threads/hwub-...ind-2070-super.1985908/page-2#post-1044315797

Note the writing on the slide. "Cost per yielded mm^2 for a 250mm"

Yielded chip cost essentially doubled at 7nm. So even if the die were half the size, you wouldn't be ahead on cost per chip. And news flash: Navi 10 isn't half the size of TU106.

Your argument is completely groundless. Yet you continually repeat it, after it has been thoroughly debunked, multiple times, by multiple people.

Just NO. It has nothing to do with chips per wafer. It has everything to do with producing many more products than your competition, to better amortize fixed costs.


I am quoting this^ for posterity.
You are tripping over yourself and being hypocritical in your own logic. Are you arguing for the sake of it, or did you misstate what you wanted to say?

Again, you are overly obsessed with that illustration I posted, but you still don't understand why I posted it. I should've just Photoshopped all the words out of it, and maybe you would've gotten the bigger picture. Smaller chips get a double boost in yield compared to larger chips:
  • 1) Being smaller, more of them fit per wafer, so they naturally yield a greater number of chips, sometimes 3x more. (See previous illustration.)
  • 2) Since the chips are smaller, fewer of them are hit by defects than larger chips, making their good-to-bad ratio much better than that of bigger chips. (See previous illustration.)

That^ right there is why companies are forever seeking smaller and smaller process nodes: they want to leverage economy of scale in the market.
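
For what it's worth, both per-wafer effects above can be sketched with a simple Poisson yield model. The wafer area and defect density below are illustrative assumptions, not TSMC numbers, and the sketch deliberately ignores the per-wafer price difference between nodes, which is the other side of the argument in this thread:

```python
import math

# Two effects of a smaller die: (1) more candidate dies fit per
# wafer, (2) a higher fraction survive defects. Numbers are assumed.

WAFER_AREA_MM2 = 70_000   # approx. usable area of a 300mm wafer
DEFECT_DENSITY = 0.001    # assumed defects per mm^2 (0.1 per cm^2)

def good_dies(die_area_mm2):
    candidates = int(WAFER_AREA_MM2 // die_area_mm2)        # effect 1
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_mm2)   # effect 2 (Poisson)
    return candidates, yield_rate, int(candidates * yield_rate)

for area in (125, 250, 500):
    n, y, good = good_dies(area)
    print(f"{area} mm^2 die: {n} candidates, {y:.0%} yield, {good} good dies")

# 125 mm^2 -> 494 good dies, 500 mm^2 -> 84: halving die area more
# than doubles good dies per wafer, which is the double boost above.
```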



To all... because I can't stop laughing at the biased cheerleaders and want to clarify for everyone:

Economies of Scale in Semiconductor Manufacturing
How to Achieve and to Destroy
by Michael Leitner


2 Economy of Scale in Semiconductor Manufacturing
2.1 Economy of Scale - General Principles
2.2 Breakdown of Manufacturing Costs
2.3 Product Cost Models and Their Importance
2.4 Capital Costs - Depreciation
2.4.1 Investments
2.4.2 Calculating Capital Costs - The Costing Model
2.4.3 Some Definitions about Tool Availability and Processing Speed
2.4.4 Operating Curves
2.4.5 Capacity Planning
2.4.6 Granularity of Investments
2.4.7 Process Complexity Factors
2.4.8 Tool Complexity Factors
2.4.9 Single-Technology FABs versus Multiple-Technology FABs
2.5 Personnel Costs
2.5.1 Operators (Production Workers)
2.5.2 Maintenance
2.5.3 Process Engineering
2.6 Material Costs and Paid Services
 
You need links to back that up.
You've been shown slides from AMD themselves showing that it is more costly. The vast majority of what you say isn't how any of this actually works.

Let's pretend any of what you say is correct. Why isn't the 5700 XT half the price of the 2070S, with AMD wiping out the market at 80% gross margin?

Navi 10 doesn't cost AMD as much as Vega (if they were the same size it might, but Navi 10 is much smaller than Vega 10), and AMD is making a mad killing on the 5700 series, because the margins on a 251mm^2 die are so great. You will see evidence of this soon after the first of the year, when the sales data and revenues come in. TSMC has plenty of capacity, and AMD will be bringing out its third 7nm GPU soon.

Also, why should Dr Su sell her Navi 10 chip for $199 when she can easily get $399 for it (and it is still outperforming, or competitive with, cards $100 more)? The fact is, Nvidia can't compete any lower, or they would be losing money. Super is big, and Dr Su knows they have Nvidia on the ropes. I wouldn't be surprised if AMD is letting the AIBs have a little taste of those profits, either.


Subsequently, those slides will back me up, because you guys didn't watch Dr Su's keynote to see why she put that slide up there... or how AMD is overcoming the obstacles in those slides.
 
Navi 10 doesn't cost AMD as much as Vega (if they were the same size it might, but Navi 10 is much smaller than Vega 10), and AMD is making a mad killing on the 5700 series, because the margins on a 251mm^2 die are so great. You will see evidence of this soon after the first of the year, when the sales data and revenues come in. TSMC has plenty of capacity, and AMD will be bringing out its third 7nm GPU soon.

Also, why should Dr Su sell her Navi 10 chip for $199 when she can easily get $399 for it (and it is still outperforming, or competitive with, cards $100 more)? The fact is, Nvidia can't compete any lower, or they would be losing money. Super is big, and Dr Su knows they have Nvidia on the ropes. I wouldn't be surprised if AMD is letting the AIBs have a little taste of those profits, either.


Subsequently, those slides will back me up, because you guys didn't watch Dr Su's keynote to see why she put that slide up there... or how AMD is overcoming the obstacles in those slides.

Because AMD only has 30% market share, and Nvidia is the premium brand with more features and clout. If what you are saying were true, they could sell it at $250, have a 60-80% gross margin, and, more importantly, win mind share... and actual "economies of scale."

If the facts are on your side, provide some links and/or charts.
 
Navi 10 doesn't cost AMD as much as Vega (if they were the same size it might, but Navi 10 is much smaller than Vega 10).

The troll repeats the same ignorant falsehood (based on pretending that AMD slide doesn't exist), just in a different context.

In reality, Vega and Navi dies are about the same cost.

Navi is about half the size (1/2x), but it is on a process that costs about twice as much per mm^2 (2x; see the AMD slide): 1/2 x 2 = 1.
Or:
Vega is about twice as big (2x), but it is on a process that costs about half as much per mm^2 (1/2x): 2 x 1/2 = 1.
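
That estimate, worked as a snippet. The 2x cost-per-yielded-mm^2 factor is the one from the AMD slide discussed in this thread; the $0.10 baseline is an arbitrary placeholder, so only the ratio matters:

```python
# Rough die-cost comparison. The 2x per-mm^2 factor follows the AMD
# slide cited in this thread; the $0.10 baseline is a placeholder.

COST_PER_MM2 = {"14nm-class": 0.10, "7nm": 0.20}   # assumed $ per yielded mm^2

dies = {
    "Vega 10 (14nm-class)": (495, "14nm-class"),   # ~495 mm^2
    "Navi 10 (7nm)":        (251, "7nm"),          # ~251 mm^2
}

for name, (area_mm2, node) in dies.items():
    cost = area_mm2 * COST_PER_MM2[node]
    print(f"{name}: ~${cost:.2f} per yielded die")

# Vega 10: ~$49.50, Navi 10: ~$50.20 -- about the same die cost,
# i.e. the 1/2 x 2 = 1 arithmetic above.
```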
 
Because AMD only has 30% market share, and Nvidia is the premium brand with more features and clout. If what you are saying were true, they could sell it at $250, have a 60-80% gross margin, and, more importantly, win mind share... and actual "economies of scale."

If the facts are on your side, provide some links and/or charts.

RTG will never win mind share by severely undercutting the profits that fund R&D. Doing so has never caused AMD/RTG market share to increase, because the Green Team folks just keep buying the green stuff, just because. AMD's CEO knows what she is doing, and that is why AMD/RTG is way ahead of where they were just 2.5 or so years ago.
 
RTG will never win mind share by severely undercutting the profits that fund R&D. Doing so has never caused AMD/RTG market share to increase, because the Green Team folks just keep buying the green stuff, just because. AMD's CEO knows what she is doing, and that is why AMD/RTG is way ahead of where they were just 2.5 or so years ago.

Really? They've tried half the price, like GamerX said they are capable of, with 60-80% GM? I don't remember that. And according to his logic, they'd still be making double the margin they are now. I am pretty sure most people would buy a $250 5700 XT over a $500 2070S.
 
The troll repeats the same ignorant falsehood (based on pretending that AMD slide doesn't exist), just in a different context.

In reality, Vega and Navi dies are about the same cost.

Navi is about half the size (1/2x), but it is on a process that costs about twice as much per mm^2 (2x; see the AMD slide): 1/2 x 2 = 1.
Or:
Vega is about twice as big (2x), but it is on a process that costs about half as much per mm^2 (1/2x): 2 x 1/2 = 1.

You are just white noise, and I am putting you on ignore because you are unable to comprehend, and unwilling to read, people's full posts. Please go back and read the last sentence in that post. You can't possibly be this needy, can you?

You didn't listen to the reasons Dr Su showed that chart. Because nodes are getting more expensive to produce, she is going to use economy of scale, like she did in the CPU world, to gain market share in the GPU world. We will see 2 more 7nm GPUs before Nvidia releases Ampere in 14 months' time.



Welcome to my ignore list; you can't even bother to read people's posts and argue with mutual respect.

 
The troll repeats the same ignorant falsehood (based on pretending that AMD slide doesn't exist), just in a different context.

In reality, Vega and Navi dies are about the same cost.

Navi is about half the size (1/2x), but it is on a process that costs about twice as much per mm^2 (2x; see the AMD slide): 1/2 x 2 = 1.
Or:
Vega is about twice as big (2x), but it is on a process that costs about half as much per mm^2 (1/2x): 2 x 1/2 = 1.

Navi and Vega dies are about the same cost? Do you have proof of that, actual verifiable proof and not just your opinion? Yeah, thought not.
 
The key to all this is whether AMD can figure out something clever for ray tracing.

If, big IF here, they can find a way to make RT work on lower-end cards (or mid/high-end at reasonable prices), they will have a huge win.

Let us be honest: Nvidia's mid-range is kind of useless for RT. I tried that Star Wars demo on a 2060 and it was a slideshow. Even with a 2080 Ti, you're talking about barely passable performance at 1080p.

So AMD has a huge opening here, but they have maybe less than a year to pull it off. Nvidia is not going to sit still, but they are also invested in what they have built and might not want to throw it away.
 
Navi and Vega dies are about the same cost? Do you have proof of that, actual verifiable proof and not just your opinion? Yeah, thought not.
I don't know about the dies, but the overall cost of Navi must be lower if you look at the prices.

Navi starts at $350, while Vega took at least a year to fall below $400 (inventory and mining could be the cause, but the prices were inflated for a long time).
 
The key to all this is whether AMD can figure out something clever for ray tracing.

If, big IF here, they can find a way to make RT work on lower-end cards (or mid/high-end at reasonable prices), they will have a huge win.

Let us be honest: Nvidia's mid-range is kind of useless for RT. I tried that Star Wars demo on a 2060 and it was a slideshow. Even with a 2080 Ti, you're talking about barely passable performance at 1080p.

So AMD has a huge opening here, but they have maybe less than a year to pull it off. Nvidia is not going to sit still, but they are also invested in what they have built and might not want to throw it away.

Yeah, this is just the beginning. We don't know what tax RT will add to Navi, if any, and we don't know what performance will look like once AMD rolls out DXR support. It's strange that they're so quiet on the issue given all the hype.

I don't see why Nvidia would be stuck with Turing-like RT cores. Games don't care about the hardware implementation, as it all sits behind the "standard" DXR API. It's quite possible we will see architectures in the near future achieve decent RT performance without dedicated hardware.
 
Navi and Vega dies are about the same cost? Do you have proof of that, actual verifiable proof and not just your opinion? Yeah, thought not.

The explanation is in the post if you can handle the math. Though it seems to challenge both you and GamerX.
 
The explanation is in the post if you can handle the math. Though it seems to challenge both you and GamerX.

Your own personal math is not proof; I mean something specifically from AMD or TSMC.
 
You didn't listen to the reasons Dr Su showed that chart. Because nodes are getting more expensive to produce, she is going to use economy of scale, like she did in the CPU world, to gain market share in the GPU world.
The reason she posted the slide doesn't alter the factual data on the slide, factual data you keep ignoring in your small-die rant.

7nm chips cost double what 14nm-class chips cost, so there is no financial advantage to a die half the size when it costs twice as much per mm^2 to produce.


Your own personal math is not proof; I mean something specifically from AMD or TSMC.

Math isn't personal. It's universal.

If you can't handle 2 x 1/2 = 1, you really can't contribute much meaningful to any discussion here.
 
With the dGPU market shrinking each quarter, these companies have to maintain their high margins to satisfy investors, so prices will probably continue to creep up. Now, with 3 players crowding the space, someone will inevitably get squeezed out, and I'm guessing that will be AMD, as they don't have the money to compete with either NVIDIA or Intel. In the future (3+ years from now), I can see the dGPU market looking something like this: NVIDIA (55%), Intel (40%), AMD (5%). Techspot has a really nice article about Intel's upcoming GPU and how they can scale even the existing Gen 11 architecture up to compete at 2080 Ti specs: https://www.techspot.com/article/1898-intel-xe-graphics-preview/ If Intel uses EMIB, they will already have a leg up on both NVIDIA and AMD:

"In April Intel confirmed to Anandtech that they intended to use EMIB to support their GPUs soon, so that is something to look forward to."

"May 1, 2019: Jim Jeffers, senior principal engineer and director of the rendering and visualization team, announces Xe's ray tracing capabilities at FMX19. In addition, Intel has continued to hire talent away from the competition."


From Anandtech: https://www.anandtech.com/show/14211/intels-interconnected-future-chipslets-emib-foveros

"Xe will range from integrated graphics all the way up to enterprise compute acceleration, covering through the consumer graphics and gaming markets as well.


Intel stated at the time that the Xe range will be built on two different architectures, one of which is called Arctic Sound, and the other has not yet been made public. The goal is to create a platform for Xe relating the hardware, the software, the drivers, the platform, and the APIs all into a single mission, which Intel calls 'The Odyssey'. Introducing EMIB and Foveros technologies as part of the Xe strategy seems to be very much part of Intel's plan, and it will be interesting to see how it develops."


If anyone should be shitting their pants, it's Dr. Lisa Su and her very late and unimpressive Chinese-designed RDNA. If we want to talk about economy of scale, Intel has their own fabs, and we know their in-house 10nm will be used for CPUs, FPGAs, GPUs, etc., so they will be able to pump out higher volume and lower prices than both NVIDIA and AMD while maintaining higher margins than AMD. We also know NVIDIA will have Ampere ready in 2020, which at 7nm will likely surpass the current 2080 Ti easily, by at least 30% if not more, so AMD will be in Intel's crosshairs if Intel decides to push out anything remotely like what's speculated above.

If the 5800/5900 do not have ray tracing and can't match or exceed the 2080 Ti in performance, AMD will be dead in the water in 2020. Intel will have a dGPU for AIBs, and they'll stick Xe in every desktop/notebook they can, and where will that leave AMD? Dead. Hell, even NVIDIA is in trouble in the notebook market, because Intel will be selling a full ecosystem to manufacturers. RDNA isn't even a factor; it's an architecture that should've been released in 2016, and at 7nm it isn't impressive one bit.
It's the anti-GamerX, lol. Going on about how much Intel's 10nm can pump out is a really funny line, as they even said they are having so many issues that they're basically going to skip it for most things. First feed the troll, then give him this kind of crap to work with? Seriously, not quite, but almost as bad as GamerX, just in the opposite direction. At least you had a few valid points intermixed.
 