Big Navi is coming

We know...DXR is “useless”...until AMD supports it.../yawn
:rolleyes:

DXR is great and will be a great thing for Gamers. Too bad Nvidia's RTX cards are broken, or else it would be great to use "RTX On" in games. As it stands, Nvidia's TURING blows at ray tracing, and in next year's games RTX Turing cards will still blow at ray tracing. RTX On won't get any better....

That is why Nvidia is coming out with a new card next year. But a new Nvidia GPU next year doesn't do anything for all of us who bought a broken RTX 2080, etc.



Factum, let us know when Nvidia has a non-broken ray tracing solution.... until then, take your pom poms off.
 
Tessellation is set by the developer... the "too much tessellation" complaint was nothing but an AMD whine because their solution SUCKED in performance compared to NVIDIA... I wrote this once, because I hate lies like that:
https://hardforum.com/threads/no-amd-300-series-review-samples.1865193/page-7#post-1041667964



We are fully aware of Nvidia's continued attempt to destroy AMD / RTG in areas outside their actual real performance. That is a documented fact, and no, I am not going to post a bunch of links for you to attempt to refute in endless back-and-forth crap. I guess GPP was an AMD-faked attack as well, then, eh? Oh well, it does not really matter, since AMD is not the gullible AMD of old.
 
I still don't get the crowd that keeps saying

"It has to beat nVidia ~and~ be a lot cheaper"

Easy. Unless AMD can compete across the board, including hardware DXR acceleration, cooling, noise, and consistent performance across a broad range of games, they'll have to price lower because they're offering less.
 
We are fully aware of Nvidia's continued attempt to destroy AMD / RTG in areas outside their actual real performance.

Lol.

Nvidia doesn't have to 'attempt to destroy' AMD; AMD is doing that all by their lonesome. Nvidia competes with their own products, not AMD's.

Developers were paid by Nvidia to over-include it. Or are you purposely overlooking the issue here...?

Nvidia has worked for decades with developers to get support for newer features into games. AMD does... a little. Sometimes. The difference in support that they each provide to developers is stark.

Too bad Nvidia's RTX cards are broken

You must really believe that repeating nonsense that's already been disproven in this thread will eventually make it true.

That is why Nvidia is coming out with a new card next year.

You mean, on their regular update cycle?

Talk about sensational!

Factum, let us know when Nvidia has a non-broken ray tracing solution.... until then, take your pom poms off.

He just did, and your AMD-issued glasses fractured your ability to understand facts :ROFLMAO:
 
We know...DXR is “useless”...until AMD supports it.../yawn

It will probably still be useless when they release their first DXR generation as well.

It will be useless until game developers support it for real and not just because they got some partner money for it... and likely not for another couple of generations of hardware, with proper shader-driven hardware. (Using tensor cores is not a great long-term solution.)
 
It will probably still be useless when they release their first DXR generation as well.

It will be useless until game developers support it for real and not just because they got some partner money for it... and likely not for another couple of generations of hardware, with proper shader-driven hardware. (Using tensor cores is not a great long-term solution.)

To each their own, but I think RTX/tensor cores (or whatever RT alternatives Intel/AMD are cooking) are the best long-term solution. Shaders/compute are great for what they do, but it would take a performance increase of orders of magnitude to become viable.
 
To each their own, but I think RTX/tensor cores (or whatever RT alternatives Intel/AMD are cooking) are the best long-term solution. Shaders/compute are great for what they do, but it would take a performance increase of orders of magnitude to become viable.

Yup, Pascal (shader-based DXR) vs. Turing (RT-core-based DXR) shows this quite clearly.
 
To each their own, but I think RTX/tensor cores (or whatever RT alternatives Intel/AMD are cooking) are the best long-term solution. Shaders/compute are great for what they do, but it would take a performance increase of orders of magnitude to become viable.

Not really, what is needed is a way to perform the calculations with a reduced instruction pipe. AMD has already introduced the solution: Wave32 calculation. AMD's RDNA can calculate low-precision shaders at half precision, 2 per clock. I don't think they have talked about it in terms of ray tracing yet, as the software end needs to be solved. AMD's hardware team, I believe, has solved the issue, but as we all know their GPU software team is just not staffed as well as it should be. I am pretty sure when the PS5/Xbox-next start getting closer to launch we will start hearing about Wave32 ray calculation.

The main issue with using tensor cores (or RT cores, if we want to call them that) is that they have a massive hit in terms of required transistor count. That means NV is going to have to make decisions at 7nm... do they pack tensor/RT cores into those parts, increasing the transistor count by 4-6 billion and reducing max clocks? The other solution is to make the tensor bits a separate chiplet... but that has issues of its own as well... Infinity Fabric type setups work fine on CPUs, but on GPUs the bandwidth issues may be more of a hindrance than just using shaders on the same core. NV has already enabled and shown that their 4-year-old shader-only designs are actually capable of DXR... with their high-end Pascal parts doing darn close to (and perhaps even better than) the same job as the low-end RTX cards. I'm pretty sure if they can get the shader clocks high enough at 7nm... and perhaps add something similar to AMD's Wave32 calculation... they could perform DXR in shader space and do it better.

Will be interesting to see AMD's solution... as well as the direction Nvidia goes with Ampere. I can't imagine they are going to make Ampere a 20-billion-transistor single bit of silicon. If they do, they are asking for a lot of manufacturing issues.
 
Not really, what is needed is a way to perform the calculations with a reduced instruction pipe. AMD has already introduced the solution: Wave32 calculation. AMD's RDNA can calculate low-precision shaders at half precision, 2 per clock. I don't think they have talked about it in terms of ray tracing yet, as the software end needs to be solved. AMD's hardware team, I believe, has solved the issue, but as we all know their GPU software team is just not staffed as well as it should be. I am pretty sure when the PS5/Xbox-next start getting closer to launch we will start hearing about Wave32 ray calculation.

The main issue with using tensor cores (or RT cores, if we want to call them that) is that they have a massive hit in terms of required transistor count. That means NV is going to have to make decisions at 7nm... do they pack tensor/RT cores into those parts, increasing the transistor count by 4-6 billion and reducing max clocks? The other solution is to make the tensor bits a separate chiplet... but that has issues of its own as well... Infinity Fabric type setups work fine on CPUs, but on GPUs the bandwidth issues may be more of a hindrance than just using shaders on the same core. NV has already enabled and shown that their 4-year-old shader-only designs are actually capable of DXR... with their high-end Pascal parts doing darn close to (and perhaps even better than) the same job as the low-end RTX cards. I'm pretty sure if they can get the shader clocks high enough at 7nm... and perhaps add something similar to AMD's Wave32 calculation... they could perform DXR in shader space and do it better.

Will be interesting to see AMD's solution... as well as the direction Nvidia goes with Ampere. I can't imagine they are going to make Ampere a 20-billion-transistor single bit of silicon. If they do, they are asking for a lot of manufacturing issues.

Skimming their white paper, Wave32 just sounds like they are catching up to nVidia's efficiency at processing non-FP32 math, etc.; not surpassing them with something new. AKA marketing mumbo jumbo.

The thing is, tensor cores absolutely destroy regular shader ALUs at low precision. The 2080 Ti puts out around 440 TOPS at INT4.
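
If anyone wants to see roughly where a number like that comes from, here is a ballpark sketch; the core count, per-core rate and clock are my own assumptions based on Nvidia's published Turing figures, so treat it as approximate:

Code:
# Ballpark tensor throughput for a 2080 Ti.
# Core count, per-core rate and clock are assumptions from Nvidia's published Turing specs.
tensor_cores = 544            # RTX 2080 Ti
fp16_fma_per_core_clock = 64  # FP16 FMAs per tensor core per clock
boost_clock_ghz = 1.545

fp16_tflops = tensor_cores * fp16_fma_per_core_clock * 2 * boost_clock_ghz / 1000
int8_tops = fp16_tflops * 2   # INT8 runs at 2x the FP16 tensor rate
int4_tops = fp16_tflops * 4   # INT4 runs at 4x the FP16 tensor rate

print(f"FP16 tensor: ~{fp16_tflops:.0f} TFLOPS")
print(f"INT8: ~{int8_tops:.0f} TOPS, INT4: ~{int4_tops:.0f} TOPS")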
 
Those pesky facts:
The Ray Tracing Slideshow: DXR on Nvidia Pascal Tested
https://www.techspot.com/review/1831-ray-tracing-geforce-gtx-benchmarks/

More pesky facts:

Good information. So, we can see RTX / tensor cores added around 13% based on those links. So, doing some quick math... TU102 has 18.6 billion transistors; at 13 percent, we are looking at about 2.418 billion (let's round down since we're still not really sure), so about 2 billion transistors. It has 36 TPCs, so after we take away the 2 billion for RTX we have about 16.6 billion split 36 ways (not exact, because it's not 100% for the TPCs), but this gives us about 460 million per TPC. If they didn't include RTX but kept the die the same size, we'd be looking at at least 40 TPCs, or about an 11% increase in general performance (all other things equal, of course). This is all just rough math and I tried to underestimate when possible, so it would likely be a few more TPCs in actuality. Fun to speculate, but obviously if it's 13% of the real estate, with perfect scaling you would expect about a 13% loss of performance for the same size die, or some amount (hard to say how they would fit) more dies per wafer if they were removed and everything else stayed the same.

Note: I'm not stating if this is good or bad, just trying to run some rough numbers.
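
Here's the same back-of-the-envelope math as a rough Python sketch, for anyone who wants to tweak the assumptions (the ~13% area share is just the estimate from the linked article, not an official figure):

Code:
# Rough back-of-the-envelope math from the post above.
# The ~13% RT/tensor area share is an assumption taken from the linked article.
total_transistors = 18.6e9      # TU102 total transistor count
rt_tensor_share = 0.13          # assumed share spent on RT + tensor hardware
tpc_count = 36                  # TPCs in a full TU102

rt_tensor_budget = total_transistors * rt_tensor_share   # ~2.4 billion
remaining = total_transistors - 2e9                      # rounded down to ~2B, as in the post
per_tpc = remaining / tpc_count                          # ~460 million per TPC
extra_tpcs = 2e9 / per_tpc                               # TPCs that budget could buy back

print(f"RT/tensor budget: ~{rt_tensor_budget / 1e9:.1f}B transistors")
print(f"~{per_tpc / 1e6:.0f}M transistors per TPC")
print(f"~{extra_tpcs:.1f} extra TPCs, roughly {extra_tpcs / tpc_count:.0%} more shader hardware")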
 
Skimming their white paper, Wave32 just sounds like they are catching up to nVidia's efficiency at processing non-FP32 math, etc.; not surpassing them with something new. AKA marketing mumbo jumbo.

Yep you’re absolutely right. It’s really amazing how people post complete nonsense on the internet with such confidence.
 
Not really, what is needed is a way to perform the calculations with a reduced instruction pipe. AMD has already introduced the solution: Wave32 calculation. AMD's RDNA can calculate low-precision shaders at half precision, 2 per clock. I don't think they have talked about it in terms of ray tracing yet, as the software end needs to be solved. AMD's hardware team, I believe, has solved the issue, but as we all know their GPU software team is just not staffed as well as it should be. I am pretty sure when the PS5/Xbox-next start getting closer to launch we will start hearing about Wave32 ray calculation.

The main issue with using tensor cores (or RT cores, if we want to call them that) is that they have a massive hit in terms of required transistor count. That means NV is going to have to make decisions at 7nm... do they pack tensor/RT cores into those parts, increasing the transistor count by 4-6 billion and reducing max clocks? The other solution is to make the tensor bits a separate chiplet... but that has issues of its own as well... Infinity Fabric type setups work fine on CPUs, but on GPUs the bandwidth issues may be more of a hindrance than just using shaders on the same core. NV has already enabled and shown that their 4-year-old shader-only designs are actually capable of DXR... with their high-end Pascal parts doing darn close to (and perhaps even better than) the same job as the low-end RTX cards. I'm pretty sure if they can get the shader clocks high enough at 7nm... and perhaps add something similar to AMD's Wave32 calculation... they could perform DXR in shader space and do it better.

Will be interesting to see AMD's solution... as well as the direction Nvidia goes with Ampere. I can't imagine they are going to make Ampere a 20-billion-transistor single bit of silicon. If they do, they are asking for a lot of manufacturing issues.

As I said, it won't take just twice the performance or even 3 times. RTX (shaders + RT cores + tensor cores) is about 8-10+ times faster than Pascal (shaders only), and even then it's not enough for 4K 60. And we are not even talking about real ray tracing, but hybrid rendering.
 
Meh, you said it doesn't take twice the performance, but it really does, *especially* in the lower and mid range. Even at the upper end, check benchmarks for things like Metro that use it for GI. At 4K it is almost a 40% difference between on and off (https://www.techspot.com/article/1793-metro-exodus-ray-tracing-benchmark/, "The RTX 2080 Ti sees a 38% uplift in performance at this resolution with RTX disabled"). At 1440p it was about 30%, so not as bad, but still not great. And this is the top of the top. At 1440p the 2060 was a 36% difference and too low to play. The results in Battlefield were closer to 50%-60%, but that did reflections, not just global illumination, and had lots of issues with noise. Shadow of the Tomb Raider was close to 45%, so honestly... yeah, double performance at the high end seems about right, and more so in the low/mid. So ray tracing can be useful for some very subtle lighting (I honestly had to look at the stills pretty hard to tell the difference in most of them) with a pretty decent hit to performance. Personally I don't see it worth a $1200 investment, but to each his own. I don't blame people for wanting the absolute best and being on the bleeding edge; honestly, if I had money lying around I'd probably get one too.

By the way, your comment about the Pascal core isn't really true... at all. (https://www.techspot.com/review/1831-ray-tracing-geforce-gtx-benchmarks/) Just easier to use this source, but others agree. Tomb Raider @ 1080p for the 2080 Ti went from 144/93 (avg/low) down to 88/39, so a 64% increase in averages and a 138% increase in lows with RTX turned off. The Titan Xp was 117/72 off and 47/21 on... about a 150% increase on average and 240% in the lows from turning it off. So it's a pretty substantial drop for Pascal, but I'm not really seeing where you got the 8-10x from, unless there are other benchmarks I should be looking at? All I know is that's a lot of performance left on the table to turn on a feature that sometimes looks good, sometimes doesn't add too much, and sometimes actually makes it worse (mostly because of poor/rushed implementations). It will be interesting to see if any games coming out do a better job and put it in a better light. It's like any other new feature though; it takes a little while to figure out the best way to use it.
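
A quick sanity check on those percentages, in case anyone wants to reproduce the math (rough Python sketch; the avg/low fps pairs are just the numbers quoted above from TechSpot):

Code:
# Quick sanity check on the uplift percentages quoted above.
# The avg/low fps pairs are the ones from the linked TechSpot article.

def uplift(off_fps, on_fps):
    """Percent performance gained by turning DXR off."""
    return (off_fps / on_fps - 1) * 100

# Shadow of the Tomb Raider @ 1080p, DXR off vs. on (avg, low)
print(f"2080 Ti  avg: {uplift(144, 88):.0f}%  lows: {uplift(93, 39):.0f}%")   # ~64% / ~138%
print(f"Titan Xp avg: {uplift(117, 47):.0f}%  lows: {uplift(72, 21):.0f}%")   # ~149% / ~243%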
 
Meh, you said it doesn't take twice the performance, but it really does, *especially* in the lower and mid range. Even at the upper end, check benchmarks for things like Metro that use it for GI. At 4K it is almost a 40% difference between on and off (https://www.techspot.com/article/1793-metro-exodus-ray-tracing-benchmark/, "The RTX 2080 Ti sees a 38% uplift in performance at this resolution with RTX disabled"). At 1440p it was about 30%, so not as bad, but still not great. And this is the top of the top. At 1440p the 2060 was a 36% difference and too low to play. The results in Battlefield were closer to 50%-60%, but that did reflections, not just global illumination, and had lots of issues with noise. Shadow of the Tomb Raider was close to 45%, so honestly... yeah, double performance at the high end seems about right, and more so in the low/mid. So ray tracing can be useful for some very subtle lighting (I honestly had to look at the stills pretty hard to tell the difference in most of them) with a pretty decent hit to performance.

Remember that we're still comparing games that have had ray tracing 'hacked' in. The effects possible are less pronounced, and they're harder to drive than a ground-up solution could be, meaning that going forward games can use hardware DXR more efficiently.

By the way, your comment about the Pascal core isn't really true... at all. (https://www.techspot.com/review/1831-ray-tracing-geforce-gtx-benchmarks/) Just easier to use this source, but others agree. Tomb Raider @ 1080p for the 2080 Ti went from 144/93 (avg/low) down to 88/39, so a 64% increase in averages and a 138% increase in lows with RTX turned off. The Titan Xp was 117/72 off and 47/21 on... about a 150% increase on average and 240% in the lows from turning it off. So it's a pretty substantial drop for Pascal, but I'm not really seeing where you got the 8-10x from, unless there are other benchmarks I should be looking at?

You'd need to compare Quake II RTX, but even then you're not getting the full 'picture', as that's more of a '3DMark-style' test using a brute-force solution in an uncomplicated codebase than a real gaming test.

All I know is that's a lot of performance left on the table to turn on a feature that sometimes looks good, sometimes doesn't add too much, and sometimes actually makes it worse (mostly because of poor/rushed implementations). It will be interesting to see if any games coming out do a better job and put it in a better light. It's like any other new feature though; it takes a little while to figure out the best way to use it.

As with most new features, demos that showcase ray tracing are fairly straightforward; making the technology work well throughout a game is an art. That never comes easy, but it is most definitely coming.


The main argument here is this: it's unwise to buy a mid-range or better GPU today without hardware DXR acceleration. Granted, literally the only GPU that exists in that bracket is the Radeon VII, and that's not a gaming GPU, so it's hard to screw up; but at this point we can pretty much assume that AMD has enabled DXR in Big Navi, because they're not that suicidal. We hope.
 
Meh, you said it doesn't take twice the performance, but it really does, *especially* in the lower and mid range. Even at the upper end, check benchmarks for things like Metro that use it for GI. At 4K it is almost a 40% difference between on and off (https://www.techspot.com/article/1793-metro-exodus-ray-tracing-benchmark/, "The RTX 2080 Ti sees a 38% uplift in performance at this resolution with RTX disabled"). At 1440p it was about 30%, so not as bad, but still not great. And this is the top of the top. At 1440p the 2060 was a 36% difference and too low to play. The results in Battlefield were closer to 50%-60%, but that did reflections, not just global illumination, and had lots of issues with noise. Shadow of the Tomb Raider was close to 45%, so honestly... yeah, double performance at the high end seems about right, and more so in the low/mid. So ray tracing can be useful for some very subtle lighting (I honestly had to look at the stills pretty hard to tell the difference in most of them) with a pretty decent hit to performance. Personally I don't see it worth a $1200 investment, but to each his own. I don't blame people for wanting the absolute best and being on the bleeding edge; honestly, if I had money lying around I'd probably get one too.

By the way, your comment about the Pascal core isn't really true... at all. (https://www.techspot.com/review/1831-ray-tracing-geforce-gtx-benchmarks/) Just easier to use this source, but others agree. Tomb Raider @ 1080p for the 2080 Ti went from 144/93 (avg/low) down to 88/39, so a 64% increase in averages and a 138% increase in lows with RTX turned off. The Titan Xp was 117/72 off and 47/21 on... about a 150% increase on average and 240% in the lows from turning it off. So it's a pretty substantial drop for Pascal, but I'm not really seeing where you got the 8-10x from, unless there are other benchmarks I should be looking at? All I know is that's a lot of performance left on the table to turn on a feature that sometimes looks good, sometimes doesn't add too much, and sometimes actually makes it worse (mostly because of poor/rushed implementations). It will be interesting to see if any games coming out do a better job and put it in a better light. It's like any other new feature though; it takes a little while to figure out the best way to use it.




What I meant is that if you want to run RT with shaders you'll need at least 8 times the performance to get playable framerates, especially at 4K. For instance, Pascal can do RT with shaders, and it can run the Star Wars Reflections demo. On my GTX 1070 Ti OC it ran at like 1-3 fps at 4K while the RTX 2080 Ti does like 40 fps. I know it's not a fair comparison, but you get the idea.
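
Rough sketch of the speedup those framerates imply (just the numbers from my post above, nothing scientific):

Code:
# Rough speedup implied by the framerates mentioned above:
# ~1-3 fps shader-only on a GTX 1070 Ti vs. ~40 fps on an RTX 2080 Ti in the Star Wars Reflections demo.
pascal_fps_range = (1, 3)
turing_fps = 40
for fps in pascal_fps_range:
    print(f"{fps} fps -> about {turing_fps / fps:.0f}x faster with RT hardware")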
 
What I meant is that if you want to run RT with shaders you'll need at least 8 times the performance to get playable framerates, especially at 4K. For instance, Pascal can do RT with shaders, and it can run the Star Wars Reflections demo. On my GTX 1070 Ti OC it ran at like 1-3 fps at 4K while the RTX 2080 Ti does like 40 fps. I know it's not a fair comparison, but you get the idea.

Then we may as well all forget ray tracing.

AI tensor-core hardware is not a good long-term solution. I doubt AMD or Intel go that route. If you folks are all right and shaders can not be improved in any way to be efficient at ray calculation, then the ray tracing / raster hybrid is dead. Which is very possible.

Nvidia used tensor cores because frankly the chip wasn't designed for gaming only... they designed one monolithic chip to cover all their market segments. I don't think that is going to be realistic at 7nm and beyond.

The only way Ampere keeps tensor cores, and hence ray tracing cores, as part of its design... is if they choose to give up a ton of potential shader performance. Which is pretty damn risky with AMD seeming to be back in a position to spend money and Intel about to join the fight. Think of it this way: if you bought a 2060 or 2070... you COULD have had at least 2080 performance if Nvidia had dedicated the 1/3 of the die that is right now tensors to more shader cores. Tensor/RT cores, which have zero other practical game use, are a waste of die space. Nvidia would be better off shoving 20% more shaders on board... and finding ways to run shaders at lower precision (which is what they did with tensor cores this gen) so they can make things practical there.

Yes, I get it, Pascal sucks at DXR... I never claimed it didn't. (Counting design time, it's also 5 years old.) Just that it is possible to do ray calculation in shader space. It doesn't require 2-4 billion more transistors with no other use... and if NV was serious about gamers they would be figuring out how to make that happen. The reality is gamers are secondary to their main tensor-core market segment. If Ampere still has tensor hardware... then it's official, imo... Intel or AMD will be the new king by the end of 2020. NV seems to be sitting in the same position Intel was in when they thought it was a good idea to swing for the fences with 10nm. If they really try and pack 20+ billion transistors into a 7nm die, they are looking at their own Intel 10nm disaster. Only in a funny turn it will probably be Intel sweeping in with the chips to fawn over.
 
You guys keep telling yourselves that. lol Volta is also capable of RTX without the marketing BS.

Yes, it's slower, it's a generation older, and it has stock tensor cores that can't be run at lower precision. Yes, the Pascals can perform shader-only DXR but it's painful... the Volta-based Titan V lands right in the middle. At 1440p ultra-setting DXR it's within 10 FPS of a 2080 Ti. No, it's not as good... but it isn't magic that makes it almost as good... it's the collection of first-gen stock tensor hardware. What makes Turing special is being able to lower the FP precision on its tensor cores.... I don't care if NV marketing wants to call it an RT core at that point. Doesn't change what it is.
 

Ok, yes, my memory fails me... it's more like 10 fps at low settings, and at ultra 18 FPS or 24-ish %. Yes, the NV PR people claiming it's a caching difference is believable. lol RT core = tensor core. If you're right, hey, they should save themselves 3 or 4 billion transistors with Ampere and leave the tensors on their AI SoC parts. Or better yet, apparently more cache would be a better spend.
 
So in conclusion, big Navi this year, no dedicated ray tracing hardware, performance >30% over the 5700 XT. Price? Maybe two versions, one with GDDR6 and an even bigger version with HBM2(3?). If it's a 384-bit bus with 12GB of GDDR6, that is a very nice amount and seems right on. Price: if it handily beats the 2080 Super, I would predict it will be priced exactly the same, $699, with a lower version around $599 which will also beat the 2080 Super or tie it.
 
So in conclusion, big Navi this year, no dedicated ray tracing hardware, performance >30% over the 5700 XT. Price? Maybe two versions, one with GDDR6 and an even bigger version with HBM2(3?). If it's a 384-bit bus with 12GB of GDDR6, that is a very nice amount and seems right on. Price: if it handily beats the 2080 Super, I would predict it will be priced exactly the same, $699, with a lower version around $599 which will also beat the 2080 Super or tie it.



Hell yes! Goodbye 2080 and crappy RTX. Welcome Navi. And thank god for the nV cheerleaders (you all know who you are), as they do wonders for the nV resale value.
 
Hell yes! Goodbye 2080 and crappy RTX. Welcome Navi.

I love that your religious dedication has you excited to buy outdated hardware.

I couldn't define fanboy any more precisely if I tried!

[Oh, and AMD has hardware DXR coming, someday, so while Nvidia has already made a DXR-less Big Navi obsolete, AMD will too in quick succession :ROFLMAO:]
 
So in conclusion, big Navi this year, no dedicated ray tracing hardware

As said before, we should really expect AMD to put some form of hardware DXR support on Big Navi.

and an even bigger version with HBM2(3?).

...likely not; as die sizes increase, HBM is a price/performance loser, and it would require an entirely separate fabrication run.
 
Then we may as well all forget ray tracing.

AI tensor-core hardware is not a good long-term solution. I doubt AMD or Intel go that route. If you folks are all right and shaders can not be improved in any way to be efficient at ray calculation, then the ray tracing / raster hybrid is dead. Which is very possible.

Nvidia used tensor cores because frankly the chip wasn't designed for gaming only... they designed one monolithic chip to cover all their market segments. I don't think that is going to be realistic at 7nm and beyond.

The only way Ampere keeps tensor cores, and hence ray tracing cores, as part of its design... is if they choose to give up a ton of potential shader performance. Which is pretty damn risky with AMD seeming to be back in a position to spend money and Intel about to join the fight. Think of it this way: if you bought a 2060 or 2070... you COULD have had at least 2080 performance if Nvidia had dedicated the 1/3 of the die that is right now tensors to more shader cores. Tensor/RT cores, which have zero other practical game use, are a waste of die space. Nvidia would be better off shoving 20% more shaders on board... and finding ways to run shaders at lower precision (which is what they did with tensor cores this gen) so they can make things practical there.

Yes, I get it, Pascal sucks at DXR... I never claimed it didn't. (Counting design time, it's also 5 years old.) Just that it is possible to do ray calculation in shader space. It doesn't require 2-4 billion more transistors with no other use... and if NV was serious about gamers they would be figuring out how to make that happen. The reality is gamers are secondary to their main tensor-core market segment. If Ampere still has tensor hardware... then it's official, imo... Intel or AMD will be the new king by the end of 2020. NV seems to be sitting in the same position Intel was in when they thought it was a good idea to swing for the fences with 10nm. If they really try and pack 20+ billion transistors into a 7nm die, they are looking at their own Intel 10nm disaster. Only in a funny turn it will probably be Intel sweeping in with the chips to fawn over.


Very well said^


The differences between RDNA and Turing will come to light over the next 6 months. Turing is essentially dead as a gaming GPU (like Vega 20 was...), because it was not meant for gamers but for other industries, and was handed down as a gamer chip. Therefore, Nvidia is already working on their new 7nm GPU, which has not even been taped out yet. So it won't be available at retail until about September 2020.

So since there are no new Turing chips coming, the Turing architecture is all Nvidia has to offer for another full year.


It's a much different scene when looking at AMD and their new RDNA architecture with little Navi 10. Everyone already knows what is coming with bigger Navi 14: no matter how you slice it, it is going to be a small chip that is able to compete with the 2080 Ti.

What that means is this:
-AMD can sell the 5700 for $299 all day long, because the chip is only 252mm^2.
-AMD can sell the 5800 for $499 all day long, because THAT chip would be something like 335mm^2.



You can already see retailers discounting the Radeon 5700 XTs to $299.

But I believe... AMD is just holding the price structure and mindshare of the new RDNA Radeon 5700 series in check until the AIB cards make their debut. Then you will see AMD's blower design drop to the normal pricing of $299 and $389. And news soon, with the Radeon 5800 series slotting into the $499 to $599 bracket, leaving the 5900 series as the TOP DOG... at $799.
 
Then we may as well all forget ray tracing.

AI tensor-core hardware is not a good long-term solution. I doubt AMD or Intel go that route. If you folks are all right and shaders can not be improved in any way to be efficient at ray calculation, then the ray tracing / raster hybrid is dead. Which is very possible.

Nvidia used tensor cores because frankly the chip wasn't designed for gaming only... they designed one monolithic chip to cover all their market segments. I don't think that is going to be realistic at 7nm and beyond.

The only way Ampere keeps tensor cores, and hence ray tracing cores, as part of its design... is if they choose to give up a ton of potential shader performance. Which is pretty damn risky with AMD seeming to be back in a position to spend money and Intel about to join the fight. Think of it this way: if you bought a 2060 or 2070... you COULD have had at least 2080 performance if Nvidia had dedicated the 1/3 of the die that is right now tensors to more shader cores. Tensor/RT cores, which have zero other practical game use, are a waste of die space. Nvidia would be better off shoving 20% more shaders on board... and finding ways to run shaders at lower precision (which is what they did with tensor cores this gen) so they can make things practical there.

Yes, I get it, Pascal sucks at DXR... I never claimed it didn't. (Counting design time, it's also 5 years old.) Just that it is possible to do ray calculation in shader space. It doesn't require 2-4 billion more transistors with no other use... and if NV was serious about gamers they would be figuring out how to make that happen. The reality is gamers are secondary to their main tensor-core market segment. If Ampere still has tensor hardware... then it's official, imo... Intel or AMD will be the new king by the end of 2020. NV seems to be sitting in the same position Intel was in when they thought it was a good idea to swing for the fences with 10nm. If they really try and pack 20+ billion transistors into a 7nm die, they are looking at their own Intel 10nm disaster. Only in a funny turn it will probably be Intel sweeping in with the chips to fawn over.

You keep saying tensor cores (and RT cores, I assume) are not a long-term solution. Based on what? I mean, it's only the 1st generation of RTX.
 
I really don't care much for consumer SKUs; my days are spent working with stuff like this:
View attachment 179168

The prices (and profit margins) make people who whine over prices for consumer SKUs look like muppets.

And we all get it... DXR is "useless"... until AMD gets their head out of the *bleeeep* and fully supports DX12.

/cue "Wait for big Navi"... the mantra being "Wait for..." ;)


What does any of THAT^^ have to do with Gamers upgrading/buying GPUs..?


Ignore the little people all you want, but as so many Reviewers and Gamers and People have said publicly (i.e. told you), TURING ARCHITECTURE... IS A GAMING FLOP.

Time to set your pom-poms down, dude. Your team lost and the game is over. Turing shot its load... it's all Jensen has for the next 13 months. If you keep playing this "I don't read other people's reviews, because I know more than them" game, you are going to get jebaited again... when Dr. Su drops more RDNA on us, in the flavor of the 5800 series...


You know things, so you have to know and admit that this is true and that big Navi is incoming... and WAY BEFORE Ampere is ever in sight.
 
Ignore the little people all you want, but as so many Reviewers and Gamers and People have said publicly (i.e. told you), TURING ARCHITECTURE... IS A GAMING FLOP.

Option A: Big Navi lacks hardware ray-tracing --> it's dead on arrival
Option B: Big Navi has the DXR support that you so vehemently despise and is priced according to its performance relative to Turing

Pick one ;)
 
Or DXR is like PhysX and goes the way of the dodo, and AMD is betting on something not proprietary.
I can't see how DXR or something else along those lines could make my gaming better / more enjoyable. I think just getting to DX12 will be just fine for me, and then I will leave the "megapixel race" to the few elitist players.

To me it seems like AMD won the FreeSync / G-Sync battle, and I think they can still win the next level of graphics too, and next-level graphics don't absolutely have to be DXR.
 
Or DXR is like PhysX and goes the way of the dodo, and AMD is betting on something not proprietary.
I can't see how DXR or something else along those lines could make my gaming better / more enjoyable. I think just getting to DX12 will be just fine for me, and then I will leave the "megapixel race" to the few elitist players.

To me it seems like AMD won the FreeSync / G-Sync battle, and I think they can still win the next level of graphics too, and next-level graphics don't absolutely have to be DXR.

Keep your expectations low, avoid disappointments.
 
I'd agree, Oldmodder. AMD did basically win the FreeSync/G-Sync battle. You can obviously spin it either way (since Nvidia did gain an advantage by taking away AMD's last big feature), but I think in the long run you will see standards-based features winning out over proprietary ones (all else being equal).

With DXR it is a little different. While Nvidia's RTX hardware is proprietary, they were smart to work with Microsoft and incorporate it into DX12. This basically forces AMD's hand; they will be left behind even further if they can't come up with a viable RT solution.

And DXR does look nice. I actually chose to "upgrade" from a 1440p monitor to 1080p so I can experience DXR with higher framerates. And I'm happy I made that trade-off. It is obviously the future, and it will only get better as developers learn to use it.
 
Winning is always hard to quantify.
The Western way of living clearly won over the old divided-world way of living, but what was our prize then: 100 USD cheaper iPhones, 75-cent burgers, Japanese cars with large turbos.
America won the space race, and we got duct tape, Teflon and so on. Duct tape might be fine if you don't ask some kidnap victims; Teflon is also slick if you don't ask the people living downstream of the DuPont factories that made the damn stuff.

I don't know, but someone always wins something.
 
What exactly did AMD get by “winning”?
Well display manufacturers can now make their products work with both AMD and Nvidia customers by using FreeSync, while also saving money on the G-Sync license fee.

It doesn't take Albert Einstein to see that the display companies are going to choose FreeSync over G-Sync, and that eventually there will be no G-Sync monitors made anymore (though, by that time, we will likely have HDMI VRR).

AMD did not "win" financially, but they were able to lead with a feature that is now becoming standard, so I still would take that as a win.
 