Why does raytracing suck so much? Is it because of Nvidia's bad hardware implementation?

Status: Not open for further replies.
Nvidia hardware implementation sucks compared to what? We literally have no other RT implementation to compare it against.

No need to compare it to anything. The added bloat is another Nvidia tax on their already overpriced cards. If raytracing were usable without tanking performance, an argument could be made as to how useful the implementation is. I guess. But as it stands, the ballooning of the die is something consumers are paying for, even though raytracing on all but the highest-end card is virtually useless without serious compromise to the experience. And even then, it's really only good for screenshots and stills. Nvidia's attempt at raytracing made it clear that the industry is not ready, nor are they.
 
No need to compare it to anything. The added bloat is another Nvidia tax on their already overpriced cards. If raytracing were usable without tanking performance, an argument could be made as to how useful the implementation is. I guess. But as it stands, the ballooning of the die is something consumers are paying for, even though raytracing on all but the highest-end card is virtually useless without serious compromise to the experience. And even then, it's really only good for screenshots and stills. Nvidia's attempt at raytracing made it clear that the industry is not ready, nor are they.

It's expensive and I don't own a 4K monitor? Or maybe you just wish AMD had a 1080 Ti killer lying around, let alone a 2080 Ti killer.

RT was always going to be a first-gen gimmick. It will be something one day, but you're looking for the result of the bottom of the ninth when the first pitch has just been thrown.

No need to compare, my butt. If you're the first one out of the gate, you're the one that gets to show how it can be done, you're the only one that can really do it, and everyone else gets to stare at your rear for a while.
 
It's got to keep you up at night if you are designing consoles... when will everything shift to ray tracing? Will it ever? Will it happen the year after we launch our hardware and make us look like idiots? If we do a full ray-tracing console, will it be a bit too early and not hold up?

Will VR screw us?
 
AMD implementations? How will AMD address DXR - huge monolithic chips like Nvidia, or something chiplet-based like DXR coprocessors? There are different ways of designing a bridge, so to speak; as long as it gets the intended traffic across, all is good. I do not see AMD going the big-die route. Now if AMD can give options - say you can buy the GPU for $400 and, if you want DXR (ray tracing), pay another $200 or whatever for a coprocessor card or for it on the card itself - you can have your cake and maybe eat it with some extra frosting later. Another interesting option is an interposer with 3 HBM stacks (768-bit bus) and one DXR coprocessor stack, as in multiple ASICs for DXR. Who thinks AMD will shift to very large dies? I don't.

With smaller dies you get more chips per wafer, which means you don't have to consume as much wafer production for your product - TSMC may not have the capacity to supply the chips needed when they are large, but can when they are small. DXR coprocessors could also be manufactured at different fabs, on different process nodes, etc. It's a flexible design that can bring a lot more computational ability to bear.
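To make the wafer-economics point concrete, here is a minimal Python sketch using the common gross-dies-per-wafer approximation on a standard 300 mm wafer. The die sizes below are illustrative assumptions, not any specific product, and real numbers also depend on scribe lines, edge exclusion and defect yield.

[code]
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Textbook approximation: pi*d^2/(4*A) - pi*d/sqrt(2*A).

    Ignores scribe lines, edge exclusion and yield, so treat it as a rough
    upper bound on candidate dies per wafer.
    """
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * d**2 / (4 * a) - math.pi * d / math.sqrt(2 * a))

# Hypothetical sizes: one large monolithic GPU vs. a smaller GPU plus a DXR chiplet.
print(gross_dies_per_wafer(750))   # large monolithic die  -> ~69 per wafer
print(gross_dies_per_wafer(450))   # smaller GPU die       -> ~125 per wafer
print(gross_dies_per_wafer(100))   # small DXR coprocessor -> ~640 per wafer
[/code]

Smaller dies also tend to yield better (fewer dies are lost to any given defect), which compounds the cost advantage of a chiplet-style split.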

Intel was able to do ray tracing using their CPUs last decade:
https://everipedia.org/wiki/lang_en/Quake_Wars:_Ray_Traced/

All the coprocessor has to do is compute how much intensity the light has per pixel, the color bounce/shift from other objects, shadow/GI information, caustics, refraction... and then send that to the rasterizer to weigh the textures/colors against the light information. It is not rendering the scene and is not looking at textures other than normal-map/bump-map information for the calculation, meaning the bandwidth does not have to be that great. As far as the rasterizer is concerned, it is a super-dynamic lightmap, shadow map, etc.

There is something I thought Nvidia would do: render the lighting at, say, 30 FPS and interpolate it up to the actual frame rate. It won't be as accurate, but it would probably be hard to tell the difference as well. Just some thoughts.
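A minimal sketch of that idea in Python, assuming the ray-traced pass writes a per-pixel lighting buffer that is refreshed at a lower rate and blended toward the latest result each display frame. All the names, resolutions and rates here are hypothetical, and a real engine would also reproject by motion vectors to avoid ghosting on moving objects.

[code]
import numpy as np

H, W = 1080, 1920
DISPLAY_FPS, LIGHTING_FPS = 60, 30
FRAMES_PER_LIGHTING_UPDATE = DISPLAY_FPS // LIGHTING_FPS  # refresh RT lighting every 2nd frame

def ray_traced_lighting_pass():
    """Placeholder for the expensive RT pass (GI, shadows, reflections)."""
    return np.random.rand(H, W, 3).astype(np.float32)

lighting_latest = ray_traced_lighting_pass()   # last full RT result
lighting_shown = lighting_latest.copy()        # what the rasterizer samples

for frame in range(6):
    if frame % FRAMES_PER_LIGHTING_UPDATE == 0:
        lighting_latest = ray_traced_lighting_pass()   # 30 FPS refresh
    # Blend the displayed buffer toward the latest RT result every display frame.
    alpha = 0.5
    lighting_shown = (1 - alpha) * lighting_shown + alpha * lighting_latest
    # The rasterizer would now combine lighting_shown with albedo/material data
    # per pixel, treating it exactly like a very dynamic lightmap.
[/code]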
 
No need to compare it to anything. The added bloat is another Nvidia tax on their already overpriced cards. If raytracing were usable without tanking performance, an argument could be made as to how useful the implementation is. I guess. But as it stands, the ballooning of the die is something consumers are paying for, even though raytracing on all but the highest-end card is virtually useless without serious compromise to the experience. And even then, it's really only good for screenshots and stills. Nvidia's attempt at raytracing made it clear that the industry is not ready, nor are they.

YOU!!!

Do you STILL claim (falsely) that RTX is 50% of the die?!

Either you LIED...or you are so ignorant about the topic you should not post...so which is it?


And you even posted this:
I think the ray tracing effects are unrealistic and tacky, and a waste of resources. Resources for which consumers heavily pay the price. The water in FarCry 5, for example, looks much better than any demos we've seen from ray tracing so far, without the need to spend $1200 and hundreds of mm² of die space. Nvidia is going all out on their public disinformation campaign, and I think they realize they are screwed if they can't convince everyone that ray tracing is better than sliced bread. Controlling the media is part of that process. I wonder who else uses that tactic?
If you think screen-space reflections are better than raytracing...you must be trolling?
This is FarCry 5:


This is Battlefield V:


Posting crap is easy....backing it up...much harder...


AMD implementations? How will AMD address DXR - huge monolithic chips like Nvidia, or something chiplet-based like DXR coprocessors?

You might want to read up....RTX is not what is making the Turing dies "large"...
 
This is Battlefield V:


Posting crap is easy....backing it up...much harder...


Is that video supposed to be impressive? To me ray tracing is more about global illumination. Everything else is secondary.
 
YOU!!!

Do you STILL claim (falsely) that RTX is 50% of the die?!

Either you LIED...or you are so ignorant about the topic you should not post...so which is it?


And you even posted this:

If you think screen-space reflections are better than raytracing...you must be trolling?
This is FarCry 5:


This is Battlefield V:


Posting crap is easy....backing it up...much harder...




You might want to read up....RTX is not what is making the Turing dies "large"...

To have RTX, it is more than just tensor and ray-tracing cores; you also have to have the logic, communication lanes, etc. to support the new features.

GP102 (used in the 1080 Ti), 16 nm node, die area 471 mm², 3584 shader units
TU102 (2080 Ti), 12 nm node, die area 754 mm², 4352 shader units

4352/3584 = 1.21 -> the 2080 Ti has 21% more shaders, texture units, etc. than the 1080 Ti.

Die-size wise, 2080 Ti / 1080 Ti is 754/471 = 1.61 times the size. 21% more shaders but 61% bigger overall. The memory controller and support for 11 GB of memory should be about the same. So where did all the increase in die size come from?

Then TU102 on the 12 nm node has a potential transistor density about 20% higher than the 16 nm GP102. In other words, if the 1080 Ti's GP102 were built on the 12 nm process you would be looking at a die size of about 377 mm², making the real difference 754/377, a factor of 2!

If one were to add shaders/texture units to the GP102 GPU (1080 Ti) to equal the 2080 Ti - no tensor or ray-tracing core crap - and place it on the 12 nm node from TSMC, roughly what size would it be?
  • 1.21 x 471 mm² = 570 mm² on the 16 nm process
  • on the 12 nm process: 570 - (0.2 x 570) = 456 mm² - smaller than the current 1080 Ti GPU
Of course Nvidia improved the processing ability of the shaders for Turing, so nothing is exactly proportional - but the raytracing ability added a hell of a lot of complexity to the chip, well above a mere 16% overall increase in size. Yes, if one just looks at the tensor cores and RT cores in isolation they may not appear to increase the size that much, but those are not just isolated blocks of transistors; the whole chip needs to support their functions as well, adding even more transistors, caches, buffers, etc. to make it all work.

Now the math indicates roughly a 50% increase in die size - a factor of 2 overall - to support all the new features.
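Here is the same back-of-the-envelope estimate written out in Python so the steps are explicit. Note that the 20% density gain assumed for 12 nm over 16 nm is this post's own assumption rather than a measured figure (a later post in the thread points out that both nodes land near ~25 million transistors/mm²), and the calculation attributes all of the extra area to the RTX features, ignoring the reworked SMs, larger caches and concurrent INT/FP pipes mentioned in the replies.

[code]
# Back-of-the-envelope die-area comparison, following the post's own assumptions.
gp102_area_mm2, gp102_shaders = 471, 3584   # GTX 1080 Ti, 16 nm
tu102_area_mm2, tu102_shaders = 754, 4352   # RTX 2080 Ti, 12 nm

shader_ratio = tu102_shaders / gp102_shaders            # ~1.21
area_ratio = tu102_area_mm2 / gp102_area_mm2            # ~1.61

# Hypothetical "Pascal with 21% more shaders", scaled purely by shader count...
scaled_pascal_16nm = gp102_area_mm2 * shader_ratio      # ~572 mm^2 (the post rounds to 570)
# ...then shrunk by the assumed 20% density advantage of 12 nm over 16 nm.
ASSUMED_DENSITY_GAIN = 0.20
scaled_pascal_12nm = scaled_pascal_16nm * (1 - ASSUMED_DENSITY_GAIN)  # ~458 mm^2 (post: 456)

overhead = tu102_area_mm2 / scaled_pascal_12nm          # ~1.65
print(f"shader ratio {shader_ratio:.2f}, raw area ratio {area_ratio:.2f}")
print(f"hypothetical non-RTX Turing-class die: {scaled_pascal_12nm:.0f} mm^2")
print(f"actual TU102 is {overhead:.2f}x that size under these assumptions")
[/code]

Under these assumptions the final ratio comes out closer to 1.65x than exactly 1.5x, which is part of why the figure is disputed in the posts that follow.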
 
To have RTX, it is more than just tensor and ray-tracing cores; you also have to have the logic, communication lanes, etc. to support the new features.

GP102 (used in the 1080 Ti), 16 nm node, die area 471 mm², 3584 shader units
TU102 (2080 Ti), 12 nm node, die area 754 mm², 4352 shader units

4352/3584 = 1.21 -> the 2080 Ti has 21% more shaders, texture units, etc. than the 1080 Ti.

Die-size wise, 2080 Ti / 1080 Ti is 754/471 = 1.61 times the size. 21% more shaders but 61% bigger overall. The memory controller and support for 11 GB of memory should be about the same. So where did all the increase in die size come from?

Then TU102 on the 12 nm node has a potential transistor density about 20% higher than the 16 nm GP102. In other words, if the 1080 Ti's GP102 were built on the 12 nm process you would be looking at a die size of about 377 mm², making the real difference 754/377, a factor of 2!

If one were to add shaders/texture units to the GP102 GPU (1080 Ti) to equal the 2080 Ti - no tensor or ray-tracing core crap - and place it on the 12 nm node from TSMC, roughly what size would it be?
  • 1.21 x 471 mm² = 570 mm² on the 16 nm process
  • on the 12 nm process: 570 - (0.2 x 570) = 456 mm² - smaller than the current 1080 Ti GPU
Of course Nvidia improved the processing ability of the shaders for Turing, so nothing is exactly proportional - but the raytracing ability added a hell of a lot of complexity to the chip, well above a mere 16% overall increase in size. Yes, if one just looks at the tensor cores and RT cores in isolation they may not appear to increase the size that much, but those are not just isolated blocks of transistors; the whole chip needs to support their functions as well, adding even more transistors, caches, buffers, etc. to make it all work.

Now the math indicates roughly a 50% increase in die size - a factor of 2 overall - to support all the new features.

Look at what they changed in Turing...the SMs really got reworked, unified cache, double the cache size...I know people like to whine about everything today...but the die-size impact of raytracing...that is an absurd whine...the Turing CUDA cores are 20% more efficient than Pascal's (at the same Hz)...I guess that happened via "free transistors"?
Besides, look e.g. at the TU116-400-A1 (it has no RT cores and no Tensor cores)...the RT cores are pretty small compared to the rest of the stack.
And we are nowhere near the absurd claim of "50% die space"...
 
Look at what they changed in Turing...the SMs really got reworked, unified cache, double the cache size...I know people like to whine about everything today...but the die-size impact of raytracing...that is an absurd whine...the Turing CUDA cores are 20% more efficient than Pascal's (at the same Hz)...I guess that happened via "free transistors"?
Besides, look e.g. at the TU116-400-A1 (it has no RT cores and no Tensor cores)...the RT cores are pretty small compared to the rest of the stack.
And we are nowhere near the absurd claim of "50% die space"...

Another way to look at it: shaders per mm².

I did a comparison of the 10 series, 16 series, and 20 series.

There appears to be about a 15% penalty for Turing (the 16 series, without RTX), and about a 30% penalty for Turing RTX...

So I get to approximately a 15% penalty for the RTX units over the 16 series without them.

It seems no matter how we look at it, the RTX "Tax" is not that large.

I am not sure where the assumption that RTX units used something like half the die came from.

Edit: Extra note about the 16 nm -> 12 nm process change. Ignore it. It's marketing. Check transistors/mm²; it's ~25 million/mm² for both Pascal and Turing.
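A rough reproduction of that shaders-per-mm² comparison, using commonly cited die areas and full-chip shader counts (the figures below are approximate public numbers, and the exact penalty depends on which SKUs you pair up, so treat the percentages as ballpark):

[code]
# Approximate published die areas (mm^2) and full-chip shader (CUDA core) counts.
pairs = [
    # (Pascal chip,                  Turing counterpart,               has RT/Tensor cores?)
    (("GP106 (GTX 1060)", 200, 1280), ("TU116 (GTX 1660 Ti)", 284, 1536), False),
    (("GP104 (GTX 1080)", 314, 2560), ("TU104 (RTX 2080)",    545, 3072), True),
]

for (p_name, p_area, p_sh), (t_name, t_area, t_sh), has_rtx in pairs:
    p_density = p_sh / p_area
    t_density = t_sh / t_area
    penalty = (1 - t_density / p_density) * 100
    kind = "RTX (RT + Tensor cores)" if has_rtx else "non-RTX Turing"
    print(f"{p_name}: {p_density:.2f} sh/mm^2 -> {t_name}: {t_density:.2f} sh/mm^2 "
          f"({penalty:.0f}% density penalty, {kind})")
[/code]

With these numbers the non-RTX Turing pair lands around a 15% density penalty and the RTX pair around 30%, and the gap between the two is roughly the cost of the RT/Tensor hardware plus its supporting plumbing - in line with the ~15% figure above.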
 
Another way to look at it: shaders per mm².

I did a comparison of the 10 series, 16 series, and 20 series.

There appears to be about a 15% penalty for Turing (the 16 series, without RTX), and about a 30% penalty for Turing RTX...

So I get to approximately a 15% penalty for the RTX units over the 16 series without them.

It seems no matter how we look at it, the RTX "Tax" is not that large.

I am not sure where the assumption that RTX units used something like half the die came from.

Same place as "Raytracing looks bad" came from...some cesspool of FUD...I have seen claims like this before, ranging from 1/3 of the die to 1/2 the die...ignorance is the new black it seems.
And those 15% do seem to speed up RT significantly more than 15% ;)
 
Look at what they changed in Turing...the SMs really got reworked, unified cache, double the cache size...I know people like to whine about everything today...but the die-size impact of raytracing...that is an absurd whine...the Turing CUDA cores are 20% more efficient than Pascal's (at the same Hz)...I guess that happened via "free transistors"?
Besides, look e.g. at the TU116-400-A1 (it has no RT cores and no Tensor cores)...the RT cores are pretty small compared to the rest of the stack.
And we are nowhere near the absurd claim of "50% die space"...
Yes indeed, Nvidia did some cool stuff, no doubt. The chip size grew far more than the performance gained in regular, non-raytraced games, but there are a number of new features: the ability to do floating-point and integer math at the same time, half precision, and so on. Add in the raytracing and AI and you have one expensive chip to make, but it works for the most part.

I don't see AMD going the big-chip route; a chiplet route with some screaming-fast separate chips could potentially keep the cost down. I think it will be very interesting to see how big Navi does it, if at all.
 
Yes indeed, Nvidia did some cool stuff, no doubt. The chip size grew far more than the performance gained in regular, non-raytraced games, but there are a number of new features: the ability to do floating-point and integer math at the same time, half precision, and so on. Add in the raytracing and AI and you have one expensive chip to make, but it works for the most part.

I don't see AMD going the big-chip route; a chiplet route with some screaming-fast separate chips could potentially keep the cost down. I think it will be very interesting to see how big Navi does it, if at all.

I am curious as to why AMD cannot (it seems) do what NVIDIA did for Pascal SKUs....enable shader-based DXR on any of their current GCN architectures.
 
I am curious as to why AMD cannot (it seems) do what NVIDIA did for Pascal SKUs....enable shader-based DXR on any of their current GCN architectures.
They probably could, except it might be rather embarrassing if they did :D. If they could allow multiple cards to interact better to do raytracing - like dedicating one card to ray tracing - that would be cool. Even ProRender, AMD's ray-tracing software, runs better on Nvidia :LOL: without using the ray-tracing hardware, ouch! Still, the bottom line is that yes, you can do raytracing - or some of it - with Nvidia Turing, but it does come with a price tag in cost as well as performance.
 
They probably could, except it might be rather embarrassing if they did :D.

I agree. I don't think AMD should offer DXR drivers until they have something with usable DXR performance.

Nvidia does it because it helps up-sell RTX cards, not because there is much real use in having DXR drivers on Pascal, so for Nvidia the poor DXR performance of Pascal doesn't hurt.

AMD has nothing to up-sell, and in fact, DXR drivers on Vega would probably only help sell RTX cards, so for AMD, poor DXR performance does hurt.
 
What about PhysX - a better comparison?

Bits and pieces demoed here and there..

What about when "3D accelerators" came into the picture? Hardware anti-aliasing? Hardware T&L? Bits and pieces demoed here and there too, until it became widely implemented. I don't own an RTX card and honestly, I wasn't expecting stellar ray tracing performance from a 1st-gen product. Somewhere down the line, they'll figure out how to do ray tracing even faster and more efficiently, but someone has to get the ball rolling first.
 
No need to compare it to anything. The added bloat is another Nvidia tax on their already overpriced cards. If raytracing were usable without tanking performance, an argument could be made as to how useful the implementation is. I guess. But as it stands, the ballooning of the die is something consumers are paying for, even though raytracing on all but the highest-end card is virtually useless without serious compromise to the experience. And even then, it's really only good for screenshots and stills. Nvidia's attempt at raytracing made it clear that the industry is not ready, nor are they.

How exactly is the industry supposed to get ready without taking a first step? No matter when raytracing is introduced, it will always be expensive compared to rasterization. No magic in the future will change that, so I'd rather they invest early so I can experience RT nirvana before I die.
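For a sense of why ray tracing stays expensive relative to rasterization, here is a rough operation count in Python. All the numbers are assumptions picked for illustration: 1080p, a few rays per pixel, and a BVH that keeps each ray to a few dozen box/triangle tests instead of testing every triangle in the scene.

[code]
width, height = 1920, 1080
pixels = width * height

RAYS_PER_PIXEL = 3        # assumed: 1 primary + 1 shadow + 1 reflection ray
TESTS_PER_RAY = 50        # assumed: rough BVH traversal cost per ray
FPS = 60

rays_per_frame = pixels * RAYS_PER_PIXEL
tests_per_second = rays_per_frame * TESTS_PER_RAY * FPS

print(f"{rays_per_frame / 1e6:.1f} million rays per frame")
print(f"~{tests_per_second / 1e9:.0f} billion intersection tests per second at {FPS} FPS")
[/code]

Even with good acceleration structures the work is per-pixel and largely incoherent in memory, which is why it costs more than a single rasterization pass and why dedicated intersection hardware (plus fewer rays and denoising) is attractive.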
 
How exactly is the industry supposed to get ready without taking a first step? No matter when raytracing is introduced, it will always be expensive compared to rasterization. No magic in the future will change that, so I'd rather they invest early so I can experience RT nirvana before I die.

Who says it has to be expensive? You got brainwashed by Nvidia to pay out the nose.
 
Who says it has to be expensive? You got brainwashed by Nvidia to pay out the nose.

I hate the price increase, but there is no valid argument for your position. This isn't some brainwashing job.

All new tech is expensive for early adopters as companies look to recover their R&D costs; we pay now for your future enjoyment of better hardware. You are welcome.
 
Who says it has to be expensive? You got brainwashed by Nvidia to pay out the nose.

Really?
The sole reason WHY we have used hacks is that we have lacked the COMPUTATIONAL power to do true raytracing.
I hope you are trolling...the alternative is that you are too ignorant to debate about the topic?
 
I hate the price increase, but there is no valid argument for your position. This isn't some brainwashing job.

All new tech is expensive for early adopters as companies look to recover their R&D costs; we pay now for your future enjoyment of better hardware. You are welcome.

It's like raytracing is making the fanboys post even more ridiculous stuff than usual...this thread is a fine example...I sense much fear of DXR ;)
 
Who says it has to be expensive? You got brainwashed by Nvidia to pay out the nose.

Expensive is a relative term. I don't think the RTX 2060 is that expensive.

Though I would have phrased it as "there will always be a cost". There will also always be a shortage of software initially.

So I agree with MagoSeed: best to get started as soon as is reasonable. That way, when the second generation arrives, more refined and at a better price, there will actually be some games that use it, and developers will have stepped further along the RT optimization learning curve.
 
Expensive is a relative term. I don't think the RTX 2060 is that expensive.

Though I would have phrased it as "there will always be a cost". There will also always be a shortage of software initially.

So I agree with MagoSeed: best to get started as soon as is reasonable. That way, when the second generation arrives, more refined and at a better price, there will actually be some games that use it, and developers will have stepped further along the RT optimization learning curve.

If you follow what developers are posting, one thing is clear: raytracing (DXR) is here to stay.
 
Expensive is a relative term. I don't think the RTX 2060 is that expensive.

Though I would have phrased it as "there will always be a cost". There will also always be a shortage of software initially.

So I agree with MagoSeed: best to get started as soon as is reasonable. That way, when the second generation arrives, more refined and at a better price, there will actually be some games that use it, and developers will have stepped further along the RT optimization learning curve.

Expensive compared to launch prices of:

GTX 460 - $229
GTX 560Ti - $240
GTX 660Ti - $300
GTX 760 - $249
GTX 960 - $199
GTX 1060 6GB - $300

So you're looking at $50 more expensive than their previous mainstream parts right out of the gate. Granted, that's not as egregious as the high-end pricing (looking at you, $1,199 2080 Ti FE), but $250-300 is really the cut-off for people looking at mainstream cards. The 1660 Ti is a much better card in that space. If the 2060 were a $299 card and the 1660 Ti were a $249 card, I think it would sell a lot better.

Also, the used pricing of Pascal cards still puts a damper on sales. A $250 1080 runs circles around the 2060 and the RTX features of the 2060 aren't fast enough to actually be used in any meaningful way.

At the end of the day, I'm pro new technology. I just don't think that I should have to pay a premium to be a beta tester though...
 
I think
Expensive compared to launch prices of:

GTX 460 - $229
GTX 560Ti - $240
GTX 660Ti - $300
GTX 760 - $249
GTX 960 - $199
GTX 1060 6GB - $300

So you're looking at $50 more expensive than their previous mainstream parts right out of the gate. Granted, that's not as egregious as the high-end pricing (looking at you, $1,199 2080 Ti FE), but $250-300 is really the cut-off for people looking at mainstream cards. The 1660 Ti is a much better card in that space. If the 2060 were a $299 card and the 1660 Ti were a $249 card, I think it would sell a lot better.

Also, the used pricing of Pascal cards still puts a damper on sales. A $250 1080 runs circles around the 2060 and the RTX features of the 2060 aren't fast enough to actually be used in any meaningful way.

At the end of the day, I'm pro new technology. I just don't think that I should have to pay a premium to be a beta tester though...
You post like someone unaware of the rising cost per die the smaller the nodes get?

The curve reached the bottom a couple of generations ago...multi-patterning masks, tooling, design...all are going up.

So you are whining about the wrong metric...good job!
 
I think

You post like someone unaware of the rising cost per die the smaller the nodes get?

The curve reached the bottom a couple of generations ago...multi-patterning masks, tooling, design...all are going up.

So you are whining about the wrong metric...good job!

I'm whining about the only metric....$$$$$. Thanks for playing though.
 
I'm whining about the only metric....$$$$$. Thanks for playing though.

And it will keep going up...due to physics....so keep whining forever from now on...but don't blame the wrong cause :rolleyes:
 
And it will keep going up...due to physics....so keep whining forever from now on...but don't blame the wrong cause :rolleyes:

If the price keeps going up, eventually nobody is going to buy it. I don't think I'm way off base thinking that "mainstream" is around $250-300 at the top end, and more realistically $150-200. It doesn't matter what the die size is. It doesn't matter what the R&D cost is. It doesn't matter if it shows you fancy lighting or not. When you get above $250-300 you're moving past what people will pay (usually in the form of whatever Acer, Dell, etc. put in their OEM boxes). The market for the $800-1,000 computer is a lot bigger than for the $1,500-2,000 one.
 
If the price keeps going up, eventually nobody is going to buy it. I don't think I'm way off base thinking that "mainstream" is around $250-300 at the top end, and more realistically $150-200. It doesn't matter what the die size is. It doesn't matter what the R&D cost is. It doesn't matter if it shows you fancy lighting or not. When you get above $250-300 you're moving past what people will pay (usually in the form of whatever Acer, Dell, etc. put in their OEM boxes). The market for the $800-1,000 computer is a lot bigger than for the $1,500-2,000 one.

Sure, we can choose to try and ignore physics.
The last time that happened was with Intel in the early 2000s...no 10 GHz P4s.
 
Sure, we can choose to try and ignore physics.
The last time that happened was with Intel in the early 2000s...no 10 GHz P4s.

I don't know what you're arguing about anymore. Physics isn't the consumer's problem. The consumer wants to spend $X, and when Nvidia provides a product that's well above $X for whatever reason, the consumer won't buy, or will buy in smaller quantities.
 
I don't know what you're arguing about anymore. Physics isn't the consumer's problem. The consumer wants to spend $X, and when Nvidia provides a product that's well above $X for whatever reason, the consumer won't buy, or will buy in smaller quantities.

You post like someone who thinks manufacturers are above the laws of physics...just saying.

We didn't get 10 GHz Pentiums, Moore's law is dead, and fab costs are rising...those are facts you have to live with...like it or not.
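To illustrate the economics being argued here, a toy calculation with purely hypothetical relative numbers: if wafer cost per mm² rises about as fast as transistor density improves, the cost per transistor stops falling between nodes, and a die that keeps adding features cannot simply get cheaper.

[code]
# All numbers below are made-up illustrations of the trend, not real foundry pricing.
nodes = {
    # node class: (relative wafer cost per mm^2, relative transistor density)
    "28nm-class": (1.0, 1.0),
    "16nm-class": (1.6, 1.9),
    "7nm-class":  (2.8, 3.3),
}

base_cost, base_density = nodes["28nm-class"]
for name, (cost_mm2, density) in nodes.items():
    relative_cost_per_transistor = (cost_mm2 / base_cost) / (density / base_density)
    print(f"{name}: relative cost per transistor = {relative_cost_per_transistor:.2f}")
[/code]

With numbers shaped like these, the per-transistor cost barely moves after the first shrink, which is the "Moore's law is dead" point being made above.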
 
You post like someone who thinks manufacturers are above the laws of physics...just saying.

We didn't get 10 GHz Pentiums, Moore's law is dead, and fab costs are rising...those are facts you have to live with...like it or not.

Once again, PHYSICS IS NOT THE PROBLEM OF THE CONSUMER. If Nvidia or AMD can't put a compelling product in place of what already exists, at a price point consumers are willing to pay, they will stick with what they already have. For comparison, see the slowdown in mobile phone sales as prices rise. People don't care WHY cell phone prices are rising (BOM, etc. - similar to your argument). They only care that they are rising, and they choose not to upgrade as often.
 
Once again, PHYSICS IS NOT THE PROBLEM OF THE CONSUMER. If Nvidia or AMD can't put a compelling product in place of what already exists, at a price point consumers are willing to pay, they will stick with what they already have. For comparison, see the slowdown in mobile phone sales as prices rise. People don't care WHY cell phone prices are rising (BOM, etc. - similar to your argument). They only care that they are rising, and they choose not to upgrade as often.

Again, this is how the world looks...if you are AMD, Intel or NVIDIA.

You keep whining...see what good that will do...you cannot divide by zero no matter how many fluffy feelings you feel entitled to...it doesn't matter what you FEEL is the right price...reality trumps your fluff.

Now whine on....
 
IMHO this thread is dead.
The OP posted a load of BS and seems unwilling to correct his false claims...aka he ran away from the pile of feces he dumped.
 
Once again, PHYSICS IS NOT THE PROBLEM OF THE CONSUMER. If Nvidia or AMD can't put a compelling product in place of what already exists, at a price point consumers are willing to pay, they will stick with what they already have. For comparison, see the slowdown in mobile phone sales as prices rise. People don't care WHY cell phone prices are rising (BOM, etc. - similar to your argument). They only care that they are rising, and they choose not to upgrade as often.

Phone sales are slowing down because they’ve hit a hard wall on diminishing returns. As a result people are upgrading less often and manufacturers are raising prices to milk the people who are still buying.

We have many, many years to go before that happens with graphics. The upcoming console cycle will reset the bar again and people will upgrade.

The current pricing situation is a little bit off the curve though. We need some killer midrange products to bring things back in line. And the only way that happens is through more competitive parts from AMD.
 
I guess this has to be posted here, since reality seems to be something some posters just ignore:
[Attached image: image001.jpg]


And this might be relevant reading too:
https://www.google.com/url?sa=i&rct...aw2VBJGDjBykuQP-aje8E-TR&ust=1560199462003830
 
If the price keeps going up, eventually nobody is going to buy it. I don't think I'm way off base thinking that "mainstream" is around $250-300 at the top end, and more realistically $150-200. It doesn't matter what the die size is. It doesn't matter what the R&D cost is. It doesn't matter if it shows you fancy lighting or not. When you get above $250-300 you're moving past what people will pay (usually in the form of whatever Acer, Dell, etc. put in their OEM boxes). The market for the $800-1,000 computer is a lot bigger than for the $1,500-2,000 one.

I have to say that I agree.

What we're going to see, it seems, is a further deceleration of performance improvement per generation in the midrange, and an acceleration of prices at the high end. While it is because of physics, I agree that that makes no difference to the average buyer.
 