Is Path Tracing the future, beyond Ray Tracing?

Based on the video... I would agree with his assessment... Nvidia had to support rasterization as that is the STANDARD... but ray tracing or path tracing is where we will probably end up...
The question is... did Nvidia jump the gun?
 
A future, maybe, but not the future.
If there were no problems (programming, memory sharing) with having two cards, one of them dedicated to ray tracing, it might be a better approach.
All of the solutions for ray/path tracing require dedicated hardware and software.

That is where the problem lies.

Brent_Justice Nvidia did what it wanted to do when they made their decision, but the approach they chose requires such a large die that it is limited as it is, and the only way forward for Nvidia ray tracing is either making an even larger die or cannibalizing shader performance.

Unless there is a way to make shaders more ray tracing friendly, it will take a really long time to get anywhere performance-wise.
 
I'm not sure full ray tracing or path tracing will become the norm anytime soon, if ever. Thing is, the biggest advantage of RT over rasterization is lighting and its derivatives (shadows, ambient occlusion, reflections, refractions, etc.).
The "rest" is easier and faster to do with rasterization, so by using hybrid rendering you take the best of both worlds and can have great IQ and good performance.
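To make that concrete, here is a rough sketch of what a hybrid frame might look like: rasterization answers "what does each pixel see", and rays are only spent on a lighting term, in this case a hard shadow test. All the function names and the fake scene are made up for illustration; this is not any real engine or API.

Code:
# Hybrid rendering sketch: rasterization resolves primary visibility,
# ray tracing is only used for the lighting term it is best at
# (here: a hard shadow test). Everything below is a stand-in.

def rasterize_primary(width, height):
    """Pretend G-buffer pass: (hit_point, normal, albedo) per pixel.
    Every pixel 'hits' a flat floor for simplicity."""
    gbuffer = []
    for y in range(height):
        for x in range(width):
            hit = (x / width, 0.0, y / height)   # fake world position
            normal = (0.0, 1.0, 0.0)
            albedo = 0.8
            gbuffer.append((hit, normal, albedo))
    return gbuffer

def trace_shadow_ray(point, light_pos):
    """Pretend RT pass: True if the light is occluded. A real
    implementation would walk a BVH; here a fake occluder blocks
    everything with x < 0.5."""
    return point[0] < 0.5

def shade(gbuffer, light_pos, light_intensity):
    image = []
    for hit, normal, albedo in gbuffer:
        lit = 0.0 if trace_shadow_ray(hit, light_pos) else light_intensity
        image.append(albedo * lit)  # Lambert term omitted for brevity
    return image

frame = shade(rasterize_primary(4, 4), light_pos=(0.5, 1.0, 0.5), light_intensity=1.0)
print(frame)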

AFAIK not even Hollywood SFX production houses do full ray tracing. Everyone starts with rasterization for modeling and previsualization, and the final rendering is done with ray tracing.
 
Well, don't worry about full ray tracing; you won't get that from Nvidia :).
 
Very interesting stuff, given me a far better perspective of graphics. :cool:

Ultimately though, it seems to me that once gaming graphics has fully shifted to ray/path tracing, since it's all so massively parallel, it simply becomes a matter of how much silicon you can afford to put on a PCB for a sellable price.
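A rough illustration of why it scales like that: every pixel's path-traced estimate is independent of every other pixel's, so you can hand pixels out to as many execution units as you can afford. The process pool below is just a stand-in for "more silicon", and the radiance function is a made-up placeholder.

Code:
# Each pixel's Monte Carlo estimate is independent, so throughput scales
# almost linearly with how many workers (read: how much silicon) you have.

import random
from multiprocessing import Pool

WIDTH, HEIGHT, SAMPLES = 64, 64, 16

def radiance(x, y, rng):
    # Placeholder for "trace one light path through the scene".
    return rng.random() * (x + y) / (WIDTH + HEIGHT)

def render_pixel(index):
    x, y = index % WIDTH, index // WIDTH
    rng = random.Random(index)          # independent random stream per pixel
    total = sum(radiance(x, y, rng) for _ in range(SAMPLES))
    return total / SAMPLES

if __name__ == "__main__":
    with Pool() as pool:                # more workers ~ more silicon
        image = pool.map(render_pixel, range(WIDTH * HEIGHT))
    print(f"rendered {len(image)} pixels, mean = {sum(image)/len(image):.3f}")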
 
Great video overview. Maybe one day a company will come out with a dedicated path tracing "co-processor" card to reduce the need for rasterization silicon taking up space.

Imagine how much easier it would be to only have to worry about models, local light sources, global light sources, and textures when it comes to visual design. The remaining time can be spent on creativity. When we have a useful library of common items with a bit of randomization thrown in, you're back to high school kids creating games with AAA title visuals in class.
 
Ray tracing is used in games all the time (real ray tracing); it is baked into the textures for accurate lighting. It is not dynamic, but a number of tricks using floating-point math work out pretty well. HDR lighting is used as well to give more accurate lighting of spaces. So going from the traditional approach to a totally RTX lighting approach may actually degrade IQ.
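For anyone unfamiliar with baking, the idea is roughly this (toy sketch, the occluder test and light setup are invented): the expensive visibility tracing happens once offline and gets stored in a lightmap, so at runtime the "ray traced" lighting is just a texture fetch.

Code:
# "Real" ray tracing, done offline: visibility is traced at bake time and
# stored in a lightmap, so at runtime it is just a texture lookup.

def occluded(texel_pos, light_pos):
    # Placeholder visibility test; a real baker traces against scene geometry.
    return texel_pos[0] > 0.7 and light_pos[1] > 0.5

def bake_lightmap(size, lights):
    lightmap = [[0.0] * size for _ in range(size)]
    for v in range(size):
        for u in range(size):
            texel = (u / size, v / size)
            # Sum direct light from every light visible from this texel.
            lightmap[v][u] = sum(l["intensity"]
                                 for l in lights
                                 if not occluded(texel, l["pos"]))
    return lightmap

def shade_at_runtime(lightmap, u, v, albedo):
    size = len(lightmap)
    return albedo * lightmap[int(v * (size - 1))][int(u * (size - 1))]

lights = [{"pos": (0.5, 1.0), "intensity": 1.0}]
lm = bake_lightmap(8, lights)
print(shade_at_runtime(lm, 0.25, 0.25, albedo=0.8))   # static, but 'free' at runtime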

I would like to see RTX in motion, to see if the denoising is sufficient to prevent motion artifacts. This demo using RTX is terrible on motion noise; look at the textures, particularly the white cup on the table. While there is some video noise, that is not the noise I am seeing all over the place in this demo:



If this is the standard or quality level for RTX, then I would call this launch a fail for consumers. For developers it will give them tools to explore and learn with, and if Nvidia can deliver something better in the future, then maybe this will take off. The processing needed to get the number of rays up and the noise down, so that the AI can effectively remove the artifacts, may be a factor of 10x (guessing), on top of getting the performance up.
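The 10x figure is a guess, but the reason brute force alone doesn't get you there is that Monte Carlo noise only falls off with the square root of the sample count, so 10x the rays buys roughly a 3.2x reduction in noise; the rest has to come from the denoiser. Quick toy experiment (the per-ray estimator is made up):

Code:
# Monte Carlo noise falls as 1/sqrt(samples): 10x the rays is only ~3.2x less noise.

import random, statistics

def one_ray(rng):
    # Placeholder: one ray's contribution, mean 0.5, lots of variance.
    return rng.random()

def pixel_estimate(samples, rng):
    return sum(one_ray(rng) for _ in range(samples)) / samples

rng = random.Random(42)
for samples in (1, 10, 100):
    estimates = [pixel_estimate(samples, rng) for _ in range(2000)]
    noise = statistics.stdev(estimates)
    print(f"{samples:>3} spp -> noise (stdev) ~ {noise:.3f}")
# Expected: roughly 0.29, 0.09, 0.03 -- each 10x in rays is only ~3.2x less noise.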
 
He is right... nVidia is walking a very fine line to move the industry where it needs to go. If nVidia could go from 1 SPS @ 2K or 4K and 2x that every 2 years, it will be less than a decade before photorealism is possible in action-based games (i.e. not just staring at a screenshot), thus nuking the need to rasterize. After that it is just back to the days of incremental improvements. But I can see 50 SPS + denoise in the next decade without issue.
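The arithmetic behind that cadence, taking the post's 1 SPS starting point and 50 SPS target at face value:

Code:
# Back-of-the-envelope for "2x every 2 years", from 1 SPS to ~50 SPS.

import math

start_sps, target_sps = 1, 50
years_per_doubling = 2

doublings_needed = math.log2(target_sps / start_sps)    # ~5.64 doublings
years_needed = doublings_needed * years_per_doubling    # ~11.3 years

print(f"doublings needed: {doublings_needed:.2f}")
print(f"years at 2x/2yr:  {years_needed:.1f}")
print(f"after 10 years:   {start_sps * 2 ** (10 / years_per_doubling):.0f}x samples")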
 
Nor AMD....so your point being?
That we sit down and do nothing...until all parties are ready?

Point being that INTEL will come to the rescue bringing great RT performance at affordable prices... right? :D:D:rolleyes::rolleyes:
 
He is right... nVidia is walking a very fine line to move the industry where it needs to go. If nVidia could go from 1 SPS @ 2K or 4K and 2x that every 2 years, it will be less than a decade before photorealism is possible in action-based games (i.e. not just staring at a screenshot), thus nuking the need to rasterize. After that it is just back to the days of incremental improvements. But I can see 50 SPS + denoise in the next decade without issue.

How? The Nvidia die size is already over 700mm².
The tools required to keep shrinking the process node are currently so hard to develop that Intel, the company that led the fab race for a long, long while, has stagnated on its 14nm FinFET process for the past 5 years. Another step from 7nm to 5nm is not going to make things much better; process scaling is going to stop progressing unless another method is found that allows an easier path.

And all the current features only work on Nvidia hardware, which translates to Nvidia needing to implement all of them itself. The current lack of games only shows how Nvidia is not really committed.
 
How? The Nvidia die size is already over 700mm².
The tools required to keep shrinking the process node are currently so hard to develop that Intel, the company that led the fab race for a long, long while, has stagnated on its 14nm FinFET process for the past 5 years. Another step from 7nm to 5nm is not going to make things much better; process scaling is going to stop progressing unless another method is found that allows an easier path.

And all the current features only work on Nvidia hardware, which translates to Nvidia needing to implement all of them itself. The current lack of games only shows how Nvidia is not really committed.

You are making the assumption that ray tracing silicon can't be made more efficient and that it won't take the place of raster tech. Remember what the 8800 series did to gaming. It was such a huge fucking tech shift that it dominated the market for 2+ years. But that was back in the day when developing a new die didn't cost you the GDP of many small nations. Now they need to be a bit more nuanced, but based upon how much of the die is for ray tracing vs raster... there are massive gains there alone.
 
You are making the assumption that ray tracing silicon can't be made more efficient and that it won't take the place of raster tech. Remember what the 8800 series did to gaming. It was such a huge fucking tech shift that it dominated the market for 2+ years. But that was back in the day when developing a new die didn't cost you the GDP of many small nations. Now they need to be a bit more nuanced, but based upon how much of the die is for ray tracing vs raster... there are massive gains there alone.
Or solved by multiple dedicated smaller RT chips, AI chips etc.
 
The other option is multiple cards hooked up by a very fast interlink.
 
Didn't everyone know that Ray tracing isn't going to be here with this generation? I thought we all knew that.
 
Word wall incoming, sorry about that, but thanks for the vid.

hmmm.... you mean like NVLink?
or Infinity Fabric if referencing AMD, or QPI if referencing Intel, with all kinds of other "modern" examples.

Maybe Nv should "rethink" the decision to move away, as far away as possible, from SLI and instead "visit it again" by repurposing it PhysX-style, with one card as a "dedicated" tracer while the other handles the rest, so they each help the other. SLI was made to do just that, and so was PhysX initially: to "offload" stuff, make the experience less computationally expensive, and let more folks see and feel that extra level of nummy nummy good...

I am sure that if they really wanted to, they could have a "base card" and let you "clip on" that extra portion, like a "G-Sync" module, or like AMD did with those crazy SSG cards where you could add more memory to them. Why not an "expansion" for extra oomph? I know folks use mobile phones that way for VR or AR: take a "base phone" (or console), "expand it" with an additional module purchased after the fact, and get much more out of the same experience... pretty sure 3DFX did exactly that initially as well: add on as needed, or don't.

You cannot afford the "do it all, including raytrace" GPU? No problem, we have this one that is very near the same potency, with reduced power consumption AND increased everyday-type grunt.

----------------------------------
-------------


If they were "intelligent" and not just there to chase as much $$$$$$$ as humanly possible, they would offer a more concentrated/dedicated card directly purposed for ONLY one half of the equation, so when "crunching" rays, for example, or doing path tracing (whatever the example), it gets a heavy workout, but when it is not under that strain it can be put into an idle mode, not burning power and creating heat or noise for no reason.

Hell, while they are at it, that more dedicated card (or cluster) could do the more "advanced" stuff in hardware that they have been slowly trying to emulate in software to conserve transistors for other things: ray tracing (or path tracing), tessellation, even the G-Sync (or FreeSync, so they can stop being douches about it) handling for the display, instead of "requiring" a more expensive monitor with the "fancy part" AND the GPU with it enabled as well.

--------------------
--------


If Nvidia was as "great" as so many people claim them to be, they would truly be going outside the box to lead the pack, to reformulate the basics, to show themselves and others how focused they truly are on making computing "great" again, instead of just wanting individuals and industry to buy their version of caviar, which really is not all that much better than what others are doing.

Why make it all proprietary BS, I suppose, is MY contention. Others already do much of what they are doing, so why not "join the club" and instead put the $$$$$$$ and engineering time towards making that experience even better? A great product will always make sales, even more so if it is a FAMILIAR experience.

--------------------------
--------

Long story short, that "one size fits all" approach is becoming near impossible to keep doing, or, as these new RTX cards have shown, is getting more and more expensive in many respects just to "get a taste". Just imagine if one GPU was "freed up" by not having to worry about doing it all: yields and costs would be WAY better, the end cost to the maker way less, and of course the consumer price point (if done well) much less than the make-you-cry-level pricing needed "to keep up with the Joneses".

Rasterization, ray casting, ray tracing, path tracing, voxels: they all have their own place (like ingredients in a cake).
 
It might never be here; that is the point...

I seem to remember many of us equating this to PhysX too. Not really surprised that new tech might just stay unused.

Edit: If you buy some newfangled tech and expect it to be both successful and adopted, you're an idiot. Buy things for what they deliver today, not what they might deliver tomorrow.
 
hmmm.... you mean like NVLink?
Maybe. NVLink does give higher bandwidth; I wonder what Nvidia is planning to use that extra bandwidth for, or why incorporate it at all. Could developers use a setup of two or more cards/GPUs where RT is being done by all the cards while only one GPU handles the rasterization end? Don't know. In other words, much more than just mGPU or SLI, as in added flexibility.
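Back-of-the-envelope on whether a link could even carry that kind of split: shipping a 4K lighting buffer every frame is only a few GB/s. The 50 GB/s figure below is my assumption (roughly what gets quoted for Turing-class NVLink), not something from a spec sheet.

Code:
# Rough traffic estimate for shipping ray-traced lighting results from a
# second GPU over a fast interlink. Link bandwidth is an assumed figure.

width, height, fps = 3840, 2160, 60          # 4K at 60 Hz
bytes_per_pixel = 16                         # e.g. an RGBA16F lighting buffer
link_gb_per_s = 50                           # assumed link bandwidth, GB/s

payload_gb_per_s = width * height * bytes_per_pixel * fps / 1e9
print(f"lighting buffer traffic: {payload_gb_per_s:.1f} GB/s "
      f"({payload_gb_per_s / link_gb_per_s:.0%} of an assumed {link_gb_per_s} GB/s link)")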
 
A dedicated ray tracing card is a financially stupid move. It's already niche enough with RT baked in; imagine what would happen if it were optional? Nothing would change, and every year there'd be 1-2 marquee AAA games plus a bunch of indie games using the tech for extra market exposure. Just like what happened with PhysX.
 
A dedicated ray tracing card is a financially stupid move. It's already niche enough with RT baked in; imagine what would happen if it were optional? Nothing would change, and every year there'd be 1-2 marquee AAA games plus a bunch of indie games using the tech for extra market exposure. Just like what happened with PhysX.

Why do people keep claiming that PhysX is dead? IT'S THE MOST POPULAR PHYSICS ENGINE.
 
Why do people keep claiming that PhysX is dead? IT'S THE MOST POPULAR PHYSICS ENGINE.

Yeah, lots of people refer to GPU PhysX as just PhysX and couldn't be bothered to make the distinction. GPU PhysX still shows up here and there, but as long as it's Nvidia-only it'll remain a sideshow.

DXR isn’t in the same boat though as it’s IHV agnostic. There’s nothing stopping Microsoft from adding support for running DXR on a second GPU. Games would still need to target single card performance though.
 
You are making the assumption that ray tracing silicon can't be made more efficient and that it won't take the place of raster tech. Remember what the 8800 series did to gaming. It was such a huge fucking tech shift that it dominated the market for 2+ years. But that was back in the day when developing a new die didn't cost you the GDP of many small nations. Now they need to be a bit more nuanced, but based upon how much of the die is for ray tracing vs raster... there are massive gains there alone.
You could gauge it from the percentage gains of older generations and see how much die size was used for the increase last time around.
And what you can see from the RTX 2080 Ti is that it is the same silicon as the Quadro. It must have been a long time since anything like this last happened; Nvidia tends to never sell their best chip to consumers (as opposed to professionals).
That also might explain a little of how little headroom there is; check the RTX 2080.
https://hardforum.com/data/attachment-files/2018/09/158254_Nvidia_table.jpg
 
You could gauge it from the percentage gains of older generations and see how much die size was used for the increase last time around.
And what you can see from the RTX 2080 Ti is that it is the same silicon as the Quadro. It must have been a long time since anything like this last happened; Nvidia tends to never sell their best chip to consumers (as opposed to professionals).
That also might explain a little of how little headroom there is; check the RTX 2080.
https://hardforum.com/data/attachment-files/2018/09/158254_Nvidia_table.jpg

The RTX 2080 Ti is cut-down silicon compared to the Quadro 8000. Nvidia does this all the time.
 
The RTX 2080 Ti is cut-down silicon compared to the Quadro 8000. Nvidia does this all the time.
Nope. They did that way, way back, but in previous generations they never did it.
I would not generalize it as cut down, just a lesser-binned version with some of the features disabled.
 
Nope. They did that way, way back, but in previous generations they never did it.
I would not generalize it as cut down, just a lesser-binned version with some of the features disabled.

Not sure what you’re trying to say. They did the exact same thing with the 1080 Ti and Quadro P6000.
 
Not sure what you’re trying to say. They did the exact same thing with the 1080 Ti and Quadro P6000.
I'm saying that the TU102 is the same die size.
I mistakenly thought they never sold it before, but it seems it is the same situation as the previous GeForce 1080 Ti; only the timing is different.
This RTX launch sees a Ti based on the top silicon at the start of the cycle rather than in the middle of it.
 
Nope. They did that way, way back, but in previous generations they never did it.
I would not generalize it as cut down, just a lesser-binned version with some of the features disabled.

I mean cut down, as in some of its silicon disabled. The 2080 Ti has fewer SMs, RT cores, and Tensor cores than the Quadro version.
 
I'm saying that the TU102 is the same die size.
I mistakenly thought they never sold it before, but it seems it is the same situation as the previous GeForce 1080 Ti; only the timing is different.
This RTX launch sees a Ti based on the top silicon at the start of the cycle rather than in the middle of it.
This launch is different.
NVIDIA is not relying on cut-down SKUs to make up their consumer product stack.
We have 3 SKUs:
TU102: RTX 2080 Ti
TU104: RTX 2080
TU106: RTX 2070
 
This launch is different. NVIDIA is not relying on cut-down SKUs to make up their consumer product stack.

They're not "cut down" because they don't have to be; they just sell you an inferior GPU, making more money than they did previously. This looks particularly suspicious to me:

GP106 1060
GP104-2 1070
GP104-3 1070ti
GP104-4 1080
GP102 1080ti

TU106 2070
TU104-4 2080
TU102 2080ti

Product segmentation has been clear for a while with Nvidia:

107 low end
106 mid range
104 high end
102 luxury

A 106 core would get you a mid range card. With Turing, you pay high end money for it.
A 104 core would get you a high end card. With Turing, you still pay high end.
A 102 core would get you a luxury card. With Turing, you still pay luxury.

What will happen to 104-2 and 104-3? One of them has to become the 2070ti. I'm guessing 104-2 as it's a good jump from 106 and will remain far from the 104-4 performance. That means 104-3 disappears.

Most worrying, what will happen to the mid-range core? A 2060 with a TU107 core, which should belong in the low-end x50 range? Notice that the 2070 isn't a 106-4 or -3, but since it's its own GPU core unrelated to the 2080, it's not in itself a cut-down version. This suggests it can be cut down to a 106-2 to become a 2060ti, which would mimic the 2070ti 104-2 vs 2080 104-4 divide and push the 2060 down to a full TU107-4, with a 107-2 becoming the 2050ti. The structure, though speculative, seems to be shaping up like this.

All the signs point to Nvidia moving all the cores down generation to generation, thereby charging you what would've been the upper model for lower model performance. Can AMD come back in force, please? Because Nvidia is getting increasingly (obviously) greedy.
 
They're not "cut down" because they don't have to be; they just sell you an inferior GPU, making more money than they did previously. This looks particularly suspicious to me:

GP106 1060
GP104-2 1070
GP104-3 1070ti
GP104-4 1080
GP102 1080ti

TU106 2070
TU104-4 2080
TU102 2080ti

Product segmentation has been clear for a while with Nvidia:

107 low end
106 mid range
104 high end
102 luxury

A 106 core would get you a mid range card. With Turing, you pay high end money for it.
A 104 core would get you a high end card. With Turing, you still pay high end.
A 102 core would get you a luxury card. With Turing, you still pay luxury.

What will happen to 104-2 and 104-3? One of them has to become the 2070ti. I'm guessing 104-2 as it's a good jump from 106 and will remain far from the 104-4 performance. That means 104-3 disappears.

Most worrying, what will happen to the mid-range core? A 2060 with a TU107 core, which should belong in the low-end x50 range? Notice that the 2070 isn't a 106-4 or -3, but since it's its own GPU core unrelated to the 2080, it's not in itself a cut-down version. This suggests it can be cut down to a 106-2 to become a 2060ti, which would mimic the 2070ti 104-2 vs 2080 104-4 divide and push the 2060 down to a full TU107-4, with a 107-2 becoming the 2050ti. The structure, though speculative, seems to be shaping up like this.

All the signs point to Nvidia moving all the cores down generation to generation, thereby charging you what would've been the upper model for lower model performance. Can AMD come back in force, please? Because Nvidia is getting increasingly (obviously) greedy.

Look at the dies... not just the pretty PR names... SKU bandwidth and size.
It will tell you all about these "inferior GPUs"... (you made me chuckle... but not for the reason you think).
 
Look at the dies... not just the pretty PR names... SKU bandwidth and size.

Since when are core names like GP104 "pretty PR names"? And since when do they have anything to do with die size?
As far as I know they've always been related to product tier segmentation.
 
Since when are core names like GP104 "pretty PR names"? And since when do they have anything to do with die size?
As far as I know they've always been related to product tier segmentation.
I could spoonfeed you, but I find this to be a waste of time.
If you cannot see how the SKUs differ this generation... I'll spend my time on something better.
 