An interesting watch on the history of Nvidia's Ray Tracing work

Thanks, that was very interesting.

The most interesting part for me was around 8:45. My understanding from the video is that Nvidia has taken ray tracing from something that took months to render a frame to something that can be done in milliseconds (i.e., real time), partly by reducing the number of light-ray samples needed to obtain an image. The low number of light rays basically gives you a rough, dotted image, but then Nvidia uses AI to fill in the gaps, which results in an impressively clear image (the narrators didn't specifically say it was an AI algorithm, but I assume it is, given the impressive results). It's kind of like DLSS, except for light rays instead of video frames.

It's crazy, though. Between AI-augmented ray tracing and DLSS, I wonder if video cards will pretty soon be used primarily for executing neural nets rather than rendering graphics during gameplay. I mean, it's not totally surprising given how useful GPUs have been for AI applications for years now, but until this point they have always been two separate worlds: frame rendering for gamers and CUDA-core neural-net execution for AI developers. It makes sense that over time these two worlds would gradually merge.
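Just to make the sparse-sampling idea concrete, here's a toy Python sketch of the concept (this has nothing to do with Nvidia's actual pipeline, it's purely an illustration): trace only a small fraction of the pixels, then fill each gap from nearby traced pixels, which is what the AI reconstruction does far more intelligently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "ground truth" lighting for a 64x64 frame (a gradient plus a bright
# spot), standing in for what exhaustively tracing every pixel would return.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
truth = 0.5 * (x / w) + np.exp(-((x - 40) ** 2 + (y - 20) ** 2) / 50.0)

# Only ~10% of pixels actually get a ray this frame -> the rough "dotted" image.
sampled = rng.random((h, w)) < 0.10
dotted = np.where(sampled, truth, np.nan)

# Stand-in for the AI fill-in: each untraced pixel takes the mean of the traced
# pixels in a small window around it. (The real thing is a trained denoiser,
# not this naive average.)
filled = dotted.copy()
r = 4  # window radius
for i in range(h):
    for j in range(w):
        if np.isnan(filled[i, j]):
            win = dotted[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            if np.all(np.isnan(win)):
                filled[i, j] = np.nanmean(dotted)  # no traced neighbors at all
            else:
                filled[i, j] = np.nanmean(win)

print("traced pixels:", sampled.sum(), "of", h * w)
print("mean abs error of the filled-in image:", np.mean(np.abs(filled - truth)))
```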
 
This one is more detailed and easier to understand:


It is absolutely nuts that real-time analytics is needed just to determine which rays are worth casting, because the budget is so insanely low. And then a smart denoiser is needed to fill in the gaps. It wouldn't surprise me if most people believed the final image is fully ray-traced.
 
The voice is kind of annoying. It sounds like one of those AI-generated text-to-speech voices.

Some interesting history on ray tracing, though.
 
My understanding from the video is that Nvidia has taken ray tracing from something that took months to render a frame to something that can be done in milliseconds (i.e., real time), partly by reducing the number of light-ray samples needed to obtain an image.
This is exactly how it works. It's really the same as any other task. Both computation and even manual labor go faster through two approaches:
1) Do less work
2) Do work faster

The AI-based denoiser allows less work to be done. The RT cores allow for the remaining work to be done faster.
 
This is exactly how it works. It's really the same as any other task. Both computation and even manual labor go faster through two approaches:
1) Do less work
2) Do work faster

The AI-based denoiser allows less work to be done. The RT cores allow for the remaining work to be done faster.
0) Decide what work is sufficient

This step is neither part of the denoiser, nor can it be done with RT cores. They could probably use the tensor cores in some way, but it's very likely just CPU code in the driver.
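No clue what the driver actually does, but just to illustrate the kind of "step 0" decision being made, here's a purely hypothetical CPU-side sketch in Python: spend a fixed ray budget where last frame's estimate was noisiest. The tile layout, numbers, and heuristic are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend per-tile noise estimate from last frame (variance of the samples that
# landed in each 8x8 tile). 30x40 tiles would cover a 240x320 render target.
tile_variance = rng.random((30, 40))

RAY_BUDGET = 50_000        # total rays we can afford this frame
MIN_RAYS_PER_TILE = 4      # always probe every tile at least a little

# Hypothetical heuristic: give every tile a floor, then hand out the remaining
# budget in proportion to how noisy each tile looked last frame.
base = np.full(tile_variance.shape, MIN_RAYS_PER_TILE)
remaining = RAY_BUDGET - base.sum()
extra = np.floor(remaining * tile_variance / tile_variance.sum()).astype(int)
rays_per_tile = base + extra

print("rays assigned:", rays_per_tile.sum(), "of budget", RAY_BUDGET)
print("noisiest tile gets", rays_per_tile.max(), "rays; calmest gets", rays_per_tile.min())
```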
 
Thanks, that was very interesting.

The most interesting part for me was around 8:45. My understanding from the video is that Nvidia has taken ray tracing from something that took months to render a frame to something that can be done in milliseconds (i.e., real time), partly by reducing the number of light-ray samples needed to obtain an image. The low number of light rays basically gives you a rough, dotted image, but then Nvidia uses AI to fill in the gaps, which results in an impressively clear image (the narrators didn't specifically say it was an AI algorithm, but I assume it is, given the impressive results). It's kind of like DLSS, except for light rays instead of video frames.

It's crazy, though. Between AI-augmented ray tracing and DLSS, I wonder if video cards will pretty soon be used primarily for executing neural nets rather than rendering graphics during gameplay. I mean, it's not totally surprising given how useful GPUs have been for AI applications for years now, but until this point they have always been two separate worlds: frame rendering for gamers and CUDA-core neural-net execution for AI developers. It makes sense that over time these two worlds would gradually merge.


I want on-GPU, real-time, AI-based in-game NPC AI.

Don't @ me with facts and aKcHuAlLiEs - I want what I want :p
 
0) Decide what work is sufficient

This step is neither part of the denoiser, nor can it be done with RT cores. They could probably use the tensor cores in some way, but it's very likely just CPU code in the driver.
Uh, step 1 covered that already.
 
Thanks, that was very interesting.

The most interesting part for me was around 8:45. My understanding from the video is that Nvidia has taken ray tracing from something that took months to render a frame to something that can be done in milliseconds (i.e., real time), partly by reducing the number of light-ray samples needed to obtain an image. The low number of light rays basically gives you a rough, dotted image, but then Nvidia uses AI to fill in the gaps, which results in an impressively clear image (the narrators didn't specifically say it was an AI algorithm, but I assume it is, given the impressive results). It's kind of like DLSS, except for light rays instead of video frames.
Yes, that's basically what denoising is. They just shoot one ray per pixel, which isn't nearly enough; you need to shoot a thousand or more to really get an image with no noise in it. So you get the super-grainy image you see, where a lot of pixels don't even get a ray returning to them. But then you throw a denoising filter on that and, holy shit, it looks pretty good. Not 100% the same as an output that has thousands of passes, but way closer than you ever thought possible. They are doing a bunch of other things to speed it up as well, but the denoiser is really the magic part.
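To put rough numbers on the "one ray per pixel isn't nearly enough" point, here's a toy Monte Carlo experiment in Python. This is not Nvidia's AI denoiser; a plain box filter stands in for it, but it shows how the noise falls with sample count and how much even a dumb denoising pass claws back from a 1-sample-per-pixel image. Everything in it (scene, noise level) is made up.

```python
import numpy as np

rng = np.random.default_rng(2)

h, w = 64, 64
# Smooth made-up "scene" radiance per pixel.
true_radiance = 0.5 + 0.4 * np.sin(np.linspace(0, 6, w))[None, :] * np.ones((h, 1))
noise_sigma = 0.8  # per-sample noise from random light paths (made up)

def render(spp):
    """Average `spp` noisy samples per pixel (toy stand-in for path tracing)."""
    samples = true_radiance[..., None] + noise_sigma * rng.standard_normal((h, w, spp))
    return samples.mean(axis=2)

def box_denoise(img, r=2):
    """Naive spatial denoiser: replace each pixel by the mean of its neighborhood."""
    out = np.empty_like(img)
    hh, ww = img.shape
    for i in range(hh):
        for j in range(ww):
            out[i, j] = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return out

def rmse(img):
    return np.sqrt(np.mean((img - true_radiance) ** 2))

one_spp = render(1)
print("RMSE, 1 spp raw:        %.3f" % rmse(one_spp))
print("RMSE, 1 spp + denoise:  %.3f" % rmse(box_denoise(one_spp)))
print("RMSE, 1000 spp raw:     %.3f" % rmse(render(1000)))
```

The real denoisers also get fed auxiliary buffers like normals, albedo, and motion vectors, which is a big part of why they hold up so much better than a blind spatial filter like this.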

If you have an RTX card, you can try it in the ray-traced Quake 2. It'll let you enable and disable the denoiser, and it also has a screenshot mode where you can do tons of passes on a static image. The denoiser isn't perfect; you still get a better image when you properly calculate lots of passes, but damn if it doesn't get you 90%+ of the way there with only one pass.

Here Nvidia has a video talking about denoising: what it is, why it's needed, and examples of it:
 
0) Decide what work is sufficient

This step is neither part of the denoiser, nor can it be done with RT cores. They could probably use the tensor cores in some way, but it's very likely just CPU code in the driver.
That's done at the user-interface level in rendering packages, or pre-configured by the dev in real-time setups like games. In rendering, AI denoising has been a game-changer, allowing an 80% reduction in passes for many scenes. Combined with dedicated RT hardware, this has changed many artists' workflows.
 
That's done at the user-interface level in rendering packages, or pre-configured by the dev in real-time setups like games. In rendering, AI denoising has been a game-changer, allowing an 80% reduction in passes for many scenes. Combined with dedicated RT hardware, this has changed many artists' workflows.
This is why ray tracing will gradually take over traditional raster-only methods. The time it saves in workflow and man-hours is insane for HQ assets. It saves studios too much money not to use it; if they can save millions on development by making us spend an extra $200 on hardware, it's an easy decision for them.

It’s the new version of the old problem.

“Yes, we could spend a few million and make it more RAM-efficient, or we could tell them to buy more RAM.”
 