AMD RDNA 2 gets ray tracing

DXR is a standard by Microsoft.
This is where EVERYONE'S going.
Intel will support DXR.
AMD will support DXR.
NVIDIA supports DXR.

Raytracing is where everything is going, and that puts an expiration date on rasterization.
It has already begun... it is not a question of "if"... it is a question of "when".

It should be mandatory to spoon-feed these facts to the people who cannot separate Microsoft's extension of the DX12 API with DXR from NVIDIA's branding of their RT-core cards that support Microsoft's DXR (although some of NVIDIA's Turing and Pascal cards support shader-based DXR).

Raytracing IS the future.
Even Vulkan is basically doing a "copy" of DXR and will have feature parity with DX12 DXR.

The sole reason people are whining about raytracing is that their favorite GPU company has not yet entered the party.

If you had told people 15 years ago that some would respond to real-time raytracing in a FUD/dismissive way... they would have assumed the future had gone "Idiocracy"...
Microsoft is clear what DXR is:
What Does This Mean for Games?
DXR will initially be used to supplement current rendering techniques such as screen space reflections, for example, to fill in data from geometry that’s either occluded or off-screen. This will lead to a material increase in visual quality for these effects in the near future. Over the next several years, however, we expect an increase in utilization of DXR for techniques that are simply impractical for rasterization, such as true global illumination. Eventually, raytracing may completely replace rasterization as the standard algorithm for rendering 3D scenes. That said, until everyone has a light-field display on their desk, rasterization will continue to be an excellent match for the common case of rendering content to a flat grid of square pixels, supplemented by raytracing for true 3D effects.
Bolded by me. It is at best a hybrid at this time. The current hardware is unable to deliver acceptable quality and performance across the full gamut of raytracing in modern games, but it can enhance rasterized games.

Maybe the argument should be about what is being used, math-wise, for each pixel? Today a hell of a lot of compute is used, and it will come to a point where you are not using fixed-function units or rasterization at all. More and more of those compute functions will use raytracing math or methods, which today is not the case.

I very much look forward to better games, rendering, and AI from whomever delivers them, hopefully done well by all the GPU makers. Turing was the first fixed-function, hardware-assisted intersection finding for RT, with some success. I would say Nvidia needs to improve performance significantly, and AMD needs to actually ship something viable and useful. Developers will then have the hard work of making it all work and worthwhile.

Yes, I am all for more and more RT that actually enhances gameplay, as well as productivity in other types of applications. We are at the initial baby steps of dynamic real-time raytracing.
 
I think you could, today, build a full ray traced game with the current hardware and API.

It would be slow as hell, and probably not useful in a real game, but you could do it.
 
I think you could, today, build a full ray traced game with the current hardware and API.

It would be slow as hell, and probably not useful in a real game, but you could do it.
I wrote a "real time" raytracing engine back in 2003/2004... it was low resolution with very limited geometry... I'm sure it can be done better nowadays, but it has always been possible to make a fully raytraced engine/game... How fast it runs and how good the quality is has always been the holdup on going mainstream. ;)
 
I think you could, today, build a full ray traced game with the current hardware and API.

It would be slow as hell, and probably not useful in a real game, but you could do it.

Aren't Quake 2 RTX and Minecraft RTX fully ray traced?
 
Quake II might be, good point.
Plays decently on a 1080 Ti, plus a second one. It's a path tracer, as I understand it, for the global illumination. Still uses rasterization.

I still think that at the end of the day, it comes down to what the new hardware can do and what developers will do to use it. It will be a combination of techniques that brings the best gaming experience.
 
Plays decently on a 1080 Ti, plus a second one. It's a path tracer, as I understand it, for the global illumination. Still uses rasterization.

I still think that at the end of the day, it comes down to what the new hardware can do and what developers will do to use it. It will be a combination of techniques that brings the best gaming experience.

Back when I had my 1080Ti I tried it and could get playable performance if I turned off global illumination and played at a lower resolution. This was before the update that improved the textures. With my 2080Ti I can play with it maxed out but using dynamic resolution scaling at 4K with the fps target set to 80 keeps it at around a 50% scale most of the time.
 
Back when I had my 1080Ti I tried it and could get playable performance if I turned off global illumination and played at a lower resolution. This was before the update that improved the textures. With my 2080Ti I can play with it maxed out but using dynamic resolution scaling at 4K with the fps target set to 80 keeps it at around a 50% scale most of the time.
Amazingly it worked with two GPUs (mGPU) and looked decent, lighting-wise. Very much playable at like 720p, maybe 1080p, I don't remember.
 
Aren't Quake 2 RTX and Minecraft RTX fully ray traced?

Yeah, they're both path traced including primary visibility so no rasterization. Of course that's only possible because of the simple geometry and low number of lights in each game. Minecraft is also getting help from DLSS.
 
Plays decently on a 1080 Ti, plus a second one. It's a path tracer, as I understand it, for the global illumination. Still uses rasterization.

I still think that at the end of the day, it comes down to what the new hardware can do and what developers will do to use it. It will be a combination of techniques that brings the best gaming experience.
Where does Quake 2 use rasterization?
It uses path tracing + fancy denoising filters, which you can disable. I actually prefer the native noisy look.

DXR and Vulkan RT are fully programmable. You load geometry, which as far as I am aware doesn't even need to be triangle-based, and then write shader programs for each ray intersection, where you handle, for example, looking up the texture at the hit point, sending secondary rays, and doing color blending.

edit://
RT can also be used for things other than classical ray tracing. For example, it can be used for anti-aliasing: you detect where there are hard edges or shimmering and send additional rays to get extra samples.
Techniques like these will probably be used in future game engines.
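To make that adaptive-AA idea concrete, here is a minimal CPU-side sketch (my own illustration, not the DXR API): scan the finished frame for high-contrast pixels, then spend extra rays only on those. traceCameraRay() is a hypothetical stand-in for whatever the engine's real tracing entry point would be.

```cpp
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

float luminance(const Color& c) { return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b; }

// Hypothetical placeholder for the engine's real tracing entry point:
// shoots one camera ray through pixel (x + jx, y + jy).
Color traceCameraRay(int x, int y, float jx, float jy) { return {0.5f, 0.5f, 0.5f}; }

void adaptiveAA(std::vector<Color>& frame, int w, int h, float threshold, int extraSamples) {
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float c  = luminance(frame[y * w + x]);
            float dx = std::fabs(c - luminance(frame[y * w + x + 1]));
            float dy = std::fabs(c - luminance(frame[(y + 1) * w + x]));
            if (dx < threshold && dy < threshold) continue;  // smooth region: 1 sample is enough

            // Hard edge or shimmer detected: blend in extra jittered camera rays.
            Color acc = frame[y * w + x];
            for (int s = 0; s < extraSamples; ++s) {
                float j = (s + 0.5f) / extraSamples - 0.5f;  // simple stratified jitter
                Color e = traceCameraRay(x, y, j, -j);
                acc.r += e.r; acc.g += e.g; acc.b += e.b;
            }
            float inv = 1.0f / (1 + extraSamples);
            frame[y * w + x] = { acc.r * inv, acc.g * inv, acc.b * inv };
        }
    }
}
```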
 
Yeah, they're both path traced including primary visibility so no rasterization. Of course that's only possible because of the simple geometry and low number of lights in each game. Minecraft is also getting help from DLSS.
Where does Quake 2 use rasterization?
It uses path tracing + fancy denoising filters, which you can disable. I actually prefer the native noisy look.

DXR and Vulkan RT are fully programmable. You load geometry, which as far as I am aware doesn't even need to be triangle-based, and then write shader programs for each ray intersection, where you handle, for example, looking up the texture at the hit point, sending secondary rays, and doing color blending.

edit://
RT can also be used for things other than classical ray tracing. For example, it can be used for anti-aliasing: you detect where there are hard edges or shimmering and send additional rays to get extra samples.
Techniques like these will probably be used in future game engines.
Yep, you guys are right -> it does not project triangles to 2D view space (rasterization), and since path tracing is a form of ray tracing -> it is a ray-traced game which is not rasterized. The video is very revealing on the breakdown of the frames between a 1080Ti, a 2080 without RT cores, and a 2080 with RT cores. With this fully RT game (yes, a full RT game here) those RT cores actually do consume a large part of the frame rendering. Thanks for the correction. I'll have to look more into this and probably fire it back up on the 1080Tis again.

 
This is pertinent information on RDNA2 ray tracing: Mark Cerny's discussion of the PS5 hardware. It looks like the patent AMD filed a couple of years ago does reflect how AMD is going about ray tracing in the shaders. Very interesting what he says about performance in detailed, complex scenes with reflections on the PS5:

 
Yep, you guys are right -> it does not project triangles to 2D view space (rasterization), and since path tracing is a form of ray tracing -> it is a ray-traced game which is not rasterized. The video is very revealing on the breakdown of the frames between a 1080Ti, a 2080 without RT cores, and a 2080 with RT cores. With this fully RT game (yes, a full RT game here) those RT cores actually do consume a large part of the frame rendering. Thanks for the correction. I'll have to look more into this and probably fire it back up on the 1080Tis again.

Path tracing is not only a form of ray tracing but actually an evolution of it: it is more accurate but also more resource-intensive, and it affects not only light sources but also material quality. What's good about path tracing is that it can be emulated through the use of shaders, which is the method AMD will be using on the next-gen consoles.
 
Path tracing is not only a form of ray tracing but actually an evolution of it: it is more accurate but also more resource-intensive, and it affects not only light sources but also material quality. What's good about path tracing is that it can be emulated through the use of shaders, which is the method AMD will be using on the next-gen consoles.
Definitely related, just a similar approach to calculating a lighting value. I agree it's an evolution and a more useful approach.

https://www.online-tech-tips.com/co...ray-tracing-and-why-do-they-improve-graphics/

Quake II RTX also shows the limitations of the hardware, even with very simple geometry, animations, explosions, etc. A 2080 Ti at 4K max settings gets into the twenties; if DLSS 2.0 were incorporated, that should go up into the forties. Anything more modern in complexity would make this approach (a full path tracer) utterly unusable. Now, the good thing is that Quake II RTX would be an excellent test of how much faster Ampere's RT performance is. Since it is Nvidia-derived rendering on their hardware, you won't be able to compare it to AMD's method.

https://www.dsogaming.com/pc-perfor...-pc-performance-analysis-on-nvidia-rtx2080ti/
 
Quake II RTX also shows the limitations of the hardware, even with very simple geometry, animations, explosions, etc. A 2080 Ti at 4K max settings gets into the twenties; if DLSS 2.0 were incorporated, that should go up into the forties. Anything more modern in complexity would make this approach (a full path tracer) utterly unusable. Now, the good thing is that Quake II RTX would be an excellent test of how much faster Ampere's RT performance is. Since it is Nvidia-derived rendering on their hardware, you won't be able to compare it to AMD's method.
Actually, we do not know how RT performance would scale with more complex geometry. There are many operations that go into ray tracing, and I would suspect geometry density could be increased a lot before we would see a serious performance difference.

Or in other words, Quake 2 might just be too simple and actually not very good for showing what RTX cards can do.
The whole Quake 2 Vulkan RT project was not even started by Nvidia but by Christoph Schied (http://brechpunkt.de/q2vkpt/). Nvidia just took his code, improved it, and made it into a full release.
 
Actually, we do not know how RT performance would scale with more complex geometry. There are many operations that go into ray tracing, and I would suspect geometry density could be increased a lot before we would see a serious performance difference.

Or in other words, Quake 2 might just be too simple and actually not very good for showing what RTX cards can do.
The whole Quake 2 Vulkan RT project was not even started by Nvidia but by Christoph Schied (http://brechpunkt.de/q2vkpt/). Nvidia just took his code, improved it, and made it into a full release.
Geometry, as in more complex objects and more objects, has a huge impact on render time, as you know if you have done any ray-traced rendering. The more complex the scene is, and the more materials and interactions for light and bounce light there are, the slower it goes. There are ways to reduce that load, but it is still there. That is why Nvidia uses a rasterized or hybrid approach for the more complex games, with methods such as LOD for objects so the ray tracing calculations can be reduced:
https://devblogs.nvidia.com/effectively-integrating-rtx-ray-tracing-real-time-rendering-engine/
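As a rough back-of-envelope model of why complexity hurts (my own illustration, not from the linked post), the ray budget multiplies out something like this, which is why hybrid renderers keep the bounce depth and samples per pixel tiny and let rasterization handle primary visibility:

```latex
% Very rough model; every term is an assumption for illustration.
\text{rays per frame} \approx W \cdot H \cdot s \cdot \Big(1 + N_{\text{lights}} + \sum_{d=1}^{B} k^{d}\Big)
% W x H : resolution, s : samples per pixel,
% N_lights : one shadow ray per light at each primary hit,
% B : bounce depth, k : secondary rays spawned per bounce.
```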

Now, are you saying Nvidia did a poor job or something like that with Quake II RTX, hindering performance, to show what RTX can do? To promote their cards? I think they did a rather good job on the code; it even runs playably on Pascal cards, which don't have dedicated RT hardware for intersection finding. Nvidia claimed they have been working on this for 10 years; unless they are lying or incompetent, Quake II RTX is probably a fair representation of Turing's actual real-time capabilities and limitations when doing full ray tracing. Something like Control is a good representation of a hybrid rendering approach. When Ampere comes out, Quake II RTX would be a perfect game for judging how much RT performance improved over Turing; Minecraft RTX is probably another good title as well.
 
Now, are you saying Nvidia did a poor job or something like that with Quake II RTX, hindering performance, to show what RTX can do? To promote their cards? I think they did a rather good job on the code; it even runs playably on Pascal cards, which don't have dedicated RT hardware for intersection finding. Nvidia claimed they have been working on this for 10 years; unless they are lying or incompetent, Quake II RTX is probably a fair representation of Turing's actual real-time capabilities and limitations when doing full ray tracing. Something like Control is a good representation of a hybrid rendering approach. When Ampere comes out, Quake II RTX would be a perfect game for judging how much RT performance improved over Turing; Minecraft RTX is probably another good title as well.
What I am saying is that the RT cores that do bounding volume hierarchy (BVH) searching might find intersections so fast (with ultra-simple Quake 2 geometry there is never a need to go very deep in the search...) that they still end up waiting for a given intersection to be processed, due to internal hardware implementation details that can have latencies. This is hardware we are talking about, not software; in hardware there are latencies, and to maximize performance you need to properly saturate the execution units.

For example, if you had an OpenCL benchmark program with N different programs and tested two cases:
1. do one operation per program
2. do 1024 operations per program (data set is 1024)
Then if you divided the total number of performed operations by time and calculated how fast a given GPU is, those two cases would give vastly different results, because in the first case almost all execution units per GPU core would sit idle doing nothing, while in the second they would have work to do. If you then increased the data set to e.g. 4096 and compared with case 2, the results would probably not be that much different; it would depend on the hardware you test and how large a data set it can handle at once.

Similarly here, the optimum geometry complexity might be higher than what Quake 2 from 1997 provides.
I would assume the hardware is optimized for higher-density geometry and that Quake 2 is a terrible choice for showcasing its capabilities, but at the same time I stress that this is just my assumption and I do not claim it to be the truth.

There might be no such bottlenecks here, but we simply do not know.
Pure synthetic benchmarks would be very useful here...
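The saturation effect described above can be illustrated outside of OpenCL, too. A small C++17 sketch of the same idea (a CPU analogy under my own assumptions; the numbers it prints are illustrative, not measurements): the same parallel machine looks "slow" with one work item and "fast" once the data set keeps every execution unit busy, with little gain after saturation.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <execution>
#include <vector>

// Measures throughput of a trivially parallel workload at a given data-set size.
static double opsPerSecond(std::size_t n) {
    std::vector<double> data(n, 1.0);
    auto t0 = std::chrono::steady_clock::now();
    std::for_each(std::execution::par_unseq, data.begin(), data.end(),
                  [](double& x) { for (int i = 0; i < 10000; ++i) x = x * 1.0000001 + 0.5; });
    auto dt = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    return n * 10000.0 / dt;
}

int main() {
    // n=1    : almost all cores idle, terrible ops/s.
    // n=1024 : cores saturated, ops/s jumps by orders of magnitude.
    // n=4096 : little further gain once the machine is already saturated.
    for (std::size_t n : {std::size_t{1}, std::size_t{1024}, std::size_t{4096}})
        std::printf("n=%zu  ops/s=%.3g\n", n, opsPerSecond(n));
}
```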
 
What I am saying is that the RT cores that do bounding volume hierarchy (BVH) searching might find intersections so fast (with ultra-simple Quake 2 geometry there is never a need to go very deep in the search...) that they still end up waiting for a given intersection to be processed, due to internal hardware implementation details that can have latencies. This is hardware we are talking about, not software; in hardware there are latencies, and to maximize performance you need to properly saturate the execution units.

For example, if you had an OpenCL benchmark program with N different programs and tested two cases:
1. do one operation per program
2. do 1024 operations per program (data set is 1024)
Then if you divided the total number of performed operations by time and calculated how fast a given GPU is, those two cases would give vastly different results, because in the first case almost all execution units per GPU core would sit idle doing nothing, while in the second they would have work to do. If you then increased the data set to e.g. 4096 and compared with case 2, the results would probably not be that much different; it would depend on the hardware you test and how large a data set it can handle at once.

Similarly here, the optimum geometry complexity might be higher than what Quake 2 from 1997 provides.
I would assume the hardware is optimized for higher-density geometry and that Quake 2 is a terrible choice for showcasing its capabilities, but at the same time I stress that this is just my assumption and I do not claim it to be the truth.

There might be no such bottlenecks here, but we simply do not know.
Pure synthetic benchmarks would be very useful here...
The Nvidia video linked above showed the frame and what percentage of it the RT cores spent on the geometry intersection/RT phase -> since it renders fully with path tracing, more sample points or rays are needed than with a hybrid approach like Control, so the percentage of time spent in the RT cores here is significantly higher than what is seen, percentage-wise, in hybrid rasterized games. If you increase the geometry, that part would increase virtually linearly with it. Since the game uses a limited amount of compute/shaders, that part of the frame rendering is small -> in a more modern game I would expect it to go way up. Still, this will be a good comparison test for Ampere against Turing. Hopefully benchmarks come out that can compare AMD to Nvidia ray tracing -> maybe 3DMark -> though preferably a real game, or better yet many games. Crysis should be out, Cyberpunk too, and I'm not sure of any other big surprises. It would also be nice if some of the older RTX games were updated to take advantage of whatever AMD is doing.

The focus should be more on how it will benefit the gaming experience, however it's done: performance and IQ improvements. Losing performance to upgrade IQ in one aspect yet degrading IQ in another, as in lowering settings and resolution to get an acceptable frame rate, is not ideal. I think Nvidia has an edge, particularly with the AI aspects of RTX; AMD may or may not with the hardware aspects of RT.
 
Right. Even if AMD can match Nvidia in RT perf, they still don't (AFAIK) have a solution for denoising in hardware or some alternative to DLSS.

Without DLSS, even the 2080 Ti is struggling with RT and I'm talking 1080p (for example in Control or Metro). So what they come up with will be interesting.
 
I wrote a "real time" raytracing engine back in 2003/2004... it was low resolution with very limited geometry... I'm sure it can be done better nowadays, but it has always been possible to make a fully raytraced engine/game... How fast it runs and how good the quality is has always been the holdup on going mainstream. ;)

Would you happen to still have the source code? I think it would be interesting to look at.
 
If you increase the geometry, that part would increase virtually linearly with it.
Do you have any proof of that?
Those are only your simplistic assumptions. They might be correct, but without proof you should not state them as if they were facts.
 
If you increase the geometry, that part would increase virtually linearly with it.

BVH traversal performance scales with log(n). BVH construction is n log(n). Rasterization, of course, scales linearly with n on average.

That's all theoretical of course. In practice the challenge is random memory access to the BVH. With rasterization, memory access is much more controlled and predictable and therefore easier to optimize.
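To see where the log(n) comes from, here is a minimal BVH traversal sketch (my own illustration, not any vendor's implementation): each bounding-box rejection prunes a whole subtree, so a balanced tree over n triangles costs roughly log2(n) steps per ray, and the pointer-chasing through nodes is exactly the random memory access mentioned above.

```cpp
#include <algorithm>
#include <limits>
#include <utility>

struct AABB { float lo[3], hi[3]; };
struct Ray  { float origin[3], invDir[3]; };  // invDir = 1 / direction, precomputed

struct BVHNode {
    AABB bounds;
    BVHNode* left  = nullptr;  // interior node if children exist,
    BVHNode* right = nullptr;  // leaf (holding a few triangles) otherwise
    int firstTri = 0, triCount = 0;
};

// Standard slab test: does the ray hit this node's bounding box?
bool hitAABB(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = std::numeric_limits<float>::max();
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.lo[a] - r.origin[a]) * r.invDir[a];
        float t1 = (b.hi[a] - r.origin[a]) * r.invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Recursive descent: a miss at any node discards that entire subtree, so on
// average only O(log n) nodes are visited per ray. The scattered node reads
// are the hard-to-optimize memory accesses the post above refers to.
void traverse(const Ray& r, const BVHNode* node /*, hit record, triangle list... */) {
    if (!node || !hitAABB(r, node->bounds)) return;  // prune this whole subtree
    if (!node->left && !node->right) {
        // Leaf: intersect the few triangles in [firstTri, firstTri + triCount).
        return;
    }
    traverse(r, node->left);
    traverse(r, node->right);
}
```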
 
Would you happen to still have the source code? I think it would be interesting to look at.
I doubt it. I can try looking, but even then it was a project with another individual, whom I would have to get back in contact with to ask permission to share. It really wasn't all that interesting, lol. Although the Monte Carlo light splatting worked pretty well; other than that it was just a typical raytracer.
 
Do you have any proof of that?
Those are only your simplistic assumptions. They might be correct, but without proof you should not state them as if they were facts.
Do you have proof otherwise? Yes, simplistic. In reality it would vary depending not only upon the number of triangles but also upon any additional materials, additional rays if needed for those materials, additional compute shaders, and so on. Scene geometry complexity is just one factor of many, which can itself be optimized into primitives that build upon one another.

Now, what is very apparent with Quake II RTX is the relationship between resolution and performance: basically X : 1/X. At 1080p it gets 83 fps; at 4K, 21 fps. Resolution up by a factor of 4, performance down by a factor of 4. That is from the initial release data; I don't have numbers for the November 2019 update. With rasterized games, going from 1080p to 4K when GPU-limited costs more like a factor of 2, not directly inversely proportional to resolution as in the non-rasterized Quake II RTX. Now can someone explain that?
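For reference, the arithmetic behind that X : 1/X observation (my own check of the quoted figures):

```latex
% Pixel-count ratio going from 1080p to 4K:
\frac{3840 \times 2160}{1920 \times 1080} = \frac{8\,294\,400}{2\,073\,600} = 4
% Observed frame-rate ratio in Quake II RTX:
\frac{83\ \text{fps}}{21\ \text{fps}} \approx 4
% i.e. frame time grew almost exactly linearly with pixel count, which is
% also the count of primary rays (and the secondary rays they spawn).
```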
 
Yep, you guys are right -> it does not project triangles to 2D view space (rasterization), and since path tracing is a form of ray tracing -> it is a ray-traced game which is not rasterized. The video is very revealing on the breakdown of the frames between a 1080Ti, a 2080 without RT cores, and a 2080 with RT cores. With this fully RT game (yes, a full RT game here) those RT cores actually do consume a large part of the frame rendering. Thanks for the correction. I'll have to look more into this and probably fire it back up on the 1080Tis again.



That is not the only place you need to look into.
A lot of your posts about raytracing are based on false assumptions, weird terms, and a general lack of understanding of raytracing... your latest blunder was "serial vs. async" raytracing... please stop.
 
That is not the only place you need to look into.
A lot of your posts about raytracing are based on false assumptions, weird terms, and a general lack of understanding of raytracing... your latest blunder was "serial vs. async" raytracing... please stop.
lol, teach us some wisdom then. I'm talking about how the GPU does the RT intersection processing, and it looks like AMD will do more than one thing at a time with RT in that respect. Now, what assumptions did I make?
 
I would be very surprised, and shocked actually, if Nvidia's RTX implementation were not parallel. I would need to see a source for that.

I did find this, though, which seems to indicate that RTX is massively parallel, as you would assume.

Ray tracing shaders are dispatched as grids of work items, similar to compute shaders. This lets the implementation utilize the massive parallel processing throughput of GPUs and perform low-level scheduling of work items as appropriate for the given hardware.
https://devblogs.nvidia.com/introduction-nvidia-rtx-directx-ray-tracing/
 
lol, teach us some wisdom then. I'm talking about how the GPU does the RT intersection processing, and it looks like AMD will do more than one thing at a time with RT in that respect. Now, what assumptions did I make?

"Serial raytracing"....citation needed.
Your Quake II blunder is also quite funny.

Provide sources for your nutty claims...I bet you cannot.
 
I would be very surprised, and shocked actually, if Nvidia's RTX implementation were not parallel. I would need to see a source for that.

I did find this, though, which seems to indicate that RTX is massively parallel, as you would assume.


https://devblogs.nvidia.com/introduction-nvidia-rtx-directx-ray-tracing/
You do know your link is dealing with Volta, programming with shaders to cast rays to find intersection points instead of RT cores? This is not about RT cores but about using the shaders to cast rays.

Here is an example of one frame being rendered in Metro Exodus:

[attached image: Metro Exodus single-frame render timeline]


In this case the RT cores did a solid block/batch for the intersections, with very little other work being done. If this were parallel, I would expect the RT cores to be used throughout the rendering of the frame. Hardware-wise, I do not think that is possible with Turing. Per the AMD patent and Mark Cerny's explanation in the PS5 presentation, the RDNA2 RT design will allow independent and concurrent operation of intersection finding and shading.
 
Well, there are going to be dependencies; that doesn't mean it's not parallel.

For example, you can't run the pixel shader until the ray tracing is complete, as you wouldn't yet know how lit or shadowed the pixel is (using Metro as an example, since it does global illumination).
 
The only thing that is "serial" (if you twist the word) is that you have to do primary rays first (but they can be done in parallel), then reflections, refractions, and direct lighting, and then indirect lighting (1st bounce, 2nd bounce, etc.).

But calling that "serial" is dishonest... serial would be if it had to do one ray before starting on the second ray... for every single ray in the frame.
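A toy wavefront-style sketch of that distinction (my own illustration; traceOne() is a hypothetical stand-in for the real intersection and shading work): the bounce depths are ordered, but every ray within a depth is independent and runs in parallel.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Ray { /* origin, direction, throughput... */ };

// Hypothetical stand-in for the real per-ray work; returns whatever
// secondary rays (reflection, refraction, GI) this hit spawns.
std::vector<Ray> traceOne(const Ray& r) { return {}; }

void traceFrame(std::vector<Ray> primaryRays, int maxBounces) {
    std::vector<Ray> wavefront = std::move(primaryRays);
    for (int depth = 0; depth <= maxBounces && !wavefront.empty(); ++depth) {
        std::vector<std::vector<Ray>> spawned(wavefront.size());
        // Every ray at this bounce depth is traced concurrently...
        std::transform(std::execution::par, wavefront.begin(), wavefront.end(),
                       spawned.begin(), traceOne);
        // ...only the step to the next bounce depth is ordered.
        std::vector<Ray> next;
        for (auto& s : spawned) next.insert(next.end(), s.begin(), s.end());
        wavefront = std::move(next);
    }
}
```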

Stupid claims are stupid claims...
 
The only thing that is "serial" (if you twist the word) is that you have to do primary rays first (but they can be done in parallel), then reflections, refractions, and direct lighting, and then indirect lighting (1st bounce, 2nd bounce, etc.).

But calling that "serial" is dishonest... serial would be if it had to do one ray before starting on the second ray... for every single ray in the frame.

Stupid claims are stupid claims...
I don't think (or really hope) anybody here thinks the GPU does a single ray at a time... When he said "serial" I took it as not doing rasterization and raytracing at the same time, since the topic was AMD (possibly) handling raytracing during raster/shader work, so potentially doing two things in parallel that aren't so parallel right now. I'm not positive this is exactly how it actually works, just what I gathered from his statements. No clue why anyone would assume someone was trying to say raytracing itself is completely serial (I hope nobody here is that dense, but alas, I wouldn't be surprised either).
 
Do you have proof otherwise?
I do not claim anything here and only point out that making generalizations without enough data points can lead to wrong predictions.

Yes, simplistic. In reality it would vary depending not only upon the number of triangles but also upon any additional materials, additional rays if needed for those materials, additional compute shaders, and so on. Scene geometry complexity is just one factor of many, which can itself be optimized into primitives that build upon one another.
With materials it is a similar story.
IMHO more complex materials and higher-resolution textures should not affect performance if the total number of rays being shot remains the same. There is probably some limit to how complex a material can get before it starts to affect performance, but Q2 RTX doesn't seem to be anywhere close to hitting it.

Now, what is very apparent with Quake II RTX is the relationship between resolution and performance: basically X : 1/X. At 1080p it gets 83 fps; at 4K, 21 fps. Resolution up by a factor of 4, performance down by a factor of 4. That is from the initial release data; I don't have numbers for the November 2019 update. With rasterized games, going from 1080p to 4K when GPU-limited costs more like a factor of 2, not directly inversely proportional to resolution as in the non-rasterized Quake II RTX. Now can someone explain that?
Resolution directly influences the total number of rays being shot from the camera at the scene, and each hit generates the same number of secondary rays.
From my experience the frame rate is pretty much constant, except in cases where you use the classic sky, because it does not generate any additional rays. Performance can increase a lot when you gaze at the classic sky. Not so much with these modern skyboxes, which seem to be ray traced and so generate additional rays.

A possible way to test how it really is with geometry would be to make (or find) a map with much more complex geometry and just benchmark it.
 
In this case the RT cores did a solid block/batch for the intersections, with very little other work being done. If this were parallel, I would expect the RT cores to be used throughout the rendering of the frame. Hardware-wise, I do not think that is possible with Turing. Per the AMD patent and Mark Cerny's explanation in the PS5 presentation, the RDNA2 RT design will allow independent and concurrent operation of intersection finding and shading.
In the graphs you showed we already have concurrent operation of intersection finding and shading on Turing, so what is your point?

An entirely different issue is whether you can do ray tracing and normal rasterization at the same time, or, what would come immediately to mind seeing these images, whether you can utilize the RT cores for the whole frame time.
I do not know whether that can be done, and I do not think anyone who is not an Nvidia engineer, or a developer who has attempted such an implementation, would know. You sound like neither of those...
One issue comes to mind immediately though: utilizing the RT cores for the whole frame time would require adding one frame of input lag, and any frame rate gained at the expense of input lag is a big no.

If, however, you claim that AMD's RT implementation (which is not even out) can somehow do what Turing cannot when it comes to concurrent ray tracing and rasterization, then please provide proof.
 
In this case the RT cores did a solid block/batch for the intersections, with very little other work being done. If this were parallel, I would expect the RT cores to be used throughout the rendering of the frame. Hardware-wise, I do not think that is possible with Turing. Per the AMD patent and Mark Cerny's explanation in the PS5 presentation, the RDNA2 RT design will allow independent and concurrent operation of intersection finding and shading.

Well, first of all, the pic you shared does show concurrent execution of RT with other work, so your premise is wrong.

The parallelism of work on modern GPUs mostly depends on how the game engine schedules tasks, as all recent hardware can process graphics and compute concurrently. Obviously there are dependencies during the rendering of a single frame, and there is an order in which work needs to be completed.

Given the inherent divergence between rays, it does actually make sense to write ray-hit params out to an intermediate buffer and then shade only once RT is done. This would maximize the coherence of memory requests during shading.

Firing off shading immediately for each individual ray hit, without any sort of sorting or batching, would wreak havoc on any architecture, including RDNA.
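A rough sketch of that batching idea (names like HitRecord and shadeBatch are my own illustration, not any real API): buffer the hits, sort by material, then shade contiguous runs so neighbouring work items touch the same shader code and textures.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct HitRecord {
    int   materialId;  // sort key: hits on the same material shade together
    float t, u, v;     // hit distance and barycentric coordinates
    int   pixel;       // where the shaded result lands in the framebuffer
};

// Hypothetical shading kernel: shades one contiguous, same-material batch.
void shadeBatch(const HitRecord* first, const HitRecord* last) { /* ... */ }

void deferredShade(std::vector<HitRecord>& hits) {
    // 1. Batch: group divergent ray hits by material so memory access is coherent.
    std::sort(hits.begin(), hits.end(),
              [](const HitRecord& a, const HitRecord& b) { return a.materialId < b.materialId; });

    // 2. Shade one contiguous run per material.
    for (std::size_t i = 0; i < hits.size();) {
        std::size_t j = i;
        while (j < hits.size() && hits[j].materialId == hits[i].materialId) ++j;
        shadeBatch(hits.data() + i, hits.data() + j);
        i = j;
    }
}
```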
 
You do know your link is dealing with Volta, programming with shaders to cast rays to find intersection points instead of RT cores? This is not about RT cores but about using the shaders to cast rays.
No, that is not at all what the article was saying. Volta has RTX hardware, and the page was explaining how to write shaders for it (DXR RT programming uses shaders too, not just for the fallback).
 
No, that is not at all what the article was saying. Volta has RTX hardware, and the page was explaining how to write shaders for it (DXR RT programming uses shaders too, not just for the fallback).
Is it really? The specification says that Volta has tensor cores but no RT cores.
Turing is based on Volta, so they share many similarities, and Volta was used as the development platform until Turing was ready. That is why it supported DXR from the start.
 