Real-Time Ray Tracing Support Comes to GeForce GTX GPUs and Game Engines

cageymaru

NVIDIA has announced that real-time ray tracing support is coming to GeForce GTX GPUs. The driver is scheduled to launch in April. GeForce GTX GPUs will execute ray-traced effects on their shader cores, and support extends to both the Microsoft DXR and Vulkan APIs. NVIDIA reminds consumers that its GeForce RTX lineup has dedicated ray tracing cores built directly into the GPU, which deliver the ultimate ray tracing experience. GeForce RTX GPUs provide up to 2-3x faster ray tracing performance and a more visually immersive gaming environment than GPUs without dedicated ray tracing cores. NVIDIA GameWorks RTX is a comprehensive set of tools and rendering techniques that help game developers add ray tracing to games. Unreal Engine and Unity have announced that integrated real-time ray tracing support is being built into their engines.

Real-time ray tracing support from other first-party AAA game engines includes DICE/EA's Frostbite Engine, Remedy Entertainment's Northlight Engine, and engines from Crystal Dynamics, Kingsoft, NetEase and others. Quake II RTX uses ray tracing for all of the lighting in the game in a unified lighting algorithm called path tracing. The classic Quake II game was modified in the open source community to support ray tracing, and NVIDIA's engineering team further enhanced it with improved graphics and physics. Quake II RTX is the first ray-traced game using NVIDIA VKRay, a Vulkan extension that allows any developer using Vulkan to add ray-traced effects to their games.
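
For reference, here is a minimal sketch (mine, not from the article) of how a Vulkan application could check whether a device exposes NVIDIA's ray tracing extension, VK_NV_ray_tracing, which is the extension VKRay refers to; the physical device handle and the rest of the usual Vulkan setup are assumed to already exist.

```cpp
// Minimal sketch: query a Vulkan physical device for NVIDIA's ray tracing
// extension (VK_NV_ray_tracing). Assumes a VkPhysicalDevice was already
// selected during normal Vulkan initialization.
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool SupportsNvRayTracing(VkPhysicalDevice physicalDevice) {
    // Ask how many device extensions exist, then fetch the full list.
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> extensions(count);
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, extensions.data());

    // Look for "VK_NV_ray_tracing" among the reported extensions.
    for (const VkExtensionProperties& ext : extensions) {
        if (std::strcmp(ext.extensionName, VK_NV_RAY_TRACING_EXTENSION_NAME) == 0) {
            return true;
        }
    }
    return false;
}
```

If the extension is reported, the application requests it in VkDeviceCreateInfo::ppEnabledExtensionNames when creating the logical device; the driver decides whether the work runs on dedicated RT cores or, as with the GTX driver described above, on the shader cores.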
 
Interesting addition but I’ll wait till performance numbers come out before I pass judgement.
 
I wonder if this is just a quick response to the recent Crytek demo showing ray tracing on an AMD video card.

Well, that may have sped up the announcement. But I suspect this was always the way it was heading. The real ray tracing push will come when AMD starts talking about Navi and their console parts. I doubt those will have tensor flow hardware... but they are going to support ray tracing. People forget AMD was showing off real-time ray tracing with Radeon Rays long before RTX.

IMO Nvidia used ray tracing to try and sell people on their latest GPUs early, as they know full well that when the real wave of ray-traced games hits, they will use shaders, not tensor flow. I'm sure they will use their "The Way It's Meant to Be Played" money to get some "high end RTX" tracing into a handful of those games. Perhaps it will be faster, perhaps not.
 
Well, that may have sped up the announcement. But I suspect this was always the way it was heading. The real ray tracing push will come when AMD starts talking about Navi and their console parts. I doubt those will have tensor flow hardware... but they are going to support ray tracing. People forget AMD was showing off real-time ray tracing with Radeon Rays long before RTX.

IMO Nvidia used ray tracing to try and sell people on their latest GPUs early, as they know full well that when the real wave of ray-traced games hits, they will use shaders, not tensor flow. I'm sure they will use their "The Way It's Meant to Be Played" money to get some "high end RTX" tracing into a handful of those games. Perhaps it will be faster, perhaps not.
Misinformation. DXR is DirectX Raytracing. Nvidia has dedicated hardware that takes those calls and runs them on it. Anything coded to DXR will work with Nvidia's implementation.
 
This is basically Nvidia admitting that ray tracing adoption among game developers was stagnant because customers weren't buying its RTX hardware, and customers weren't buying the hardware because developers weren't designing games with real-time ray tracing (and it was overpriced as all hell). I suppose this is the most logical way to instantly populate an install base of "RTX"-capable hardware so developers have more of an ecosystem of customers to work with.
 
Misinformation. DXR is DirectX Raytracing. Nvidia has dedicated hardware that takes those calls and runs them on it. Anything coded to DXR will work with Nvidia's implementation.

Also, Nvidia previously stated that their GTX series cards were capable of doing the same ray-traced effects on shaders, just significantly slower. So this shouldn't be surprising to anyone who's been paying attention. If anything, this is a move to help spread adoption of ray tracing techniques in software, as more hardware will technically be able to use those features, even if it's not practical. The more programs that implement any level of ray tracing, the more opportunities for their RTX series to shine.
 
Misinformation. DXR is DirectX Raytracing. Nvidia has dedicated hardware that takes those calls and runs them on it. Anything coded to DXR will work with Nvidia's implementation.

Working efficiently with a radically different architecture is going to take a lot more than a generic driver implementation. Games coming down the pike are clearly going to be coded to use DXR shaders. I highly doubt their drivers will be able to decouple all that shader math and feed it to their tensor units without some coding help. The driver perhaps can do that, but my guess is the performance gain will not be all that great if it's not optimized on the software end.

https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/

A new command list method, DispatchRays, which is the starting point for tracing rays into the scene. This is how the game actually submits DXR workloads to the GPU.
A set of new HLSL shader types including ray-generation, closest-hit, any-hit, and miss shaders. These specify what the DXR workload actually does computationally. When DispatchRays is called, the ray-generation shader runs. Using the new TraceRay intrinsic function in HLSL, the ray generation shader causes rays to be traced into the scene. Depending on where the ray goes in the scene, one of several hit or miss shaders may be invoked at the point of intersection. This allows a game to assign each object its own set of shaders and textures, resulting in a unique material.
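
To make the quoted description concrete, here is a rough C++ sketch of my own (not from the Microsoft post) of the host-side call: a filled-in D3D12_DISPATCH_RAYS_DESC and the DispatchRays call on a DXR-capable command list. The shader-table addresses and the state object are placeholders that would come from the application's own setup.

```cpp
// Rough sketch of submitting a DXR workload from the CPU side.
// 'cmdList' must be an ID3D12GraphicsCommandList4, and 'rtStateObject' a
// raytracing pipeline state object built from the HLSL shaders
// (ray-generation, hit, miss) described above. The shader table addresses
// are placeholders for buffers the application uploads itself.
#include <d3d12.h>

void SubmitRays(ID3D12GraphicsCommandList4* cmdList,
                ID3D12StateObject* rtStateObject,
                D3D12_GPU_VIRTUAL_ADDRESS rayGenTable, UINT64 rayGenSize,
                D3D12_GPU_VIRTUAL_ADDRESS missTable, UINT64 missSize, UINT64 missStride,
                D3D12_GPU_VIRTUAL_ADDRESS hitTable, UINT64 hitSize, UINT64 hitStride,
                UINT width, UINT height) {
    D3D12_DISPATCH_RAYS_DESC desc = {};

    // Where the GPU finds the ray-generation shader record.
    desc.RayGenerationShaderRecord.StartAddress = rayGenTable;
    desc.RayGenerationShaderRecord.SizeInBytes  = rayGenSize;

    // Tables of miss and hit-group shader records (e.g. one record per material).
    desc.MissShaderTable.StartAddress  = missTable;
    desc.MissShaderTable.SizeInBytes   = missSize;
    desc.MissShaderTable.StrideInBytes = missStride;

    desc.HitGroupTable.StartAddress  = hitTable;
    desc.HitGroupTable.SizeInBytes   = hitSize;
    desc.HitGroupTable.StrideInBytes = hitStride;

    // One ray-generation shader invocation per pixel.
    desc.Width  = width;
    desc.Height = height;
    desc.Depth  = 1;

    cmdList->SetPipelineState1(rtStateObject);  // bind the raytracing pipeline
    cmdList->DispatchRays(&desc);               // kick off the DXR workload
}
```

Note that nothing in this API says how the rays get traced; that is left to the driver, which is exactly why the same DXR calls can land on RT cores on RTX parts and on shader cores on GTX parts.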

It's not like MS has ever hidden that DXR uses shaders to achieve tracing. From everything I have read so far, all this RTX stuff is mostly hogwash. Pure marketing BS to get gamers to pay more for GPUs with Google's AI hardware sandwiched onto the silicon. Sure, perhaps there are some uses for tensor flow hardware in games... but as I see it, not much jumps out that isn't better suited to more general GPU cores. As is almost always the case, kudos to Nvidia's marketing team for confusing the hell out of some tech to sell cards.
 
I'm seeing a lot of talk about Raytracing and Tensor Cores... Just to clarify, their two separate things. Tensor cores have nothing to do with Raytracing, they have everything to do with Nvidia DLSS (Deep Learning Super Sampling). Raytracing on RTX cards is offloaded to RT cores rather than running in the shaders. So in reality, Raytracing should be faster on RTX cards so long as the RT cores are utilized properly. Can Raytracing be done on shaders? Sure, but at the loss of performance as some shaders need to be consumed to perform the Raytracing operations. See the photo below which shows the difference. Note that the Tensor cores are separated from the RT cores.

[Attached image: Pascal_vs_Turing.jpg]
 
See, look how good a job Nvidia's marketing dept has done.

No "RT cores" are complete and utter BS. Pascal isn't the chip to compare it to volta is. Volta also has tensor flow. Main difference is Volta has basically reference tensor flow hardware. Pascal adds the ability to run in higher precision float point modes.

The issue for Volta and GPU computation in general has always been precision. The reason GPUs are not used for final ray traced renders is that they simply can't do precise calculation. IEEE 754 is the standard for floating point math... GPUs haven't always conformed to that standard at all. And even now, Turing and Vega alike don't provide full floating point error check registers. This means their usefulness for something like a massive 50+GB ray traced render for pro-level CGI work is zero. They can be used for pre-vis because they are great at quick and dirty calculation... but they suck at highly ordered, double/triple-checked math.
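
As a toy illustration of the precision point (a CPU-side example of my own, not tied to any GPU), the same accumulation done in single and double precision drifts apart noticeably:

```cpp
// Toy illustration (CPU-side): accumulating many small values in float vs.
// double shows how single-precision rounding error piles up. This is only
// meant to illustrate the precision argument above, not any GPU behavior.
#include <cstdio>

int main() {
    float  sumF = 0.0f;
    double sumD = 0.0;
    const int n = 10000000;         // ten million small increments
    for (int i = 0; i < n; ++i) {
        sumF += 0.1f;               // float: ~7 significant decimal digits
        sumD += 0.1;                // double: ~15-16 significant decimal digits
    }
    // The exact answer is 1,000,000; the float result lands visibly off it.
    std::printf("float : %.3f\n", sumF);
    std::printf("double: %.3f\n", sumD);
    return 0;
}
```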

What I'm getting at is that with Turing, Nvidia expanded on Tensor Flow. They added higher-precision floating point modes. They still don't add the same error flags you would find in x86, but they are a lot better. This has AI uses of course. But the truth is Nvidia wants to get into pro-level render farms. They have been providing programming support to Pixar for a few years now to expand their Renderman software with something called XPU, which is a CPU + GPU final render unit for Renderman. I sort of doubt Pixar will use it for any of their own flicks for some time... but it might be good enough for companies using Renderman with things like flow and Houdini etc. for lower-end CGI work for TV commercials and the like. Nvidia is looking to sell into those markets and that is why they extended Tensor Flow. All those "RT" cores their marketing dept is talking about are nothing but the tensor control units allowing FP64 math to be done within the tensor matrix. (Just look at the counts: 544 tensor cores, 68 "RT" cores. Yes, exactly 8x as many; they're nothing but FP64 control units.)

Here is a quote from Jensen himself from their Q2 earnings call:
"In the case of Turing, it’s really designed for three major applications. The first application is to open up Pro Visualization, which is a really large market that has historically used render farms. And we’re really unable to use GPUs until we now have -- we now have the ability to do full path trace, global illumination with very, very large data sets. "
 
I'm seeing a lot of talk about Raytracing and Tensor Cores... Just to clarify, their two separate things. Tensor cores have nothing to do with Raytracing, they have everything to do with Nvidia DLSS (Deep Learning Super Sampling). Raytracing on RTX cards is offloaded to RT cores rather than running in the shaders. So in reality, Raytracing should be faster on RTX cards so long as the RT cores are utilized properly. Can Raytracing be done on shaders? Sure, but at the loss of performance as some shaders need to be consumed to perform the Raytracing operations. See the photo below which shows the difference. Note that the Tensor cores are separated from the RT cores.
You go out of your way to highlight and underline that tensor cores have nothing to do with ray tracing. Maybe do a few seconds of research as they are used for denoising the ray traced image.
 
People are going on about ray tracing; I am more interested in how the RTX cores can do some nice AI work. I would one day love to see boss fights enhanced with AI, a boss that remembers how each player defeated it and works to counteract it... that would be interesting, especially in MMOs.

Ray tracing is great, don't get me wrong, but until mainstream cards under $300 have the power to do it and still deliver frame rates above 30 at 1080p, it's an interesting feature that most literally can't afford to run.
 
I wonder if this is just a quick response to the recent Crytek demo showing ray tracing on an AMD video card.
NV's API has clearly been in the pipeline for a while and doesn't have anything to do with Crytek's canned demo; it's not an either-or. One company makes hardware, the other 3D engines; they're not actually competing, and their innovations will complement one another. NV GPUs will end up the prime beneficiaries of Crytek's work by default just because of market share. If Crytek is able to leverage any AMD-specific hardware features (compute or whatever), that'd be wonderful.

Regardless, more companies hopping on the ray tracing bandwagon benefits everyone.
 
Interesting addition but I’ll wait till performance numbers come out before I pass judgement.
Yep.
the death blow for the RTX 2000 series...
Only if non-RTX cards perform similarly to the RTX series. Let's wait and see some real numbers before jumping the gun.
Regardless, more companies hopping on the ray tracing bandwagon benefits everyone.
Exactly. If this pushes RT forward and there are real performance/IQ improvements then it's all good.
 
Also, Nvidia previously stated that their GTX series cards were capable of doing the same ray-traced effects on shaders, just significantly slower. So this shouldn't be surprising to anyone who's been paying attention. If anything, this is a move to help spread adoption of ray tracing techniques in software, as more hardware will technically be able to use those features, even if it's not practical. The more programs that implement any level of ray tracing, the more opportunities for their RTX series to shine.
It is not practical on RTX either. Unless you consider 1080p 30fps practical.
 
I don't care about performance numbers; the drivers can be built to purposely cripple ray tracing for previous-gen cards.
I've never trusted Nvidia when it comes to their drivers... they've been notorious through the years for pushing you into buying the next tech.

I will admit, and I commented on it in the AMD ray tracing vid, the ray tracing kinda looks cheap, or like fake ray tracing, compared to previous tech demos I've seen with RTX hardware.
 
People were complaining about 60 FPS at 1080p with ray tracing. Well, congratulations, because now you can play games with ray tracing at 15 FPS at 1080p.

Magic drivers from Nvidia will improve things... of course the folks that jumped to buy a 2080 Ti on the first day don't want to hear this... do you really think Nvidia would enable RT on older cards if it only played at sub-20 fps?... like the recent Crytek announcement showed, hardware RT was never really needed.
 
Magic drivers from Nvidia will improve things... of course the folks that jumped to buy a 2080 Ti on the first day don't want to hear this... do you really think Nvidia would enable RT on older cards if it only played at sub-20 fps?... like the recent Crytek announcement showed, hardware RT was never really needed.
Last I checked, shader and compute cores are still hardware.
 
The real test will be seeing how games jump to adopt it now and how well they do it. After playing Metro Exodus with it, I can say that it can definitely add to the game, but it has to be used properly. BFV showed how not to use it. I also believe that NV should just incorporate it as another tab in the control panel and the devs should just have an on/off switch in-game. On the tab we could have a slider, or manually enter a number, to control how much it's computing.
 
the death blow for the RTX 2000 series...

I mostly disagree with this. Sure, the prices sucked. Sure, anything less than a 2080 Ti was near useless for RT in 4K, but for the greatest effect RT will still need a lot of performance-driven hardware, and drivers can only help so much. It's possible NV may want to distance themselves from the RTX moniker in future gens for the sake of publicity, but anyone believing the pricing debacle is going away is deluding themselves. Until AMD or Intel can release something in the same year as NV's best for a lower price, they still have no reason to change their pricing path. It's still taking 2 years or more for anyone to catch up to an x80 Ti, and by then NV is usually ready to release whatever's next.
 
I wonder if AMD's superior compute abilities, plus working async compute, may be the reason for this "delayed" release of Nvidia drivers? Technically, if your card supports DX12 then it should support DXR. So what's with the drivers to be released in April? Is Crytek giving Nvidia some time to optimize their drivers so Nvidia doesn't look stupid compared to AMD cards?
 
For an enthusiast site, it's amazing to see how little people understand about ray tracing, and how quickly they turn to conspiracy theories to convince themselves they are smart for avoiding RTX rather than just admitting the first gen is expensive. As soon as prices normalize with ray tracing, everyone will be screaming about how necessary the RT cores are for performance. Be happy there are early adopters ready to pay the price so that one day the masses will be able to get the same great tech for reduced prices. The butthurt is strong in all these threads.
 
I wonder how far we are from having real-time raytracing in games.

I feel like unless there's a major breakthrough, it'll be about 10 years at least, if we even get there; engineers and developers love to stall at "good enough".
 
For an enthusiast site, it's amazing to see how little people understand about ray tracing, and how quickly they turn to conspiracy theories to convince themselves they are smart for avoiding RTX rather than just admitting the first gen is expensive. As soon as prices normalize with ray tracing, everyone will be screaming about how necessary the RT cores are for performance. Be happy there are early adopters ready to pay the price so that one day the masses will be able to get the same great tech for reduced prices. The butthurt is strong in all these threads.
Yep.

I personally don't care to pay the early-adoption premium for RT at this point, but I will certainly jump on the train a few stops down the line if it does gather pace. But without any early adopters, that won't happen, so fair play to people who are willing to pay for the latest and greatest features even though they haven't reached maturity yet.
 
You go out of your way to highlight and underline that tensor cores have nothing to do with ray tracing. Maybe do a few seconds of research as they are used for denoising the ray traced image.
And learn how to spell. It’s they’re, not their.
 
Hahahahaha.......oh wait.......wait........you could ray trace on our 10xx models all the time.......I guess we forgot to tell you.....we lost that driver in the back room and just found it.....

It just goes to prove.......Half-Life 3 confirmed.
 
So for the RTX 3xxx cards, would you prefer that more silicon be used on RT cores or on shader/compute power?

Would doubling the performance of the RT cores also result in doubling the fps?
 
People are going on about ray tracing; I am more interested in how the RTX cores can do some nice AI work. I would one day love to see boss fights enhanced with AI, a boss that remembers how each player defeated it and works to counteract it... that would be interesting, especially in MMOs.

Ray tracing is great, don't get me wrong, but until mainstream cards under $300 have the power to do it and still deliver frame rates above 30 at 1080p, it's an interesting feature that most literally can't afford to run.

That's what I have been saying as well, but the more I think about it, that should be handled by the CPU now that the core wars have begun between Intel and AMD. Best to probably let those extra CPU cores sitting around doing nothing handle it and let the GPU focus on the graphical bits, IMO.
 
So for the RTX 3xxx cards, would you prefer that more silicon be used on RT cores or on shader/compute power?
I'd prefer shader/compute power, since that improves both traditional games and RTX games as well. Also, I'm not even sure the RT cores do anything beneficial, since turning on ray tracing tanks performance regardless. But I guess we'll see when we can compare Radeons to GeForces to see if more compute is a better alternative.
Would doubling the performance of the RT cores also result in doubling the fps?
Doesn't matter, just hand me the popcorn cause this is going to get good.
 