NVIDIA Titan V Runs Battlefield V with Ray-Tracing Effects Well, Even without RT Cores

Megalith

NVIDIA’s marketing campaign may have (inadvertently) led some to believe that real-time ray tracing was exclusive to RTX cards, but that’s not the case, as the company’s last-gen Titan card, the Titan V, was used by studios to test the much-hyped graphics technology before Turing was available to the public. 3Dcenter.org has revealed how the Titan V (Volta GV100) fares against the Titan RTX (Turing TU102) in Battlefield V: in a map with heavy ray-tracing effects, the former managed an average of 56 FPS on Ultra with high RTX, while the latter achieved 80 FPS.

...while the RT cores do not provide a 10X performance boost, there is a very big performance difference between a GPU with RT cores and one without. Let’s also not forget that the Titan V comes with 5120 shader units, whereas the Titan RTX comes with 4608. So, in case you were wondering, the RT cores are not a gimmick. Still, I have to say it would be really cool if DICE provided a software mode for its real-time ray tracing effects so all players could test them and see whether their GPUs are powerful enough to run them in software mode.
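(Rough math on the numbers above, ignoring clock speed differences: 80 / 56 ≈ 1.43, so the Titan RTX is about 43% faster in that scene even though the Titan V has about 11% more shaders, since 5120 / 4608 ≈ 1.11. Per shader unit, that works out to roughly 1.43 × 1.11 ≈ 1.6x the ray-traced throughput on the Turing card.)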
 
This could be a silver lining for AMD, provided that their upcoming GPUs have the horsepower.
 

I'm guessing they had to sign NDAs up the wazoo for this, and that even in software rendering mode it's all proprietary Nvidia APIs being used. If any such thing were to happen, they'd be taken to court in a big way.

There may be promise in an open accelerated raytracing API, but once any such thing becomes reality, the game would likely need to be rewritten to support it.

I'm still not convinced that real-time raytracing is anything but a gimmick that will soon fade away, though. The graphical differences are too small, and the performance impacts are too large. I don't know about everyone else, but I'd rather play at 4K with rasterized graphics than at 1080p with raytraced graphics, but that's just me. Time will tell what the rest of the market thinks.
 
How about a nice CPU/GPU combo from AMD that uses the CPU for ray tracing and the GPU for the rest of the graphics? It should blow NV out of the water.
 


What I think is the biggest detractor of Ray Tracing is that devs seem to be going overboard with the effects to showcase RT, instead of simplifying light sources, such as a single source (the sun) or adequately sparse multiple sources (street lights) interacting with terrain and objects in the game. The focus seems to be on over-reflective, shiny surfaces and too many primary light sources, all trying to force awe and bedazzlement from viewers/gamers.

I don't think nVidia can outright own the rights to Ray Tracing (only their own methodology for rendering it), since an open-source variant for AMD, called Radeon Rays, exists in Vulkan.

It's up to devs to create/update game engines that utilize open-source software- (or even hardware-) based RT that isn't an inefficient steaming pile like nVidia's proprietary GameWorks method.
 

Yeah, Nvidia can't own raytracing. It's been around in one form or another for many decades. I used to play with 3D Studio 4 for DOS and POV-Ray on Linux in the '90s, for crying out loud.

They can own the API and implementation though, and that is what I was referring to. I was unaware that there was a Vulkan implementation as well. Do any titles support it today?
 

Do they own this API though? I could swear I’ve read that they’re simply using DXR through DX12. Hence MS being the owner of the API, right?
 

I don't think any titles support it (yet), despite being open-source since early last year (2018). I wouldn't be a damned bit surprised if nVidia paid their GameWorks-utilizing partner devs to "delay" implementing Radeon Rays.
 
They can own the API and implementation though, and that is what I was referring to. I was unaware that there was a Vulkan implementation as well. Do any titles support it today?

It's using DXR, which is Microsoft's DirectX Raytracing. When ray tracing is needed, it just calls into the implementation suitable for whatever hardware is in the system.
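For what it's worth, from the application side DXR support boils down to a single D3D12 feature query. Here's a minimal sketch, assuming you already have an ID3D12Device; I'm not claiming this is literally how BFV does its check, and whether a given tier is backed by RT cores, a shader path, or nothing at all is entirely up to the driver:

```cpp
#include <windows.h>
#include <d3d12.h>

// Ask the D3D12 runtime whether DXR is exposed for this device.
// Tier 1.0+ only means the DXR API is available; it says nothing about
// whether the driver maps it to dedicated hardware or to regular shaders.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));
    return SUCCEEDED(hr) &&
           opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```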

I don't think any titles support it (yet), despite being open-source since early last year (2018). I wouldn't be a damned bit surprised if nVidia paid their GameWorks-utilizing partner devs to "delay" implementing Radeon Rays.

Nonsense. AMD themselves have said that they are holding off on Ray Tracing for the moment.
 
Do they own this API though? I could swear I’ve read that they’re simply using DXR through DX12. Hence MS being the owner of the API, right?

It's using DXR, which is Microsoft's DirectX Raytracing. When ray tracing is needed, it just calls into the implementation suitable for whatever hardware is in the system.

I did not know that. Thanks for setting me straight. I made an incorrect assumption.

Nvidia could probably block their non RTX boards from running raytracing on the driver side though.
 
I'm not surprised. This goes from HairWorks™, GrassWorks™, and GameWorks™ all the way to Ray Tracing™.
 
I did not know that. Thanks for setting me straight. I made an incorrect assumption.

Nvidia could probably block their non RTX boards from running raytracing on the driver side though.

They could, but what would be the point? Heavily toned-down ray tracing is just about playable on specialised RT cores.
 
It's DICE and EA, so I'm not sure this is the best showcase for RTX. I'll be interested in how Metro fares, they're much better devs IMO.
 
I don't think any titles support it (yet), despite being open-source since early last year (2018). I wouldn't be a damned bit surprised if nVidia paid their GameWorks-utilizing partner devs to "delay" implementing Radeon Rays.

If you really look at how game development has gone over the years, it doesn't work like that. Most often, game developers are sponsored to use tech from Nvidia or AMD; without that marketing co-op, developers mostly don't care and just use whatever they have in-house (or whatever features the game engine has to begin with). So there's no need to pay a certain dev not to use certain tech from the other company, because in general most game developers don't really care about it; it sorts itself out on its own. If you want ANY of your tech to be adopted by other developers, there's no way to make that happen other than doing a marketing co-op with them via sponsorship. Just look at Bullet and its GPU-accelerated feature: they said it was a PhysX killer for being truly open source, since it should work on any GPU. In almost a decade of its existence, how many games have actually used Bullet's GPU acceleration?
 
I did not know that. Thanks for setting me straight. I made an incorrect assumption.

Nvidia could probably block their non RTX boards from running raytracing on the driver side though.
Do they really need to do that? The Titan V is quite a special case here; there is no Volta-based discrete GPU outside GV100. We already know how Pascal-based GPUs run that Star Wars demo versus Turing with RT cores. Seeing how poor the performance is on Pascal, Nvidia doesn't need a special driver to block its non-RTX GPUs from running any ray tracing stuff.
 
Being able to run it well on a VOLTA card isn't really saying it doesn't take anything special to do ray tracing in real time.

The Volta chips, just like the Turing chips, have tensor cores.

Turing improved on them by allowing the tensor cores to operate at lower precision settings.

Volta has been used for real-time ray calculation outside of gaming with Nvidia's OptiX.

If anything, this should help make it clear to the people who still believe NV's marketing that RTX cards have special "ray tracing cores", lol.

Turing's lower precision settings, though, could be a real boon for real-time tracing down the road. It really is going to depend on how developers use it. The DX and Vulkan APIs both allow developers to basically use them to create light maps, which seems to be what everyone has done so far. They also go further, however, allowing the API to place objects in the scene into the math matrix (the tensor flow); when rays are calculated and interact with those objects, the API can transform the ray as expected but also create a new lower matrix to calculate multiple bounce calculations. (The entire point of a tensor is to transform a set of math based on a change made in one.)

Anyway, my point is that developers have mostly been using Volta-based cards with standard tensors that aren't capable of dropping to lower precision settings like INT16 or INT8. Turing (if I am correct) can operate any of its "master" tensor blocks at any precision... so a 2080 Ti would have 68 blocks of tensors (marketing is calling them ray tracing cores) that can operate at independent precision ratings, meaning developers could keep blocks free, waiting for lower-quality secondary bounce calculations. A 2080 Ti's total number of tensor cores is 68 x 8, or 544, but as block units they can operate at lower quality settings, making for a lot of very fast data.

It feels wrong, but I guess I'm sort of defending Turing a bit here. Game developers have likely not even been trying to do a lot of secondary bounce calculation... just doing standard light map creation, as Volta wouldn't be able to drop banks of tensor blocks to lower quality settings to calculate less accurate bounce traces quickly; Volta would quickly run out of tensor cores, as they all operate at full precision. So Turing may have some more tracing performance in the tank if it's exposed properly (and tracing may be visually more impressive if it's not used as just a fancy light map generator). I think the deciding factor for Turing might actually end up being AMD. I think we all suspect AMD has tensor cores coming for their Navi parts... and is planning to include them in the PS5/Xbox Next. IF AMD's first tensor parts can likewise operate in blocks at differing precision levels, developers will start coding for that... and in that case Turing may show what it can actually do.
 
It really shouldn't be surprising, as BFV's RT started on Titan Vs (DICE said so themselves). The first few BFV RT demonstrations were actually run on normal shaders and only later accelerated via RT cores, so the Titan V running it is normal.
 
Volta and Turing both have tensor cores built in; the RTX is just a marketing bit. I have both and found a way to run TensorFlow applications on both with similar performance. Taking advantage of INT8 and INT4 precision is a bit tricky for Turing: scaling is required, where you have to map your values into the range INT4 or INT8 covers. You get the speed-up and maintain accuracy and precision within some percentage points if your data maps cleanly; if your data exceeds the range covered by the precision, your accuracy tanks or is downright wrong. BFV is probably able to map some aspects of RT to INT8, but I suspect INT4 is pushing it, or else you'd end up with a lot of artifacts in the render.
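To make the scaling point concrete, here's a rough sketch of the usual symmetric INT8 quantization scheme. This is my own illustration, not anything from BFV or Nvidia's stack:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Map a block of floats into int8 with one shared scale factor.
// If the data's range fits the scale, accuracy stays within a small error;
// values outside the range clamp, which is where accuracy "tanks".
struct QuantizedBlock {
    std::vector<int8_t> values;
    float scale;   // multiply an int8 value by this to get the float back
};

QuantizedBlock quantize_int8(const std::vector<float>& input)
{
    float max_abs = 0.0f;
    for (float v : input)
        max_abs = std::max(max_abs, std::fabs(v));

    QuantizedBlock out;
    out.scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
    out.values.reserve(input.size());
    for (float v : input) {
        float q = std::round(v / out.scale);
        out.values.push_back(
            static_cast<int8_t>(std::clamp(q, -127.0f, 127.0f)));
    }
    return out;
}

inline float dequantize_int8(int8_t v, float scale) { return v * scale; }
```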
 
All this shows is that “rt cores” was just marketing lingo for the tensor cores. Hopefully people will stop referring to them as that.
 
Read Microsoft's DXR programming guidelines and you'll understand why it works on Titan Vs. Tensor Cores do work for the denoising step of the RT process, though. The RT cores are more or less fixed-function (at least as they are known right now), so they don't do anything other than the BVH calculations that speed up RT. BVH search algorithms can be run on regular shaders too, just much more slowly.
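To give a sense of what that fixed-function work looks like when it has to run on shaders instead: the inner loop of BVH traversal is essentially a ray-vs-AABB slab test at every node, something like the purely illustrative version below (not DICE's or Nvidia's actual code):

```cpp
#include <algorithm>
#include <utility>

struct Vec3 { float v[3]; };

// Classic slab test: does the ray (origin, 1/direction) hit the box
// [boxMin, boxMax] within the interval [tMin, tMax]?  An RT core handles
// this kind of test in fixed-function hardware; without one, every BVH
// node visited costs shader ALU work like this.
bool RayHitsAABB(const Vec3& origin, const Vec3& invDir,
                 const Vec3& boxMin, const Vec3& boxMax,
                 float tMin, float tMax)
{
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (boxMin.v[axis] - origin.v[axis]) * invDir.v[axis];
        float t1 = (boxMax.v[axis] - origin.v[axis]) * invDir.v[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMin > tMax) return false;   // slab intervals no longer overlap
    }
    return true;
}
```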
 
RT cores ARE tensor cores; this is why the Titan V can do RT. Turing tensor cores have INT8 and INT4 acceleration in addition. Yes, a shader can be programmed to do a matrix function, but it is akin to emulation compared to the hardware implementation of a tensor core.

If you do an INT8 calculation on INT16 hardware, it will just calculate the INT8 at the same speed as an INT16, since INT8 is just a subset of INT16. This is why my Volta and RTX tensor-core apps perform nearly identically; I run mixed INT16/INT32 functions, and the variations in performance are due to the MHz differences between the parts. If I run older apps without tensor cores, my P100 cards run faster than my V100 cards due to the MHz difference in the parts.
 
So how does that explain the discrepancy between the Titan V's RT performance (when it has the greater number of tensor cores and shader units) and both the 2080 Ti and the Titan RTX? ~15% more boost clock (with about ~10% fewer resources) results in ~30% more performance in favor of the RTX cards?



Hell, that one shows best case for Titan V (overclocked) while the 2080 Ti is on stock clocks with increased power/temp targets.

Even if they are Tensor Cores, they're definitely tuned to whatever algorithm NV is making use for it, so they still do the job better than the Titan V.
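Taking the rough figures above at face value: 1.30 / (1.15 × 0.90) ≈ 1.26, so even after normalizing for clock speed and unit count, the Turing cards are getting roughly 25% more ray-tracing work done per shader per clock in that comparison; that is the gap that whatever the "RT cores" actually are would have to account for.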
 
It's DICE and EA, so I'm not sure this is the best showcase for RTX. I'll be interested in how Metro fares, they're much better devs IMO.

I dunno, I think they are pretty garbage on game/weapon balance, map design, and other things, but their strength to me is their game engine; performance and visuals on it are just excellent. If they can't get ray tracing to work well, I don't have much hope that other devs are going to do a much better job.
 
Dunno about that. Activating DX12 in BFV already slams frame rates down for both NV and AMD, so there's obviously some trouble there, while Sniper Elite 4 gets gains for both.

I guess it really depends on the implementation.
 
Agreed.
 
Even if they are Tensor Cores, they're definitely tuned to whatever algorithm NV is making use for it, so they still do the job better than the Titan V.

Well, it's a generation newer; it likely has other IPC gains.

But mainly, as has been talked about, Nvidia upgraded the tensor cores with Turing. Tensor cores are what has been driving their high-end Volta card sales, so improving tensor performance was as much a priority as any other bit on the chip. It's also possible that BF is in fact using some INT8 calculation, or only using INT8. As Slade has already mentioned, if that is the case, Volta would be running those at the higher precision no matter what, because it can't do INT8. So it's basically using a full-size tensor core to perform the math... whereas Turing is doing double the work per unit of bandwidth at INT8. In general, whenever a CPU or GPU does something at half precision you can expect a 20-40% boost in performance. So it seems pretty logical to me.

The CPU folks went down this path some time ago. ;)
https://software.intel.com/en-us/articles/performance-benefits-of-half-precision-floats
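The mechanics behind that are simple: for a fixed register or bus width, halving the element size doubles how many values fit per register and per byte of bandwidth (a 128-bit lane holds 8 FP16 values versus 4 FP32, or 16 INT8), so peak math throughput can double even if real workloads only see a fraction of that.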
 
I don't buy it. If they really were just Tensor Cores that could operate at lower precisions for speed, shouldn't you market them as better Tensor Cores, since, as you say, they have high-paying customers who are buying GPUs for more/better Tensors?

Instead, they market it (even the Titan RTX) with *fewer* Tensor Cores instead of flexible cores that can be used for both RT and Tensor Ops.
 
Because marketing. It's easier to market them as "raytracing cores" for the consumer gaming market, because that's something gamers care about more. At the end of the day, they're just tensor cores, though.

And yes, it also helps them out with the high-paying Titan customers who are buying the Titan for the tensor cores but maybe didn't need all the VRAM for their application and would have been fine with just a normal 2080 Ti.

It's 100% marketing.
 
I don't think you're getting what I'm saying; I'm talking about the Titan RTX having *fewer* Tensor Cores than the Titan V. If you were marketing the Titan RTX towards folks who need more/better Tensor Cores, then why advertise it with fewer than the previous generation instead of touting more and improved ones?
 