RTX Tested on a Titan XP

CAD4466HK | 2[H]4U | Joined: Jul 24, 2008 | Messages: 2,686
No surprises here. It's slower than an RTX 2060.

And this begins to illustrate one of the key problems with ray tracing on Pascal: the experience is very inconsistent. This is because there is such a large difference between the capabilities of a card like the Titan X without ray tracing and with ray tracing. So as you move around an environment with varying ray counts, interactions and degrees of ray tracing, the performance of the Titan X fluctuates massively. In areas with little ray tracing, performance is decent but when you’re in an area with lots of shadows, your frame rate will absolutely tank.

https://www.techspot.com/review/1831-ray-tracing-geforce-gtx-benchmarks/

[Image from the TechSpot review: 2019-04-16-image-2.png]
 
I already knew the answer to that question, because I ran the demos provided by Nvidia on my GTX 1060. It was 10fps in Atomic Heart, and 5fps in the other two.

Shadow of the Tomb Raider has some of the simplest ray tracing effects to render (of all the games with it), but Pascal still slows to a crawl with it enabled. I'd probably get 20fps at 1080p Ultra.
 
The question is, is Nvidia purposely gimping these cards to make ray tracing look better on RTX cards? I find it hard to believe that the 1080 Ti would be slower than an RTX 2060 in anything :p apart from depreciation in value... :D
 
You can keep telling yourself that. Call me back when you wake up and recognize that ray tracing requires dedicated units to get anywhere near playable performance.

And if they pulled driver cheats like that, it wouldn't be long before they were found out, just based on the analysis that people with too much time on their hands do. Remember when they sat on multi-threaded CPU PhysX for years, and several sites called them out for it? Between hardcore analyses and the eventual open-source reverse engineering of drivers, it would be found.
 
And... where do you get that information from :)? I've seen ray tracing demos run on CPUs before....
 
How many CPUs, and how limited were the environments being rendered?

There's a big difference between bringing RT to playable games and posting some fantastic demo video on YouTube. Companies have been pulling that shit for years.

Also, don't forget that Microsoft has its hands in this API, so it's not just a case of "Nvidia said." Every one of these demos is using the DirectX Raytracing API. How do I know? Because I had to install Windows 10 1809 before any of them would run.

I don't think that Microsoft would perpetuate the lie that you need these massive RT cores; they'd rake Nvidia over the coals for being greedy.

Also, if this software layer is only slow because Nvidia wants it to be, why hasn't AMD released such a driver update for Polaris and newer? They've had six damn months to build one, and you'd think it would totally embarrass Nvidia if those cards could match the RTX cards in games.
 
How many CPUs, and how limited were the environments being rendered?

Look it up: Intel ran real-time ray tracing demos years ago on 8-core CPUs.
 
At 256x256 resolution, at 15fps:

[Benchmark chart from the PC Perspective article: benchfps.png]


https://www.pcper.com/reviews/Graph...ay-Traced-Project/Performance-and-Conclusions

That would give you 30fps if you threw eight cores at the problem. Bump it up to a decent resolution (1024x1024) and you're tracing 16 times as many pixels, so performance drops to about 2fps.
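
Rough napkin math, if anyone wants to sanity-check it. This assumes the 15fps figure above was on four cores (my assumption, inferred from the "eight cores gives 30fps" doubling), that ray cost scales with pixel count, and that core scaling is linear, which is generous:

```
#include <cstdio>

int main() {
    // Baseline from the chart above; the core count is assumed.
    const double base_fps    = 15.0;           // measured at 256x256
    const double base_cores  = 4.0;            // assumed
    const double base_pixels = 256.0 * 256.0;

    const double cores  = 8.0;
    const double pixels = 1024.0 * 1024.0;     // 16x the pixels

    // fps scales up with cores (linearly, at best) and down with pixel count.
    const double est = base_fps * (cores / base_cores) * (base_pixels / pixels);
    std::printf("Estimated fps at 1024x1024 on 8 cores: %.1f\n", est);  // ~1.9
    return 0;
}
```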

You know why we needed a sea change in graphics hardware to make real-time RT possible? It's because the complexity of scenes is constantly INCREASING, which is why people only tried to ray trace older, less complex games (at resolutions you would never bother with). So to get ahead of the never-ending graphics complexity train, they added dedicated RT units and the matrix-powerhouse tensor cores.

See here for more detail about why we've never been able to catch up with RT.

https://blog.codinghorror.com/real-time-raytracing/

The problem is you really need to be doing a whole bunch of different calculations at once, but Pascal is designed just to do tons of FP32.
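
To make that concrete, here's a toy shadow-ray test. It's nothing like a real DXR implementation (the scene is a flat list of spheres instead of a BVH), but the control flow is the point: every ray can bail out of the loop at a different place, so neighbouring rays in a SIMD batch stop sharing work almost immediately, which is exactly the kind of workload a wall of uniform FP32 units hates:

```
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Toy shadow-ray query: "is anything between this point and the light?"
// Every ray can exit at a different iteration (or never), so divergence
// piles up fast; a real implementation also chases pointers through a BVH,
// which is what the dedicated RT units are built to handle.
bool shadowRayHitsAnything(const Vec3& origin, const Vec3& dir,  // dir normalized
                           float maxDist, const std::vector<Sphere>& scene) {
    for (const Sphere& s : scene) {
        Vec3 oc = sub(origin, s.center);
        float b = dot(oc, dir);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) continue;                  // miss: try the next object
        float t = -b - std::sqrt(disc);
        if (t > 1e-4f && t < maxDist) return true;  // early out: occluded
    }
    return false;                                   // worst case: walked everything
}
```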
 
Is it possible to have RT done on a second card, like PhysX?

So to equal a 2080 Ti in RT at playable speeds, you would need ~2 1080 Tis in SLI, if the game supports it. Ouch.
 
I'm curious to see how well those effects work when you're able to move around freely in a game engine. It's obviously not traditional RT (Crytek's developers even admit this), so the question is how good the fidelity really is.

It's easy to paint around your weaknesses when the creator of the demo is the one controlling the camera. But if they can back the demo up with reasonable fidelity, it's going to mean tons of new CryEngine converts.

They also said the engine can be accelerated by RTX, so it's not a useless tech, as suggested earlier in the thread.
 
Is it possible to have RT done on a second card, like PhysX?

So to equal a 2080 Ti in RT at playable speeds, you would need ~2 1080 Tis in SLI, if the game supports it. Ouch.


I don't see why it wouldn't work. If the game supports AFR SLI in the Nvidia drivers, it's still rendering the same way as before; each frame is just more demanding to draw.

But I doubt a dedicated RT card will ever be a thing. We're not talking about a hacked-together API that's run in parallel on graphics cards; we're talking about the same old, easily parallelizable rendering that cards have always done.
 
RTX is just what Nvidia calls its DXR implementation that utilizes the RT cores, that's all, so in the end both use Microsoft's DXR code.
That's my point: since Pascal doesn't have RT cores, I'm guessing it's not using the RTX extensions, just "plain" DXR.
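
For what it's worth, you can see that from the API side. Here's a minimal sketch (my own, assuming the Windows 10 1809+ SDK headers, with error handling stripped): DXR support is reported through a plain D3D12 feature check, and the tier it returns doesn't tell you whether it's backed by RT cores or by the compute-shader path Nvidia shipped for Pascal.

```
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
// Link against d3d12.lib.

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // OPTIONS5 carries the raytracing tier; it only exists in the 1809+
    // SDK, which is why the demos demanded that Windows update.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5)))) {
        std::printf("Runtime/driver predates DXR entirely.\n");
        return 1;
    }

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("DXR exposed (tier %d).\n",
                    static_cast<int>(opts5.RaytracingTier));
    else
        std::printf("No DXR support on this adapter/driver.\n");
    return 0;
}
```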
 
I think the real reason Nvidia did this was for developers.

First, this allows a much wider range of cards to be supported, meaning the addressable market is much larger.

Second, not every developer has the latest graphics card, so now they can at least test simple code on older machines.

Third, because there are now these slow cards in the mix, it forces developers to optimize their code to work on the lower spec.

And I don't believe in any conspiracy; I'm actually surprised it works at all. I ran the Star Wars demo on my 1070 laptop: 7 fps, but it worked.

AMD has said they are working on ray-tracing, I'm sure they will have a solution when it's ready.
 