Initial RTX performance...

So I don't know what I was smoking last night; SLI is not really working, even in DX11. I was sure I was getting good performance, but maybe it was just the level I was testing.

I tried again tonight. With DX11 SLI I was getting activity on both cards, but usage was staying under 40% on each. Tested in the France SP level, getting in the 90 - 100 fps range, not as impressive. I also noticed bad blur artifacts that seem SLI-related.

Tested DX12 DXR in the France map: at DXR Ultra I was getting 30 - 40 fps, completely unplayable. With DXR Low it was hitting the 60 - 70 fps range, so just making the cut (though still choppy for my taste). Putting everything else on Low did not improve fps at all.

Then I tested DX12 with DXR off, and this was the most playable for me. Getting around 135 fps average, give or take. Lowering settings helped a bit, but even with everything on Low it was still only around 155 fps, not worth sacrificing the image quality for 20 fps.

I'm going to try DXR again with the frame limit at 60 fps. Maybe that will be playable.

I used Nvidia Inspector to change the compatibility bits for SLI to use the Battlefront II bits for my SLI 1080 Ti's at 4K HDR. I get almost perfect scaling, 99-98% usage on both cards, with all settings at High and some at Ultra. It's important to turn future frame rendering ON for this, which pushed my cards to almost max usage. I get 120 fps avg.
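For anyone who'd rather script that change than click through Inspector, here's a rough sketch using NVAPI's driver-settings (DRS) calls. The session/load/set/save flow is the standard DRS sequence, but the profile name, setting ID, and bits value below are placeholders, not verified values; you'd copy the real DX10/DX11 SLI compatibility bits from the Star Wars Battlefront II profile shown in Inspector.

```cpp
// Rough sketch: write SLI compatibility bits into a game's driver profile via
// NVAPI DRS instead of editing it by hand in Nvidia Profile Inspector.
// PLACEHOLDERS: kProfileName, kSliBitsSettingId and kBattlefrontBits are not
// verified -- look the real ones up in NvApiDriverSettings.h / Inspector.
#include <nvapi.h>
#include <cstdio>

static const wchar_t* kProfileName      = L"Battlefield V"; // driver profile name (assumed)
static const NvU32    kSliBitsSettingId = 0x00000000;       // placeholder: DX10/11 SLI compat bits setting ID
static const NvU32    kBattlefrontBits  = 0x00000000;       // placeholder: value copied from the Battlefront II profile

int main()
{
    if (NvAPI_Initialize() != NVAPI_OK) return 1;

    NvDRSSessionHandle session = 0;
    NvAPI_DRS_CreateSession(&session);
    NvAPI_DRS_LoadSettings(session);          // pull the current driver settings database

    // Profile names are NvU16 strings in NVAPI, so copy the wide string over.
    NvAPI_UnicodeString profileName = { 0 };
    for (size_t i = 0; kProfileName[i] && i < NVAPI_UNICODE_STRING_MAX - 1; ++i)
        profileName[i] = (NvU16)kProfileName[i];

    NvDRSProfileHandle profile = 0;
    if (NvAPI_DRS_FindProfileByName(session, profileName, &profile) == NVAPI_OK)
    {
        NVDRS_SETTING setting   = { 0 };
        setting.version         = NVDRS_SETTING_VER;
        setting.settingId       = kSliBitsSettingId;
        setting.settingType     = NVDRS_DWORD_TYPE;
        setting.u32CurrentValue = kBattlefrontBits;

        if (NvAPI_DRS_SetSetting(session, profile, &setting) == NVAPI_OK &&
            NvAPI_DRS_SaveSettings(session) == NVAPI_OK)
            std::printf("SLI bits written to the %ls profile\n", kProfileName);
    }

    NvAPI_DRS_DestroySession(session);
    NvAPI_Unload();
    return 0;
}
```

Inspector is doing the same kind of write under the hood; scripting it is only worth it if you want to flip bits back and forth without opening the GUI every time.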

I'm under the impression that with DXR you cannot use SLI at the same time.
 
DXR on Low is playable in 4K, but having it off with all the post-processing on looks better. DXR on Ultra is where it's at, though. I have DXR off for now until they can get SLI + DXR working.

Also, I noticed higher input lag with it on.
 
[attached screenshot: upload_2018-11-21_20-33-57.png]
 
This dude is claiming to have gotten 2080 Ti SLI working with DXR. What does he know?



I think he's just mistaken. He doesn't display his fps, so he's probably just capturing via ShadowPlay and doesn't see his actual GPU usage. From my understanding, with DX12 and Vulkan the game has to be specifically programmed to use multi-GPU. The good news is that, in the future, you might not need SLI and could mix and match video cards (so one 1080 Ti plus one 2080 Ti would run faster than a single 2080 Ti). Right now it only works with classic SLI setups.

Unfortunately, this requires a fair bit of extra work to get going, so no modern engine will support it.

https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12
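For context on why the developer has to opt in, here's a minimal sketch of the "linked node" path that page describes: under DX12, an SLI pair shows up as one device with multiple nodes, and the engine has to explicitly create its queues (and allocators, heaps, etc.) per node via NodeMask, then decide how to split the frame between them. Error handling is trimmed, and it assumes the first DXGI adapter is the SLI-linked device.

```cpp
// Minimal sketch of DX12 "linked node" (explicit multi-GPU) discovery.
// This is only the plumbing step; actual AFR scheduling and cross-node
// copies are engine work, which is why games must be written for it.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);                // first hardware adapter (assumed SLI-linked)

    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // With SLI enabled, the bridge-linked GPUs appear as one device with
    // multiple "nodes"; without it this is simply 1.
    UINT nodeCount = device->GetNodeCount();
    std::printf("GPU nodes visible to D3D12: %u\n", nodeCount);

    // The engine must explicitly target each physical GPU through NodeMask.
    for (UINT node = 0; node < nodeCount; ++node)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node;                     // one bit per physical GPU

        ComPtr<ID3D12CommandQueue> queue;
        if (SUCCEEDED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue))))
            std::printf("  created direct queue on node %u\n", node);
    }
    return 0;
}
```

The mix-and-match case is the other flavor (unlinked multi-adapter, one ID3D12Device per GPU), but either way it's all engine-side work rather than something the driver can bolt on, which is the point being made above.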
 
Nvidia would never allow mix and match - but at the very least I think multi-GPU will still be relevant in the future because of ray tracing. I think we're close to the point where folks will start dropping in pure compute/ray-tracing cards. Then, eventually, once the full render path is ray traced, we'll be back to a single card taking care of it all.
 
If it's all done at the DX12 level, it shouldn't matter at that point, right?

That's right, with the DX12 version the game devs have to get multi-GPU working themselves; there's nothing for Nvidia to do. Same goes for Crossfire. That's one of the biggest reasons I think it's on a hiding to nothing: one or both of them will have to pay the game companies to implement it.
 
There are some developers that care. Tomb Raider has had great mGPU support. But most don't bother.

Yeah, both that engine and the one Rebellion uses for their games work great for multi-GPU. It's insane that big-money AAA studios like DICE can't do the same with their technology.
 
Based on the many posts I've read, I think you're in the majority. I would have preferred it if nVidia had used standard compute functions instead of dedicated cores so AMD could join the party. Then again, since AMD's cards tend to be better at compute, that might have been a very bad move on nVidia's part, for nVidia. :whistle:
They did use standards. It's part of DX12.1.
 
While true, I meant that I was hoping that nVidia would use standard compute instead of dedicated hardware. Maybe it's not possible and that's why they went the route they did. My understanding (maybe incorrect?) was that the raytracing demoed was built specifically for the RT cores and would have to be developed for once AMD puts the hardware out there. If it were simply built on DX standards, all AMD would need to do is release the hardware and it would be compatible instead of the game developers having to go back to the drawing board and support the AMD side.

Again, maybe I'm completely misunderstanding the situation... :oldman:
 

I think you're misunderstanding. If AMD had their own ray-tracing hardware, all that would happen is that Microsoft's DXR would call into AMD's ray-tracing implementation instead of Nvidia's.
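You can see that in how the API is shaped: a game never asks for "RTX", it asks plain D3D12 whether a raytracing tier is supported, and whichever vendor's driver sits underneath answers. A minimal sketch (default adapter, basic error handling only):

```cpp
// Minimal sketch: DXR support is queried through the vendor-neutral DX12
// feature API, so hardware from a different vendor would light up the exact
// same check without the game code changing.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return 1;                                        // default adapter

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))))
    {
        if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
            std::printf("DXR tier 1.0+ reported, whoever made the GPU\n");
        else
            std::printf("No DXR support reported by this driver/GPU\n");
    }
    return 0;
}
```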
 
Except DICE hasn't gotten DX12 working right so far. Not all AAA developers are created equal. In my opinion, if anyone is going to get ray tracing right, it's id.

So much this. Real-time global illumination is going to be the real game changer. Once a game comes out that is completely ray traced instead of a mix of rasterization and ray tracing, I bet people will change their tune.


Sure they will - and that game will debut about 2-? years after the release of a console with ray-tracing hardware... but until then, you'd better get used to the tune.
 