Rise of the Tomb Raider DX11 vs. DX12 Review @ [H]

DX12 SLI is a disaster for me with this patch. Extremely choppy.

Also, it seems the game force-disables the Steam overlay, and ReShade was causing a crash on startup.

My 980s are at their normal boost clock, but power usage doesn't exceed 50%.
 
you don't see the difference between the systems so never mind then. I'm just wasting my time and keystrokes...
Interesting.
You came in saying I'm doing it wrong and can't be bothered to do it right.
I asked you what you would like to see and you don't even know.
You are correct, you wasted your time and keystrokes.

Perhaps you would like to demonstrate your intellectual acuity by describing what is irking you without wasting keystrokes this time?
 
Interesting.
You came in saying I'm doing it wrong and can't be bothered to do it right.
I asked you what you would like to see and you don't even know.
You are correct, you wasted your time and keystrokes.

Perhaps you would like to demonstrate your intellectual acuity by describing what is irking you without wasting keystrokes this time?

You're also confused; you misinterpreted his words. He was referring to noko's post, the one I'm gonna quote below:

Patch 7 results:

Configuration tested is in my sig

Gigabyte 1070 G1, AMD FX 9590, using Gigabyte default OC mode for the card

2560x1440, max settings except textures on high, SMAA, using the built-in benchmark

DX11 / DX12

Mountain Peak: 77.8 / 78.4
Syria: 60.1 / 60.1
Geothermal Valley: 56.7 / 57.6
Overall: 65.1 / 65.7
PowerColor R9 Nano, i7 6700K, +30% on PowerTune in the driver

2560x1440, max settings except textures on high, SMAA, using the built-in benchmark

DX11 / DX12

Mountain Peak: 59.2 / 55.2
Syria: 44.7 / 41.27
Geothermal Valley: 42.1 / 41.3
Overall: 49 / 47.4
The 1070 actually performed slightly better going from DX 11 to DX 12, probably within margin of error, while the Nano lost performance despite the better CPU. One explanation is that DX 12 made better use of the FX 9590, increasing the 1070's performance. In an SLI/CFX configuration, DX 12's increased CPU utilization may show more clearly.

As a note, using very high textures with the 1070 didn't seem to affect performance at all; for the Nano it significantly decreased performance in these benchmarks.

As far as I can understand, he wasn't mentioning you or your post, Nenu; it all related to the method used by noko.
 
You're also confused; you misinterpreted his words. He was referring to noko's post, the one I'm gonna quote below:

As far as I can understand, he wasn't mentioning you or your post, Nenu; it all related to the method used by noko.
lol, you might be right.
He responded (post 165) as though the intended recipient had replied, though, and that wasn't noko.

edit: corrected "was" to "wasn't", dur.
 
thank you araxie!
Yes, I am/was talking about/to noko. I just wanted him to swap video cards between systems to see the results, but he said he cannot be bothered. Somewhere along the line Nenu and mrb got confused (I forgot to quote noko) and I got frustrated trying to explain. Nothing personal and no offence intended (none comprehended! anybody get that?) to anybody.
 
thank you araxie!
Yes, I am/was talking about/to noko. I just wanted him to swap video cards between systems to see the results, but he said he cannot be bothered. Somewhere along the line Nenu and mrb got confused (I forgot to quote noko) and I got frustrated trying to explain. Nothing personal and no offence intended (none comprehended! anybody get that?) to anybody.

I understood. =) hehehe. I was just trying to avoid a Third World War here.. wars can start from a wrong message. xD
 
hehehe you confused him with Pieter3dNow... just saying..
The obtuse one here talking BS without any knowledge of modern gaming is Pieter3dNow, not pendragon1... Just saying..

Well, you can explain to him the difference between DX11 and DX12, because you are the fountain of all knowledge regarding the technical differences of both APIs.
And yes, DX11 can't scale, hence DX12, dear god...
 
Well, you can explain to him the difference between DX11 and DX12, because you are the fountain of all knowledge regarding the technical differences of both APIs.
And yes, DX11 can't scale, hence DX12, dear god...
You are attributing an issue where there isn't one.
In post 152 you said
There is no DX11 engine that can do more than 4-5 cores without a negative scaling impact ..
I demonstrated that TR can and does. The CPU use graph was posted as direct proof that all 8 cores are well utilised.
Why do you think this is not representative?
I can disable HT and post the CPU/GPU and framerate graphs with and without HT, if you like.

In post 154 I asked you to try it for yourself.
Here you are again with the same misinformation, so you haven't tried it yet.
It's better to know from direct experience.

Try TR with DX11 on a 4 core + HT CPU or an 8 core and you will see that it scales to 8 cores easily.
Post your CPU use graph for the benchmark as evidence.

When you have finished, please answer post 158.
 
Well, you can explain to him the difference between DX11 and DX12, because you are the fountain of all knowledge regarding the technical differences of both APIs.
And yes, DX11 can't scale, hence DX12, dear god...

Rise of the Tomb Raider, Shadow of Mordor, Watch Dogs, Dragon Age: Inquisition, GTA V, Crysis 3, Hitman (also Absolution) and a substantial number of other games will use 8 threads easily without the need for DX12. DX12 helps only weak AMD CPUs, that's all, as has already been shown with every DX12 game available; none of them bring anything important except to help AMD with their poor DX11 performance.

Sorry, I'm not the one here who needs proof to validate his claims; it's you, always talking without any proof, with claims here and there of DX11 limitations and DX12 advantages that don't exist yet in any game except on AMD hardware, as Intel and Nvidia work as intended without it.
 
Rise of the Tomb Raider, Shadow of Mordor, Watch Dogs, Dragon Age: Inquisition, GTA V, Crysis 3, Hitman (also Absolution) and a substantial number of other games will use 8 threads easily without the need for DX12. DX12 helps only weak AMD CPUs, that's all, as has already been shown with every DX12 game available; none of them bring anything important except to help AMD with their poor DX11 performance.

Sorry, I'm not the one here who needs proof to validate his claims; it's you, always talking without any proof, with claims here and there of DX11 limitations and DX12 advantages that don't exist yet in any game except on AMD hardware, as Intel and Nvidia work as intended without it.
Nvidia 970s in SLI are showing bigger gains with DX 12 over DX 11 in ROTTR, so it's not just an AMD thing here, as in a weak GPU.

DX 11 is single-threaded (read: one core) when it comes to submitting work to the video card; DX 12 can use multiple cores to send data to the video card. Yes, you're right, the game can use multiple threads/cores for its own processing, but once it has to send that information to the video card in DX 11, it can only be done from a single thread, which limits draw calls tremendously.
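
To illustrate the gap (a minimal sketch of my own, not code from the game; assumes an already-initialized ID3D12Device and ID3D12CommandQueue, error handling omitted): in DX12 each worker thread records its own command list, and only the final submission is a single serialized call.

```cpp
// Sketch: multithreaded D3D12 command recording -- the part DX11 forces
// through one thread via the immediate context.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

constexpr int kWorkers = 4;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12CommandAllocator> alloc[kWorkers];
    ComPtr<ID3D12GraphicsCommandList> list[kWorkers];
    for (int i = 0; i < kWorkers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&alloc[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  alloc[i].Get(), nullptr,
                                  IID_PPV_ARGS(&list[i]));
    }

    // Each worker records its draw calls independently, in parallel.
    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkers; ++i) {
        workers.emplace_back([&, i] {
            // ... list[i]->SetPipelineState(...), DrawIndexedInstanced(...)
            list[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Submission itself is one cheap call, not the bottleneck.
    ID3D12CommandList* raw[kWorkers];
    for (int i = 0; i < kWorkers; ++i) raw[i] = list[i].Get();
    queue->ExecuteCommandLists(kWorkers, raw);
}
```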
 
They are using homogeneous mGPU so the cards must be near-identical.
Can you link me to that information?

Would kinda explain what I got with a 290x and 1070 together, except for one 290x run which did abnormally well.

As for putting the 1070 into the i7 6700K rig to retest - it will not fit. It's a custom-made SFF case; the best I could do is put the Nano in the FX 9590 case, which I see is pointless and not worth the effort. DX 11 to DX 12 performance showed me Pascal gains with the AMD CPU (unknown whether it gains with an Intel processor), while Fiji going from DX 11 to DX 12 loses with the Intel CPU.

Now I do have a 290x and 1070 in the same case with the FX 9590. I will see how the 290x does with DX 11 and DX 12. Not sure if EMA is viable yet between two such different cards. Just for fun.
 
Can you link me to that information?

It was in the MSDN article you posted.

Homogeneous mGPU refers to a hardware configuration in which you have multiple GPUs that are identical (and linked). Currently, this is what most people think of when ‘MultiGPU’ is mentioned. Right now, this is effectively direct DX12 control over Crossfire/SLI systems. This type of mGPU is also the main focus of this post.
...
The developers of Rise of the Tomb Raider on PC have implemented DX12 explicit homogeneous mGPU...
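
In API terms (a rough sketch of my own, not from the article; assumes an already-created device), a linked homogeneous pair shows up as one D3D12 device with multiple nodes, and work is pinned to a specific GPU via a node mask:

```cpp
// Sketch: detecting a linked (SLI/CrossFire-style) adapter in D3D12.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

void ReportLinkedNodes(ID3D12Device* device)
{
    // A linked homogeneous pair reports GetNodeCount() == 2.
    const UINT nodes = device->GetNodeCount();
    std::printf("Linked GPU nodes: %u\n", nodes);

    if (nodes > 1) {
        // Bit i of NodeMask selects node (GPU) i; this queue feeds
        // the second GPU only.
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << 1;
        ID3D12CommandQueue* queue = nullptr;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        if (queue) queue->Release();
    }
}
```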
 
It was in the MSDN article you posted.
Must have missed that. I am going to have to retest with the Nano; I don't trust the results from the 16.6.2 drivers. 16.7.2 is much better. Played Doom with Vulkan for over 2 hours today after getting sidetracked. I will just take the 290x out of the machine after a few benchmark runs with ROTTR.
 
Just ran the benchmark in DX11 vs DX12 at 1440p, vsync off, SMAA, very high preset, and tested with PresentMon:

[image: d0xUKhL.jpg]


980 Ti @ 1450/7400, 5820K @ 4.3GHz
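
For anyone repeating this, here's a rough sketch (my own, not the script used above) of turning a PresentMon CSV into the usual average and 1%-low numbers; it assumes the standard MsBetweenPresents column, whose position can vary between PresentMon versions:

```cpp
// Sketch: summarize frame times from a PresentMon CSV capture.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: summarize <capture.csv>\n"; return 1; }
    std::ifstream csv(argv[1]);
    std::string line, cell;

    // Find the MsBetweenPresents column in the header row.
    std::getline(csv, line);
    std::stringstream header(line);
    size_t col = 0;
    for (size_t i = 0; std::getline(header, cell, ','); ++i)
        if (cell == "MsBetweenPresents") col = i;

    std::vector<double> ms;
    while (std::getline(csv, line)) {
        std::stringstream row(line);
        for (size_t i = 0; std::getline(row, cell, ','); ++i)
            if (i == col) ms.push_back(std::stod(cell));
    }
    if (ms.empty()) return 1;

    double total = 0;
    for (double m : ms) total += m;
    std::sort(ms.begin(), ms.end()); // ascending frame times

    std::cout << "Avg FPS: " << 1000.0 * ms.size() / total << "\n"
              << "1% low:  " << 1000.0 / ms[ms.size() * 99 / 100] << "\n";
}
```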
 
Reran the tests on the Nano, with slightly different settings:
2560x1440, Nano with +25% PowerTune, 16.7.2 drivers
All settings on high, textures on High (ran it later with very high textures and the Nano did not stutter like it used to)

DX 11
Mountain Peak: 61.85
Syria: 46.51
Geothermal Valley: 43.59
Avg: 50.91

DX 12
Mountain Peak: 56.75
Syria: 46.74
Geothermal Valley: 42.63
Avg: 48.82

Slightly slower in DX 12 average-wise, but the benchmark indicates higher minimum frame rates with DX 12. So for this game it is a wash between DX 11 and DX 12 for the Nano.
 
Well, you can explain to him the difference between DX11 and DX12, because you are the fountain of all knowledge regarding the technical differences of both APIs.
And yes, DX11 can't scale, hence DX12, dear god...
DirectX 11 supports multithreaded rendering and concurrent object creation. The device drivers have to support it. Remember the NVIDIA 337.50 driver? One of the things they did in that release that contributed to the huge performance gains was enabling multithreaded rendering support for DX11. DX11 by default is set up to be thread-safe unless the programmer passes the D3D11_CREATE_DEVICE_SINGLETHREADED flag at device creation to disable that synchronization and free up access to memory resources. There is no limit inherent in the API to the number of threads you can create. DirectX 9, in fact, supported multithreading, but the overhead at the time was so great that performance was actually degraded if it was enabled.
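
Concretely, the mechanism looks like this (a minimal sketch of my own, not production code; error handling omitted): worker threads record into deferred contexts, but playback still funnels through the single immediate context, which is the serialization point being argued about above.

```cpp
// Sketch: DX11 multithreaded rendering via deferred contexts. Assumes a
// device created WITHOUT D3D11_CREATE_DEVICE_SINGLETHREADED, i.e. the
// default thread-safe device.
#include <windows.h>
#include <d3d11.h>
#include <thread>

void RenderWithDeferredContexts(ID3D11Device* device,
                                ID3D11DeviceContext* immediate)
{
    ID3D11DeviceContext* deferred[2] = {};
    ID3D11CommandList* cmd[2] = {};
    for (int i = 0; i < 2; ++i)
        device->CreateDeferredContext(0, &deferred[i]);

    // Worker threads record commands in parallel...
    std::thread workers[2];
    for (int i = 0; i < 2; ++i) {
        workers[i] = std::thread([&, i] {
            // ... deferred[i]->Draw(...) etc.
            deferred[i]->FinishCommandList(FALSE, &cmd[i]);
        });
    }
    for (auto& w : workers) w.join();

    // ...but playback still goes through the one immediate context,
    // so actual GPU submission remains single-threaded.
    for (int i = 0; i < 2; ++i) {
        immediate->ExecuteCommandList(cmd[i], TRUE);
        cmd[i]->Release();
        deferred[i]->Release();
    }
}
```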
 
Is it just me, or can I still not see any difference between the APIs?
There is a difference: you don't have the highest AO setting with DX12.
It's a backward step unless you have a slower CPU and can't use VXAO anyway.
 
980Ti @ 1463/7500 - 5820K @ 4375

DX11 - Very High preset + VXAO + SMAA 1440p
[image: j4drbPN.png]

DX11 - Very High preset + HBAO+ + SMAA 1440p
[image: TgamgbW.png]

DX12 - Very High preset + HBAO+ + SMAA 1440p
[image: IPLacjX.png]
 
Looks like DX 12 does have some legs on Nvidia hardware. I just realized I haven't tested the 1070 yet - hmmm.
 
Looks like DX 12 does have some legs on Nvidia hardware. I just realized I haven't tested the 1070 yet - hmmm.

The weird thing is, on the UWP version I've tested so far, I NEVER got higher average FPS in DX12 vs DX11; best case was roughly equal averages with higher minimums. This time it's the Steam version I tested.

I used driver 369.09 btw
 
The weird thing is, on the UWP version I've tested so far, I NEVER got higher average FPS in DX12 vs DX11; best case was roughly equal averages with higher minimums. This time it's the Steam version I tested.

I used driver 369.09 btw
Are both on the same game version? I also think you mean 369.05, which is the driver released for Pitan.
 
Are both on the same game version? I also think you mean 369.05, which is the driver released for Pitan.

Nope it's 369.09
http://download.windowsupdate.com/c..._f44d993a4269ca64c6cbf10ab7ec27a0231161e1.cab :) It came via the W10 Anniversary Update; NV sent it to MS for WHQL but oddly didn't publish it themselves.

The UWP game version I was testing should be the same, but VXAO was missing, for example, so I don't know. Either way, there's a ~9% improvement going to DX12 for me, and the minimums are in far better shape.
 
Nope it's 369.09
http://download.windowsupdate.com/c..._f44d993a4269ca64c6cbf10ab7ec27a0231161e1.cab :) It came via the W10 Anniversary Update; NV sent it to MS for WHQL but oddly didn't publish it themselves.

The UWP game version I was testing should be the same, but VXAO was missing, for example, so I don't know. Either way, there's a ~9% improvement going to DX12 for me, and the minimums are in far better shape.
ManuelG did say there was a new global driver releasing in the middle of this month, so maybe .09 is it. Still weird that MS would release it before NVIDIA does.
 
I think that's the same driver that Windows installed when I put the 1070 in (Windows x64 Insider Edition). When I ran 3DMark it said my driver wasn't approved. :notworthy:
 
I just ran the integrated benchmark yesterday on my XFX Fury X at 1440p (i7 4770k and 16GB RAM) with all settings maxed, but turned off PureHair and AA.

DX12 was now 5fps faster than DX11 on my machine.

65 FPS vs 60 FPS average. No stuttering whatsoever in DX12.

IIRC DX11 used to be faster than DX12 on my setup in this game when I first played it several months back.


Edit-

Just went back and looked at the review:
DX12 was around 45fps in the review and DX11 was around 55fps.

I'll run the benches again with everything maxed out and see how much has changed with patches and driver updates. My 4770k is overclocked to 4.5GHz but still slower than the 4.7GHz sixth-gen i7 that HardOCP used in the benches.
 
What is flipping back and forth there?

Neither one of those looks any different from the other. It's just that a gust of dust moves more visibly in one than the other.

Nah, they're different. Look at the shadows starting from the bottom left corner, in that sort of crevice in the cliff wall. When you hover over it, it switches to HBAO+; you can also tell by the framerate.

Here is SSAO vs HBAO+ (I turned on SMAA along with HBAO+):
Tomb : Screenshot Comparison

VXAO vs HBAO+ again

ROTTR : Screenshot Comparison

Concentrate on the column on the left, just after that door/grate; you can tell the corners are much better shaded with VXAO.

ROTTR : Screenshot Comparison
 
Screenshots never do games justice either, although those do look beautiful.

The Witcher 3, for example, looks kinda meh in screenshots, but when you're actually playing it, it's beautiful.
 
don't sweat the small stuff.
Techgage Image - Rise of the Tomb Raider - Ambient Occlusion Comparisons

I miss the BIG changes with technology like S3TC

This S3TC comparison is on the Viper S2000 (an S3 Savage2000-based card) from back circa 1999; it had S3TC, which made textures look so incredible back then :)
January 2002 3Digest - S3TC (FXT1) influence on speed and quality.
Unreal Tournament boxes looked like this:
[image: ut-nos3tc1.jpg]

unless you had S3TC - and then they looked like THIS!
[image: j7dg1c.jpg]
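
For anyone curious why S3TC was such a big deal: DXT1 packs each 4x4 texel block into two RGB565 endpoint colors plus 2-bit per-texel palette indices, 8 bytes instead of 48 for raw RGB888, a 6:1 saving. A rough decoder sketch of my own (not from the linked articles):

```cpp
// Sketch: decode one DXT1 (S3TC) block -- 8 bytes -> 16 RGB texels.
#include <cstdint>

struct RGB { uint8_t r, g, b; };

static RGB From565(uint16_t c)
{
    return { uint8_t(((c >> 11) & 31) * 255 / 31),
             uint8_t(((c >> 5)  & 63) * 255 / 63),
             uint8_t(( c        & 31) * 255 / 31) };
}

void DecodeDXT1Block(const uint8_t block[8], RGB out[16])
{
    // Two endpoint colors, little-endian RGB565.
    const uint16_t c0 = block[0] | (block[1] << 8);
    const uint16_t c1 = block[2] | (block[3] << 8);
    const RGB e0 = From565(c0), e1 = From565(c1);

    RGB palette[4] = { e0, e1 };
    if (c0 > c1) { // 4-color mode: interpolate at 1/3 and 2/3
        palette[2] = { uint8_t((2 * e0.r + e1.r) / 3),
                       uint8_t((2 * e0.g + e1.g) / 3),
                       uint8_t((2 * e0.b + e1.b) / 3) };
        palette[3] = { uint8_t((e0.r + 2 * e1.r) / 3),
                       uint8_t((e0.g + 2 * e1.g) / 3),
                       uint8_t((e0.b + 2 * e1.b) / 3) };
    } else {       // 3-color mode: midpoint + (normally transparent) black
        palette[2] = { uint8_t((e0.r + e1.r) / 2),
                       uint8_t((e0.g + e1.g) / 2),
                       uint8_t((e0.b + e1.b) / 2) };
        palette[3] = { 0, 0, 0 };
    }

    // 32 bits of 2-bit indices, texel 0 in the lowest bits.
    const uint32_t bits = block[4] | (block[5] << 8) |
                          (block[6] << 16) | (uint32_t(block[7]) << 24);
    for (int i = 0; i < 16; ++i)
        out[i] = palette[(bits >> (2 * i)) & 3];
}
```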
 
Is the Vulkan patch coming? DX12 sux, at least with this game; I think the patch was a waste of time.
 