Rise of the Tomb Raider DX11 vs. DX12 Review @ [H]

How much do options such as HBAO+/VXAO/etc. affect memory usage?

Cheers

I bought the UWP version; I don't think it ever got the VXAO patch. I tried looking for patch notes for it but nothing was to be found, lol. I haven't actually found anyone else with the UWP version to ask :p

Even benchmarking is a pain in the ass: the game uses double buffering by default, so unless I set my refresh rate to something high it doesn't even stress the GPU.
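
For anyone else fighting the double-buffering cap, the mechanics are simple to show in code. A minimal sketch, assuming an ordinary DXGI swap chain (the function names here are illustrative, not from the game):

```cpp
#include <dxgi.h>

// Sketch only (not the game's actual code): with a double-buffered swap
// chain, presenting with sync interval 1 blocks on vblank, so once the GPU
// is a frame ahead it just idles -- framerate and GPU load are capped at
// the display's refresh rate. Presenting with sync interval 0 removes the
// cap, which is why raising the refresh rate (or forcing vsync off) is
// needed before the benchmark actually stresses the GPU.
HRESULT PresentVsynced(IDXGISwapChain* swapChain)
{
    return swapChain->Present(1, 0); // capped at the refresh rate
}

HRESULT PresentUncapped(IDXGISwapChain* swapChain)
{
    return swapChain->Present(0, 0); // GPU runs flat out
}
```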
 
You must be 1 of about 10 who bought that on UWP :D
Cheers
 
I am surprised Tomb Raider was neglected, as GOW Ultimate and Quantum Break were both updated to fix their UWP issues.
 
I bought the UWP version; I don't think it ever got the VXAO patch. I tried looking for patch notes for it but nothing was to be found, lol. I haven't actually found anyone else with the UWP version to ask :p

Even benchmarking is a pain in the ass: the game uses double buffering by default, so unless I set my refresh rate to something high it doesn't even stress the GPU.

I have the UWP version. If you run it with all of the graphics settings at max on a 3x 1080p setup, it'll stress any GPU. There's no way my old GTX 680s could handle this; a GTX 1080 does pretty nicely. Everything maxed in DX12 using FXAA at 5760x1080, it's pretty amazing.
 
You must be 1 of about 10 who bought that on UWP :D
Cheers

For $9 while pretending to be in Ukraine, it was almost literally a steal. Lol.
I have the UWP version. If you run it with all of the graphics settings at max on a 3x 1080p setup, it'll stress any GPU. There's no way my old GTX 680s could handle this; a GTX 1080 does pretty nicely. Everything maxed in DX12 using FXAA at 5760x1080, it's pretty amazing.

Lol. I think that's key right there: 5760x1080. At 1440p max settings, parts of the benchmark have the GPU sitting at 70% usage.
 
Seriously though, I saw the very first reddit thread reporting the Ukraine workaround, and my first thought was

'let me prove this asshole wrong, no way in hell this is going to work'

And it did.

Congratulations, Microsoft: stealing from you is easier than stealing candy from a baby.

This is one of the highest-value purchases ever, up there with my 2 euro brand-new BioShock collector's edition.
 
It was mentioned in one of the recent Kotaku articles that the Steam version is 98% of sales. Putting this together with the SteamSpy estimate, the UWP version is in the 15k sales range, and that's after the Ukraine exploit and before people learned how bad UWP is.
 
Good article. I would like to see it updated with the latest batch of cards (GTX 1070/1080 and the upcoming RX 480).
 
Sorry for necroing this sticky, but a quick question about DX12 vs DX11 (I don't own any DX12 games currently available).

At the exact same settings, is there any difference between DX12 and DX11 in terms of visual quality?
 
Nope.

Actually, in DX11 you have an extra graphics option.
 
Patch 7 results:

Configuration tested is in my sig.

Gigabyte GTX 1070 G1, AMD FX 9590, using Gigabyte's default OC mode for the card
2560x1440, max settings except textures on High, SMAA, using the built-in benchmark

Scene (fps, DX11 / DX12):
Mountain Peak:     77.8 / 78.4
Syria:             60.1 / 60.1
Geothermal Valley: 56.7 / 57.6
Overall:           65.1 / 65.7

PowerColor R9 Nano, i7-6700K, +30% on PowerTune in the driver
2560x1440, max settings except textures on High, SMAA, using the built-in benchmark

Scene (fps, DX11 / DX12):
Mountain Peak:     59.2 / 55.2
Syria:             44.7 / 41.27
Geothermal Valley: 42.1 / 41.3
Overall:           49 / 47.4
The 1070 actually performed slightly better going from DX11 to DX12, though probably within the margin of error, while the Nano lost performance despite being paired with the better CPU. One explanation is that DX12 made better use of the FX 9590, lifting the 1070's numbers. In an SLI/CFX configuration, DX12's increased CPU utilization may show itself more clearly.
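
To put rough numbers on "however slight", here is the swing the overall figures above work out to (a quick sanity-check calculation, nothing more):

```cpp
#include <cstdio>

int main()
{
    // Overall fps from the runs above.
    const double gtx1070_dx11 = 65.1, gtx1070_dx12 = 65.7;
    const double nano_dx11 = 49.0, nano_dx12 = 47.4;

    // DX11 -> DX12 swing as a percentage.
    std::printf("GTX 1070: %+.1f%%\n", (gtx1070_dx12 / gtx1070_dx11 - 1.0) * 100.0); // about +0.9%
    std::printf("R9 Nano:  %+.1f%%\n", (nano_dx12 / nano_dx11 - 1.0) * 100.0);       // about -3.3%
    return 0;
}
```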

As a note, using Very High textures with the 1070 didn't seem to affect performance at all; for the Nano it significantly decreased performance in these benchmarks.
 

VRAM being the culprit?
 
Yes, very much so; Very High textures eat a lot of VRAM. The 290X choked using Very High textures, while the Nano just slows down but is still smooth for the most part. On the 1070 it just improves the quality a little with no noticeable performance loss.
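
This is easy to check directly: the Nano has 4GB of HBM versus 8GB on the 1070, which lines up with the behaviour described above. A minimal sketch of how one could watch VRAM pressure while flipping the texture setting, assuming Windows 10 and a DXGI 1.4 adapter (error handling omitted):

```cpp
#include <dxgi1_4.h>
#include <cstdio>

// Print current local (on-card) video memory usage against the OS budget.
// QueryVideoMemoryInfo requires DXGI 1.4 (Windows 10).
void PrintVramUsage(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);
    std::printf("VRAM in use: %llu MB of %llu MB budget\n",
                info.CurrentUsage / (1024ull * 1024ull),
                info.Budget / (1024ull * 1024ull));
}
```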
 
Those systems are pretty different, might show different results... And it might show that the 9590 is actually comparable to the i7, despite what others think.
:LOL: Nope, too much of a pain for that. Benchmark performance between the two machines mostly shows that DX12 is not helping AMD but the opposite.
 
Those systems are pretty different, might show different results... And it might show that the 9590 is actually comparable to the i7, despite what others think.
:LOL: Nope, too much of a pain for that. Benchmark performance between the two machines mostly shows that DX12 is not helping AMD but the opposite.

If you could see the saturation of the 8 cores on the AMD machine, then it would have made a difference, since DX12 allows more cores to be used, and the Nano would have given different results on it...
 
I'm not so sure.
Running the benchmark in DX11 at 1080p with a 6600K @ 4.7GHz + an overclocked 980 Ti, there were a few short periods when all cores would max out and the framerate would stutter.
Changing to a 6700K @ 4.6GHz, in the same place all cores + HT cores still maxed out, but for a fraction of the time, and the framerate looked smooth throughout.

DX11 can make use of 8 cores already.
Changing to DX12 will surely not help.
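
There is some middle ground here: DX11 does have an official multithreading path, deferred contexts, which is consistent with seeing all cores light up. The catch is that playback still funnels through the single immediate context. A minimal sketch of the pattern (illustrative only, not from any particular engine):

```cpp
#include <d3d11.h>

// Each worker thread records draw calls into its own deferred context...
void RecordOnWorker(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... record state changes and draw calls on 'deferred' here ...

    deferred->FinishCommandList(FALSE, outList);
    deferred->Release();
}

// ...but every recorded list is replayed on the one immediate context,
// which is where DX11's multithreaded scaling tends to flatten out.
void SubmitOnMain(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, TRUE);
    list->Release();
}
```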
 
It's funny how someone can post two different Firestrike systems and everyone is all "but they are different systems!!", but this is cool...
 
Well, I am hoping Brent or Kyle has this in the upcoming RX 480 CFX review, that is, if CFX/SLI works in DX12. Also two 970s/980s in SLI.
 
It's funny how someone can post two different Firestrike systems and everyone is all "but they are different systems!!", but this is cool...
Well, for that game and on those systems, that is the result. You can compare one system to the other. What it shows is that in this game DX12 improves performance somewhat over DX11 for Pascal and not for GCN 1.2. Yes, I acknowledge that maybe with DX12 the CPU was less of a bottleneck for the 1070, which still proves DX12 is improving performance. If I downclocked my FX 9590 that might prove it one way or the other; same with the i7-6700K. Better yet, let Brent and Kyle plow through this and clearly review the results when they get to it.
 
I'm not so sure.
Running the benchmark in DX11 at 1080p with a 6600K @ 4.7GHz + an overclocked 980 Ti, there were a few short periods when all cores would max out and the framerate would stutter.
Changing to a 6700K @ 4.6GHz, in the same place all cores + HT cores still maxed out, but for a fraction of the time, and the framerate looked smooth throughout.
DX11 can make use of 8 cores already.
Changing to DX12 will surely not help.

Well, this is not true :) You might see some activity, but it does not work like that. There is no DX11 engine that can use more than 4-5 cores without a negative scaling impact.
For technical reasons, DX12 can use all the cores all the time and feed the GPU from every core; this is the difference between the two APIs.
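
The claim about DX12 can at least be made concrete: each thread records into its own command allocator and command list with no device-wide lock, and the queue accepts all of them in one submission, with no single immediate context in the way. A minimal sketch, with pipeline state, fences, and error handling omitted:

```cpp
#include <windows.h>
#include <d3d12.h>

// Per-thread recording: each worker owns an allocator + command list and
// records its slice of the frame independently of the other cores.
void RecordOnWorker(ID3D12Device* device,
                    ID3D12CommandAllocator** outAlloc,
                    ID3D12GraphicsCommandList** outList)
{
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(outAlloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              *outAlloc, nullptr, IID_PPV_ARGS(outList));

    // ... record this thread's draw calls here ...

    (*outList)->Close();
}

// One call hands the queue every thread's recorded work at once.
void SubmitAll(ID3D12CommandQueue* queue,
               ID3D12CommandList* const* lists, UINT count)
{
    queue->ExecuteCommandLists(count, lists);
}
```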
 
Well, for that game and on those systems, that is the result. You can compare one system to the other. What it shows is that in this game DX12 improves performance somewhat over DX11 for Pascal and not for GCN 1.2. Yes, I acknowledge that maybe with DX12 the CPU was less of a bottleneck for the 1070, which still proves DX12 is improving performance. If I downclocked my FX 9590 that might prove it one way or the other; same with the i7-6700K. Better yet, let Brent and Kyle plow through this and clearly review the results when they get to it.

Have you seen pendragon1's benchmarks on Ashes of the Singularity? He has two parts at the same clock speed, a 4-core and an 8-core, running the benchmark and getting nearly the same result on Crazy settings. So if you wanted to test anything on that Nvidia system, disable 4 cores...
 
Well, this is not true :) You might see some activity, but it does not work like that. There is no DX11 engine that can use more than 4-5 cores without a negative scaling impact.
For technical reasons, DX12 can use all the cores all the time and feed the GPU from every core; this is the difference between the two APIs.
Verify for yourself. All 8 cores of my 6700K get used almost identically.
If you don't manage it, I'll post a CPU usage graph.
Let me know if I should post it.
 
Yeah, there are several games I have tested that peg an i5 in spots, including Rise of the Tomb Raider, Batman: Arkham Knight, The Witcher 3, AC Syndicate, and AC Unity.
 
Well, this is not true :) You might see some activity, but it does not work like that. There is no DX11 engine that can use more than 4-5 cores without a negative scaling impact.
For technical reasons, DX12 can use all the cores all the time and feed the GPU from every core; this is the difference between the two APIs.
Decided to post it anyway to save further deviation.

GPU use increased and it became smoother moving from a 6600K to a 6700K.
No negative scaling.
All "8" cores used optimally.

[Image: TR bench cpu use vsync off.png]
PS: on Win7-64.

Corrected the image; the first one was vsync on.
 

It is too technical to explain, but the DX11 limitations and your screenshot have nothing to do with each other. Unless you are running it on an 8-core AMD CPU or an 8-core/16-thread Intel CPU, you are not going to see the difference...
 
I changed from 4 cores to 4 cores + 4 HT and saw a reduction in stutters, no negative scaling, and all 8 cores were well utilised when needed.
This directly opposes what you said.

Please explain further how what actually happens means what I experienced didn't really happen.
 
Well, for that game and on those systems, that is the result. You can compare one system to the other. What it shows is that in this game DX12 improves performance somewhat over DX11 for Pascal and not for GCN 1.2. Yes, I acknowledge that maybe with DX12 the CPU was less of a bottleneck for the 1070, which still proves DX12 is improving performance. If I downclocked my FX 9590 that might prove it one way or the other; same with the i7-6700K. Better yet, let Brent and Kyle plow through this and clearly review the results when they get to it.

To me, you are trying to show the difference but are not doing it right and cannot be bothered to do it right, so your examples are useless.
 
He stated his criteria; I showed that TR under DX11 can fulfil them.

What is missing?
 
To me, you are trying to show the difference but are not doing it right and cannot be bothered to do it right, so your examples are useless.
You should really just stop talking. There are DX11 games that will peg an i5, and that is a fact. And when the i5 is pegged it can cause some hitching and slowdowns that are not there when using an i7.
 
Why are you talking about an i5? Maybe you should stop. My point was the base systems are very different, and he's trying to show the difference but won't take the hour (at most) to swap video cards to make the testing more "scientific", for lack of a better term. I swapped CPUs to show differences (or lack thereof) in my AotS test...
 
Both my test systems were identical, apart from the 6700K being 100MHz slower, which should have made it worse if only 4 cores mattered. But it was better.
The same memory and GPU were used in both.
 
You don't see the difference between the systems, so never mind then. I'm just wasting my time and keystrokes...
 
Drop the obtuse act. You said that there is no DX11 engine that can scale beyond 4-5 cores, and that is BS. There are some games that peg an i5 and will perform better with an i7. I see 70-80% CPU usage in parts of some games on my 4770K.
idk wtf you're talking about, but I'm giving up.
 
Drop the obtuse act. You said that there is no DX11 engine that can scale beyond 4-5 cores, and that is BS. There are some games that peg an i5 and will perform better with an i7. I see 70-80% CPU usage in parts of some games on my 4770K.

Hehehe, you confused him with Pieter3dNow... just saying..

The one being obtuse here, talking BS without any knowledge of modern gaming, is Pieter3dNow, not pendragon1... just saying..
 