Rise of the Tomb Raider Graphics Features Performance @ [H]

Great game and thanks for the performance review.
But I get regular crashes related to memory use.
This started after finishing the first chapter.
From then on it completely filled my 16GB of memory and crashed after about 15 minutes, every time.
I enabled my swapfile and set it to auto-managed, and I now get about half an hour of play before it crashes, again with all memory filled.
MSI Afterburner says my gfx card memory use is over 4TB lol.

Basic PC specs
6600K
980 ti, latest driver and the one previous
16GB ram

Anyone else get this or similar?
 
I am getting a similar error message.
5930k
980ti Classy w/ the 362 driver that just came out a few days ago.
16GB RAM as well.
 
Any of you nice Titan X owners out there with 4K or surround and ESO/Witcher and the like up for maxing it all out and reporting VRAM usage?

I'm seeing 8.6GB just standing around. SLI Titan X, all settings at highest, SMAA, no vignette/motion blur and no film grain.
 
AMD indicated GDDR5 (I take it) is not as capable with dynamic RAM as HBM? The 390X results kinda show this, if the tool measuring it is accurate; now the question is why?

The only thing I can come up with is that HBM is such a wide bus that system RAM can write to the card's buffer and transfer 4096 bits of information (both ways with PCIe) per interrupt. For a 256-bit bus it would take 16 transfers/interrupts to accomplish the same memory transfer; for a 512-bit bus, 8 transfers (see the quick sketch below). Meaning HBM can spend more time moving data for the GPU and less time interrupted by system RAM transfers, making them have less of an impact. Maybe it is even more transparent than this: the draw calls etc. may never consume the whole 4096-bit bus width, so the memory transfers to dynamic RAM just use the bandwidth more effectively. I would be interested to see how cutting down the CPU cores available affects this - meaning, is AMD via the drivers using more threads? One CPU core or more to move memory around?
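A toy sketch of that transfer-count arithmetic, if it helps make it concrete. The 4096-bit chunk size is just the illustrative assumption from the paragraph above, not a measured figure:

```python
# Toy arithmetic only: how many bus-width transfers it takes to move one
# 4096-bit chunk between system RAM and the card buffer, per the argument
# above. The chunk size is an illustrative assumption, not a measurement.
CHUNK_BITS = 4096

for bus_width in (4096, 512, 256):  # HBM vs. 512-bit and 256-bit GDDR5
    transfers = CHUNK_BITS // bus_width
    print(f"{bus_width:>4}-bit bus: {transfers:>2} transfer(s) per chunk")
```

This prints 1 transfer for the 4096-bit HBM bus, 8 for 512-bit, and 16 for 256-bit, matching the numbers above.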

Still, working out how and what gets stored in system memory without causing stalls in a game, and actually achieving that in this game, is mind-blowing; it's awesome that AMD can do this.
 
The review is outstanding and very good at reporting results. I hope HardOCP can get some info from AMD about some of the results - like why the 390X is not using more RAM, and how HBM allows dynamic RAM to work better, etc.

Awesome job and very interesting.
 
What I don't quite understand is that in the initial Tomb Raider review you said a couple of things which contrast badly with this review.

first one:

second:

And then why is VRAM not an issue this time in this game, 14 days after the first review?
Ahmmm - you've had two game updates and now two driver hotfixes for AMD; the first hotfix from AMD addressed this game specifically. With changing drivers/code you can get dramatically different results.
 
The just-released Crimson 16.3 notes a 16% performance boost in Rise of the Tomb Raider. This just seems to be a very volatile period, with the new DX12 API plus a shift in how programming is done to optimize for the hardware and use available resources such as multiple cores. As HardOCP does future reviews, I can see results changing significantly given the fast pace of change going on.


AMD Radeon Software Crimson Edition 16.3 Release Notes
 
Credit to eloj (Rise of the Tomb Raider).
Patch #5 hit on Steam today, adding DX12 and VXAO to the game.

Brent, Kyle: will we be seeing any updates or a new article? I'd be particularly interested in seeing the differences between DX11 and DX12.
 
Well, after the patch the DX12 path does not support CFX, so my performance is way slower.

With a single 290X my performance went down 3fps compared to the DX11 path. The game now has a benchmark built in. CPU usage on my FX-9590 with DX12 goes up to 80% while DX11 never went above 50% - this did not change gameplay at all. 3440x1440 resolution, all settings maxed except High textures instead of Very High, and SMAA. Very High textures make the game very jerky - not enough video RAM.

I will try on my Nano setup later.

This is probably too early to do a review for this game using DX12.
 
Updated to the new patch, these are the avg fps results on Win7-64, DX11
All game settings maxed apart from using SMAA, no motion blur and no film grain.

After the update the game wouldn't run.
An error popped up with a webpage link and a challenge code.
Once the return code was entered, the game was allowed to run.

Machine specs:
1080p
GTX 980 Ti, stock 1304/3506, OC 1469/3954 (figures from the MSI Afterburner graph)
16GB DDR4-2900 CL15
6600K @ 4.7GHz

HBAO+
93.34 Stock
98.64 OC

VXAO
76.35 Stock
84.19 OC

When above 60fps without vsync there is a fair amount of hitching right after a level loads.
This is due to being CPU limited at times on all 4 cores, as well as almost completely GPU limited.
It is much worse when the gfx card is overclocked: more CPU is needed for the higher framerate, so the CPU flatlines more of the time.

With vsync at 60fps it's very smooth apart from the first second or two, averaging 59.55fps with no card OC.
GPU use touches 95% at one point but never maxes out.

Gfx memory maxes out at 6GB during the early part of the benchmark. VXAO doesn't seem to affect memory use.
It's hard to fully gauge the effect of VXAO on framerate because of the CPU limit.
I can't be bothered to do higher-res benchmarks, which would reduce the load on the CPU.
 
DX12 is way slower with the Fury X. I'm losing 7fps in Soviet Installation, it stutters a lot, and there are some graphics anomalies. Everything maxed except texture quality, which is set to High.
 

An interesting update.
I have been changing my gfx card's interrupt from the standard line-based IRQ to MSI (message signaled interrupts), but forgot to do it for my current video driver.
(It resets back to line-based when I install a new driver. It seems AMD automatically uses MSI, so this probably won't help AMD users.)

With MSI enabled, the benchmark no longer has hitches at the start of each level and the overall framerate is faster.
I repeated the last test and got another 5.5fps (+6.5%)
VXAO
89.66 OC


See this thread for more information.
Windows: Line-Based vs. Message Signaled-Based Interrupts ... - Guru3D.com Forums
It is a bit of a messy procedure at first, but someone made a tool to simplify it called MSI Util.
** It's the second-to-last download in the first post. Run as Admin. **
Beware of what you change, as you can prevent Windows from booting!
The easy fix if you do mess it up is to use the "Last Known Good Configuration" boot option.

Quick way to use MSI Util to change your gfx card's interrupt type (a scripted sketch of the same registry change is at the end of this post):

To see which type of interrupt your gfx card is using:
Open Device Manager.
View resources by type.
Open the IRQ branch.
Positive (+ve) numbers are using line-based interrupts.
Negative (-ve) numbers are using MSI.

If your gfx card has a positive IRQ number, run MSI Util as Admin.
Widen the window so you can see everything.
Tick the boxes for your gfx card's interrupt.
Click Apply at the top right.
Reboot.

Check with Device Manager that it has changed.
Enjoy a smoother experience :)
Edit: checking CPU use during the benchmark, it only hits max on all 4 cores for a short period now. Much better.
Edit 2: The third area is still a little unsmooth at the start. I've tracked this down to the point when the gfx card's 6GB memory fills up.

Again, be careful what you tick. Don't tick any USB 2.0 controllers.
If your Intel SATA controller is not using MSI, DO NOT tick it; install the latest Intel RST driver instead, as it will use MSI.
Drivers & Software
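
For anyone curious what MSI Util actually flips: it sets the MSISupported registry value under the device's Interrupt Management key, which Microsoft documents for enabling MSI. Below is a minimal Python sketch of that change, assuming that key layout; the device instance path is a placeholder you must replace with your own card's (Device Manager > Details > "Device instance path"). Same warnings as above apply: run elevated, reboot to apply, and don't touch devices you're unsure about.

```python
# Minimal sketch: read/set the MSISupported flag that MSI Util toggles.
# Windows-only; winreg ships with CPython. Needs an elevated prompt.
import winreg

# PLACEHOLDER: replace with your own GPU's device instance path,
# copied from Device Manager > Details > "Device instance path".
DEVICE = r"PCI\VEN_XXXX&DEV_XXXX&SUBSYS_XXXXXXXX&REV_XX\X&XXXXXXXX&X&XXXX"
KEY = (r"SYSTEM\CurrentControlSet\Enum" + "\\" + DEVICE +
       r"\Device Parameters\Interrupt Management"
       r"\MessageSignaledInterruptProperties")

def msi_enabled() -> bool:
    """True if the MSISupported value exists and is nonzero."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
            value, _ = winreg.QueryValueEx(k, "MSISupported")
            return bool(value)
    except FileNotFoundError:
        return False  # key/value absent -> device uses line-based IRQs

def set_msi(enable: bool = True) -> None:
    """Create the key if needed and set MSISupported; reboot to apply."""
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        winreg.SetValueEx(k, "MSISupported", 0, winreg.REG_DWORD, int(enable))

if __name__ == "__main__":
    print("MSI currently enabled:", msi_enabled())
```

This is just a sketch of what the tool does under the hood; MSI Util remains the safer route, since it enumerates the devices and their current interrupt modes for you.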
 
OK so... from what I can tell here, my cards are sharing IRQs with other devices, one of those being one of the onboard USB controllers. If I'm reading the above correctly, I'm just out of luck?

EDIT: Well, I said "what the hell" and did it anyway. It changed, but it also changed all the other devices on those IRQs. Rebooted without issue; if I start getting a bunch of crashes or anything, I guess I'll just switch back.
 

Attachment: irqs.jpg
You are golden if Windows boots ok and the devices function.
Just gotta remember each time you install a new video driver to tick that box again.
 

That's good stuff right there; reminds me of the days of configuring IRQ priorities in Win XP to throw the GPU to the top of the order. I went ahead and applied the change to my GPU and my audio device (if anything is going to generate as many interrupts during gameplay as a GPU, it's the audio device).
 
Ah, audio devices are one thing that had problems.
It would be wise to read the linked thread.

Good stuff though.
I'm interested to see what performance differences anyone gets.
 

I read through; the problem-child audio device I saw was a Xonar Essence, which has had shoddy driver support for years (it also has massive compatibility issues on Win10). I'm just using an onboard solution to do optical out to an external DAC, which seemed safe enough - works a treat.
 
Sweet.
Unfortunate you don't have some before/after results.
It would be nice to confirm what I saw.
 
If anything I noticed /more/ stutter/load-in oddness, but it seems to settle down after a while, and framerates are a little higher overall, maybe 2-3 frames in most places.
 
Tell you one thing, the game makes for a solid GPU OC stability test under DX12. I had to wind mine back a fair bit to get things stable. I guess with the restriction of CPU overhead mostly removed, the GPU has to be on it completely at all times. It should be interesting to see if it's the same for other titles (I should grab that new Hitman game).
 
I guess it is time to put the 980Ti Classy through its paces and see what it can do under water!
 
I am really enjoying this game with my 2x 390X cards; however, the game crashed when I put textures at Very High (with everything else maxed). I assume this is a bug in the drivers.

Great article.
I used the DX12 option and Very High textures on my 980 Ti and it quit on me while playing. It happened when the 6GB of VRAM got maxed out (per GPU-Z). Setting textures back down does the trick. Everything else can be at Very High.
 