Hitman 2016 Performance Video Card Review @ [H]

FrgMstr

Hitman 2016 Performance Video Card Review - Hitman (2016) supports the new DirectX 12 API. We will take this game and find out whether DX12 is faster than DX11, what it may offer in this game, and whether it allows a better gameplay experience. We will also compare Hitman performance across several video cards to find out what is playable and how AMD and NVIDIA GPUs compare.
 
Oh boy. This is gonna be a hot topic. I am curious about the DX12 playability issues.
 
Oh boy. This is gonna be a hot topic. I am curious about the DX12 playability issues.
A quick Google search will show you a LOT of people having this issue. Lots of home remedies have been posted on how to get DX12 to work; however, none of them worked for us without changing how we would evaluate the game overall.

What is really "bad" about the whole thing is just how the game looks. Brent called it three years old graphically, but honestly, it struck me as closer to five in terms of overall feel. This is not the DX12 poster child you and I are looking for.
 
This isn't the first time I've seen GCN 1.2 lose performance in DX12; the 380 and 380X had a similar problem in other recent DX12 games. Weird.
I expected bigger gains than <5% for the other GPUs as well.
 
It crashed for me under DX12 in the middle of a mission on day one. I've mothballed the game for now, because DX12 runs so much nicer than DX11 for me that I don't even want to play it under DX11. I can, of course. It's just that once you've seen paradise, you don't want to go back to pedestrian life.
 
This isn't the first time I've seen GCN 1.2 lose performance in DX12; the 380 and 380X had a similar problem in other recent DX12 games. Weird.
I expected bigger gains than <5% for the other GPUs as well.

Well, someone has to use it and do something exciting with it first. These developers are forming the building blocks for game design and game engines into the distant future. As they say, "There will be bugs."
 
I just came upon this - HITMAN Lead Dev: DX12 Gains Will Take Time, But They're Possible After Ditching DX11

Async Compute in particular has received a lot of attention from PC enthusiasts, specifically in regards to NVIDIA GPUs lacking hardware support for it. However, in the GDC 2016 talk you said that even AMD cards only got a 5-10% boost and furthermore, you described Async Compute as “super hard” to tune because too much work can make it a penalty. Is it fair to say that the importance of Async Compute has been perhaps overstated in comparison to other factors that determine performance? Do you think NVIDIA may be in trouble if Pascal doesn’t implement a hardware solution for Async Compute?

The main reason it’s hard is that every GPU ideally needs custom tweaking – the bandwidth-to-compute ratio is different for each GPU, ideally requiring tweaking the amount of async work for each one. I don’t think it’s overstated, but obviously YMMV (your mileage may vary). In the current state, async compute is a nice & easy performance win. In the long run it will be interesting to see if GPUs get better at running parallel work, since we could potentially get even better wins.

That pretty much backs up my theory I posted in the conclusion re: Async Compute.
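For readers wondering what "tuning async compute" actually involves on the developer's side, here is a minimal, hypothetical C++ sketch of how a DX12 renderer submits work to a separate compute queue. This is not IO Interactive's code; the `device` parameter and the helper function are assumptions made purely for illustration, and error handling is omitted.

```cpp
// Minimal sketch: creating a dedicated compute queue in D3D12 so compute work
// (e.g. SSAO or light culling) can potentially overlap with graphics work.
// Assumes `device` is an already-created ID3D12Device*; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateAsyncComputeQueue(ID3D12Device* device,
                             ComPtr<ID3D12CommandQueue>& computeQueue,
                             ComPtr<ID3D12CommandAllocator>& computeAlloc,
                             ComPtr<ID3D12GraphicsCommandList>& computeList)
{
    // A separate COMPUTE-type queue is what makes "async compute" possible.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_COMPUTE,
                                   IID_PPV_ARGS(&computeAlloc));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_COMPUTE,
                              computeAlloc.Get(), nullptr,
                              IID_PPV_ARGS(&computeList));

    // Record dispatches on computeList, Close() it, then submit with
    // computeQueue->ExecuteCommandLists(). Whether that work actually runs
    // concurrently with the graphics queue -- and whether it helps or hurts --
    // depends on the GPU and driver, which is exactly the per-GPU tuning
    // problem the developer describes above.
}
```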
 
Yeah, before the 1.03 patch I couldn't even run in DX12 mode without crashing. Now it'll run, it just runs like garbage.

This game has bigger issues, though, because if you set Shadow Resolution to High you get extreme FPS drops (down to sub-10 FPS) when looking at certain mirrors / angles.

Graphically, it basically looks like Absolution. I think someone disagreed with me in the Hitman thread but I finished up Absolution right before playing the new game and they look practically the same.
 
Is it weird that the first thing that popped into my head from this review was... "where are the AMD CPU numbers!!!??!!" Probably still in shock from seeing the previous article lol.

Seriously though, I had read the game is totally broken in DX12, so I am amazed you even managed to run some of the benchmarks. Your conclusion about async is something I wondered about. I think it makes sense, but it's not something anyone had really considered, since I haven't seen any performance numbers on older hardware. As for the game... it looks rather dull... and isn't this an episodic game now too?
 
Can someone point me somewhere where this dynamic VRAM on AMD cards is explained? I've seen it in the last few reviews but I haven't seen any real information on it.

To the review: since both AMD and NVIDIA crash under DX12, I would assume it is a game-side issue?

Great write up. There is a ton of information in there, I need to go back and read some parts again.
 
Yes, I don't understand the 2 VRAM columns either.
This was a great article. It is surprising that the 390X nearly matches the Fury in DX12, once they get it to work properly. AMD definitely dominates here, DX11 or otherwise.
Am I to understand that the new AMD and NVIDIA cards will be less focused on compute than the older Hawaii cards? Does this include double precision? If that is the case, the 5-year-old Tahiti cards will still be the double precision champs!
 
Yes, I don't understand the 2 VRAM columns either.
This was a great article.

Can someone point me somewhere where this dynamic VRAM on AMD cards is explained? I've seen it in the last few reviews but I haven't seen any real information on it.


Here ya go. Instead of swapping textures to your pagefile, which is more than likely located on an SSD or, worse, on a conventional hard drive, the AMD driver uses system RAM, as it is a lot faster than the aforementioned solutions. This is from the [H]ardOCP Tomb Raider review.
AA VRAM Usage - Rise of the Tomb Raider Graphics Features Performance

The AMD Radeon R9 Fury X VRAM behavior does make sense, though, if you look toward the dynamic VRAM. It seems the onboard dedicated VRAM was mostly pegged at or near its 4GB of capacity. Then it seems the video card is able to shift its memory load out to system RAM, by as much as almost 4GB at 4K with 4X SSAA. If you combine the dynamic VRAM plus the onboard 4GB of VRAM, the numbers come out much higher than the AMD Radeon R9 390X and closer to what the GeForce GTX 980 Ti achieved in terms of dedicated VRAM.


So in the future when building a system, an enthusiast needs to value other parts of their system, like RAM speed and latency, just as much as they would a video card or processor.
Battlefield 4 Loves High Speed Memory
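As a rough illustration of where the two VRAM columns come from, here is a hedged sketch of querying "dedicated" (on-board) versus "dynamic" (GPU-visible system RAM) memory usage through DXGI on Windows 10. The adapter index and the assumption that this roughly mirrors what the review's monitoring tool reports are mine, not from the article.

```cpp
// Minimal sketch: reading per-process dedicated vs. shared (system-RAM) video
// memory usage via IDXGIAdapter3::QueryVideoMemoryInfo on Windows 10.
// Assumes adapter 0 is the discrete GPU; error handling trimmed for brevity.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);

    ComPtr<IDXGIAdapter3> adapter3;
    adapter.As(&adapter3);  // IDXGIAdapter3 exposes the memory queries

    DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, nonLocal = {};
    // LOCAL = on-board VRAM (the "dedicated" column in the graphs)
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    // NON_LOCAL = GPU-visible system RAM (what the review calls "dynamic" VRAM)
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

    printf("Dedicated VRAM in use: %llu MB (budget %llu MB)\n",
           (unsigned long long)(local.CurrentUsage >> 20),
           (unsigned long long)(local.Budget >> 20));
    printf("Dynamic (system) VRAM in use: %llu MB (budget %llu MB)\n",
           (unsigned long long)(nonLocal.CurrentUsage >> 20),
           (unsigned long long)(nonLocal.Budget >> 20));
    return 0;
}
```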
 
It crashed for me under DX12 in the middle of a mission on day one. I've mothballed the game for now, because DX12 runs so much nicer than DX11 for me that I don't even want to play it under DX11. I can, of course. It's just that once you've seen paradise, you don't want to go back to pedestrian life.

Seems to me if you like this style of game you might as well play it in DX11. DX12 doesn't add anything other than performance, and even that is looking pretty questionable for this title. Not sure whether you got a hint of paradise or have simply fallen victim to number envy.
 
Seems to me if you like this style of game you might as well play it in DX11. DX12 doesn't add anything other than performance, and even that is looking pretty questionable for this title. Not sure whether you got a hint of paradise or have simply fallen victim to number envy.

It's an episodic affair stretched out over several months. I'm in no rush to spoil my experience. I can wait for them to get DX12 right.
 
Not a game that interests me, but I did enjoy reading the article. I was very disappointed to see the fragility of DX12. I wonder if this is a game issue, a driver issue, or a basic DX12 issue.

Thank you for including VRAM usage figures.

Fixed, thanks.
 
Remember, people: DX12 shifts responsibility to the developers and away from NVIDIA/AMD. This type of problem is both expected and predicted.

Definitely, problems are to be expected, but this doesn't change the fact that the game is a piece of shit; if the developer is incapable of or unwilling to hire programmers with some experience in low-level programming, then they simply shouldn't use DX12, or at least should use a DX12 engine built by a dev with the appropriate resources.

Then again, in this case even DX11 performance is shameful, so IO Interactive really gets no absolution *chuckle* for their technical sins.

Forgive me if I seem bitter; I just really loved the old Hitman games, and this is the nail in the coffin of the series for me. Always-online DRM, shitty performance, shameful "DX12 support", and it looks like a 5-year-old game.

Basically IO is saying : "wait for a new version with the DRM removed so you can pirate it, game isn't worth the spare change you've lost in your couch"
 
I just came upon this - HITMAN Lead Dev: DX12 Gains Will Take Time, But They're Possible After Ditching DX11



That pretty much backs up my theory I posted in the conclusion re: Async Compute.

This confused me
[attached: two performance graphs from the review]


Why is the 980Ti almost 30% faster in the first graph? Are these not run at the same settings?

[attached: AMD performance graph from the review]


Same with the AMD results; something is wrong here.
 
^ Right, bar graphs are the built-in "Benchmark" run and the average FPS result it provides.

The highest playable settings table/graph in DX11 is an actual real-world, in-game manual run-through, as we normally do.

Every result that is in bar graph form is from the benchmark. Every result shown on pages 5 and 6 is a manual run-through using FRAPS, not the benchmark.

As I indicated on the first page, the benchmark results do not match up to real-world gameplay. The benchmark results are typically a lot higher than actually playing the game. I made sure to point that out on the intro page.

Pages 3 and 4 - Benchmark data
Pages 5 and 6 - Manual FRAPS run-through data
Page 7 - Manual in-game real-world analysis
Page 8 - Benchmark data
 
^ Right, bar graphs are the built-in "Benchmark" run and the average FPS result it provides.

The highest playable settings table/graph in DX11 is an actual real-world, in-game manual run-through, as we normally do.

Every result that is in bar graph form is from the benchmark. Every result shown on pages 5 and 6 is a manual run-through using FRAPS, not the benchmark.

As I indicated on the first page, the benchmark results do not match up to real-world gameplay. The benchmark results are typically a lot higher than actually playing the game. I made sure to point that out on the intro page.

That makes a lot of sense, thanks for the clarification

Interesting to see the 980Ti take a much bigger hit in actual gameplay vs benchmark compared to Fury/FuryX
 
Remember, people: DX12 shifts responsibility to the developers and away from NVIDIA/AMD. This type of problem is both expected and predicted.

I agree!

Also, people should remember DX12 is in its infancy. We saw the same thing with DX10 and DX11. It takes several AAA titles for devs to get the most out of the API and engines. Unreal, for example, is constantly updating its guide with regard to the engine's capabilities in DX12.

I wouldn't let the first few DX12 games become the indicator of how the API will perform and which GPU is best. That is reckless and nonsensical!
 
That makes a lot of sense, thanks for the clarification

Interesting to see the 980Ti take a much bigger hit in actual gameplay vs benchmark compared to Fury/FuryX
To be frank, it was explained in the review; you just have to read it. I know a lot of folks like to be spoon-fed information in the forums, but we do our best to give a full picture in our reviews and work very hard to make that happen.

What you are seeing here is exactly why we do NOT like to rely on canned benchmarks. More often than not, canned and built-in benchmarks do not actually reflect real-world gameplay performance.
 
Gotta read, not just look at the pics!

I agree!

Also, people should remember DX12 is in its infancy. We saw the same thing with DX10 and DX11. It takes several AAA titles for devs to get the most out of the API and engines. Unreal, for example, is constantly updating its guide with regard to the engine's capabilities in DX12.

I wouldn't let the first few DX12 games become the indicator of how the API will perform and which GPU is best. That is reckless and nonsensical!

I agree! Very much so!

Edit: and when game engines start being built specifically for DX12 instead of being upgraded from DX11, I think it will improve even more, like the conclusion stated and the quote about AotS dropping DX11.
 
If you have a 4GB or higher video card, Hitman will do nicely. Not having a video card with more than 4GB of VRAM won't hamper your ability to run the highest in-game settings at 1440p.
So the NVIDIA 980/970 is only good at 1080p due to the crippled 3.5GB?
 
The issue with all the low-level APIs is the developers.

There was a time when everyone wanted a good high-level API, so calling hardware directly could be avoided.

It's great that DX12 and Vulkan give the power back to the developers to dig in and create more direct calls to hardware. The thing is, though, it's all on them to find performance gains. The only games that will really get much out of the low-level calls will be games running newer versions of engines like Unreal and CryEngine. I wouldn't expect any large gains out of games for another year or so, and even then the large AAA companies aren't going to do a good job. How many game developers really know how to write hardware-level calls? We are 30 years past kids learning assembler for fun. ;)

Most of the stuff getting DX12 tacked onto it now is just DX11 code pretty much copied 1:1... and that branch loses all the optimizations in the driver intended for DX11. DX12 likely has a painful few years in store, and for that matter Vulkan won't likely fare better. Vulkan will still get lots of support, though, as it is going to be the de facto API for Android devices.
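To give a concrete flavor of what "giving the power back to the developers" means in code, here is a minimal sketch of explicit CPU/GPU fence synchronization, one of many chores the DX11 driver used to handle implicitly. The `device` and `queue` are assumed to already exist; this is illustrative only and not taken from any shipping engine.

```cpp
// Minimal sketch: explicit CPU/GPU synchronization in D3D12 using a fence,
// something the DX11 driver did behind the application's back.
// Assumes `device` and `queue` already exist; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
#include <windows.h>
using Microsoft::WRL::ComPtr;

void WaitForGpuIdle(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    HANDLE fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    // Ask the GPU to signal the fence once all previously submitted work on
    // this queue has finished, then block the CPU until it does.
    const UINT64 fenceValue = 1;
    queue->Signal(fence.Get(), fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
    CloseHandle(fenceEvent);
    // Get this (or resource state transitions, residency management, etc.)
    // wrong and you see exactly the kind of crashes and stalls this thread
    // is complaining about.
}
```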
 
The issue with all the low-level APIs is the developers.

There was a time when everyone wanted a good high-level API, so calling hardware directly could be avoided.

It's great that DX12 and Vulkan give the power back to the developers to dig in and create more direct calls to hardware. The thing is, though, it's all on them to find performance gains. The only games that will really get much out of the low-level calls will be games running newer versions of engines like Unreal and CryEngine. I wouldn't expect any large gains out of games for another year or so, and even then the large AAA companies aren't going to do a good job. How many game developers really know how to write hardware-level calls? We are 30 years past kids learning assembler for fun. ;)

Most of the stuff getting DX12 tacked onto it now is just DX11 code pretty much copied 1:1... and that branch loses all the optimizations in the driver intended for DX11. DX12 likely has a painful few years in store, and for that matter Vulkan won't likely fare better. Vulkan will still get lots of support, though, as it is going to be the de facto API for Android devices.

I've said it time and time again: big-name engines like Unreal and CryEngine are the new middleware API. Small studios should stay the fuck away from in-house DX12 development.
 
Here ya go. Instead of swapping textures to your pagefile, which is more than likely located on an SSD or, worse, on a conventional hard drive, the AMD driver uses system RAM, as it is a lot faster than the aforementioned solutions. This is from the [H]ardOCP Tomb Raider review.
AA VRAM Usage - Rise of the Tomb Raider Graphics Features Performance

The AMD Radeon R9 Fury X VRAM behavior does make sense, though, if you look toward the dynamic VRAM. It seems the onboard dedicated VRAM was mostly pegged at or near its 4GB of capacity. Then it seems the video card is able to shift its memory load out to system RAM, by as much as almost 4GB at 4K with 4X SSAA. If you combine the dynamic VRAM plus the onboard 4GB of VRAM, the numbers come out much higher than the AMD Radeon R9 390X and closer to what the GeForce GTX 980 Ti achieved in terms of dedicated VRAM.


So in the future when building a system, an enthusiast needs to value other parts of their system, like RAM speed and latency, just as much as they would a video card or processor.
Battlefield 4 Loves High Speed Memory

Precisely. Kyle or Brent, can you guys try adding another 16GB of RAM and see what happens? I've read various reports of DX12 consuming massive amounts of system RAM. Just a thought. I mean, it probably won't change anything, seeing as the 390X has 8GB, but who knows.
 
Precisely. Kyle or Brent, can you guys try adding another 16GB of RAM and see what happens? I've read various reports of DX12 consuming massive amounts of system RAM. Just a thought. I mean, it probably won't change anything, seeing as the 390X has 8GB, but who knows.
Well, the system RAM we have now is not being fully utilized, so I do not see that adding more would change anything. If you have any insight suggesting otherwise, please share.
 
I agree. I don't think adding more will help, but maybe higher-speed RAM would.
 
I don't think faster RAM will make all that much difference, really. The majority of what the game engine is working on will be in the card's memory; if it has to swap to system RAM that often, performance will suck no matter what. This one is a simple case of a crappy implementation.

IO Interactive's lead programmer Jonas Meyer is quoted as saying the 20% CPU performance and 50% GPU performance boosts advertised by Microsoft are possible, but only in time and after dropping DirectX 11 support entirely. He also said Hitman's DX12 path was a DX11 port, and they don't have any DX12 improvements planned. He goes on to complain that tweaking is required for every specific GPU (it's not just a general ATI / NVIDIA / Intel path issue).

Just more of what we all knew already... DX12 is a snoozer until the major game engines start pushing support through to developers who won't have to worry about it themselves. It makes it pretty clear why the big engine guys like Epic and Valve like to talk about DX12 and Vulkan: it's a great selling point to developers looking for an engine. Having to deal with coding paths yourself for tons of GPUs will be a massive expense.
 
I don't think faster RAM will make all that much difference, really. The majority of what the game engine is working on will be in the card's memory; if it has to swap to system RAM that often, performance will suck no matter what. This one is a simple case of a crappy implementation.
To put it succinctly, Hitman is a shit show on the DX12 front. We will be spending our resources elsewhere.
 
And now we see the complete mess that is attempting to make a to-the-metal API on a flexible platform with an ever-increasing number of chips and architectures to deal with. Developers don't, and never will, have the time to get it all working well. At best I suspect we'll see tightly optimized DX12 paths for the current hotness cards, and a DX11 "fallback path" which will probably work better for everything else.

If you think it is bad now, wait five years and go back to play an older DX12 title which has no idea how to optimize for your GPU.
 
In future reviews... and I am sure I've seen this in others... even after establishing which one gives you better average performance, I would LOVE to see, once you start taxing the system (4K), the DX12 number included for that test as well. And state on each posted chart whether it was taken with DX12 or DX11.

I think this title was such a shit show that you didn't bother, and I don't blame you.
 
In future reviews... and I am sure I've seen this in others... even after establishing which one gives you better average performance, I would LOVE to see, once you start taxing the system (4K), the DX12 number included for that test as well. And state on each posted chart whether it was taken with DX12 or DX11.

I think this title was such a shit show that you didn't bother, and I don't blame you.

I think it will just get worse at 4K. DX12 shines when the CPU is the bottleneck; to perform better under GPU stress it has to match the driver's optimization.
 
And now we see the complete mess that is attempting to make a to-the-metal API on a flexible platform with an ever-increasing number of chips and architectures to deal with. Developers don't, and never will, have the time to get it all working well. At best I suspect we'll see tightly optimized DX12 paths for the current hotness cards, and a DX11 "fallback path" which will probably work better for everything else.

If you think it is bad now, wait five years and go back to play an older DX12 title which has no idea how to optimize for your GPU.

That's a good point. When I read the interview with Jonas Meyer earlier I was sort of thinking the same thing. They aren't planning to update their DX12 code... so what happens when new NVIDIA or ATI hardware hits? Perhaps the big hardware vendors throw a little cash around so developers put out some updates for the next cycle of cards while a game is still sort of fresh, but 2 or 3 years down the road? I wonder if it's possible to code a DX12 path that uses very minimal, basic calls that tend to be more or less standard.

Then I had an even worse thought... what if the hardware guys get scared to change anything drastic at the hardware level for that very reason? I mean, the basics of the GPU cores haven't changed much for some time, but having tons of games in the wild with aggressive low-level coding could keep anyone from doing anything revolutionary, for the same reason Intel has had to stick with x86 and just keep extending it, trying to extend it in ways the software doesn't have to be too aware of. Or it could keep the GPU guys from innovating much at all... remember MMX extensions? It was hard enough to get the software guys to code MMX into software. Once they did, all Intel could really do was extend it. Which meant their newer processors were capable of much more, but none of the software called those extended registers anyway, so they performed just like the old chips 99% of the time.

The more I think about it, the more I'm not liking the idea of a super low-level API becoming the standard. It will:
1) Narrow the number of really good game engines to a handful of big-house engines like Unreal and Source. The small houses won't be able to pull off great all-around support... I am not sure at this point a Carmack-level genius could rise and help push some new upstart.
2) Mean that when buying a new GPU down the road, you might have to consider that it may be too advanced (radical) for its own good, breaking its "common" registers with other cards from that manufacturer. It could well be that a radical new video card ends up being 2 or 3x faster in new games and 20% slower than your old card in your old games.
3) Push the GPU guys into the Intel pitfall of just constantly extending their instruction sets instead of innovating in any way... so as to stay compatible.

Hopefully all 3 of those doom-and-gloom ideas are wrong, and the API is still high level enough that, worst case, DX12 = DX11 at all times once the drivers mature a bit more.
 
That's a good point. When I read the interview with Jonas Meyer earlier I was sort of thinking the same thing. They aren't planning to update their DX12 code... so what happens when new NVIDIA or ATI hardware hits? Perhaps the big hardware vendors throw a little cash around so developers put out some updates for the next cycle of cards while a game is still sort of fresh, but 2 or 3 years down the road? I wonder if it's possible to code a DX12 path that uses very minimal, basic calls that tend to be more or less standard.

Then I had an even worse thought... what if the hardware guys get scared to change anything drastic at the hardware level for that very reason? I mean, the basics of the GPU cores haven't changed much for some time, but having tons of games in the wild with aggressive low-level coding could keep anyone from doing anything revolutionary, for the same reason Intel has had to stick with x86 and just keep extending it, trying to extend it in ways the software doesn't have to be too aware of. Or it could keep the GPU guys from innovating much at all... remember MMX extensions? It was hard enough to get the software guys to code MMX into software. Once they did, all Intel could really do was extend it. Which meant their newer processors were capable of much more, but none of the software called those extended registers anyway, so they performed just like the old chips 99% of the time.

The more I think about it, the more I'm not liking the idea of a super low-level API becoming the standard. It will:
1) Narrow the number of really good game engines to a handful of big-house engines like Unreal and Source. The small houses won't be able to pull off great all-around support... I am not sure at this point a Carmack-level genius could rise and help push some new upstart.
2) Mean that when buying a new GPU down the road, you might have to consider that it may be too advanced (radical) for its own good, breaking its "common" registers with other cards from that manufacturer. It could well be that a radical new video card ends up being 2 or 3x faster in new games and 20% slower than your old card in your old games.
3) Push the GPU guys into the Intel pitfall of just constantly extending their instruction sets instead of innovating in any way... so as to stay compatible.

Hopefully all 3 of those doom-and-gloom ideas are wrong, and the API is still high level enough that, worst case, DX12 = DX11 at all times once the drivers mature a bit more.
Hahahaha, I was nodding all the way through reading this, then at the end: "... once the drivers mature." Dammit! That's the whole point of DX12: substitute a significant part of the under-the-hood driver code with something accessible to the developer natively through DirectX.
 
Hahahaha, I was nodding all the way through reading this, then at the end: "... once the drivers mature." Dammit! That's the whole point of DX12: substitute a significant part of the under-the-hood driver code with something accessible to the developer natively through DirectX.

Hope springs eternal.
I am leaning toward this being more of an MMX situation, though. If the developers end up focusing on the low levels, it's going to force the hardware vendors to stick to extending rather than rebuilding. The GPU world has had the advantage of not having to stick to a register set like x86 or MMX; they have been free for the most part to do what they want in hardware and then just write a driver to link to the high-level stuff. A few years of games written to take advantage of the low-level registers of one generation of chips might make a change-up product a real issue. Anyway, yeah, we agree.

EDIT... OK, I couldn't help this one. There is a solution to all the terrible stagnation of the hardware I am talking about. Anyone remember Transmeta? Bad joke... but picture a future GPU with a hardware/software translation layer so that fancy new ATI XYZ5000 card can emulate a Fury X. Only half joking; if they're talking about hardware translation layers in a few years, I'll remember this day. The day the software guys got their hardware access... and that future day as the day the hardware guys put a software layer in between. lol
 