Hitman 2016 Performance Video Card Review @ [H]

Hope springs eternal.
I am leaning toward this being more of an MMX situation though; if developers end up focusing on the low levels, it's going to force the hardware vendors to stick to extending rather than rebuilding. The GPU world has had the advantage of not having to stick to a register set like x86 or MMX; they have been free for the most part to do what they want in hardware and then just write a driver to link to the high-level stuff. A few years of games written to take advantage of the low-level details of a generation of chips might make a change-up product a real issue. Anyway, yeah, we agree.

EDIT... ok I couldn't help this one. There is a solution to all the terrible stagnation of the hardware I am talking about. Anyone remember Transmeta? Bad joke... but a future GPU with a hardware/software translation layer so that fancy new ATI XYZ5000 card can emulate a FuryX. Only half joking; if they're talking about hardware translation layers in a few years, I'll remember this day: the day the software guys got their hardware access... and that future day as the day the hardware guys put a software layer in between. lol

Yeah... Or just use libraries like GameWorks, AMD's openfx (Radeon fx?) etc etc

UE4 has a fork with nvapi embedded, chill out, it's gonna be fine, but dx11 isn't gonna die out anytime soon

And if AMD have really improved their front end (geometry) then there's going to be serious competition for once, from the start. Coupled with the new process, and 10nm being pushed for aggressively, I think we're gonna see a (relatively) slow but steady ramp-up in performance over the next few years.

But dx12 is really overhyped, I'm more interested in shader model 6
 
And now we see the complete mess that is attempting to make a to-the-metal API on a flexible platform which will have an ever increasing number of chips and architectures to deal with. Developers don't and never will have the time to get it all working well. At best I suspect we'll see tightly optimized DX12 paths for the current hotness cards, and a DX11 "fallback path" which will probably work better for everything else.

If you think it is bad now, wait 5 years and go back to play an older DX12 title which has no idea how to optimize for your GPU.
Because DX11 games had no bugs?
 
Yes, I'm not referring to bugs at all. Someone has to know a lot about the hardware to optimize for it well. The burden is split with DX11, but the way the abstraction works, it gives the GPU vendor a lot of leeway in handling things, and even in working around suboptimal code from the application. This is partially why there are "game-ready" drivers.

The major engine vendors (Unreal, Crytek, Unity, etc.) will hopefully make this less of an issue for most game devs. I am skeptical we'll end up with a net win in most cases though.
 
If it means more jobs for hardware engineers in videogame dev studios I'm all for it :p
 
DX12 opens up more possibilities with better CPU utilization, combining of different processors, etc. The masters of it will come over time, which will make DX11 stuff look like Nintendo stuff of yesteryear. There used to be many more hardware vendors, so an API that is more abstract from the hardware made sense back then. Basically today we have three: AMD, Intel and Nvidia, where Intel uses Nvidia patents (maybe AMD patents in the future). Having that extra limiting layer really would hinder progress for only three hardware platforms.

I wonder why some of the developers are releasing these DX12-patched games? Is it really just to see how it flows with the hardware out there currently? For most DX12-enabled games it is not bringing much to the table at all, more like taking some trimmings away. AotS does show the advantage of it for CPU usage, which should mean in the future more taxing code can be written for the higher-end CPUs and GPUs out there.
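For what it's worth, the "better CPU utilization" point mostly comes down to DX12 letting you record command lists on worker threads and submit them together, something DX11's immediate context never really allowed. A minimal sketch of the idea, assuming a device and queue created elsewhere (error handling and the actual draw recording omitted):

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Record one command list per worker thread, then submit them all in a single
// ExecuteCommandLists call. Real code would record draws/dispatches per thread
// and reuse the allocators frame to frame instead of creating them every call.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workers)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread>                       threads;

    for (unsigned i = 0; i < workers; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records into its own command list -- no shared immediate
        // context, so no serialization the way there was in DX11.
        threads.emplace_back([&, i] {
            // ... record this thread's share of the frame here ...
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // One submission for everything the workers produced.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}

That spread-the-recording-across-cores pattern is basically what the AotS CPU results are showing.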
 
I remember seeing slides last year with things like "Developers have more responsibility, drivers matter less" and my initial thought was... Boy, PC gamers are screwed. lmao.

AMD & Nvidia have entire teams dedicated to improving driver performance in DX11 and previous. There was an interview recently where one of the driver team guys talks about how broken AAA game code is on day one and they have to fix basic coding errors just to make the games run properly at launch. Putting all of that into the devs' hands is going to create problems, at least for now.

DX12... Great for consoles, great for Microsoft's attempt to 'unify' PC and Xbox. PC optimization goes out the window in a one-size-fits-all approach. Especially for Nvidia GPUs, since they don't have any correlation with the PS4/XBO. You can't optimize for GCN-based consoles and neglect 80% of PC gamers and expect to have success in the PC market.

Pay attention, Microsoft.
 

The IHV advice is literally "don't use it if you don't want to deal with it"

I wonder if it's possible to code a DX12 path that uses very minimal basic calls that tend to be more or less standard.

Yes, it's called DX12. This isn't DOS, you're not directly interfacing with hardware.
 
Given how the gaming industry works, that's terrible advice.
It's a quality of life problem... Developers don't want to deal with it, publishers don't care. PC gamers suffer as a result.

Keep in mind we're talking about MONEY. "Deal with it" = "Get paid to deal with it."
 
Engineering costs money

Tech advantage can be a good thing


Which outweighs the other?
 
Yes, it's called DX12. This isn't DOS, you're not directly interfacing with hardware.

Well DOS isn't really a comparison in any way... DOS was an OS, not at all the same thing. If you want to compare it to that though, well in the days of MS-DOS there were well-used compilers converting high-level code into machine code. There were high-level languages like BASIC and C++ and Pascal... and there were assemblers too. Back then, programming directly in assembler was the best option to create fast, optimized software, but it was the most difficult to work with... and if you wanted to take that same code and port it to another system, good luck with that. DX12 opens some new options that allow you to get closer to the hardware, like programming in assembler; the issue is there isn't just one target hardware system. Even for cards from the same manufacturer there isn't one standard hardware layout; there is no Nvidia equivalent of the x86 standard, and one generation isn't the same as the next.

DX12 is different from all the DX versions before it. Yes, it has high-level calls, but those are almost completely the same as the older versions of DX and don't really add all that much new. What is new in DX12 is that it allows the programmer to call the hardware directly (or more directly anyway). If you have ever done any programming... it's like you have this great BASIC computer language for graphics, and now they have extended it to speed it up by adding direct assembler calls. If you're old enough to remember the days of early home computers, DX12 is like they added an advanced version of PEEK and POKE. Being able to address the system's IO as mapped memory gave you tons of power, and DX12 does the same... but recompiling those calls for hardware that was even just a small bit different almost always had negative effects (as in, it just didn't work).
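For concreteness, the "more direct" part isn't PEEK/POKE-style register access; it's that the application now has to state things the DX11 driver used to infer on its own, like resource state transitions. A hedged sketch of what that looks like, assuming a command list and texture already exist:

Code:
#include <d3d12.h>

// In DX11 the driver notices a render target is about to be sampled and inserts
// the required synchronization itself. In DX12 the application must describe the
// transition explicitly, or the GPU may read stale or undefined data.
void TransitionToShaderRead(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}

Get one of those transitions wrong and you see exactly the kind of "works on my card, broken on yours" behaviour I'm talking about.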

The advantage of a high-level API or computer language is being able to recompile those instructions for different hardware. That is exactly what DX or OpenGL or any other API does... it takes high-level code and the hardware drivers translate it on the fly. DX12 (and Vulkan) extend that code and give the developer more power, to bypass that software translation a bit and do things more directly, but the reason the APIs exist in the first place was to save them from having to do that. Honestly the more I think about DX12 and the low-level APIs, the more I think it's going to be a massive failure.
 
Except you're way overthinking it; DX12 in no way means you're abandoning the abstraction. It's not even low level like you're thinking with the assembler comparison.

Here's a DX12 particle system I wrote almost a year ago. It uses the exact same shader as its DX11 brother and runs fine on all the hardware I've tried.



If you're after absolute peak performance then of course you can find yourself writing different implementations. This is nothing new...
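On the "exact same shader as its DX11 brother" point, that really is how it works: the compiled HLSL blob is shared, only the plumbing around it differs. A rough sketch (file and entry-point names are placeholders, error handling omitted):

Code:
#include <d3d11.h>
#include <d3d12.h>
#include <d3dcompiler.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Compile a compute shader once. ("particles.hlsl" / "CSMain" are made up for this sketch.)
ComPtr<ID3DBlob> CompileParticleShader()
{
    ComPtr<ID3DBlob> bytecode, errors;
    D3DCompileFromFile(L"particles.hlsl", nullptr, nullptr,
                       "CSMain", "cs_5_0", 0, 0, &bytecode, &errors);
    return bytecode;
}

// DX11: the driver wraps the blob in its own internal pipeline object.
ComPtr<ID3D11ComputeShader> MakeD3D11Shader(ID3D11Device* dev, ID3DBlob* blob)
{
    ComPtr<ID3D11ComputeShader> cs;
    dev->CreateComputeShader(blob->GetBufferPointer(), blob->GetBufferSize(), nullptr, &cs);
    return cs;
}

// DX12: the application builds the pipeline state object itself, but the shader
// bytecode inside it is byte-for-byte the same blob.
ComPtr<ID3D12PipelineState> MakeD3D12Pipeline(ID3D12Device* dev, ID3D12RootSignature* rootSig, ID3DBlob* blob)
{
    D3D12_COMPUTE_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.CS             = { blob->GetBufferPointer(), blob->GetBufferSize() };
    ComPtr<ID3D12PipelineState> pso;
    dev->CreateComputePipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}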
 

Cool particle system. Did the DX12 version you wrote really offer you anything that DX11 wouldn't have? I do understand it is possible to optimize, but then you're aiming at driver hooks, not actual hardware... or am I way off on that as well? I admit I don't do any 3D work.
 

For this it was likely slower. The extent of my effort was to just get it working and I skipped even "easy" optimizations. It works, but I was stalling the GPU. In DX11, the driver could do some of this for me.

There are clear wins in the design though, and it seems much easier to map an efficient DX12 design backwards to an efficient DX11 one than the other way around. This is a big one.
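On the "stalling the GPU" part, for anyone curious: the naive DX12 approach is to wait on a fence for the whole previous frame before reusing its command allocator, which leaves the GPU idle every frame. The usual fix is to keep two or three frames in flight and only wait when you're about to reuse the oldest one, which is roughly what the DX11 driver was quietly doing for you. A minimal sketch of the fence side of that (names are placeholders, not from the demo above):

Code:
#include <d3d12.h>
#include <windows.h>

static const unsigned kFramesInFlight = 2;

// Per-frame fence values for a small ring of frames in flight. Waiting only when
// a frame's resources are about to be reused keeps the GPU fed instead of draining
// it at the end of every frame.
struct FrameSync
{
    ID3D12Fence* fence = nullptr;
    HANDLE       fenceEvent = nullptr;
    UINT64       nextValue = 1;
    UINT64       submitted[kFramesInFlight] = {};

    void Init(ID3D12Device* device)
    {
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    }

    // Call right after ExecuteCommandLists for frame 'slot'.
    void SignalFrame(ID3D12CommandQueue* queue, unsigned slot)
    {
        submitted[slot] = nextValue;
        queue->Signal(fence, nextValue++);
    }

    // Call before reusing frame 'slot's allocator; only blocks if the GPU is
    // still working on what was submitted for that slot.
    void WaitForFrame(unsigned slot)
    {
        if (fence->GetCompletedValue() < submitted[slot])
        {
            fence->SetEventOnCompletion(submitted[slot], fenceEvent);
            WaitForSingleObject(fenceEvent, INFINITE);
        }
    }
};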
 
What is concerning about DX12, the more I read benchmarks both here and elsewhere, is how vastly different the numbers can look when you change one piece of the hardware equation. People are getting very different numbers on the same GPUs with slightly different processors. My guess is, if you hit the random lotto and just happen to be running a system very close to the one used by the developers, you can expect some nice numbers. This is starting to remind me of the early days of 3D acceleration, when a game would be blazing fast on one piece of hardware and trash on another, and in a different game be completely the reverse.

I think the only real possible winner when it comes to DX12 is the Xbox. The more I read about what exactly it is that makes DX12 tick... the more it seems MS's only real goal with it was to compete with the PS4 API.

The PS4 API is fairly well liked in the industry... and because Sony doesn't care about other hardware and the PC world, it has plenty of hardware-specific tweaks and allows the programmers much more control of the hardware directly. I know it's not on topic, but I wish it were possible to stick the faster RAM in an Xbox (or slower in the PS4) and run some tests to see how much of the PS4 advantage is the API.
 
I definitely liked your point of back-porting to DX11 from DX12. It is what I surmised was giving AMD this previously unseen DX11 boost we have seen in these new DX12/11 games (although I don't necessarily mean back-porting so much as using what was learned from DX12 when programming for DX11 - protection from the literal Nazis).
 
Hitman was developed in DX11 then half-assedly ported to dx12
 
Not that you guys would (or should) necessarily revisit this, but the latest patch seems to have helped performance quite a bit in DX11. I haven't tried DX12 though so I'm not sure about that.
 
Well here is an interesting thing: pcgameshardware just did a benchmark test of Episode 2, and performance has improved overall for both manufacturers.
However, what is interesting is that the NVIDIA 980 Ti showed notable improvements with DX12 at 1080p, and then the gains vanished from 1440p onwards.
Hitman (2016) Episode 2: DX12 with up to 60 percent performance gain - and problems
Scroll down to the first chart and use the tabs to change resolution and to switch between DX11 and DX12; no need to translate unless you wish to read the article.

Maybe an area for Kyle/Brent to investigate if they revisit Hitman.

Also of interest is how they noticed a graphics quality change for both manufacturers dependent upon a couple of variables; this is shown before the performance chart.
Cheers
 

Guru3D just updated results for Episode 2.

Check them out, totally different data

Hitman 2016: PC graphics performance benchmark review

The 980 Ti showing more improvement at 1080p makes sense though; from 1440p onwards it's more GPU-bound.
 
Wonder if it comes down to the memory protection option aspect pcgameshardware identified as behaving strangely, including how it affects the visuals.
Also I notice the German site is using a newer NVIDIA driver.
Thanks for the heads-up on the Guru3D results, all good to know and compare.
Just to add, is pcgameshardware using the internal benchmark like Guru3D?
I raise this because they mention their own test: the PCGH benchmark 'Grave Tidings'.
I know they have been looking at ways to monitor DX12 games, just like PCPer has.

And I must say I am a bit leery of internal benchmarks ever since Dragon Age: Origins, where the benchmark gave lower results for NVIDIA compared to the busy areas of actual gameplay (the performance win switched from AMD to NVIDIA in the busy zone of the actual game, whereas in the internal benchmark AMD was on top) - I cannot remember which review site picked up on this.

Cheers
 
PCGH stated they are doing a custom run-through of an area with very heavy drawcalls and cpu load. Most likely using Intel's PresentMon to record the fps, since that's what they used in their earlier Quantum Break DX12 review.
Thanks for confirming, and no idea how I missed that, doh.
Yeah, they have been working with that Intel PresentMon like PCPer (while both still check for dropped frames as well); great to know this is what they did with Episode 2.

So again we have another game where AMD runs faster in the internal benchmark test, while within the actual game, in a heavy-drawcall zone, NVIDIA performance is better - relative to the preset benchmark rather than, say, 4K vs AMD (although still good).
Really not liking these preset internal benchmarks....

It will be interesting to see if further official information comes out regarding the lesser detail seen with DX12 compared to DX11 in their testing, and also the memory protection behaviour.
Cheers
 
DX12 seems totally broken after the last update; just added the game to my Steam.

The DX11 benchmark runs but it doesn't give me the result; is it saved as text somewhere?
 
Just tested it:



1440p max settings, no memory limits

I strongly suspect something goes wrong on NVIDIA DX12 at the start of the benchmark; the screen flickers twice, and when it loads, the average fps displayed is very low compared to DX11 (65 fps vs ~48 fps), then the average goes up very rapidly, as you can see in this screenshot:

[benchmark screenshot]



DX12 1300/7000 [screenshot]
DX11 1300/7000 [screenshot]
DX12 1490/7000 [screenshot]
DX11 1490/7000 [screenshot]
DX12 1490/8000 [screenshot]
DX11 1490/8000 [screenshot]

11% perf increase from a 15.5% core clock increase

3% perf increase from a 14% memory clock increase

So roughly 0.7 scaling on the core and only about 0.2 on memory, which suggests the card is far more core-bound than bandwidth-bound at these settings.
 
Leldra, which driver are you using?
pcgameshardware used the 364.96 hotfix.
That said, bear in mind they actually have the NVIDIA card beating AMD, but they are using PresentMon rather than the internal preset test.
Also there is something strange about visual quality for both manufacturers; it might be worth checking the memory protection option as well.
Hitman (2016) Episode 2: DX12 with up to 60 percent performance gain - and problems

Also, if interested, try comparing performance between 1080p and 1440p with DX11 vs DX12.
Cheers
 

It's ieldra by the way ;)

I just tested 4K. This is 1490/7000:
[screenshot]

I'm running 365.10 driver

Yeah, I didn't really understand what they were saying in German; something to do with texture filtering, but afaik it was solved when they added the 'override memory limitation' setting in the options for DX12.
 
Oops, sorry, it was late when I posted :)
I just use the automatic translation option available in Chrome.
The most interesting performance result, going by their site, is at 1080p, which does show improvements, followed by 1440p (which is the threshold where they start to show little difference between DX11 and DX12).
Cheers
 
No, the issue now is that there is heavy blur with AMD, and the memory limitation setting didn't change that. No one truly knows what the issue is, just a lot of guessing.
 


Well they were forcing settings from the driver control panel, so I understood it to be related to texture filtering, and all the rest was in German so...

Any Germans here?

Or just people who speak German?

Or even more simply, people with AMD cards who can test and post screenshots
 
I just use Chrome with the translation option enabled within settings.
Not ideal but sort of works.
Cheers
 
With a 1GB frame buffer, probably not very well; too many options turned down.
 
Some screenshots from the latest episode: Marrakesh

1440p max settings DX12

 