Deus Ex: Mankind Divided DX12 Performance Review @ [H]

I don't understand how this title is so demanding at 4k. Battlefield 1 plays great at 4k and is visually a much better looking game.
 
I'd venture a wild guess and say poor optimization. BF1 is one of the best optimized games on PC nowadays, if not the best. Virtually every game will compare unfavourably against it.

It doesn't matter if you are intensely GPU bound, or if you try to ease the burden and run at low settings; no matter what, NVIDIA GPUs suffer under DX12 in this game by a large degree.

What CPU was this tested on? Isn't it possible that with a much slower CPU, where the CPU is the bottleneck, the DX12 version could appear faster?
I mean, that was the whole point of DX12: to allow better performance on slow CPUs. Benching on an extremely fast CPU is not the only use case.

For example, my friend, who has a Haswell Xeon at 3.5 GHz with DDR3-1600, tested a location in The Lost City while standing still (sorry, I don't know at what settings):

DX11: 50 FPS, GPU usage 72%, CPU usage 60%
DX12: 60 FPS, GPU usage 90%, CPU usage 80%

After he enabled SSAA 2x, his performance in DX12 was worse than in DX11:
DX11: 37 FPS
DX12: 31 FPS
 
Good work!

Thanks for posting the VRAM usage figures. Do you happen to have a Titan X - either Maxwell or Pascal - available to see what the usage is on that GPU?
 
Nvidia beat AMD in DX12 in a Gaming Evolved title, Civ6.

All the PR about AMD being better at DX12 is simply that: PR BS. In reality it pretty much ends up as it always does.

Given that you've been a flag-waving green team member for a while now, pardon me if I take your observation with a grain of salt.

After all, what I was talking about was the percentage of gain from DX11 to DX12. Nvidia cards tend to be very bad with DX12 and either break even or get worse under it. AMD cards tend to be better and see significant increases in a lot of DX12 games, though not all. In the DX12 games where AMD does not see a performance increase, Nvidia usually suffers as well. To my knowledge there has not been a case where Nvidia gained performance going from DX11 to DX12 while AMD suffered.

In terms of which cards are faster currently, I would think that was clear from my initial post. Nvidia has the faster cards right now; that's not even up for debate. However, what is also not up for debate is that Nvidia cards have a hell of a time with DX12. To try and say otherwise is just plain fanboyism.
 
In terms of which cards are faster currently, I would think that was clear from my initial post. Nvidia has the faster cards right now; that's not even up for debate. However, what is also not up for debate is that Nvidia cards have a hell of a time with DX12. To try and say otherwise is just plain fanboyism.


Zion, there is a problem with that statement: you don't understand the programmer's paradigm when it comes to DX12. I think it has been talked to death and still people don't understand it because they are NOT programmers lol. They never will understand it, and they will blindly follow what they "think" is correct even though there is much evidence to the contrary.

Do you want to talk about async compute? Do you want to talk about the different queues in DX12? If you want to make blanket statements like you just did, you might want to learn about those things first before you talk about what DX12 (or LLAPIs in general) is and whether it favors or doesn't favor a certain IHV.
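
Just to illustrate what I mean by the different queues (a bare-bones sketch of my own, not anything from this game's code): in D3D12 the app itself creates separate graphics, compute and copy queues, and "async compute" is just the act of feeding the compute queue work that can overlap the graphics queue if the hardware scheduler allows it.

// Bare-bones sketch, assuming a valid ID3D12Device* called device (my names, my example)
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device)
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue, copyQueue;
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (can also do compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // "async" compute work goes here
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;     // DMA/upload work
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copyQueue));

    // Whether compute-queue work actually overlaps graphics work is up to the
    // hardware scheduler and driver, which is exactly why the gains differ per IHV.
}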

This was the problem in the past: there wasn't enough information out there, and the knowledge base wasn't there for most programmers to comment on it either, as it was too new. And a certain marketing group took FULL advantage of that and used it to their benefit (which in my view was a brilliant move, as there are still residual effects of this marketing today). Now let's not keep banging on a broken drum.
 
Not quite. It isn't all about the LOW LEVEL API. You wish it was, so your asinine comments would have merit, but well... they don't.

DX12 adds the ability for more than one CPU core to talk to the GPU at any given time. This is the biggest performance gain it can bring, as it is the next logical step after the single-core gaming we have dealt with for years/decades. It doesn't require a great deal of low-level programming, but it does require knowledge of the hardware's functions and the ability to handle multi-core communication.
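
To be concrete, here is roughly what that looks like in D3D12 (my own simplified sketch, not anything from this game): each worker thread records its own command list against its own allocator, and the main thread submits the whole batch to the queue in one call.

#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Simplified sketch: N threads record draw calls in parallel, one thread submits.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, int numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (int i = 0; i < numThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... each thread records its own share of the frame's draw calls ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}

DX11 had deferred contexts, but in practice the driver still serialized most of the submission work onto one thread; that submission path is the part DX12 genuinely opens up.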


LOL, just asking: what happened to async compute? I thought that was the great performance gainer for DX12? Sorry, had to do some gentle ribbing there ;)

Now, I am assuming you are talking about the Frostbite engine here, even though this is a Deus Ex thread. It's still a DX11 engine with DX12 add-ons..... This is why we see the erratic results across different vendors, different gens, and different brackets of the same gen of cards.

This is why Andersson wanted to go to DX12 as quickly as possible. I'm pretty sure he saw the problems with porting from DX11 to 12 and having them coexist; any experienced programmer could have foreseen this problem.

Regarding the asinine statement, I would dial that back a bit, mainly because: do you know what is required to change a graphics engine from single-threaded DX11 to multithreaded DX12? Even if the engine was performing somewhat decently with multiple cores in DX11? The change is fairly great; it's not as "simple" as you are saying. It is easier than the other features of LL APIs that give more performance, but it's not that easy when you have to rewrite an entire engine for scalability over different numbers of CPU cores.

What is called LL programming in these "LLAPIs" is not low-level programming by any means. LL programming traditionally means using languages that map directly to machine code with no abstraction going on, where knowing how the code affects different hardware is PARAMOUNT; that was the only reason LL programming was used so much in the past, because it had to be used to extract that performance from the hardware differences. "Low level" in these LL APIs means something completely different: you are still using a high-level language to write to different hardware types. All these LL APIs give you is flexibility and access to different hardware types with less abstraction. What has changed is the number of abstraction layers (a reduced number, but there are still abstraction layers) available to you, and thus more work for the programmer when things don't go as expected on different hardware types. Do not try to correlate LL programming with LL APIs; they are NOT the same thing.

Now from the CPU side, AMD to Intel or vice versa shouldn't be much of a problem, as both of them are quite similar in capabilities and features. Hardware-wise they are fairly close too (Zen will bridge this gap even more because of its inclusion of SMT).

Multithreaded code in graphics engines going from DX11 to 12 requires a lot of change. The traditional graphics pipeline (different at a microscopic level, unchanged at a macroscopic level) could only do certain things in a certain order, so the programmer didn't need to worry about it, but the driver could only access certain things at certain times; the data being passed between the GPU and the CPU and back after whatever needed to be done was fairly static. With DX12 this is no longer the case: the programmer has to explicitly tell the GPU what stage things are at and what the CPU has to do, and vice versa, so that when work is spread across multiple cores, everything stays in sync. While it's not "hard" to do, it's a different way of thinking. It's a major change in the way an engine is written.
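
For anyone wondering what "explicitly telling it what stage things are at" means in practice, the simplest case is a CPU-side wait on a fence. This is my own minimal sketch with made-up names, not engine code; in DX11 the driver did this bookkeeping for you.

#include <d3d12.h>
#include <windows.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch: block the CPU until the GPU has finished everything queued so far.
void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue, UINT64& fenceValue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    queue->Signal(fence.Get(), ++fenceValue);        // GPU writes fenceValue when it reaches this point
    if (fence->GetCompletedValue() < fenceValue) {   // has the GPU gotten there yet?
        fence->SetEventOnCompletion(fenceValue, done);
        WaitForSingleObject(done, INFINITE);         // CPU thread sleeps until it has
    }
    CloseHandle(done);

    // A real engine keeps fences per frame-in-flight and only waits when it must;
    // get that wrong across several worker threads and things stall or desync,
    // which is exactly the kind of work a DX11-to-DX12 port has to add everywhere.
}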

Now back to Deus Ex: the programmers of that game didn't make the engine. Again, these are the same problems that any inexperienced (engine) team will come across when porting something that is not theirs; we have seen it countless times. So there is a valid reason for these results, I agree. But if a team with a multi-million dollar budget makes a commitment like porting something over to DX12 and makes billions of dollars from a game? Hell ya, they should be able to get it done: take their time and get it done, not the fuckin' half-baked marketing BS we have been seeing from so many of these DX12 ports.

We are paying for their product, right? I think we should have a say in that. Now, for the people that don't care about DX12, LL APIs, whatever, and only care about the game: great, they still have a game they like, OK?

While LL APIs are the future for high-performance games, they are not meant for everyone until that knowledge is gained by those teams. And it's all about the time to learn and build one's libraries, that is it.
 
It would be interesting to see a 6850K used in a benchmark like this. DX12 has demonstrated performance improvements for up to 6 cores (example).

I suggest a 6850K rather than a 6800K, since DX12 seems to deliver around 8% more draw calls with 6 physical cores than with 8 logical cores, and the difference in clock frequency between the 6850K and 6700K is 10%.

It would be extra interesting to see 6700K against 6850K in this scenario to see if they are closer together than they are under DX11.


On second thought, comparing a 6600K to a 6700K in DX12 would be even more relevant, since DX11 shows no benefit between them but DX12 should.

Basically any test that shows an optimal difference, rather than the worst case scenario.
 
Zion, there is a problem with that statement: you don't understand the programmer's paradigm when it comes to DX12. I think it has been talked to death and still people don't understand it because they are NOT programmers lol. They never will understand it, and they will blindly follow what they "think" is correct even though there is much evidence to the contrary.

Do you want to talk about async compute? Do you want to talk about the different queues in DX12? If you want to make blanket statements like you just did, you might want to learn about those things first before you talk about what DX12 (or LLAPIs in general) is and whether it favors or doesn't favor a certain IHV.

This was the problem in the past: there wasn't enough information out there, and the knowledge base wasn't there for most programmers to comment on it either, as it was too new. And a certain marketing group took FULL advantage of that and used it to their benefit (which in my view was a brilliant move, as there are still residual effects of this marketing today). Now let's not keep banging on a broken drum.

Nail on the head here.

Almost everyone here, and sadly even Brent Justice, has shown they have no clear computer science background to justify their blanket statements about DX12. As I stated before, an API in and of itself is not magic. It truly requires programmers who know how to utilize it efficiently to get the most out of it.

These DX12-patched games are the absolute worst examples of the API, and they are more an exercise in the developers' use of the API than a benefit to any of us gamers. Give development teams more time with the API and we will soon begin to see the benefits, but do not write it off because of these clear practice sessions.
 
Nail on the head here.

Almost everyone here, and sadly even Brent Justice, has shown they have no clear computer science background to justify their blanket statements about DX12. As I stated before, an API in and of itself is not magic. It truly requires programmers who know how to utilize it efficiently to get the most out of it.

These DX12-patched games are the absolute worst examples of the API, and they are more an exercise in the developers' use of the API than a benefit to any of us gamers.

That is true; we have seen this many times with new DX versions too. It just takes time to get developers used to the new API. The API just exposes what is there; everything else is up to the programmer and their capabilities.
 
As the programmers settle in and learn the ropes and new tricks, plus now that much more is exposed to them to work with, I expect some amazing things to happen. We just need that one spark game where the developer/programmer blows it out of the park; then eyes will be opened. If Vulkan were actually used more, Doom would be a rather good case for that. We need a DX12 version not of Doom but of that type of improvement. The biggest issue with Doom is that it provided no additional rendering or IQ enhancement other than letting you run one to two levels higher IQ settings - so yes, better visuals for most, but nothing really new.
 
Calling DX12 a 'lame duck' is a bit much. "Other than benchmarking software"...uh...games like Forza Horizon 3 and Gears of War 4 were built from the ground up with DX12, perform phenomenally, and look great. Where are those evaluations, and why haven't we seen [H] articles on them? I'm disappointed in this article's blanket conclusion on the API, to be honest.

And how do you know those games wouldn't perform even better on DX11?

We already saw it with QB: a game that started as DX12-only actually has much better performance in DX11 mode.
 
Quantum Break was a mess and was rushed out. It was forced into DX12 when it was likely not built for it. Very big difference.
 
And how do you know those games wouldn't perform even better on DX11?

We already saw it with QB: a game that started as DX12-only actually has much better performance in DX11 mode.

My guess is that he means these games have smooth gameplay, in contrast to others like BF1 in DX12 mode (which has severe performance issues in DX12).
I don't think he compared DX11 with DX12; he just pointed out the great performance under the DX12 API.
 
Calling DX12 a 'lame duck' is a bit much. "Other than benchmarking software"...uh...games like Forza Horizon 3 and Gears of War 4 were built from the ground up with DX12, perform phenomenally, and look great. Where are those evaluations, and why haven't we seen [H] articles on them? I'm disappointed in this article's blanket conclusion on the API, to be honest.

The problem here may simply be half-baked implementation after the fact. DX12 appears to perform worse when it's patched in, but looks to work wonders when the game is built from the ground up with it. Considering we have no DX11 version of Forza or Gears to compare performance against, we may never really know, but the fact of the matter is that both of these games look more than next-gen and run silky smooth on even a 1070.

I suspect that DX11 coders who are translating their code to DX12 in these instances aren't doing so in the most efficient way possible. They may very well be adding a lot of overhead with existing, non-DirectX-related code, which could easily explain why patched-in DX12 is worse than their out-of-the-box DX11.

An API on its own isn't magic. It requires people who know how to use it well to really have an impact. I took a game programming class once and I was blown away at how much code I used to write simple applications when others used much more efficient code to do the same thing. You can't label the API a bust just because existing devs aren't good with it.
Forza Horizon 3 is a feature level 11_0 game. Gears 4 is, too, but unlike Horizon 3 it has options to take advantage of video cards that support feature levels 12_0 and 12_1. The former has memory management issues and stutters. The latter would be a much better example. But this also reinforces the commitment developers must make to implementing DX12. It was clear early on that The Coalition was giving the PC version of Gears 4 extra love while the PC version of Horizon 3 looks like a half-hearted attempt at bringing a console game to PC. Gears 4 also had the advantage of being built on Unreal Engine 4.
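
For what it's worth, querying this on the app side is trivial - something like the snippet below (my own sketch, obviously not either studio's actual code) is all it takes to find out whether the installed card reports 12_0/12_1 and pick a render path accordingly.

#include <d3d12.h>

// Sketch: ask an existing ID3D12Device which feature level the hardware supports.
D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    static const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = sizeof(requested) / sizeof(requested[0]);
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels))))
        return levels.MaxSupportedFeatureLevel;

    return D3D_FEATURE_LEVEL_11_0;  // lowest level a D3D12 device can be created at
}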
 
Quite timely. I just downloaded this game after getting the promo code for the AMD processor I purchased. I'm using a bit of an older video card, the R9-285, and as it is a 2GB card, I can only do the "High" quality setting. I have only spent a few minutes of gameplay (was fighting a stomach bug the last couple of days), and selected DX12, my resolution (monitor native 1600x1200), and left all other settings at default. So far, the gameplay seems smooth, but I haven't been tracking the FPS.

Part of my understanding of DX12 is that, at least initially, it would be of bigger benefit to lower-end systems overall. It would be interesting to see if DX12 has any tangible benefits over DX11 for people running APUs, Pentiums with low-end GPUs, AMD FX-8xxx processors, etc.

I think I have a system that fits that description. FX-8350 (non-OC) with a Radeon Fury (mild OC) on a QHD, non-freesync monitor.

Just to try it out real quick - and I recognize this is not a full or scientific test at all - I parked myself in Prague with most settings on High. Not a whole lot of action on the screen, but lots of NPCs milling about, light sources, foliage, etc... It seemed like a stable spot for some basic testing. It also was taxing enough for my system not to run at 60fps.

DX11 - 41fps
DX12 - 48fps

... a not insignificant 17% improvement. I had earlier left the game in DX11 since all the reviews said that DX12 actually hurt performance, but for my setup, it seems like it helps. Switched it on for the rest of my playthrough. I wish I had time for a more in-depth test.

EDIT:

Also, in response to the comments made about low-end CPUs not being an issue for enthusiasts, etc...
I find the game reviews helpful primarily for discovering which options are the most computation/RAM expensive, so that I can then work around them to get the best performance in my games. So, even without an "enthusiast"-grade CPU, I benefit from the work done here.

It may be the case, however, that low-level API implementations fall into multiple categories.

1. It may be an across-the-board improvement (Doom with Vulkan)
2. It may do nothing except give some programming experience to the developers (BF1?)
3. It may benefit those without Skylake or Broadwell-E platforms who are running into CPU/driver bottlenecks on their systems.

I suspect, given my little test (and some anecdotal info from other forums), that Deus Ex: MD is of this third sort. In that case, testing on a 6700K at 4.7 GHz simply won't reveal the benefit of switching to DX12. I recognize that a bottom line of "shows no difference on an OC'd current-gen i7, but will be great on an APU, i3, etc..." is, perhaps, not fitting for [H] - but it may be warranted in cases of DX12 implementations of that third sort.
 
I'm surprised that DX12 performs worse because I thought that developers were used to making lots of small optimizations for the consoles. Does none of that expertise carry over?
 
I'm surprised that DX12 performs worse because I thought that developers were used to making lots of small optimizations for the consoles. Does none of that expertise carry over?

Making optimizations for 1 or maybe 2 fixed setups with a known GPU, CPU, memory and OS is somewhat easy. Making those optimizations for random combinations from a much wider range of options is pretty much mission impossible.
 
Making optimizations for 1 or maybe 2 fixed setups with a known GPU, CPU, memory and OS is somewhat easy. Making those optimizations for random combinations from a much wider range of options is pretty much mission impossible.

Yet idTech6 does it - Doom (2016) is almost always better with Vulkan.
 
I think this is it. Devs under schedules are used to cutting corners and half-assing it. You can tell a half-baked implementation of DX12 from one done right, because the bad implementations look like this.

Basically, it looks like they just tried to slap it on for marketing purposes.
Exactly. All the hype about DX12 is, in many cases, much ado about less than zero.
 
Probably so; once again the canned benchmark can mislead. I agree that contorting to find where DX12 is better would be missing the point. The game just plays better with DX11. I would like to know more about the reasons why, and whether future games will perform better because of DX12. So far DX12 has just been a letdown in general. Plus, we can do our own testing and give results/feedback here for others interested in verifying, in this game anyway.

yes, the benchmark can mislead, but it might also show that it is possible to get a higher framerate in DX12 than DX11 (maybe specially tailored for DX12?) ... i still find this interesting, so i have done some tests to give feedback (used MSI AB to map general fps+frametimes, and used PresentMon for the final data; each test has been done 3x and then averaged ... if somebody is interested in the graphs, just let me know)

all tests DX11 vs DX12 in preset "high", patch 616.0
AMD 290 (16.11.3), i5 3570K @4.1, 16GB RAM, SSD

1. tested if the results from the "in-game benchmark" can be trusted:
- in-game benchmark
DX11 min 43.7 | avg 55.6
DX12 min 52.4 | avg 64.3 > min 19.91% | avg 15.65% faster
- PresentMon
DX11 min 42.7 | avg 57.5
DX12 min 51.1 | avg 66.0 > min 19.66% | avg 14.92% faster

> reporting seems to be ok, so the results are not fake, but it still might be specially prepared for DX12


2. tested "breach" game mode, first level (no enemies):

- PresentMon
DX11 min 73.3 | avg 96.3
DX12 min 75.9 | avg 98.1 > min 3.66% | avg 1.80% faster

> ok, completely different now, not really any tangible benefits from DX12 (but at the same time it doesn't seem to be slower)


3. now, the most important, some "real in-game" test ... also done in the first level, where you start the game in Dubai:
walking from the start point to the room with the elevator door, also no enemies (one run is about 110-115 sec)
- PresentMon
DX11 min 52.8 | avg 72.8
DX12 min 58.8 | avg 80.8 > min 11.42% | avg 10.97% faster

> so it seems i get a consistent +10% fps with DX12 ? ... nice
(i have not started playing the game yet, i'm waiting to play it in the best possible way, and hope to do that in stereo 3D ... so for now, i can't go much further for another/better part to test :D)

i also made a comparison of the frametimes (ms), also nothing special here, but DX12 has some longer frametimes in the worst 0.01% (edit: average number of frames in one run +/- 8000-8500):

last % 10% | 1% | 0.1% | 0.01% | max
DX11 17.61 | 20.63 | 23.63 | 27.59 | 40.37
DX12 14.76 | 17.01 | 20.48 | 31.97 | 43.26
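
(for anyone who wants to pull the same kind of "worst x%" numbers out of their own PresentMon log, the idea is basically the snippet below ... my own quick illustration, not the exact spreadsheet formulas i used; i'm assuming you've already pulled the per-frame times, e.g. the MsBetweenPresents column, into a list)

#include <algorithm>
#include <cstdio>
#include <vector>

// quick sketch: print the frametime sitting at each "worst x%" boundary, plus the max
void WorstPercentiles(std::vector<double> frametimesMs)
{
    if (frametimesMs.empty()) return;
    std::sort(frametimesMs.begin(), frametimesMs.end());   // ascending: worst frames end up at the back
    const double pcts[] = { 10.0, 1.0, 0.1, 0.01 };
    for (double p : pcts) {
        size_t idx = static_cast<size_t>(frametimesMs.size() * (1.0 - p / 100.0));
        if (idx >= frametimesMs.size()) idx = frametimesMs.size() - 1;
        std::printf("worst %g%%: %.2f ms\n", p, frametimesMs[idx]);
    }
    std::printf("max: %.2f ms\n", frametimesMs.back());
}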
 
Quite frankly, you can't run settings high enough at 1440p for it to affect gameplay. I noticed no difference or impact to my gameplay on 6GB vs. 8GB at the settings shown. It's a non-issue.

I mentioned the VRAM difference between a 1060 (6GB) and a 480 (8GB) and said that, all things being equal, it doesn't make sense to buy the 1060.

Brent responded to my comment with the quote listed above. My question to Brent is: do you still think it is a non-factor?
 
yes, the benchmark can mislead, but it might also show that it is possible to get a higher framerate in DX12 than DX11 (maybe specially tailored for DX12?) ... i still find this interesting, so i have done some tests to give feedback (used MSI AB to map general fps+frametimes, and used PresentMon for the final data; each test has been done 3x and then averaged ... if somebody is interested in the graphs, just let me know)

all tests DX11 vs DX12 in preset "high", patch 616.0
AMD 290 (16.11.3), i5 3570K @4.1, 16GB RAM, SSD

1. tested if the results from the "in-game benchmark" can be trusted:
- in-game benchmark
DX11 min 43.7 | avg 55.6
DX12 min 52.4 | avg 64.3 > min 19.91% | avg 15.65% faster
- PresentMon
DX11 min 42.7 | avg 57.5
DX12 min 51.1 | avg 66.0 > min 19.66% | avg 14.92% faster

> reporting seems to be ok, so the results are not fake, but it still might be specially prepared for DX12


2. tested "breach" game mode, first level (no enemies):

- PresentMon
DX11 min 73.3 | avg 96.3
DX12 min 75.9 | avg 98.1 > min 3.66% | avg 1.80% faster

> ok, completely different now, not really any tangible benefits from DX12 (but at the same time it doesn't seem to be slower)


3. now, the most important, some "real in-game" test ... also done in the first level, where you start the game in Dubai:
walking from the start point to the room with the elevator door, also no enemies (one run is about 110-115 sec)
- PresentMon
DX11 min 52.8 | avg 72.8
DX12 min 58.8 | avg 80.8 > min 11.42% | avg 10.97% faster

> so it seems i get a consistent +10% fps with DX12 ? ... nice
(i have not started playing the game yet, i'm waiting to play it in the best possible way, and hope to do that in stereo 3D ... so for now, i can't go much further for another/better part to test :D)

i also made a comparison of the frametimes (ms), also nothing special here, but DX12 has some longer frametimes in the worst 0.01% (edit: average number of frames in one run +/- 8000-8500):

last % 10% | 1% | 0.1% | 0.01% | max
DX11 17.61 | 20.63 | 23.63 | 27.59 | 40.37
DX12 14.76 | 17.01 | 20.48 | 31.97 | 43.26
Very interesting to say the least; so for some, DX12 could make a significant difference, while on the HardOCP system it did not. Especially with Nvidia, but AMD still lost some as well. Could it also be that in a different part of the game DX11 does better, and vice versa?
 
Yesterday I downloaded the latest MSI Afterburner and RivaTuner Statistics Server from Guru3D and ran it with DX12 mode in this game.

I did not once see a single drop below 45fps on my RX 480 - settings at the ultra preset's max, 16x AF, no AA, no motion blur or chromatic aberration, at 1440p. In most conversations the framerate sat at 60; in gameplay it was 45-60 in the Prague city area at nighttime.

RX 480 OC'd to 2150MHz on mem, 1370 on core; i5 3570K @ 4.3, 16GB 1866MHz DDR3.... Did not try the DX11 switch because I got engrossed by the game.
 
... Did not try DX11 switch because I got engrossed by the game.
no, no, no ... you should stop playing the game and start testing DX11 ! :D
it might even run better :)

but interesting and nice DX12 results ... curious how DX11 will compare
 
yes, they can be inaccurate compared to real game scenarios, but the results i get seem to be very consistent:
i've done multiple runs for each preset (to average it) but the delta between runs is generally very low (mostly 0-3%)

when checking, quick and dirty, the CPU usage seems to be the same DX12 vs DX11
1x290 on low > DX12 +/- 90% CPU vs DX11 +/- 90% CPU
1x290 on ultra > DX12 +/- 50% CPU vs DX11 +/- 50% CPU
(for reference 2x290 on low > DX12 100% CPU)

testing with a lower-end CPU might prove interesting, and when comparing, an i5 3570K @4.1 (instead of 4.2 apparently :D) might be significantly lower-end than an i7 6700K @4.7, IF the game uses more than 4 cores ...

as a side note, it might be that the built-in benchmark favors DX12, but that could also be found out :) ?
i know it's against [H] testing methodology (for which we are grateful) but there is no harm in testing the in-game benchmark for reference purposes ?
this way ppl that own the game can know what to expect from the in-game benchmark compared to real gameplay ... it might be rubbish, or it might also prove to be somewhat relevant
if it's rubbish, then it's nice to know :) ... if it's ok, ppl could have a simple way to compare data and relate it to the real gameplay benchmarks provided by [H]

The other aspect is what the benchmark does in terms of the mechanisms used (they could make aspects more synthetic and never used in-game) and, critically, how they decide to capture the frames (is it monitored at the internal engine level, with decisions about what counts, or more at the driver level?) to represent performance and behaviour. This is why I and some review sites stress using an independent 3rd-party utility such as PresentMon, plus in-game play.
The problem is that PresentMon is not very user friendly for most gamers; it is not really designed for them.
Cheers
 
The other aspect is what the benchmark does in terms of the mechanisms used (they could make aspects more synthetic and never used in-game) and, critically, how they decide to capture the frames (is it monitored at the internal engine level, with decisions about what counts, or more at the driver level?) to represent performance and behaviour. This is why I and some review sites stress using an independent 3rd-party utility such as PresentMon, plus in-game play.
Cheers

yes, this is what i wanted to find out: whether the in-game benchmark shows different results than other reporting tools ... but that seems not to be the case:
when comparing the results of the in-game benchmark vs MSI AfterBurner & PresentMon, the results are similar for DX11 and DX12 (preset "high")

- in-game benchmark
DX11 min 43.7 | avg 55.6
DX12 min 52.4 | avg 64.3
- MSI AB
DX11 min 44.3 | avg 55.8 > min +1.45% | avg +0.33%
DX12 min 53.2 | avg 64.4 > min +1.59% | avg +0.14%
- PresentMon
DX11 min 42.7 | avg 57.5 > min -2.19% | avg +3.35%
DX12 min 51.1 | avg 66.0 > min -2.40% | avg +2.70%

> the delta seems to be max +/- 3%, where PresentMon reports a +3.35% higher average than the in-game benchmark, and -2.40% in the case of min fps
... which also seems logical, because PresentMon analyzes each frame, whereas MSI AB samples data at fixed intervals ?


The problem is that PresentMon is not very user friendly for most gamers; it is not really designed for them.
Cheers
right, so i actually like the fact that there is an in-game benchmark ... more games should have it, that way even casual users can play with some settings and find out what effect they have ... and it's up to the tech/game press and reviewers to find out if these in-game benchmarks are 'honest' and report it if they are not :)

MSI AfterBurner can easily record a nice graph of frametime and fps, which is very useful to get a visual preview of the raw numbers you get with PresentMon when creating Excel graphs ...
but to be fair, from the moment you start with Excel, there is very little difference between working with the MSI AfterBurner or the PresentMon generated file; you just have to choose the right columns and go from there
 
Many of the comments on this thread bring out what concerns me most about DX12.
When the burden of wringing the most out of DX was on the GPU companies, I feel there was a higher motivation to do so. The premise that brand X could exploit new DX features that brand Y could not translated directly into card sales.

Game developers might have the best intentions going into a project, but the pressure to produce and get the product to market ends up providing less motivation to learn the intricacies needed to exploit DX when the burden of programming for a wider range of GPU variables falls on their heads, as opposed to the game just making a call to the driver.

This is exacerbated by the already-existing mentality of producing foremost for consoles and their lower graphics requirements.

All that said, I'm not a programmer, so I might be completely full of it...
 
All that said, I'm not a programmer, so

None of us are...

I think it will take time, just like it took for devs with consoles. The early game releases during the lifespan of a console were generally subpar, but as the console aged and devs had time to become proficient, the releases towards the end of the cycle became highly optimized and were great examples of this synergy.
 
...

right, so i actually like the fact that there is an in-game benchmark ... more games should have it, that way even casual users can play with some settings and find out what effect they have ... and it's up to the tech/game press and reviewers to find out if these in-game benchmarks are 'honest' and report it if they are not :)

MSI AfterBurner can easily record a nice graph of frametime and fps, which is very useful to get a visual preview of the raw numbers you get with PresentMon when creating Excel graphs ...
but to be fair, from the moment you start with Excel, there is very little difference between working with the MSI AfterBurner or the PresentMon generated file; you just have to choose the right columns and go from there

Ah cool, then BF1 looks to be pretty good from a benchmark and game correlation standpoint.
However, there are quite a few games that do give different results between their internal benchmark and the real game; sometimes it can mean a manufacturer being slower in the benchmark but ironically faster in the actual game, and this has happened.
Unfortunately, that makes the internal benchmark unreliable unless a gamer just uses it to quickly find an optimal setup for their settings and card, and even then it is only a rough guideline for those games where the benchmark and the real game do not correlate well.

Cheers
 
Ah cool, then BF1 looks to be pretty good from a benchmark and game correlation standpoint.

yes, the option "perfoverlay.frametimelogenable" from BF4 is still there in BF1 :) ... for me DX11 is ok, but DX12 seems to show only CPU frames
could be interesting to see if the frametimes are generated in the same way as PresentMon's

at that time, somebody made a nice BF4 csv tool that simply read the log file to show some numerical analysis and could also produce graphs ... btw, it still seems to work for the BF1 csv
(you could probably modify the PresentMon output to read it with this tool ... but then you lose the fun of playing with excel ;) )
 
Brent_Justice, since this is an enthusiasts' forum, why don't you game in quad SLI / quad CrossFire?

While quad SLI is just "ignant", I would actually like to see 4K benchmarks on 1080 SLI. I haven't found anything I can't lock at 60fps at 4K ultra yet.
I'm not saying there aren't games out there that won't hold a solid 60 - just that I haven't played them yet.
 
Many of the comments on this thread bring out what concerns me most about DX12.
When the burden of wringing the most out of DX was on the GPU companies, I feel there was a higher motivation to do so. The premise that brand X could exploit new DX features that brand Y could not translated directly into card sales.

Game developers might have the best intentions going into a project, but the pressure to produce and get the product to market ends up providing less motivation to learn the intricacies needed to exploit DX when the burden of programming for a wider range of GPU variables falls on their heads, as opposed to the game just making a call to the driver.

This is exacerbated by the already-existing mentality of producing foremost for consoles and their lower graphics requirements.

All that said, I'm not a programmer, so I might be completely full of it...

It's just simple connect-the-dots: optimization is low on the list of dev priorities, so pushing the burden of LLAPIs onto the devs instead of the GPU vendor isn't exactly a good idea.
 
It's just simple connect-the-dots: optimization is low on the list of dev priorities, so pushing the burden of LLAPIs onto the devs instead of the GPU vendor isn't exactly a good idea.
For a developer, an LLAPI should make it a lot easier to troubleshoot code and fine-tune it for better performance, versus a black box that causes your code to crash without you knowing whether your code or the driver is at fault. In other words, once experience and exposure are more mature, DX12 will most likely be faster to work with than a more black-box, driver-optimized API where driver changes occur all the time.
 
It's just simple connect-the-dots: optimization is low on the list of dev priorities, so pushing the burden of LLAPIs onto the devs instead of the GPU vendor isn't exactly a good idea.

Exactly. It's all about money. Same reason why CPU optimizations are often badly lacking.

People have to understand that they are pretty much asking developers to spend a lot more time and an incredible amount of extra resources for free to make this happen. Not to mention the issues ahead with missing optimizations and paths for future graphics cards. Intel's DX12 IGP is supported in 1-2 cases for the same reason, 3DMark being one of those.
 
For a developer, an LLAPI should make it a lot easier to troubleshoot code and fine-tune it for better performance, versus a black box that causes your code to crash without you knowing whether your code or the driver is at fault. In other words, once experience and exposure are more mature, DX12 will most likely be faster to work with than a more black-box, driver-optimized API where driver changes occur all the time.

An LLAPI is always much harder. It's never going to be easier. And the people you need on the team need to be much better than average. Top money, top crop. And then you have to add a lot more time to it as well, not to mention future support.

DX12 will never be cheaper, less time consuming or easier than DX11.

I also doubt that in a neutral setting it will be better than DX11 in performance. The only place DX12 will ever excel, should it ever happen, is when they truly do something DX11 can't. But we haven't seen any of this and we are not going to anytime soon. And by then we will have DX13 or whatever.

Even DICE can't make a good DX12. The reality is there.
 
Exactly. It's all about money. Same reason why CPU optimizations are often badly lacking.

People have to understand that they are pretty much asking developers to spend a lot more time and an incredible amount of extra resources for free to make this happen. Not to mention the issues ahead with missing optimizations and paths for future graphics cards. Intel's DX12 IGP is supported in 1-2 cases for the same reason, 3DMark being one of those.
That is why Nvidia and AMD will need to support developers more for these optimizations. I am sure Nvidia is very active in this (they have the money); not sure about AMD. Now, it's not as if developers weren't making optimizations anyway with DX11 and other APIs, because they have had different paths for different vendors at times. The problem comes when you have too many hardware designs or platforms different enough that they need to be specifically programmed for. If AMD's GCN architecture from 1.1 onward is virtually the same from a programming standpoint, then it should not take much effort there. Nvidia Maxwell and Pascal? So far it looks like Pascal can do DX12 just fine, but going back to Kepler and Fermi may be wasted effort anyway for LLAPIs at this stage.

Would like to know what some of the developers think of DX12 and Vulkan - everything I've heard seems to be more positive than negative.
 
An LLAPI is always much harder. It's never going to be easier. And the people you need on the team need to be much better than average. Top money, top crop. And then you have to add a lot more time to it as well, not to mention future support.

DX12 will never be cheaper, less time consuming or easier than DX11.

I also doubt that in a neutral setting it will be better than DX11 in performance. The only place DX12 will ever excel, should it ever happen, is when they truly do something DX11 can't. But we haven't seen any of this and we are not going to anytime soon. And by then we will have DX13 or whatever.

Even DICE can't make a good DX12. The reality is there.
DICE's results in BF1 are disappointing, but they're not done yet. Once I get 1070 SLI I will do some testing; looks like BF1 is on sale for a rather great price now.

DX12 will come of age when GPU power is dramatically increased to the point where DX11 becomes the restriction. Start tripling, or even doubling, the draw calls and it will kill DX11. The more complex your scenes get from objects, special shaders, etc., the more restrictive DX11 will become. At this time I would kinda agree with you, since there is not a clear example showing that DX12 can do something gaming-wise above DX11. Does that mean it can't? Not at all. Plus you have to consider that driver development incurs a cost as well as the GPU gets more complex, and a given API can become restrictive, blocking off hardware abilities without an LLAPI. An LLAPI allows access to new hardware capabilities much more easily, for example Async Compute (AMD's method) with DX12, not available with DX11 - a 4%-7% gain in DX12 for AMD in Gears of War.
 
Even Microsoft says DX12 will never replace DX11. That's the entire case behind DX11.3. DX12 is for the sub-1% of developers to begin with.

And every time you talk about the async gains, remember the power increase. Not to mention the work needed behind it is most likely not worth the 5% or whatever gain there is, and it requires fat sponsorships.

With DX12 you ask developers to do the job Nvidia, AMD or Intel does once. They need to do it every time, including going back to older games when new GPUs come out. At least if you exclude the constant reuse of old uarchs.

That is why Nvidia and AMD will need to support developers more for these optimizations. I am sure Nvidia is very active in this (they have the money); not sure about AMD. Now, it's not as if developers weren't making optimizations anyway with DX11 and other APIs, because they have had different paths for different vendors at times. The problem comes when you have too many hardware designs or platforms different enough that they need to be specifically programmed for. If AMD's GCN architecture from 1.1 onward is virtually the same from a programming standpoint, then it should not take much effort there. Nvidia Maxwell and Pascal? So far it looks like Pascal can do DX12 just fine, but going back to Kepler and Fermi may be wasted effort anyway for LLAPIs at this stage.

Would like to know what some of the developers think of DX12 and Vulkan - everything I've heard seems to be more positive than negative.

DX10 was also the perfect API, praised by developers when asked in public, hated by developers when asked in private. And we all know the rest of that history.

You are pretty much saying that the entire success of the API depends on sponsorship money, which often skews the result in favour of the sponsor. I remember when some people blindly thought it was as easy as a checkbox in the game engine. Oh, how the times have changed.

What happens when Volta comes, in terms of older games and DX12? What happens if AMD ever moves on from the same GCN it has reused since 2012? Even between GCN versions with tiny changes it can go bad fast, as we have seen with 1.2 and Mantle.

Keep the old GPU? Buy the released special version?
 
Even Microsoft says DX12 will never replace DX11. That's the entire case behind DX11.3.

And every time you talk about the async gains, remember the power increase. Not to mention the work needed behind it is most likely not worth the 5% or whatever gain there is.
Once a developer has code that works, or a game engine, it no longer is an ongoing, time-consuming effort. We have also been looking at averages and not the scenes or viewpoints where it makes a bigger difference in percentage and experience. A 1% average increase could also mean a 20% increase in performance in one area of a game, where it is now smooth versus jerky, and zero increase in other areas (e.g. if 5% of a run gains 20% and the rest gains nothing, the average only moves by about 1%). Averages can be somewhat misleading if you don't consider what makes up that average.

Now, other hardware features that Nvidia has can also be used if need be. DX12 will allow Nvidia to expose those new capabilities much more easily than DX11 does.
 