It doesn't matter if you are intensely GPU bound or if you try to ease the burden and run at low settings; no matter what, NVIDIA GPUs suffer under DX12 in this game by a large degree.
Nvidia beat AMD in DX12 in a Gaming Evolved title, Civ6.
All the PR about AMD being better at DX12 is simply that: PR BS. In reality it pretty much ends up as it always does.
In terms of which cards are faster currently, I would think that was clear by my initial post. Nvidia has the faster cards right now. That's not even up for debate. However, what is also not up for debate is that Nvidia cards have a hell of a time with DX12. To try and say otherwise is just plain fanboyism.
Not quite. It isn't all about the LOW LEVEL API. You wish it was so your asinine comments would have merit but well... they don't.
DX12 adds the ability for more than one CPU core to talk to the GPU at any given time. This is the biggest performance gain to be had, as it is the next logical step after the single-core gaming we have dealt with for years/decades. It doesn't require a great deal of low-level programming, but it does require knowledge of the hardware's functions and the ability to handle communication from multiple cores.
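For anyone who wants to see what that multi-core submission looks like in practice, here is a minimal D3D12 sketch; it is my own illustration, not something from this thread, with error handling and actual draw calls omitted. Several CPU threads record their own command lists in parallel and one queue submission picks them all up.

```cpp
// Minimal sketch: parallel command-list recording in D3D12 (illustration only).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC qDesc = {};
    qDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&qDesc, IID_PPV_ARGS(&queue));

    const int threadCount = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);

    // Command allocators are not thread-safe, so each recording thread
    // gets its own allocator and command list.
    for (int i = 0; i < threadCount; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Record in parallel. Real code would set pipeline state and issue draws here.
    std::vector<std::thread> workers;
    for (int i = 0; i < threadCount; ++i) {
        workers.emplace_back([&, i] {
            // ... record state changes / draw calls on lists[i] ...
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // One submission of everything the worker threads recorded.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```

In DX11 only the immediate context could submit work, so spreading recording across cores like this is exactly the part that now needs the developer, not the driver, to get it right.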
Zion, there is a problem with that statement: you don't understand the programmer's paradigm when it comes to DX12. I think it has been talked to death and still people don't understand it, because they are NOT programmers, lol. They never will understand it, and they will blindly follow what they "think" is correct even though there is much evidence to the contrary.
Do you want to talk about async compute? Do you want to talk about the different queues in DX12? If you want to make blanket statements like you just did, you might want to learn about those things first before you talk about what DX12 (or LLAPIs in general) is and whether it favors or doesn't favor a certain IHV.
This was the problem in the past, because there wasn't enough information out there and the knowledge base wasn't there for most programmers to comment on it either, as it was too new. And a certain marketing group took FULL advantage of that and used it to their benefit (which in my view was a brilliant move, as there are still residual effects of this marketing today). Now let's not keep banging on a broken drum.
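For reference, the "different queues" above are D3D12's direct (graphics), compute and copy queue types. Here is a minimal sketch of a graphics queue plus a separate compute queue with the fence synchronisation the application now has to do itself; it is my own illustration with error handling trimmed, and whether the compute work actually overlaps depends on the GPU's scheduler, which is exactly where the vendors differ.

```cpp
// Minimal sketch: separate graphics and compute queues in D3D12 (illustration only).
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics (direct) queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue; work submitted here may overlap graphics work,
    // but only if the hardware can actually schedule it concurrently.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> compQueue;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

    // Cross-queue synchronisation is the application's job now: a fence
    // signalled on the compute queue gates the graphics queue.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // ... submit compute command lists on compQueue here ...
    compQueue->Signal(fence.Get(), 1); // compute side marks completion
    gfxQueue->Wait(fence.Get(), 1);    // graphics side waits before consuming the results
    // ... submit graphics command lists that depend on the compute output ...
    return 0;
}
```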
Nail on the head here.
Almost everyone here, and sadly even Brent Justice, have shown they have no clear computer science background to justify their blanket statements about DX12. As I stated before, an API in and of itself is not magic. It truly requires programmers that know how to utilize it efficiently to get the most out of it.
These DX12 patched games are the absolute worst examples of the API and it is more an exercise for the developers' use of the API than it is a benefit to any of us gamers.
Calling DX12 a 'lame duck' is a bit much. "Other than benchmarking software"...uh..games like Forza Horizon 3 and Gears of War 4 were built from the ground up with DX12, perform phenomenally and look great. Where are those evaluations and why haven't we seen [H] articles on them? I'm disappointed in this article's blanket conclusion on the API, to be honest.
And how do you know those games wouldn't perform even better on DX11 ?
We already saw it with QB: a game that started as DX12 actually has much better performance in DX11 mode.
Quantum Break was a mess and was rushed out. It was forced into DX12 when it was likely not built for it. Very big difference.
Forza Horizon 3 is a feature level 11_0 game. Gears 4 is, too, but unlike Horizon 3 it has options to take advantage of video cards that support feature levels 12_0 and 12_1. The former has memory management issues and stutters. The latter would be a much better example. But this also reinforces the commitment developers must make to implementing DX12. It was clear early on that The Coalition was giving the PC version of Gears 4 extra love while the PC version of Horizon 3 looks like a half-hearted attempt at bringing a console game to PC. Gears 4 also had the advantage of being built on Unreal Engine 4.
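As an aside, the feature-level split mentioned above is something anyone can query. A small sketch of my own (error handling trimmed) that asks the driver for the highest feature level the installed card exposes:

```cpp
// Minimal sketch: query the maximum supported D3D feature level (illustration only).
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // 11_0 is the minimum D3D12 accepts, so this succeeds on any DX12-capable card.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    // Prints e.g. 0xc100 on 12_1-capable hardware.
    std::printf("Max supported feature level: 0x%x\n",
                static_cast<unsigned>(levels.MaxSupportedFeatureLevel));
    return 0;
}
```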
The problem here may simply be half-baked implementation after the fact. DX12 appears to perform worse when it's patched in but looks to perform wonders when the game is built from the ground up with it. Considering we have no DX11 to compare performance with Forza and Gears, well, we may never really know but the fact of the matter is that both of these games look more than next-gen and run silky smooth on even a 1070.
I suspect that DX11 coders who are translating their code to DX12 in these types of instances aren't doing so in the most efficient way possible. They may very well be adding a lot of overhead with existing, non-DirectX-related code, which could easily explain why patched-in DX12 is worse than their out-of-the-box DX11.
An API on its own isn't magic. It requires people that know how to use it well to really have an impact. I took a game programming class once and I was blown away at how much code I used to write simple applications when others used much more efficient code to do the same thing. You can't label the API a bust just because existing devs aren't good with it.
Quite timely. I just downloaded this game after getting the promo code for the AMD processor I purchased. I'm using a bit of an older video card, the R9-285, and as it is a 2GB card, I can only do the "High" quality setting. I have only spent a few minutes of gameplay (was fighting a stomach bug the last couple of days), and selected DX12, my resolution (monitor native 1600x1200), and left all other settings at default. So far, the gameplay seems smooth, but I haven't been tracking the FPS.
Part of my understanding of DX12 is that, at least initially, it would be of bigger benefit to overall lower-end systems. It would be interesting to see if DX12 had any tangible benefits over DX11 for people running APUs, Pentiums with low-end GPUs, AMD FX 8xxx processors, etc.
I'm surprised that DX12 performs worse because I thought that developers were used to making lots of small optimizations for the consoles. Does none of that expertise carry over?
Making optimizations for one or maybe two fixed setups with known GPU, CPU, memory and OS is somewhat easy. Making those optimizations for random combinations from a much wider range of options is pretty much mission impossible.
Exactly. All the hype about DX12 is, in many cases, much ado about less than zero.

I think this is it. Devs under schedules are used to cutting corners and half-assing it. You can tell a half-baked implementation of DX12 from one done right, because the bad implementations look like this.
Basically, it looks like they just tried to slap it on for marketing purposes.
Probably so; once again the canned benchmark can mislead. Agreed, contorting to find where DX12 is better would be missing the point. The game just plays better with DX11. I would like to know more about the reasons why, and whether future games with DX12 will perform better because of it. So far DX12 has just been a letdown in general. Plus we can do our own testing and give results/feedback here for others interested in verifying, in this game anyway.
Quite frankly, you can't run high enough settings at 1440p that it affects gameplay. I noticed no difference or impact to my gameplay on 6GB vs. 8GB at the settings shown. It's a non-issue.
Very interesting to say the least, so for some DX12 could make a significant difference while on the HardOCP system it did not, especially with Nvidia, though AMD still lost some as well. Could it also be that in a different part of the game DX11 does better, and vice versa?

yes, the benchmark can mislead, but it might also show it is possible to get a higher framerate in DX12 than DX11 (maybe specially tailored for DX12?) ... i still find this interesting, so i have done some tests to give feedback (used MSI AB to map general fps+frametimes, and used PresentMon for the final data, each test has been done 3x and then averaged ... if somebody is interested in the graphs, just let me know)
all tests DX11 vs DX12 in preset "high" patch 616.0
AMD 290 (16.11.3), i5 3570K @4.1, 16GB ram, SSD
1. tested if the results from the "in-game benchmark" can be trusted:
- in-game benchmark
DX11 min 43.7 | avg 55.6
DX12 min 52.4 | avg 64.3 > min 19.91% | avg 15.65% faster
- PresentMon
DX11 min 42.7 | avg 57.5
DX12 min 51.1 | avg 66.0 > min 19.66% | avg 14.92% faster
> reporting seems to be ok, so the results are not fake, but it still might be specially prepared for DX12
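(For anyone checking the math: the percentages here are just the DX12 figure divided by the DX11 figure, e.g. 64.3 / 55.6 ≈ 1.156, i.e. roughly 15.6% faster on the average, and 52.4 / 43.7 ≈ 1.199 for the minimum.)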
2. tested "breach" game mode, first level (no enemies):
- PresentMon
DX11 min 73.3 | avg 96.3
DX12 min 75.9 | avg 98.1 > min 3.66% | avg 1.80% faster
> ok, completely different now, not really any tangible benefits from DX12 (but at the same time it doesn't seem to be slower)
3. now, the most important, some "real in-game" test ... also done in the first level, where you start the game in Dubai:
walking from the start point to the room with the elevator door, also no enemies (one run is about 110-115 sec)
- PresentMon
DX11 min 52.8 | avg 72.8
DX12 min 58.8 | avg 80.8 > min 11.42% | avg 10.97% faster
> so it seems i get a consistent +10% fps with DX12 ? ... nice
(i have not started playing the game yet, i'm waiting to play it in the best possible way, and hope to do that in stereo 3D ... so for now, i can't go much further for another/better part to test)
i also made a comparison of the frametimes (ms), also nothing special here, but DX12 has some longer frametimes in the last 0.01% (edit: average number of frames in one run +/- 8000-8500) ... a small sketch for reproducing these percentile figures from a PresentMon log follows the table:
last % 10% | 1% | 0.1% | 0.01% | max
DX11 17.61 | 20.63 | 23.63 | 27.59 | 40.37
DX12 14.76 | 17.01 | 20.48 | 31.97 | 43.26
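If anyone wants to reproduce these "worst x%" numbers from their own PresentMon logs, here is a small helper sketch; it is my own, not part of the post above, it assumes the log has the usual MsBetweenPresents column, and it skips error handling.

```cpp
// Minimal sketch: average FPS and worst-percentile frametimes from a PresentMon CSV.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: frametimes <presentmon.csv>\n"; return 1; }

    std::ifstream in(argv[1]);
    std::string header;
    std::getline(in, header);

    // Locate the MsBetweenPresents column in the header row.
    int col = -1, idx = 0;
    std::stringstream hs(header);
    for (std::string cell; std::getline(hs, cell, ','); ++idx)
        if (cell == "MsBetweenPresents") col = idx;
    if (col < 0) { std::cerr << "MsBetweenPresents column not found\n"; return 1; }

    // Pull that column out of every data row.
    std::vector<double> ms;
    for (std::string line; std::getline(in, line); ) {
        std::stringstream ls(line);
        std::string cell;
        for (int i = 0; i <= col && std::getline(ls, cell, ','); ++i) {}
        if (!cell.empty()) ms.push_back(std::stod(cell));
    }
    if (ms.empty()) return 1;

    double sum = 0;
    for (double v : ms) sum += v;
    std::cout << "avg fps: " << 1000.0 * ms.size() / sum << "\n";

    // Sort ascending; the worst (longest) frames sit at the top end.
    std::sort(ms.begin(), ms.end());
    const double percents[] = { 10.0, 1.0, 0.1, 0.01 };
    for (double p : percents) {
        size_t i = static_cast<size_t>(ms.size() * (1.0 - p / 100.0));
        if (i >= ms.size()) i = ms.size() - 1;
        std::cout << "worst " << p << "% starts at: " << ms[i] << " ms\n";
    }
    std::cout << "max: " << ms.back() << " ms\n";
    return 0;
}
```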
Did not try DX11 switch because I got engrossed by the game.

no, no, no ... you should stop playing the game and start testing DX11!
yes, they can be inaccurate compared to real game scenarios, but the results i get seem to be very consistent:
i've done multiple runs for each preset (to average it) but the delta between runs is generally very low (mostly 0-3%)
when checking, quick and dirty, the CPU usage seems to be the same DX12 vs DX11
1x290 on low > DX12 +/- 90% CPU vs DX11 +/- 90% CPU
1x290 on ultra > DX12 +/- 50% CPU vs DX11 +/- 50% CPU
(for reference 2x290 on low > DX12 100% CPU)
testing with a lower-end CPU might prove interesting, and when comparing, an i5 3570 @4.1 (instead of 4.2 apparently) might be significantly lower-end than an i7 6700 @4.7, IF the game uses more than 4 cores ...
as a side note, it might be that the built-in benchmark favors DX12, but that could also be found out?
i know it's against [H] testing methodology (for which we are grateful) but there is no harm in testing the in-game benchmark for reference purposes?
this way ppl that own the game can know what to expect from the in-game benchmark compared to real gameplay ... it might be rubbish, or it might also prove to be somewhat relevant
if it's rubbish, then it's nice to know ... if it's ok, ppl could have a simple way to compare data, and relate to the real gameplay benchmarks provided by [H]
The other aspect is what the benchmark does in terms of the mechanisms used (they could do things more synthetically, in ways never used in-game) and, critically, how they decide to capture the frames (is it monitored at the internal engine level, with decisions about what counts, or more at the driver level) to represent performance and behaviour. This is why I and some review sites stress using an independent 3rd-party utility such as PresentMon and in-game play.
Cheers
right, so i actually like the fact that there is an in-game benchmark ... more games should have it, that way even casual users can play with some settings and find out what effect they have ... and it's up to the tech/game press and reviewers to find out if these in-game benchmarks are 'honest' and report it if they are not

Problem is PresentMon is not very user friendly for most gamers, not really designed for them.
Cheers
MSI AfterBurner can easily record a nice graph of frametime and fps, which is very useful to get a visual preview of the raw numbers you get with PresentMon when creating Excel graphs ...
but to be fair, from the moment you start with Excel, there is very little difference between working with the MSI AfterBurner or the PresentMon generated file, you just have to choose the right columns and go from there
Ah cool then BF1 looks to be pretty good from a benchmark and game correlation context.
Brent_Justice, since this is an enthusiasts' forum, why don't you game in quad SLI / quad CrossFire?
Many of the comments on this thread bring out what concerns me most about DX12.
When the burden of wringing the most out of DX was on the GPU companies, I feel as if there was a higher motivation to do so. The premise of brand X being able to exploit new DX features that brand Y could not translated directly into card sales.
Game developers might have the best intentions going in to a project, but pressure to produce and get the product to market ends up providing less motivation to learn the intricacies needed to exploit DX if the burden to program for a wider range of GPU variables falls on their heads, as opposed to the game just making a call to the driver.
This is exacerbated by the already-existing mentality of producing foremost for consoles and their lower graphics requirements.
All that said, I'm not a programmer, so I might be completely full of it...
It's just simple connect-the-dots: Optimization is low on the list of dev priorities, so pushing the burden of LLAPIs to the devs instead of the GPU vendor isn't exactly a good idea.
For a developer an LLAPI should make it a lot easier to troubleshoot code and fine-tune it for better performance, versus a black box that causes your code to crash without you knowing whether it is your code or the driver at fault. In other words, once experience and exposure are more mature, DX12 will most likely be faster to use than a more black-box, driver-optimized API whose drivers change all the time.
Exactly. It's all about money. Same reason why CPU optimizations are often badly lacking.
People have to understand that they are pretty much asking developers to spend a lot more time and an incredible amount of additional resources for free to make this happen. Not to mention the issues ahead with missing optimizations and paths for future graphics cards. Intel's DX12 IGP is supported in 1-2 cases for the same reason, and 3DMark is one of those.
DICE's results in BF1 are disappointing, but it's not done yet. Once I get 1070 SLI I will do some testing; looks like BF1 is on sale for a rather great price now.

LLAPI is always much harder. It's never going to be easier. And the people you need on the team need to be much better than average. Top money, top crop. And then you have to add a lot more time to it as well, not to mention future support.
DX12 will never be cheaper, less time consuming or easier than DX11.
I also doubt that in a neutral setting it will be better than DX11 in performance. The only place DX12 will ever excel, should it ever happen, is when they truly do something DX11 can't. But we haven't seen any of this and we are not going to anytime soon. And by then we will have DX13 or whatever.
Even DICE can't make a good DX12 implementation. The reality is there.
That is why Nvidia and AMD will need to support the developers more for these optimizations. I am sure Nvidia is very active in this (they have the money); not sure about AMD. Now it is not as if developers weren't making optimizations anyway with DX11 and other APIs, because they have had different paths for different vendors at times. The problem comes when you have too many sufficiently different hardware designs or platforms that need to be specifically programmed for. If AMD GCN architectures from 1.1 and up are virtually the same from a programming standpoint, then it should not take much effort there. Nvidia Maxwell and Pascal? So far it looks like Pascal can do DX12 just fine, but going back to Kepler and Fermi may be wasted effort anyway for LLAPIs at this stage.
Would like to know what some of the developers think of DX12 and Vulkan - everything I've heard seems to be more positive than negative.
Once a developer has code that works, or a game engine, then that is no longer an ongoing, time-consuming task. We have also been looking at averages and not the scenes or viewpoints where it makes a bigger difference in percentage and experience. A 1% average increase could also mean a 20% increase in performance in one area of a game, where it is now smooth instead of jerky, and zero increase elsewhere. Averages can be somewhat misleading if you don't consider what makes up that average.

Even Microsoft says DX12 will never replace DX11. That's the entire case behind DX11.3.
And every time you talk about the async gains, remember the power increase. Not to mention the work needed behind it is most likely not worth the 5% or whatever gain there is.