Deus Ex: Mankind Divided DX12 Performance Review @ [H]

Discussion in 'Video Cards' started by Brent_Justice, Nov 18, 2016.

  1. Shintai

    Shintai 2[H]4U

    Messages:
    3,253
    Joined:
    Jul 1, 2016
That story has been told multiple times before in history. We never needed a low-level console API for that reason.

    In 2-3 years we may have DX13 instead as well.

    DX11 beats DX12. Just as DX9 did with DX10.

But it shouldn't be a surprise; everything in the PC industry tells you low level isn't the answer to anything. Same reason why pretty much no DX12 game runs on the most feature-advanced GPU, the gen 9/9.5 IGP ;)
     
    Last edited: Nov 20, 2016
  2. JustReason

    JustReason [H]ard|Gawd

    Messages:
    1,640
    Joined:
    Oct 31, 2015
    Not quite. It isn't all about the LOW LEVEL API. You wish it was so your asinine comments would have merit but well... they don't.

DX12 adds the ability for more than one CPU core to talk to the GPU at any given time. This is the biggest performance gain it can bring, as it is the next logical step after the single-core gaming we have dealt with for years/decades. It doesn't require a great deal of low-level programming, but it does require knowledge of the hardware's functions and the ability to handle communication from multiple cores.
     
  3. lolfail9001

    lolfail9001 [H]ard|Gawd

    Messages:
    1,145
    Joined:
    May 31, 2016
And what? Btw, DX11 allows two threads to submit draw calls, last time I checked.
Newsflash, guys: Frostbite is apparently a single-threaded engine.
Yep, and that makes it bad, just saying.
     
    Armenius likes this.
  4. homernoy

    homernoy Limp Gawd

    Messages:
    208
    Joined:
    Jan 31, 2007
    I don't understand how this title is so demanding at 4k. Battlefield 1 plays great at 4k and is visually a much better looking game.
     
  5. Simplex

    Simplex Limp Gawd

    Messages:
    259
    Joined:
    Sep 6, 2006
I'd venture a wild guess and say poor optimization. BF1 is one of the best optimized games on PC nowadays, if not the best. Virtually every game will compare unfavourably against it.

What CPU was this tested on? Isn't there a possibility that with a much slower CPU, where the CPU is the bottleneck, the DX12 version could appear faster?
I mean, that was the whole point of DX12: to allow better performance on slow CPUs. Benching on an extremely fast CPU is not the only use case.

For example, my friend, who has a 3.5 GHz Haswell Xeon with DDR3-1600, tested a location in The Lost City while standing still (sorry, I don't know at what settings):

    DX11: 50 FPS, GPU usage 72%, CPU usage 60%
    DX12: 60 FPS, GPU usage 90%, CPU usage 80%

    After he enabled SSAA 2x, his performance in DX12 was worse than in DX11:
    DX11: 37 FPS
    DX12: 31 FPS
     
    Last edited: Nov 20, 2016
  6. Quartz-1

    Quartz-1 2[H]4U

    Messages:
    4,000
    Joined:
    May 20, 2011
    Good work!

    Thanks for posting the VRAM usage figures. Do you happen to have a Titan X - either Maxwell or Pascal - available to see what the usage is on that GPU?
     
  7. Zion Halcyon

    Zion Halcyon [H]ard|Gawd

    Messages:
    1,768
    Joined:
    Dec 28, 2007
Given that you've been a flag-waving green team member for a while now, pardon me if I take your observation with a grain of salt.

After all, what I was talking about was the percentage of gain from DX11 to DX12. Nvidia cards tend to do badly with DX12 and either break even or get worse under it. AMD cards tend to do better and see significant increases in a lot of DX12 games, though not all. However, in the DX12 games where AMD does not see a performance increase, Nvidia usually suffers as well. To my knowledge, there has not been a case where Nvidia gained performance going from DX11 to DX12 while AMD suffered.

In terms of which cards are faster currently, I would think that was clear from my initial post. Nvidia has the faster cards right now; that's not even up for debate. However, what is also not up for debate is that Nvidia cards have a hell of a time with DX12. To try and say otherwise is just plain fanboyism.
     
  8. razor1

    razor1 [H]ardness Supreme

    Messages:
    7,644
    Joined:
    Jul 14, 2005

Zion, there is a problem with that statement, because you don't understand the programmer's paradigm when it comes to DX12. I think it has been talked to death, and still people don't understand it because they are NOT programmers lol; they never will understand it, and they will blindly follow what they "think" is correct even though there is much evidence to the contrary.

Do you want to talk about async compute? Do you want to talk about the different queues in DX12? If you want to make blanket statements like you just did, you might want to learn about those things first, before you talk about what DX12 (or LLAPIs in general) is and whether it favors certain IHVs.

This was a problem in the past because there wasn't enough information out there, and the knowledge base wasn't there for most programmers to comment on it either, as it was too new. And a certain marketing group took FULL advantage of that and used it to their benefit (which in my view was a brilliant move, as there are still residual effects of this marketing today). Now let's not keep banging on a broken drum.
     
    Last edited: Nov 20, 2016
    Nima84 and Armenius like this.
  9. razor1

    razor1 [H]ardness Supreme

    Messages:
    7,644
    Joined:
    Jul 14, 2005

LOL, just asking: what happened to async compute? I thought that was the great performance gainer for DX12? Sorry, had to do some gentle ribbing there ;)

Now you are talking about the Frostbite engine, I am assuming, because this is a Deus Ex thread. It's still a DX11 engine with DX12 add-ons... This is why we see erratic results across different vendors, different gens, and different brackets within the same gen of cards.

This is why Andersson wanted to go to DX12 as quickly as possible; I'm pretty sure he saw the problems with porting from DX11 to 12 and having them coexist. Any experienced programmer could have foreseen this problem.

Regarding the asinine statement, I would dial that back a bit. Do you know what is required to change a graphics engine from single-threaded DX11 to multithreaded DX12, even if the engine was performing somewhat decently with multiple cores in DX11? The change is fairly great; it's not as "simple" as you are saying. It is easier than the other features of LLAPIs that give more performance, but it's not easy when you have to rewrite an entire engine for scalability over different numbers of CPU cores.

The "LL" in these "LLAPIs" is not low-level programming by any means. LL programming traditionally means using languages directly associated with machine code, with no abstraction going on, where knowing how the code affects different hardware is PARAMOUNT. That was the only reason LL programming was used so much in the past: extracting that performance from different hardware required it. "Low level" in these LLAPIs means something completely different. You are still using a high-level language to write to different hardware types; all these LLAPIs give you is flexibility and access to different hardware types without as much abstraction. What has changed is the number of abstraction layers (there are fewer, but there are still abstraction layers), and thus more work for the programmer when things don't go as expected on different hardware types. Do not try to correlate LL programming with LLAPIs; they are NOT the same thing.

Now, from the CPU side, AMD to Intel or vice versa shouldn't be much of a problem, as both are quite similar in capabilities and features. Hardware-wise they are fairly close too (Zen will bridge this gap even more because of its inclusion of HT).

As for multithreaded code in graphics engines going from DX11 to 12, much has to change. The traditional graphics pipeline (different at a microscopic level; at a macroscopic level it hasn't changed) could only do certain things in a certain order, so the programmer didn't need to worry about it, and the driver could only access certain things at certain times; the data being passed from the GPU to the CPU and back was therefore fairly static. With DX12 this is no longer the case: the programmer has to explicitly state what stage the GPU is at and what the CPU has to do, and vice versa, so that when multiple cores are in use, things stay in sync. While it's not "hard" to do, it's a different way of thinking. It's a major change in the way an engine is written.

Now back to Deus Ex: the programmers of that game didn't make the engine. Again, these are the same problems that any inexperienced (engine) team will come across when porting something that is not theirs; we have seen it countless times. So there is a valid reason for these results, I agree. But if a team with a multi-million-dollar budget makes a commitment like porting something over to DX12, and makes billions of dollars from the game? Hell ya, they should be able to get it done: just take their time and get it done, not the half-baked marketing BS we have been seeing from so many of these DX12 ports.

We are paying for their product, right? I think we should have a say in that. And for the people who don't care about DX12, LLAPIs, whatever, and only care about the game: great, they still have a game they like, OK?

While LLAPIs are the future for high-performance games, they are not meant for everyone until that knowledge is gained by those teams. And it's all about the time to learn and build one's libraries; that is it.
     
    Last edited: Nov 20, 2016
    tungt88, Armenius, Sith'ari and 3 others like this.
  10. t3mporal

    t3mporal n00bie

    Messages:
    3
    Joined:
    Aug 29, 2014
    It would be interesting to see a 6850K used in a benchmark like this. DX12 has demonstrated performance improvements for up to 6 cores (example).

I suggest a 6850K rather than a 6800K, since DX12 seems to deliver around 8% more draw calls on 6 physical cores than on 8 logical cores, and the difference in clock frequency between the 6850K and 6700K is 10%.

    It would be extra interesting to see 6700K against 6850K in this scenario to see if they are closer together than they are under DX11.


On second thought, comparing a 6600K to a 6700K in DX12 would be even more relevant, since DX11 shows no benefit between them but DX12 should.

Basically, any test that shows the optimal difference rather than the worst-case scenario.
     
    Sith'ari likes this.
  11. Sonicks

    Sonicks Gawd

    Messages:
    777
    Joined:
    Jul 24, 2005
    Nail on the head here.

Almost everyone here, and sadly even Brent Justice, has shown they have no clear computer science background to justify their blanket statements about DX12. As I stated before, an API in and of itself is not magic. It truly requires programmers who know how to utilize it efficiently to get the most out of it.

    These DX12 patched games are the absolute worst examples of the API and it is more an exercise for the developers' use of the API than it is a benefit to any of us gamers. Give development teams more time with the API and we will soon begin to see the benefits but do not write it off because of these clear practice sessions.
     
    noko, Araxie and razor1 like this.
  12. razor1

    razor1 [H]ardness Supreme

    Messages:
    7,644
    Joined:
    Jul 14, 2005
That is true; we have seen this many times with new DX versions too. It just takes time for developers to get used to the new API. The API just exposes what is there; everything else is up to the programmer and their capabilities.
     
  13. noko

    noko 2[H]4U

    Messages:
    2,277
    Joined:
    Apr 14, 2010
As the programmers settle in and learn the ropes (or new tricks), and with much more now exposed for them to work with, I expect some amazing things to happen. We just need that one spark game where the developer/programmer blows it out of the park; then eyes will be opened. If Vulkan were actually used more, Doom would be a rather good case for that. We need a DX12 version not of Doom but of that type of improvement. The biggest issue with Doom is that it provided no additional rendering or IQ enhancement, other than letting you use one to two levels higher IQ settings; so yes, better visuals for most, but nothing really new.
     
    razor1 likes this.
  14. Michaelius

    Michaelius [H]ardness Supreme

    Messages:
    4,657
    Joined:
    Sep 8, 2003
And how do you know those games wouldn't perform even better on DX11?

We already saw it with QB: a game that started as DX12 actually has much better performance in DX11 mode.
     
    razor1, Armenius and Shintai like this.
  15. Flogger23m

    Flogger23m [H]ardness Supreme

    Messages:
    7,944
    Joined:
    Jun 19, 2009
    Quantum Break was a mess and was rushed out. It was forced into DX12 when it was likely not built for it. Very big difference.
     
  16. Sith'ari

    Sith'ari Limp Gawd

    Messages:
    437
    Joined:
    Oct 13, 2013
My guess is that he means these games have smooth gameplay, in contrast to others like BF1, which has severe performance issues in DX12 mode.
I don't think he compared DX11 with DX12; he just said the performance was great with the DX12 API.
     
  17. Shintai

    Shintai 2[H]4U

    Messages:
    3,253
    Joined:
    Jul 1, 2016
That's not what the developers said. Even DICE can't make a proper DX12 implementation, from the looks of it.
     
    razor1 and Armenius like this.
  18. Armenius

    Armenius [H]ardForum Junkie

    Messages:
    9,474
    Joined:
    Jan 28, 2014
    Forza Horizon 3 is a feature level 11_0 game. Gears 4 is, too, but unlike Horizon 3 it has options to take advantage of video cards that support feature levels 12_0 and 12_1. The former has memory management issues and stutters. The latter would be a much better example. But this also reinforces the commitment developers must make to implementing DX12. It was clear early on that The Coalition was giving the PC version of Gears 4 extra love while the PC version of Horizon 3 looks like a half-hearted attempt at bringing a console game to PC. Gears 4 also had the advantage of being built on Unreal Engine 4.
     
    Flogger23m, Shintai and razor1 like this.
  19. Morphish

    Morphish n00bie

    Messages:
    33
    Joined:
    Oct 14, 2013
    I think I have a system that fits that description. FX-8350 (non-OC) with a Radeon Fury (mild OC) on a QHD, non-freesync monitor.

    Just to try it out real quick - and I recognize this is not a full or scientific test at all - I parked myself in Prague with most settings on High. Not a whole lot of action on the screen, but lots of NPCs milling about, light sources, foliage, etc... It seemed like a stable spot for some basic testing. It also was taxing enough for my system not to run at 60fps.

    DX11 - 41fps
    DX12 - 48fps

... a not-insignificant 17% improvement. I had earlier left the game in DX11 since all the reviews said DX12 actually hurt performance, but for my setup it seems to help. I switched it on for the rest of my playthrough. I wish I had time for a more in-depth test.
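The percentage quoted here is just the relative change of the two averages. A minimal sketch of the arithmetic, using the two anecdotal readings from the Prague spot test:

```python
def percent_gain(fps_old: float, fps_new: float) -> float:
    """Relative FPS change, in percent, when switching APIs."""
    return (fps_new - fps_old) / fps_old * 100.0

# Prague spot test: 41 fps (DX11) -> 48 fps (DX12)
print(round(percent_gain(41.0, 48.0)))  # -> 17
```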

    EDIT:

Also, in response to the comments made about low-end CPUs not being an issue for enthusiasts, etc.:
I find the game reviews helpful primarily for discovering which options are the most computation/RAM expensive, so that I can then work around them to get the best performance in my games. So even without an "enthusiast"-grade CPU, I benefit from the work done here.

    It may be the case, however, that low-level API implementations fall into multiple categories.

    1. It may be an across-the-board improvement (Doom with Vulkan)
    2. It may do nothing except give some programming experience to the developers (BF1?)
    3. It may benefit those without Skylake or Broadwell-E platforms who are running into CPU/driver bottlenecks on their systems.

I suspect, given my little test (and some anecdotal info from other forums), that Deus Ex: MD is of this third sort. In that case, testing on a 6700K at 4.7 GHz simply won't reveal the benefit of switching to DX12. I recognize that a bottom line of "shows no difference on an OC'd current-gen i7, but will be great on an APU, i3, etc." is perhaps not fitting for [H], but it may be warranted in cases of DX12 implementations of that third sort.
     
    Last edited: Nov 21, 2016
    JustReason and pendragon1 like this.
  20. atmartens

    atmartens n00bie

    Messages:
    44
    Joined:
    Jan 25, 2012
    I'm surprised that DX12 performs worse because I thought that developers were used to making lots of small optimizations for the consoles. Does none of that expertise carry over?
     
  21. Shintai

    Shintai 2[H]4U

    Messages:
    3,253
    Joined:
    Jul 1, 2016
Making optimizations for one or maybe two fixed setups with a known GPU, CPU, memory, and OS is somewhat easy. Making those optimizations for random combinations from a much wider range of options is pretty much mission impossible.
     
    Armenius and Nima84 like this.
  22. Morphish

    Morphish n00bie

    Messages:
    33
    Joined:
    Oct 14, 2013
Yet id Tech 6 does so; Doom (2016) almost always runs better with Vulkan.
     
  23. trajan2448

    trajan2448 [H]Lite

    Messages:
    79
    Joined:
    Feb 22, 2013
Exactly. All the hype about DX12 is, in many cases, much ado about less than zero.
     
  24. bl4C3y3

    bl4C3y3 n00bie

    Messages:
    16
    Joined:
    Nov 18, 2016
yes, the benchmark can mislead, but it might also show it is possible to get a higher framerate in DX12 than DX11 (maybe specially tailored for DX12?) ... i still find this interesting, so i have done some tests to give feedback (used MSI AB to map general fps+frametimes, and used PresentMon for the final data; each test has been done 3x and then averaged ... if somebody is interested in the graphs, just let me know)

    all tests DX11 vs DX12 in preset "high" patch 616.0
AMD 290 (16.11.3), i5 3570K @ 4.1, 16GB RAM, SSD

1. tested if the results from the "in-game benchmark" can be trusted:
    - in-game benchmark
    DX11 min 43.7 | avg 55.6
    DX12 min 52.4 | avg 64.3 > min 19.91% | avg 15.65% faster
    - PresentMon
    DX11 min 42.7 | avg 57.5
    DX12 min 51.1 | avg 66.0 > min 19.66% | avg 14.92% faster

    > reporting seems to be ok, so the results are not fake, but it still might be specially prepared for DX12


    2. tested "breach" game mode, first level (no enemies):

    - PresentMon
    DX11 min 73.3 | avg 96.3
    DX12 min 75.9 | avg 98.1 > min 3.66% | avg 1.80% faster

    > ok, completely different now, not really any tangible benefits from DX12 (but at the same time it doesn't seem to be slower)


    3. now, the most important, some "real in-game" test ... also done in the first level, where you start the game in Dubai:
    walking from the start point to the room with the elevator door, also no enemies (one run is about 110-115 sec)
    - PresentMon
    DX11 min 52.8 | avg 72.8
    DX12 min 58.8 | avg 80.8 > min 11.42% | avg 10.97% faster

    > so it seems i get a consistent +10% fps with DX12 ? ... nice
    (i have not started playing the game yet, i'm waiting to play it in the best possible way, and hope to do that in stereo 3D ... so for now, i can't go much further for another/better part to test :D)

i also made a comparison of the frametimes (ms); nothing special here either, but DX12 has some longer frametimes in the last 0.01% (edit: average number of frames in one run +/- 8000-8500):

    last % 10% | 1% | 0.1% | 0.01% | max
    DX11 17.61 | 20.63 | 23.63 | 27.59 | 40.37
    DX12 14.76 | 17.01 | 20.48 | 31.97 | 43.26
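The percentile rows above can be derived from a raw per-frame log (e.g. PresentMon output). A minimal sketch with synthetic frametimes rather than the actual log, and assuming the "last X%" columns mean the average of the worst X% of frames (the exact method isn't stated in the post):

```python
def fps_stats(frametimes_ms):
    """(min fps, avg fps) derived from per-frame times in milliseconds."""
    avg_ms = sum(frametimes_ms) / len(frametimes_ms)
    return 1000.0 / max(frametimes_ms), 1000.0 / avg_ms

def worst_percentile_ms(frametimes_ms, pct):
    """Average of the worst `pct` percent of frametimes (higher ms = worse)."""
    n = max(1, int(len(frametimes_ms) * pct / 100.0))
    worst = sorted(frametimes_ms, reverse=True)[:n]
    return sum(worst) / n

# Synthetic run: mostly 10 ms frames with two spikes
run = [10.0] * 98 + [20.0, 40.0]
print(worst_percentile_ms(run, 1))   # -> 40.0 (the single worst frame)
print(worst_percentile_ms(run, 10))  # -> 14.0 (mean of the 10 worst frames)
```

With ~8000-8500 frames per run, the 0.01% column ends up dominated by one or two frames, which is why it is the noisiest number in the table.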
     
    Last edited: Nov 21, 2016
    noko and pendragon1 like this.
  25. Singularity_Survivor

    Singularity_Survivor Limp Gawd

    Messages:
    188
    Joined:
    Apr 7, 2016
    I mentioned the VRAM difference between a 1060 (6GB) and a 480 (8GB) and said all things being equal it doesn't make sense to buy the 1060.

Brent responded to my comment with the quote listed above. My question to Brent is: do you still think it is a non-factor?
     
    Last edited: Nov 22, 2016
  26. noko

    noko 2[H]4U

    Messages:
    2,277
    Joined:
    Apr 14, 2010
Very interesting, to say the least. So for some, DX12 could make a significant difference, while on HardOCP's system it did not, especially with Nvidia, though AMD lost some as well. Could it also be that in different parts of the game DX11 does better, and vice versa?
     
  27. Malurt

    Malurt n00bie

    Messages:
    3
    Joined:
    Jul 12, 2016
    Yesterday I downloaded the latest MSI Afterburner and Rivatuner Statistics Server from Guru3d - ran it with DX12 mode in this game.

I did not once see a single drop below 45fps on my RX480. Settings set to the ultra preset's max, 16x AF, no AA, no motion blur or chromatic aberration, at 1440p. In most conversations the framerate was at 60; in gameplay 45-60 in the Prague city area at nighttime.

RX 480, OC'd to 2150MHz on mem, 1370 on core. i5 3570K @ 4.3, 16GB 1866MHz DDR3... Did not try the DX11 switch because I got engrossed by the game.
     
  28. bl4C3y3

    bl4C3y3 n00bie

    Messages:
    16
    Joined:
    Nov 18, 2016
    no, no, no ... you should stop playing the game and start testing DX11 ! :D
    it might even run better :)

    but interesting and nice DX12 results ... curious how DX11 will compare
     
  29. CSI_PC

    CSI_PC [H]ard|Gawd

    Messages:
    1,876
    Joined:
    Apr 3, 2016
The other aspect is what the benchmark does in terms of the mechanisms used (aspects could be more synthetic and never used in-game) and, critically, how it decides to capture the frames (is it monitored at the internal engine level, with decisions about what counts, or more at the driver level?) to represent performance and behaviour. This is why I and some review sites stress using an independent 3rd-party utility such as PresentMon, along with in-game play.
The problem is that PresentMon is not very user friendly for most gamers; it's not really designed for them.
Cheers
     
  30. bl4C3y3

    bl4C3y3 n00bie

    Messages:
    16
    Joined:
    Nov 18, 2016
yes, this is what i wanted to find out: if the in-game benchmark shows different results than other reporting tools ... but that seems not to be the case:
when comparing the results of the in-game benchmark vs MSI AfterBurner & PresentMon, the results are similar for DX11 and DX12 (preset "high")

    - in-game benchmark
    DX11 min 43.7 | avg 55.6
    DX12 min 52.4 | avg 64.3
    - MSI AB
    DX11 min 44.3 | avg 55.8 > min +1.45% | avg +0.33%
    DX12 min 53.2 | avg 64.4 > min +1.59% | avg +0.14%
    - PresentMon
    DX11 min 42.7 | avg 57.5 > min -2.19% | avg +3.35%
    DX12 min 51.1 | avg 66.0 > min -2.40% | avg +2.70%

> the delta seems to be max +/- 3%, where PresentMon reports a +3.35% higher average than the in-game benchmark, and -2.40% in the case of min fps
... which also seems logical, because PresentMon analyzes each frame, whereas MSI AB samples data at fixed intervals?


    right, so i actually like the fact that there is an in-game benchmark ... more games should have it, that way even casual users can play with some settings and find out what effect they have ... and it's up to the tech/game press and reviewers to find out if these in-game benchmarks are 'honest' and report it if they are not :)

    MSI AfterBurner can easily record a nice graph of frametime and fps, which is very useful to get a visual preview of the raw numbers you get with PresentMon when creating Excel graphs ...
but to be fair, from the moment you start with Excel, there is very little difference between working with the MSI AfterBurner or the PresentMon generated file; you just have to choose the right columns and go from there
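Since the post compares per-frame capture (PresentMon) against interval sampling (MSI AB), here is a minimal sketch of pulling min/avg fps out of a PresentMon-style CSV and computing the percentage deltas used throughout this thread. The inline CSV is made up (real logs have many more columns), and the `MsBetweenPresents` column name is assumed from PresentMon's CSV output of that era:

```python
import csv
import io

def fps_from_presentmon(csv_text):
    """(min fps, avg fps) from a PresentMon-style CSV whose
    MsBetweenPresents column holds per-frame times in milliseconds."""
    ms = [float(row["MsBetweenPresents"])
          for row in csv.DictReader(io.StringIO(csv_text))]
    return 1000.0 / max(ms), 1000.0 * len(ms) / sum(ms)

def delta_pct(base, other):
    """Percent change of `other` relative to `base` (e.g. DX12 vs DX11)."""
    return round((other - base) / base * 100.0, 2)

# The in-game benchmark averages quoted above: 55.6 (DX11) vs 64.3 (DX12)
print(delta_pct(55.6, 64.3))  # -> 15.65, the "avg 15.65% faster" figure
```

Averaging the frametimes and inverting gives a different (and more honest) average fps than averaging instantaneous fps values, which is one reason per-frame tools and interval samplers disagree slightly.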
     
    Last edited: Nov 23, 2016
  31. gathagan

    gathagan Gawd

    Messages:
    681
    Joined:
    Oct 30, 2004
    Many of the comments on this thread bring out what concerns me most about DX12.
    When the burden of wringing the most out of DX was on the GPU companies, I feel as if there was a higher motivation to do so. The premise of brand X being able to exploit new DX features that brand Y could not translated directly into card sales.

    Game developers might have the best intentions going in to a project, but pressure to produce and get the product to market ends up providing less motivation to learn the intricacies needed to exploit DX if the burden to program for a wider range of GPU variables falls on their heads, as opposed to the game just making a call to the driver.

    This is exacerbated by the already-existing mentality of producing foremost for consoles and their lower graphics requirements.

    All that said, I'm not a programmer, so I might be completely full of it...
     
  32. thesmokingman

    thesmokingman 2[H]4U

    Messages:
    3,523
    Joined:
    Nov 22, 2008
    None of us are...

I think it will take time, just like it took for devs with consoles. The early game releases during the lifespan of a console were generally subpar, but as the console aged and devs had time to become proficient, the releases toward the end of the cycle became highly optimized and were great examples of this synergy.
     
  33. CSI_PC

    CSI_PC [H]ard|Gawd

    Messages:
    1,876
    Joined:
    Apr 3, 2016
Ah, cool; then BF1 looks to be pretty good from a benchmark-and-game correlation standpoint.
However, there are quite a few games that give different results between their internal benchmark and the real game; sometimes a manufacturer is slower in the benchmark but, ironically, faster in the actual game. This has happened.
Unfortunately, that makes such benchmarks unreliable, except for a gamer quickly testing settings with their card to find an optimal setup; and even then they are only a rough guideline in those games where benchmark and real game do not correlate well.

    Cheers
     
  34. bl4C3y3

    bl4C3y3 n00bie

    Messages:
    16
    Joined:
    Nov 18, 2016
yes, the option "perfoverlay.frametimelogenable" from BF4 is still there in BF1 :) ... for me DX11 is ok, but DX12 seems to show only CPU frames
could be interesting to see if the frametimes are generated in the same way as PresentMon's

back then, somebody made a nice BF4 csv tool that simply read the log file to show some numerical analysis and could also produce graphs ... btw, it still seems to work with the BF1 csv
(you could probably modify the PresentMon output to read it with this tool ... but then you lose the fun of playing with excel ;) )
     
    CSI_PC likes this.
  35. robble

    robble [H]ardForum Junkie

    Messages:
    8,229
    Joined:
    Jun 6, 2004
While quad SLI is just "ignant", I would actually like to see 4K benchmarks on 1080 SLI. I haven't found anything I can't lock at 60fps 4K ultra yet.
I'm not saying there aren't games out there that won't hold a solid 60; just that I haven't played them yet.
     
  36. Palladium@SG

    Palladium@SG Limp Gawd

    Messages:
    283
    Joined:
    Feb 8, 2015
It's just simple connect-the-dots: optimization is low on the list of dev priorities, so pushing the burden of LLAPIs onto the devs instead of the GPU vendor isn't exactly a good idea.
     
    Armenius and Shintai like this.
  37. noko

    noko 2[H]4U

    Messages:
    2,277
    Joined:
    Apr 14, 2010
For a developer, an LLAPI should make it a lot easier to troubleshoot code and fine-tune it for better performance, versus a black box that causes your code to crash without you knowing whether your code or the driver is at fault. In other words, once experience and exposure mature, DX12 will most likely be faster to work with than a more black-box, driver-optimized API whose drivers change all the time.
     
    JustReason likes this.
  38. Shintai

    Shintai 2[H]4U

    Messages:
    3,253
    Joined:
    Jul 1, 2016
Exactly. It's all about money. Same reason why CPU optimizations are often badly lacking.

People have to understand that they are pretty much asking developers to spend a lot more time and an incredible amount of additional resources, for free, to make this happen. Not to mention the issues ahead with missing optimizations and paths for future graphics cards. Intel's DX12 IGP is supported in only one or two cases for the same reason, 3DMark being one of them.
     
    Armenius likes this.
  39. Shintai

    Shintai 2[H]4U

    Messages:
    3,253
    Joined:
    Jul 1, 2016
An LLAPI is always much harder; it's never going to be easier. And the people you need on the team have to be much better than average. Top money, top crop. And then you have to add a lot more time as well, not to mention future support.

DX12 will never be cheaper, less time-consuming, or easier than DX11.

I also doubt that in a neutral setting it will beat DX11 in performance. The only place DX12 will ever excel, should it ever happen, is when they truly do something DX11 can't. But we haven't seen any of this, and we are not going to anytime soon. And by then we will have DX13 or whatever.

Even DICE can't make a good DX12 implementation. The reality is there.
     
    Armenius likes this.
  40. noko

    noko 2[H]4U

    Messages:
    2,277
    Joined:
    Apr 14, 2010
That is why Nvidia and AMD will need to support developers more for these optimizations. I am sure Nvidia is very active in this (they have the money); not sure about AMD. It is not as if developers weren't making optimizations anyway with DX11 and other APIs; they have had different paths for different vendors at times. The problem comes when too many sufficiently different hardware designs or platforms need to be specifically programmed for. If AMD's GCN archs from 1.1 up are virtually the same from a programming standpoint, then it should not take much effort there. Nvidia Maxwell and Pascal? So far it looks like Pascal can do DX12 just fine, but going back to Kepler and Fermi may be wasted effort anyway for LLAPIs at this stage.

I would like to know what some developers think of DX12 and Vulkan; everything I've heard seems to be more positive than negative.
     
    Last edited: Nov 26, 2016