Deus Ex: Mankind Divided

Yeah, something like that is doable, but again, it's something that is explicit and planned for.
 
I'm not surprised that the DX12 version is going to be a bit late. You see, people think the DX11 version came out only two weeks earlier and ask 'what can be done in 2 weeks lol', when actually the DX11 version went gold over a month ago, and the DX12 patch will be committed nearly two months after that. Not to mention, DX12 takes more effort to code for, as there is a lot less hand-holding if you want to truly take advantage of the low-level architecture.
 
The game was supposed to be released with DX12 working on launch day.

They claim it is in no fit state for release, so they will delay it two weeks. I ain't surprised it's delayed either.

A guy on NeoGAF with a review key says the game runs fine on a GTX 780, so I'm assuming this won't be one of those Kepler disasters.
 
That seems hideously CPU bottlenecked ahahaha
 
If somebody wants to translate the article, feel free. They might be using 8x MSAA or something ridiculous.
Their settings window is in Russian, so good luck.
 
It's quite clear that DX12 is using a lot more CPU than DX11 for both Nvidia and AMD. At 1080p AMD loses around 20% using DX12; at 1440p it's around 10%.

You'd better get the new 7700K Kaby Lake with its 4.5 GHz turbo for this title.
 
Not necessarily; it could just be the GPU-limited code being less efficient than in DX11.

The solution is not to buy this game until it's fixed, and to stop saying '2016 is the year of DX12'.
 
Graphics quality and framerate: [video]
Not sure if I missed it: where are the specs for the machine running this demo?

Never mind, at the very end of the video: 6700K and 1070. Damn, gonna need a beast for Ultra at high res. The video didn't say what resolution it was running.
 
Well, the preemption latency is basically the time between giving the command for something to halt and it effectively halting and making space for (potentially) a new kernel; you're describing a context switch. I think what Anarchist was saying, in the context of this DOOM discussion, is that preemption should help, as you want to interrupt running shaders at vblank and have them work on the frame about to be presented, a little bit like ATW.
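
A toy model of that distinction, with made-up numbers (this is just my illustration, nothing from DOOM or the drivers): coarse-grained preemption can only halt at a draw-call boundary, so the halt latency depends on whatever draw happens to be in flight, while fine-grained preemption pays only a small fixed context-save cost.

```cpp
// Toy model: how preemption granularity affects the latency between
// requesting a halt and the time-critical kernel actually starting.
// All numbers are invented for illustration.
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical in-flight draw call costs, in milliseconds.
    std::vector<double> draws = {2.0, 5.5, 1.2, 7.8, 3.0};
    double haltRequestAt = 3.0; // ms into the frame when the vblank work arrives

    // Coarse granularity: the GPU must finish the current draw first.
    double t = 0.0, coarseLatency = 0.0;
    for (double d : draws) {
        if (t + d > haltRequestAt) {          // the halt lands inside this draw
            coarseLatency = (t + d) - haltRequestAt;
            break;
        }
        t += d;
    }

    // Fine granularity (e.g. pixel/instruction level): near-immediate halt,
    // modeled as a small fixed cost for saving context.
    double fineLatency = 0.1;

    std::printf("coarse preemption latency: %.1f ms\n", coarseLatency);
    std::printf("fine preemption latency:   %.1f ms\n", fineLatency);
    return 0;
}
```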

Preemption ensures your time-critical kernel will be done in time, but its effects on the runtime of other kernels are really unpredictable (without looking at a specific case on specific hardware).
In this case the compute, or at least the decoupled part, would be a time-sensitive task. It should be relatively short, a few ms, and intended to fill the blanks when they occur. Yes, framerate would be inversely correlated with refresh rate. Framerate could in effect be fixed to whatever rate was desired: 60 Hz in most cases, as that's all that can be displayed.
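
To put rough numbers on "a few ms filling the blanks" (my figures, purely illustrative, not from the game): at a fixed 60 Hz there are about 16.7 ms between vblanks, so the kernel only fits if the frame's main work leaves that much headroom.

```cpp
// Back-of-the-envelope check with assumed numbers: does a short
// time-sensitive kernel fit between the end of the frame's main work
// and the next vblank at a fixed 60 Hz refresh?
#include <cstdio>

int main() {
    const double refreshInterval = 1000.0 / 60.0; // ~16.7 ms per vblank
    const double mainFrameWork   = 13.5;          // ms of graphics work (assumed)
    const double kernelCost      = 2.5;           // ms, the decoupled compute task

    double blank = refreshInterval - mainFrameWork; // idle time before vblank
    if (kernelCost <= blank) {
        std::printf("kernel fits the blank (%.1f ms spare)\n", blank - kernelCost);
    } else {
        // Without priority/preemption the kernel would slip past vblank,
        // effectively pushing the frame to the next refresh.
        std::printf("kernel misses vblank by %.1f ms\n", kernelCost - blank);
    }
    return 0;
}
```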

Yes, it's extremely unpredictable, in that if you execute it without understanding the needs of all the queues and whatever else they need, you can't effectively predict what you might miss. I don't think DOOM uses this at all, because they were fairly specific about what they use async for, and they never talked about preemption. Even Oxide stated they haven't used preemption for anything as of yet but are "looking into it".
They wouldn't have used preemption explicitly. On GCN, including the consoles, they could tag the work with a high priority and it would filter through in a timely enough manner. For Nvidia, the driver would likely want to attempt preemption so a time-sensitive job of a couple of ms doesn't get stuck at the end of a frame. Alternatively, the optimization could be removed and each frame post-processed in the traditional sense. This would be async shading, which, as we've discussed, is AMD's approach as opposed to Nvidia's. "Decoupled" was the word the dev used to describe it, so it isn't the typical async compute job running concurrently; it's a small time-sensitive task getting injected into the middle of a frame. I'm still of the mindset that this is what causes a significant portion of AMD's advantage in DOOM. For a game coming from console it would seem a no-brainer to have implemented it there, and Deus Ex, along with other console ports, could mimic the behavior to increase performance.
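
For reference, the "tag it with a high priority" part has a direct expression in D3D12. A minimal sketch (my illustration, not anything from the game's code): a dedicated compute queue created with D3D12_COMMAND_QUEUE_PRIORITY_HIGH, which the driver is then free to service via GCN's prioritized hardware queues or, on other hardware, via preemption.

```cpp
// Minimal sketch: a high-priority compute queue in D3D12, the kind of
// queue a short time-sensitive kernel could be submitted to. Windows-only;
// this is an assumption about how one could express it, not the game's code.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Default adapter, minimum feature level for D3D12.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // A dedicated compute queue flagged high priority. The time-sensitive
    // "decoupled" work discussed above would be submitted here so it can
    // filter through ahead of the graphics queue's long-running frame work.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH;

    ComPtr<ID3D12CommandQueue> computeQueue;
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue)))) {
        std::printf("Failed to create a high-priority compute queue.\n");
        return 1;
    }
    std::printf("High-priority compute queue created.\n");
    return 0;
}
```

Whether the driver honors that flag with a truly concurrent hardware queue or with preemption is exactly the implementation detail being debated here.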

Would they "use it" though? I imagine the driver is in charge, preempting only when some kind of priority flag is present on a queued grid.
This would be my thinking. They don't explicitly use it, but it could be the optimized path Nvidia's driver chooses given the scenario. It would of course be contingent on performance. It may be more practical to always post-process, as opposed to attempting preemption as an optimization, or to just tack it onto the end of a frame, where you get the performance boost but encounter really strange timing issues. This would likely appear as the "vsync always on" issue reviewers were trying to figure out initially.

Yeah, something like that is doable, but again, it's something that is explicit and planned for.
It doesn't have to be explicit. Drivers can ultimately do just about whatever they want with commands. Working with the dev, they could know what behavior to look for and proceed accordingly, the final path being whatever they decide works best.

Wow, they weren't kidding about the DX12 path not being ready. Will two weeks be enough, lol? Watch it get dragged out...
Not sure the issue is the DX12 path here. It simply looks like they're hammering the CPU under most conditions.
 
DOOM runs exceedingly well without async though, in Vulkan, on GCN.

As for Deus Ex, it's still running worse in DX12 than in DX11.
 
I am honestly excited for this game, but I will wait until any and all kinks are worked out; I do not like to be a beta tester for software. Hardware I'll beta test all damn day though, lol.
 
It doesn't have to be explicit. Drivers can ultimately do just about whatever they want with commands. Working with the dev, they could know what behavior to look for and proceed accordingly, the final path being whatever they decide works best.


Not sure the issue is the DX12 path here. It simply looks like they're hammering the CPU under most conditions.

That would be explicit: if you know what is going on and do something about it, you are explicitly telling it to do something :)
 
So would this be another game where one would have to wait another gen or two before they can max it out?

Not sure if that's a good or a bad thing...
 
Performance with High settings doesn't look that bad, and I can barely tell a difference from Ultra.

Is Ultra just running SSAA or something? That would explain the FPS basically being halved.
 
Looking at what's in the video, higher quality lighting is part of it; there might be even higher resolution texturing on the Ultra setting, too.
 
I just read the Ars review on it, and he mentioned something about 40 to 50 FPS with most of the eye candy on, using a 980 Ti and a rather beefy CPU. I'm really hoping this game meets my expectations (though I am not pre-ordering it).
 
I am already set on buying it, so I might as well get it discounted while it's available on GMG for 25% off.
 
It's clearly shader-intensive, because the Fury X lands roughly in line with its throughput. While the performance sucks overall, at least it's pretty much even stevens between AMD and Nvidia.

My guess would be lighting effects or stupidly high-resolution shadowing. Is this game using GI?
 
Kind of hoping "Cloth Physics" is GPU-accelerated. If it's running CPU-side, that could start to explain part of the load.
 
That is not looking good for my 1080 and 4K monitor. Even with G-Sync, at those frame rates I need more power.
 
The 480 is faster than the Fury X at 4K in those benchmarks. Looks like this game needs some more work.

Probably needs... a driver from AMD and Nvidia, like all releases nowadays. :)

DX12 is dead in this game until 9/5
 