DX12 v DX11: The Plot Chickenz! HITMAN Lead Dev: DX12 Gains Possible After Ditching DX11


Zion Halcyon






HITMAN Lead Dev: DX12 Gains Will Take Time, But They’re Possible After Ditching DX11

I think it will take a bit of time, and the drivers & games need to mature and do the right things. Just reaching parity with DX11 is a lot of work. 50% performance from CPU is possible, but it depends a lot on your game, the driver, and how well they work together. Improving performance by 20% when GPU bound will be very hard, especially when you have a DX11 driver team trying to improve performance on that platform as well. It’s worth mentioning we did only a straight port; once we start using some of the new features of DX12, it will open up a lot of new possibilities – and then the gains will definitely be possible. We probably won’t start on those features until we can ditch DX11, since a lot of them require fundamental changes to our render code.

Read more: http://wccftech.com/hitman-lead-dev-dx12-gains-time-ditching-dx11/#ixzz45ukyOCnt
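To give a concrete flavor of why "just reaching parity with DX11 is a lot of work": in DX11 the driver does most of the state tracking and submission behind the scenes, while DX12 makes the engine do it explicitly. This is only a minimal sketch of the DX12 side of a frame, assuming the device, swap chain, render-target view and fence already exist; pipeline-state setup and error handling are omitted.

Code:
#include <d3d12.h>

// Hypothetical sketch: one frame of explicit DX12 submission.
// In DX11 the driver tracks hazards and batches submissions for you;
// in DX12 the app records, transitions and fences everything itself.
void RecordAndSubmitFrame(ID3D12CommandQueue* queue,
                          ID3D12CommandAllocator* allocator,
                          ID3D12GraphicsCommandList* cmdList,
                          ID3D12Resource* backBuffer,
                          D3D12_CPU_DESCRIPTOR_HANDLE rtv,
                          ID3D12Fence* fence, UINT64& fenceValue)
{
    allocator->Reset();                       // reuse memory from a finished frame
    cmdList->Reset(allocator, nullptr);       // pipeline state omitted for brevity

    // Explicit hazard tracking: the app, not the driver, declares transitions.
    D3D12_RESOURCE_BARRIER toRT = {};
    toRT.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    toRT.Transition.pResource   = backBuffer;
    toRT.Transition.StateBefore = D3D12_RESOURCE_STATE_PRESENT;
    toRT.Transition.StateAfter  = D3D12_RESOURCE_STATE_RENDER_TARGET;
    toRT.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    cmdList->ResourceBarrier(1, &toRT);

    const float clear[4] = { 0.f, 0.f, 0.f, 1.f };
    cmdList->OMSetRenderTargets(1, &rtv, FALSE, nullptr);
    cmdList->ClearRenderTargetView(rtv, clear, 0, nullptr);
    // ... draws would go here ...

    D3D12_RESOURCE_BARRIER toPresent = toRT;
    toPresent.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    toPresent.Transition.StateAfter  = D3D12_RESOURCE_STATE_PRESENT;
    cmdList->ResourceBarrier(1, &toPresent);

    cmdList->Close();
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);
    queue->Signal(fence, ++fenceValue);       // explicit CPU/GPU synchronization
}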

 
I'm sure those gains are a lot easier to obtain when AMD offers to do the work for you.

The only reason Microsoft even bothered with DX12 is so they can exploit it as a cheap and sloppy way to move games between Xbox One <-> PC. When they start giving devs the resources they need to make proper DX12 games or the devs themselves start showing they care to do so, then we'll get those DX12 improvements.
 
Unlike all those developers plugging in GameWorks effects rather than programming it themselves. Right?
 
Well, it's the same thing and not the same thing. DX12 by itself isn't like GameWorks, since GameWorks is just additional middleware built on top of the API. Things like async shaders, yeah, that is like GameWorks, because it's outside the API spec and, one could say, built off what the API gives you the ability to do.

But for the rest of what DX12 offers, it just takes time for programmers to change the way they do things; that's the norm.
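For what it's worth, the "outside the API spec" point can be made concrete: DX12 itself only exposes separate command queue types (direct, compute, copy), and whether work on a compute queue actually overlaps with graphics is left to the hardware and driver, which is why it behaves so differently across architectures. A rough sketch, assuming a device already exists:

Code:
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: DX12 lets an app create a dedicated compute queue alongside the
// graphics (direct) queue. The API only promises the queues are independent;
// whether the GPU actually runs them concurrently ("async compute") depends
// on the architecture and driver.
bool CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& gfxQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    if (FAILED(device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue))))
        return false;

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    if (FAILED(device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue))))
        return false;

    // Work submitted to computeQueue is ordered against gfxQueue with fences
    // (ID3D12Fence Signal/Wait); nothing in the spec requires the hardware to
    // overlap the two, which is the crux of the async debate.
    return true;
}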
 
AMD and their Gaming Evolved studios are doing their best to make good DX12 games. Meanwhile Microsoft is doing their best to make horrible DX12 games.

So now we sit and watch these two groups working with the same API completely at odds with each other. AMD pushes out talking points about how great DX12 is and there's another article right next to it about the most recent shitty DX12 port Microsoft released.

And Nvidia, the biggest player in the PC gaming market, completely ignores the whole thing.

It's a circus.
 
Nvidia is making self-driving cars. They have already figured out a way out of the shrinking PC market.
 
AMD and their Gaming Evolved studios are doing their best to make good DX12 games. Meanwhile Microsoft is doing their best to make horrible DX12 games.

So now we sit and watch these two groups working with the same API completely at odds with each other. AMD pushes out talking points about how great DX12 is and there's another article right next to it about the most recent shitty DX12 port Microsoft released.

And Nvidia, the biggest player in the PC gaming market, completely ignores the whole thing.

It's a circus.


Well I agree with a lot of that, but remember the mess of DX11 games when DX11 was released?
 
Man, I know that, but how can you relate that to saying Nvidia will go out of the PC GPU market? That's just insane, my friend. With over 80% market share there's a lot of money to be made here.
Not out, just declining sales and exploring other options. If AMD gets a win this round it's possible the market really declines as all those people don't need new overpriced GPUs every 2 years. ;)
 
They're basically diversifying aka spreading out their risk over different industries. In this case, computer graphics is their "cash cow," i.e. low growth, large market share. Abandoning their cash cow completely would likely be disastrous.
 
Well, it's the same thing and not the same thing. DX12 by itself isn't like GameWorks, since GameWorks is just additional middleware built on top of the API. Things like async shaders, yeah, that is like GameWorks, because it's outside the API spec and, one could say, built off what the API gives you the ability to do.

But for the rest of what DX12 offers, it just takes time for programmers to change the way they do things; that's the norm.
Only GameWorks adds zero performance gain and quite a big performance hit, one that would be smaller if the devs optimized their own shaders.
 
AMD and their Gaming Evolved studios are doing their best to make good DX12 games. Meanwhile Microsoft is doing their best to make horrible DX12 games.
So now we sit and watch these two groups working with the same API completely at odds with each other. AMD pushes out talking points about how great DX12 is and there's another article right next to it about the most recent shitty DX12 port Microsoft released.
And Nvidia, the biggest player in the PC gaming market, completely ignores the whole thing.
It's a circus.

AMD rules!!!

Microsoft making shitty software, what a total shocker ;)
Nvidia in it for the money, another jaw-dropping fact...
 
Only GameWorks adds zero performance gain and quite a big performance hit, one that would be smaller if the devs optimized their own shaders.

The only aspect of GameWorks that does that is the tessellated effects, and if AMD can't fix their architecture for better vertex throughput, that will keep getting hammered on until who knows when. Just as AMD is hammering on async shader performance now: same shit, different companies.
 
Are you forgetting about god rays (VisualFX) in some games?

And AMD's tessellation performance will be improved with the new changes in the GCN architecture (geometry processor).

The difference is that asynchronous shaders improve performance by using the compute units in parallel, while over-tessellation is used to cripple the competition. Since Crysis 2 in 2011 we've had tessellation going everywhere, with exaggerated tessellation levels that bring no visual improvement.

[Slide: "The AMD GCN Architecture: A Crash Course" by Layla Mah]
 
God rays use tessellation, and the feathering/softening effect is better than without tessellation.

Ah, didn't see the edit.

AMD did try to improve their tessellation with Tonga and Fiji, but it was meaningless because the issue isn't tessellation; the issue is that their chips have crap polygon throughput. Abhorrently bad, worse than Kepler and possibly even Fermi.
 
Are you forgetting about god rays (VisualFX) in some games?

And AMD's tessellation performance will be improved with the new changes in the GCN architecture (geometry processor).

The difference is that asynchronous shaders improve performance by using the compute units in parallel, while over-tessellation is used to cripple the competition. Since Crysis 2 in 2011 we've had tessellation going everywhere, with exaggerated tessellation levels that bring no visual improvement.

[Slide: "The AMD GCN Architecture: A Crash Course" by Layla Mah]

Right, that's certainly an impartial source with no vested interest in tessellation levels you're quoting there ;)
 
Hmm, honestly, people call me crazy, but I only see Nvidia getting thrown out and losing mindshare if the 800-pound gorilla in the corner gets phosphorene chips significantly earlier than the other foundries, and they really do end up being over 10x faster.

The 800-pound gorilla is Intel, of course. Sure, we all joke about them, but in that illusory world, if they really get that breakthrough they could potentially brute-force everything. Feel free to keep calling me crazy, but that is just what I see as a possible future. Of course, Intel's greed may make them price themselves stupidly higher than they should, even if everything goes their way in such a scenario.
 
Yeah, well, Intel is Intel. One can only hope the other foundries can catch them, or that Intel messes up a fab process so the others can catch up...
 
Because you prefer the source that pushes exaggerated levels of tessellation, which also reduces performance and doesn't deliver any improvement in graphics?
Well, when they can't fix their polygon throughput for five generations, it's bound to be exploited. Come on, they had tessellation-based hardware well before nV, and the Xbox 360 had dedicated tessellation support prior to nV's DX11 cards. Why haven't they fixed their polygon throughput in all this time? They must have known it had issues.
 
GCN 1.2 leveled out the tessellation performance, but those games still use a huge amount of tessellation. Many people can reduce it via the drivers and notice a higher framerate with almost no loss in image quality.

Now, how is it that AMD manages to improve their performance in Gaming Evolved titles without crippling the competition, while Nvidia just uses their "tech", which runs on both brands, to reduce the performance of the competitor?
 
No they didn't. Take a look at Tech Report's polygon throughput numbers: it has NOTHING to do with tessellation factors, it has everything to do with polygon throughput.

When a product has a glaring weakness you should expect it to be exploited; that is business. Just as AMD is doing now with async shader performance, which is outside the DX12 spec. Nothing wrong with that at all. Unfortunately for AMD, they didn't factor in that async shaders affect their own GPU lineups differently based on generational differences in architecture.

You tell me they don't do the same shit as nV does: how the F does Hitman run better on AMD hardware in DX11 than on nV hardware? Have we ever seen that happen in other DX11 games? Is this becoming a habit, a 20% advantage for AMD-sponsored titles?

You can try this holier-than-thou shit all day long, but there are examples of AMD/ATi doing the same crap in the name of open source in the past, the present, and surely the future. This is business. Exploit everything you can. If you don't, you end up losing ground that should have been yours to begin with.
 
Wow! You seem to be taking this pretty personally. The difference here is that AMD doesn't seem to be out to deliberately sabotage Nvidia's performance. The same can't be said about Nvidia, however.
 

He does. He always seems to leap to team green's defense.
 
Wow! You seem to be taking this pretty personally. The difference here is that AMD doesn't seem to be out to deliberately sabotage Nvidia's performance. The same can't be said about Nvidia, however.


What, you don't like facts?

Same to you, Zion Halcyon; you seem to disregard facts when they don't support what you like. Why is that? How the hell don't you know why a % difference is more meaningful than raw FPS data? I saw your posts on that and let it go, but man, don't ever do that again; it was like a 4th grader arguing with their teacher about why percentages don't matter.

It's like you guys can't transfer something you learned in one conversation over to another.

How many times are you going to drop your pants when you post?

What, you don't like it when nV has GameWorks titles that abuse tessellation, but it's OK for AMD-sponsored titles that look like shit but run 20% better on AMD products for no apparent reason in DX11? We aren't even talking about async here. Async done right, remember? Fiji is losing performance with async, and the game is crashing everywhere! What, Gaming Devolved? Would that be analogous to GimpWorks?

You guys talk a lot of shit, but when it comes down to it, that's all you've got.
 
Geez, is it that time of the month? Calm down before you burst something.

How about you tell us in which titles AMD performs 20% better than Nvidia in DX11, and why that's such a crime? As long as AMD isn't sabotaging Nvidia's performance, I don't see the problem.
 
There is nothing wrong with that, other than the fact it's optimized for AMD cards. And both companies do the same thing. I was pretty specific about it, wasn't I?
No they didn't. Take a look at Tech Report's polygon throughput numbers: it has NOTHING to do with tessellation factors, it has everything to do with polygon throughput.

When a product has a glaring weakness you should expect it to be exploited; that is business. Just as AMD is doing now with async shader performance, which is outside the DX12 spec. Nothing wrong with that at all. Unfortunately for AMD, they didn't factor in that async shaders affect their own GPU lineups differently based on generational differences in architecture.

You tell me they don't do the same shit as nV does: how the F does Hitman run better on AMD hardware in DX11 than on nV hardware? Have we ever seen that happen in other DX11 games? Is this becoming a habit, a 20% advantage for AMD-sponsored titles?

You can try this holier-than-thou shit all day long, but there are examples of AMD/ATi doing the same crap in the name of open source in the past, the present, and surely the future. This is business. Exploit everything you can. If you don't, you end up losing ground that should have been yours to begin with.


That is a direct response to this:

GCN 1.2 leveled out the tessellation performance, but those games still use a huge amount of tessellation. Many people can reduce it via the drivers and notice a higher framerate with almost no loss in image quality.

Now, how is it that AMD manages to improve their performance in Gaming Evolved titles without crippling the competition, while Nvidia just uses their "tech", which runs on both brands, to reduce the performance of the competitor?

Is it upsetting you that I'm putting AMD and nV in the same boat when it comes to business tactics?

Sure looks like it is to me.

Is it my time of the month, or are you only looking at one side of what I stated because you can't make any reasonable post, so you go down the rabbit hole? And if you're telling me other DX11 games run the way Hitman runs on nV hardware, with that kind of percentage swing toward AMD hardware, wtf is that? Have you seen the relative performance of DX11 games on AMD and nV hardware in the past year since Maxwell 2 was released? At least you can turn off GameWorks; you can't turn anything off in Hitman, well, unless you really want the game to look like crap...

Then you look at AOTS, you know what, another AMD-sponsored title that shows the same type of results in DX11 as Hitman! Coincidence? I think not, since the DX11 driver overhead for AMD has mysteriously disappeared, while nV, which used to have less of it, now has more! The opposite of what happens in every other case.
 
You can try to put AMD and Nvidia in the same boat, but it won't float. Nvidia has their black-box GameWorks, which they refuse to allow AMD access to in order to optimize for their hardware. Nvidia has deliberately set over-tessellation in games in order to kill AMD performance. Nvidia has claimed proprietary ownership of simple AA code in order to hurt AMD performance and keep their users from using the feature. Nvidia has locked out hardware PhysX. Etc, etc, etc... You can spin it however you like, but Nvidia is by far the "dirty tactics" leader of the GPU world. So unless you have proof that AMD is somehow sabotaging Nvidia's performance here, I doubt anybody is going to take your rants seriously.

AMD has its hardware in all three of this generation's game consoles. That alone could account for a shift in programming techniques that now favor GCN's architecture. I'm pretty sure that was AMD's goal from the beginning.
 
No they didn't. Take a look at Tech Report's polygon throughput numbers: it has NOTHING to do with tessellation factors, it has everything to do with polygon throughput.

When a product has a glaring weakness you should expect it to be exploited; that is business. Just as AMD is doing now with async shader performance, which is outside the DX12 spec. Nothing wrong with that at all. Unfortunately for AMD, they didn't factor in that async shaders affect their own GPU lineups differently based on generational differences in architecture.

You tell me they don't do the same shit as nV does: how the F does Hitman run better on AMD hardware in DX11 than on nV hardware? Have we ever seen that happen in other DX11 games? Is this becoming a habit, a 20% advantage for AMD-sponsored titles?

You can try this holier-than-thou shit all day long, but there are examples of AMD/ATi doing the same crap in the name of open source in the past, the present, and surely the future. This is business. Exploit everything you can. If you don't, you end up losing ground that should have been yours to begin with.

You were talking about tessellation, and games which use a low to moderate level of tessellation won't show a big difference on GCN3 (and newer GPUs).

[TessMark benchmark chart]

Like Metro LL.

Now, if you're only measuring polygon throughput, yes, GCN3 is behind, but that can change with the new geometry processor in Polaris/GCN4. And if you compare AMD GCN to Maxwell, Nvidia can discard those triangles which are too small with backface culling...
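Just to spell out what "discard with backface culling" means here: it's rejecting triangles before any rasterization work is spent on them. This is only a simplified CPU-side illustration of the basic test; real GPUs do an equivalent screen-space test in fixed-function hardware, and Maxwell adds further culling of tiny triangles beyond this sketch.

Code:
struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Simplified illustration of backface culling: a triangle whose face normal
// points away from the viewer contributes nothing visible, so it can be
// discarded before any further vertex or pixel work is spent on it.
bool IsBackFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& viewDir)
{
    Vec3 normal = Cross(Sub(v1, v0), Sub(v2, v0)); // depends on winding order
    return Dot(normal, viewDir) >= 0.0f;           // facing away (or edge-on): cull
}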
 
You can try to put AMD and Nvidia in the same boat, but it won't float. Nvidia has their black-box GameWorks, which they refuse to allow AMD access to in order to optimize for their hardware. Nvidia has deliberately set over-tessellation in games in order to kill AMD performance. Nvidia has claimed proprietary ownership of simple AA code in order to hurt AMD performance and keep their users from using the feature. Nvidia has locked out hardware PhysX. Etc, etc, etc... You can spin it however you like, but Nvidia is by far the "dirty tactics" leader of the GPU world. So unless you have proof that AMD is somehow sabotaging Nvidia's performance here, I doubt anybody is going to take your rants seriously.

AMD has its hardware in all three of this generation's game consoles. That alone could account for a shift in programming techniques that now favor GCN's architecture. I'm pretty sure that was AMD's goal from the beginning.

Right, let's see. You want to list out AMD-sponsored games and nV-sponsored games, then list out what and where the performance disparity comes from with and without GameWorks, and then we can't look at what AMD did to help their performance on their own cards and claim it's open to nV? That's a long stretch, Creig.

Come on, if you and another person are going for a promotion at your job, wouldn't you show what you can do in the best light possible and show that the other person can't compete with you, to get that promotion? That is the norm.

Following that line of thought, why wouldn't you want to make the other guy look bad? You don't want that promotion? You do feel it's OK to show your best in the eyes of your boss? This is the same situation. nV has tessellation performance for now, AMD has async shaders for now. Both exploit those features just like they should. But in nV-sponsored titles those features can be disabled. In AMD titles those features can't be disabled without a very large change in graphics, and even then you will still see a performance advantage for AMD in other areas where traditionally they don't, or shouldn't, have that advantage anymore.
 
You were talking about tessellation, and games which use a low to moderate level of tessellation won't show a big difference on GCN3 (and newer GPUs).

[TessMark benchmark chart]

Like Metro LL.

Now, if you're only measuring polygon throughput, yes, GCN3 is behind, but that can change with the new geometry processor in Polaris/GCN4. And if you compare AMD GCN to Maxwell, Nvidia can discard those triangles which are too small with backface culling...

I expect it to change; I actually expected this problem to be fixed a long time ago. Let me show you something. This is a new game character I've created for a game coming out in the next two to three years or so. We are using 300k polys for the base mesh, well before tessellation even becomes part of the equation. What do you think is going to happen on AMD hardware once games start doing stuff like this?

http://i.imgur.com/esihCt3.png
http://i.imgur.com/4iwP9SK.jpg
http://i.imgur.com/Df5ZFyS.png
http://i.imgur.com/WvzUdLu.png
http://i.imgur.com/NeRg3QO.jpg
http://i.imgur.com/rmgE7HT.jpg
http://i.imgur.com/nuaLF9T.jpg
http://i.imgur.com/PW0zm8Z.jpg
http://i.imgur.com/fl4eadd.jpg
http://i.imgur.com/GYZMtgE.png

And one game has already started doing things like this: Arkham Knight, where the main characters use 250k polys and up. Now do you see why Arkham Knight has an advantage on nV hardware?

At a 2x tessellation factor it's up to 2 million polys, and at a 4x tessellation factor it's going to be up to 10 million polys. This is just for one character, and we're expecting hundreds of millions of polys in the FOV. Of course, going to 4x is kind of ridiculous.
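As a rough sanity check on that kind of scaling, here is a sketch assuming uniform integer tessellation, where a factor of N splits each edge into N segments and so produces roughly N² output triangles per input triangle. Real hull shaders use fractional, per-edge factors, so the exact counts (including the 2M/10M figures above) will differ from this simple model.

Code:
#include <cstdio>
#include <initializer_list>

// Rough estimate only: uniform tessellation with integer factor N yields
// about N*N output triangles per input triangle. Real pipelines use
// per-patch, fractional factors, so treat these as ballpark figures.
unsigned long long EstimateTessellatedTris(unsigned long long baseTris, unsigned factor)
{
    return baseTris * factor * factor;
}

int main()
{
    const unsigned long long baseMesh = 300000ULL;  // 300k-poly character base mesh
    for (unsigned factor : { 1u, 2u, 4u })
        std::printf("factor x%u -> ~%llu triangles\n",
                    factor, EstimateTessellatedTris(baseMesh, factor));
    // Under this simple model: x2 -> ~1.2M triangles, x4 -> ~4.8M triangles.
    return 0;
}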
 
Is it upsetting you that I'm putting AMD and nV in the same boat when it comes to business tactics?
...
Then you look at AOTS, you know what, another AMD-sponsored title that shows the same type of results in DX11 as Hitman! Coincidence? I think not, since the DX11 driver overhead for AMD has mysteriously disappeared, while nV, which used to have less of it, now has more! The opposite of what happens in every other case.
Because Nvidia has done this at levels where it can't be disabled. Project CARS, much?

And now Nvidia wanted a game developer to disable a feature the architecture is supposed to use:
Oxide Developer: "NVIDIA Was Putting Pressure On Us To Disable Certain Settings In The Benchmark"

And yet you're comparing two different kinds of API: one manages physics bodies, while the other allows developers to fully use the hardware for COMPUTE AND GRAPHICS TASKS, letting them use better graphics effects instead of third-party APIs that are clearly designed to cripple performance.
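To be fair to that point, "using the hardware for compute tasks" through DX12 is part of the core API rather than a bolt-on library. A minimal sketch of recording a compute dispatch (root signature, pipeline state and descriptor setup omitted; the names here are illustrative, and the list would be submitted on a compute queue like the one in the earlier sketch):

Code:
#include <d3d12.h>

// Sketch: recording a compute workload on a DX12 command list of type COMPUTE.
// Everything referenced here is assumed to have been created elsewhere.
void RecordComputeWork(ID3D12GraphicsCommandList* computeList,
                       ID3D12RootSignature* rootSig,
                       ID3D12PipelineState* computePso,
                       ID3D12DescriptorHeap* heap,
                       D3D12_GPU_DESCRIPTOR_HANDLE uavTable,
                       UINT threadGroupsX)
{
    ID3D12DescriptorHeap* heaps[] = { heap };
    computeList->SetDescriptorHeaps(1, heaps);
    computeList->SetComputeRootSignature(rootSig);
    computeList->SetPipelineState(computePso);
    computeList->SetComputeRootDescriptorTable(0, uavTable);
    computeList->Dispatch(threadGroupsX, 1, 1);  // e.g. a lighting or particle pass
    computeList->Close();
    // Submit on the compute queue; a fence keeps it ordered against the
    // graphics queue wherever the two actually depend on each other.
}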

Right, let's see. You want to list out AMD-sponsored games and nV-sponsored games, then list out what and where the performance disparity comes from with and without GameWorks, and then we can't look at what AMD did to help their performance on their own cards and claim it's open to nV?
That isn't the same thing as ruining the competitor's performance while optimizing the game for a single brand.


In AMD titles those features can't be disabled without a very large change in graphics.
It has been done, and Nvidia did it with the first DX12 game using asynchronous compute:

Ashes of the Singularity Beta – Benchmark 2.0 with DirectX 12, Asynchronous Compute, and some spicy results [Update]
 
Come on, if you and another person are going for a promotion at your job, wouldn't you show what you can do in the best light possible and show that the other person can't compete with you, to get that promotion? That is the norm.

Following that line of thought, why wouldn't you want to make the other guy look bad? You don't want that promotion? You do feel it's OK to show your best in the eyes of your boss?

Well, that's a big difference between you and me, I guess. I prefer to show my boss what I can do and leave it at that. You would obviously prefer to explain to your boss what your co-workers CAN'T do. You must be a real popular guy wherever you work if that's your attitude.

This is the same situation. nV has tessellation performance for now, AMD has async shaders for now. Both exploit those features just like they should. But in nV-sponsored titles those features can be disabled. In AMD titles those features can't be disabled without a very large change in graphics, and even then you will still see a performance advantage for AMD in other areas where traditionally they don't, or shouldn't, have that advantage anymore.
The difference here is that (AFAIK) async performance for AMD does not come at the expense of Nvidia performance. Nvidia deliberately sets over-tessellation that hurts both Nvidia and AMD performance, but hurts AMD's more.

Besides, Nvidia told us eight months ago that their hardware is already capable of asynchronous compute. Why not just petition them to release the driver for it already?
 