AMD Recruits Hitman for Dx12 Workload Management on ACEs

JustReason

http://www.gamersnexus.net/news-pc/2309-amd-hitman-dx12-ace-workload-management

Hitman aims to utilize the ACEs (Asynchronous Compute Engines) on AMD GPUs to better manage intense workloads. Asynchronous workload processing reduces overall frame render time and allows for increased quality, a feature unique to DirectX 12. The game will still support DirectX 11 (which is good, since DX12 requires Windows 10), but upgrading in the near future is worth considering; the DX12 switch-over is slowly happening.

In its statement, AMD exclaimed:

“With on-staff game developers, source code and effects, the AMD Gaming Evolved program helps developers to bring the best out of a GPU. And now in 2016, Hitman gets the same PC-focused treatment with AMD and IO Interactive to ensure that the series’ newest title represents another great showcase for PC gaming!”

Very short piece, unfortunately. I would have liked more in-depth info, like the timeframe relative to release, since the DX12 path will likely come out after the DX11 release date, but here's hoping it releases at the same time.
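For anyone wondering what "utilizing the ACEs" actually looks like on the programming side: under D3D12 a game submits work through separate graphics and compute command queues, and on GCN hardware the driver can schedule the compute queue on the ACEs. This is just a minimal sketch, assuming an ID3D12Device has already been created; the function name is made up for illustration, not taken from Hitman or any real engine:

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch only: 'device' is assumed to be an already-created ID3D12Device,
// and CreateAsyncComputeQueue is a made-up name for illustration.
ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    // A COMPUTE-type queue runs independently of the graphics queue;
    // on GCN hardware the driver can schedule its work on the ACEs.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    return computeQueue;
}

From there the game records its compute work (lighting, post-processing, and so on) on command lists and submits them to that queue alongside the normal graphics queue.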
 
Imagine the shitstorm if Nvidia released an article that they were going to work with a new AAA title to push the boundaries of tessellation. :p

It's nice to see AMD throwing punches for a change. If this works out well for them I hope they can stick it to Nvidia hard over the next few months leading into Polaris to build some proper hype.
 
Imagine the shitstorm if Nvidia released an article that they were going to work with a new AAA title to push the boundaries of tessellation. :p

It's nice to see AMD throwing punches for a change. If this works out well for them I hope they can stick it to Nvidia hard over the next few months leading into Polaris to build some proper hype.

Except check the performance of AMD Gaming Evolved titles compared to Nvidia GameWorks titles - now you know why there aren't shitstorms when AMD does this.

I'm excited to see some more DX12 support since the speculated performance increases are apparently very impressive.
 
That's the positive here: A game with both DX12 and DX11 so we can make some side by side comparisons.
 
Properly using ACE units is as much a part of PC DX12 as it is of console development. So while AMD may be marketing it on PC, the consoles are actually driving the use of ACE units. There are many PS4 developer articles about this, and Xbox developers are now also fully integrating it.

If you want to see the first time Square Enix and AMD used ACE units on PC, look at Thief using Mantle. Gameplay aside, the performance, responsiveness, and smoothness are awesome, ESPECIALLY with Crossfire. Yes, you read that right: with Crossfire, which benefits greatly.

That nvidia didn't properly integrate the use of ACE units per published guidelines is something nvidia has to contend with. As for tessellation, over-tessellating past the point of visual usefulness is a separate issue; BTW, just set the driver's tessellation override to 8x or 16x, which negates any excessive, useless performance degradation.
 
Imagine the shitstorm if Nvidia released an article that they were going to work with a new AAA title to push the boundaries of tessellation. :p

It's nice to see AMD throwing punches for a change. If this works out well for them I hope they can stick it to Nvidia hard over the next few months leading into Polaris to build some proper hype.

The difference being that this tech will be open source so nvidia can release a driver that does the same thing within a week or 2 of the game being released.

If nvidia did this, amd would have to start from scratch taking months to develop this tech.
 
Imagine the shitstorm if Nvidia released an article that they were going to work with a new AAA title to push the boundaries of tessellation.
Except that AMD will improve performance of its own cards without affecting performance of Nvidia cards. Nvidia, on the other hand, seems to prefer methods that reduce framerates on AMD cards as well as their own. They just make sure that AMD's cards suffer more.
 
Except that AMD will improve performance of its own cards without affecting performance of Nvidia cards. Nvidia, on the other hand, seems to prefer methods that reduce framerates on AMD cards as well as their own. They just make sure that AMD's cards suffer more.
The difference being that this tech will be open source so nvidia can release a driver that does the same thing within a week or 2 of the game being released.
Except check the performance of AMD Gaming Evolved titles compared to Nvidia GameWorks titles - now you know why there aren't shitstorms when AMD does this.
It would be interesting to watch this all backfire on AMD if Pascal ends up with better async performance than Polaris. AMD would flip-flop on the issue and within a year or two, start accusing Nvidia of overloading compute pipelines in GameWorks games to cripple AMD hardware for "no visual improvement". And if that sounds ridiculous, remember the same thing happened in the early days of DX11/tessellation.
 
It would be interesting to watch this all backfire on AMD if Pascal ends up with better async performance than Polaris. AMD would flip-flop on the issue and within a year or two, start accusing Nvidia of overloading compute pipelines in GameWorks games to cripple AMD hardware for "no visual improvement". And if that sounds ridiculous, remember the same thing happened in the early days of DX11/tessellation.

No, Nvidia actually did a terrible job with Crysis 2 tessellation. AMD and Nvidia users were pissed about it, as it killed performance across the board. I hope that Nvidia gets their shit together and can do async right with Pascal. Gamers need more reasons to upgrade their computers, not crippled games that run on 7-year-old hardware as well as the newest stuff.

The i7-900 series and AM3+ stuff would have been EOL and sitting in a trash dump the way the industry moved back in the day. Now it's still cutting edge unless you're living in Siberia and the government allots you X watts a day.
 
No, Nvidia actually did a terrible job with Crysis 2 tessellation. AMD and Nvidia users were pissed about it, as it killed performance across the board. I hope that Nvidia gets their shit together and can do async right with Pascal. Gamers need more reasons to upgrade their computers, not crippled games that run on 7-year-old hardware as well as the newest stuff.

The i7-900 series and AM3+ stuff would have been EOL and sitting in a trash dump the way the industry moved back in the day. Now it's still cutting edge unless you're living in Siberia and the government allots you X watts a day.
Yes, AMD supports moving the industry forward but only far enough until it starts benefitting Nvidia. After that, it's too much.
Mark my words, if Pascal is better at async then we will start seeing Huddy use phrases like "Nvidia is over-computing" next year. They'll start picking apart pipelines and point at it and say, "Look at all these instructions being sent down the compute pipeline that don't even belong there! This is what GameWorks does."

And I say this could happen because it's exactly what happened with tessellation. I remember when I bought my 5870 and thought I was future-proof, and then I also distinctly remember the crushing disappointment I felt when I saw Fermi's benchmarks. So don't even try to tell me Nvidia isn't being set up for another slam dunk.
 
Yes, AMD supports moving the industry forward but only far enough until it starts benefitting Nvidia. After that, it's too much.
Mark my words, if Pascal is better at async then we will start seeing Huddy use phrases like "Nvidia is over-computing" next year. They'll start picking apart pipelines and point at it and say, "Look at all these instructions being sent down the compute pipeline that don't even belong there! This is what GameWorks does."

I'm not sure there is really such a thing as "better" async support. It's akin to going from single core to multicore: you start getting diminishing returns pretty quickly. Async is sort of the primary goal of DX12. It's almost like saying they support DX9 with only fixed-function hardware instead of programmable shaders. The async selling point wasn't just being able to run multiple jobs in parallel; it was scheduling them with little regard to what any other threads were doing, to reduce CPU overhead and get results in a timely manner. The second you have to start tracking the progress of a bunch of threads, that CPU overhead increases significantly, which is why DX11 is serial/synchronous.

As for Pascal, I'm not sure it's even supposed to support async any more than current cards do. It was supposed to be a beefed-up Maxwell 2 from everything I've seen; Volta was the one getting the architecture update. All the features Nvidia is pushing are more in line with a new point release of DX11, not necessarily the DX12 model. Still significant, but nowhere near the fundamental shift of DX12/async.
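To put the "tracking the progress of a bunch of threads" point in concrete terms, here's roughly what cross-queue synchronization looks like in D3D12. This is only a minimal sketch with made-up names (SubmitFrame, graphicsQueue, computeQueue, etc.); it isn't from Hitman or any shipping engine:

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch only: all parameters are assumed to already exist; the names are
// made up for illustration, not taken from any real engine.
void SubmitFrame(ID3D12Device* device,
                 ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12CommandList* graphicsList,
                 ID3D12CommandList* computeList)
{
    static ComPtr<ID3D12Fence> fence;
    static UINT64 fenceValue = 0;
    if (!fence)
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Kick off the async compute work (e.g. lighting or post passes).
    computeQueue->ExecuteCommandLists(1, &computeList);
    computeQueue->Signal(fence.Get(), ++fenceValue);

    // The graphics queue only waits at the point it consumes the compute
    // results; until then both queues can run in parallel on the GPU.
    // This cross-queue bookkeeping is exactly what a DX11 driver hides.
    graphicsQueue->Wait(fence.Get(), fenceValue);
    graphicsQueue->ExecuteCommandLists(1, &graphicsList);
}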
 
Yes, AMD supports moving the industry forward but only far enough until it starts benefitting Nvidia. After that, it's too much.
What orifice are you pulling this from? Care to provide some examples? Unless you happen to think that default x64 tessellation in Witcher 3 or the world's most perfectly modeled concrete barrier in Crysis 2 were examples of good programming.
 
What orifice are you pulling this from? Care to provide some examples?

Prob referring to tessellation, but I believe that was ATI. AMD's first architecture was GCN, the 7000 series. ATI was first with hardware tessellation support, then NVIDIA, if I remember correctly.
 
The HD 5xxx line was the first set of DX11 chips by AMD that supported DX11 tessellation. It's been 5 generations since the introduction of DX11 tessellation.
 
What orifice are you pulling this from? Care to provide some examples? Unless you happen to think that default x64 tessellation in Witcher 3 or the world's most perfectly modeled concrete barrier in Crysis 2 were examples of good programming.
AMD claims there is no visual improvement, but we already know 64X looks much better especially on close-up objects. AMD promoted tessellation in the early days before Fermi, but apparently they have a cut-off somewhere that, in their definition, switches from "moving the industry forward" immediately to "excessive for the purpose of hindering our hardware" when it turned out Nvidia was better.

AMD is suggesting that Nvidia has to ride the fine line between those 2 extremes, otherwise they get unhappy because their performance suffers, as if that's Nvidia's responsibility in the first place. In reality, if AMD simply had better tessellation performance it would be a complete non-issue. When something goes wrong for AMD, it's Nvidia's fault. When something goes wrong for Nvidia, it's also Nvidia's fault. This is AMD's marketing strategy -- put the blame on someone else. With Ashes, at least Nvidia acknowledged it was their problem (drivers, specifically). I can't remember the last time AMD admitted to having a driver problem... Maybe Crimson's fan issue, but they couldn't dodge that bullet.

We're seeing the same story play out today with async, and I'm suggesting that if the roles reverse next cycle (as they did with tessellation), AMD will again flip-flop and start accusing Nvidia of too much async compute when they have spent the last year singing its praises. AMD takes credit for everything when it's in their favor, drops it like a rock when it's in Nvidia's favor, and if anything goes wrong as a result, they pretend like they hated it all along and it's totally Nvidia's fault. AMD is a proven hypocrite and they need to be careful about biting off more than they can chew. Let sleeping dogs lie, etc.

The async stuff is GREAT marketing material today... Between now and Pascal, specifically. I don't blame them for taking advantage of the situation.
 
Yes, AMD supports moving the industry forward but only far enough until it starts benefitting Nvidia.

At least AMD is moving the industry forward for everyone.

The HD 5xxx line was the first set of DX11 chips by AMD that supported DX11 tessellation. It's been 5 generations since the introduction of DX11 tessellation.

Tessellation (TruForm) was supported by ATI starting with the 8000 series.
 
At least AMD is moving the industry forward for everyone.



Tessellation (truform) was supported by ATI since the 8000 series.


That has nothing to do with how tessellation is done from DX11 onward; there was no geometry pipeline prior to DX11, or at least nothing that was very useful. This is why even in DX10 the geometry pipeline wasn't robust enough to alleviate the pressure it put on the shader array. And prior to that, TruForm was horrible at tessellation and displacement unless artists paid close attention to how the models were made, and it was done by trial and error. Even without displacement you still needed to pay attention to how the artwork was created, otherwise artifacts would pop up. These artifacts ranged from pinching to ballooning of art assets, so it wasn't an issue of how the displacement map was created but specifically with the mesh and how it was interpreted by the GPU.

From the 8xxx series to the 9x00 series, TruForm shifted to a software solution as the hardware-level support was dropped, and soon after that the software portion was dropped from the drivers too (I think that happened with the X1800 series).
 
At least AMD is moving the industry forward for everyone.



Tessellation (truform) was supported by ATI since the 8000 series.

Just to make sure people aren't confused, he means the Radeon 8500 series from 2001, not the HD 8000 series of OEM GCN cards. Of course, the hardware was not really used much since it was not part of DirectX until DX11, but ATI has had h/w tessellation support for a LONG time.
 
I'm not sure there is really such a thing as "better" async support. It's akin to going from single core to multicore. You start getting diminishing returns pretty quick. Async is sort of the primary goal of DX12. It's almost like saying they support DX9 with only fixed function hardware instead of programmable shaders. The async selling point wasn't just being able to run multiple jobs in parallel, it was scheduling them with little regard to what any other threads were doing to reduce CPU overhead and get results in a timely manner. The second you have to start tracking the progress of a bunch of threads that CPU overhead increases significantly and is why DX11 is serial/synchronous.

As for Pascal, I'm not sure it's even supposed to support async. No more than current cards do. It was supposed to be a beefed up maxwell2 from everything I've seen. Volta was the one getting the architecture update. All the features Nvidia are pushing are more in line with a new point release of DX11, not necessarily the DX12 model. Still significant, but nowhere near the fundamental shift of DX12/Async.

This was my concern as well. Based on events leading up to the 750 Ti and later the 900 series, it seemed that Maxwell was a preliminary version of Pascal, maybe because it was still on 28nm. And if that's the case, it makes any game showcasing DX12 alongside a DX11 path a really entertaining set of events. Add to that, if Pascal is indeed no better at DX12 than its Maxwell predecessors, we may have front-row tickets to a bigger forum/online blowup than the 970 3.5GB VRAM event.

Of course, that being said, I doubt it will be an extreme issue. It will, or may, just limit the extras on screen, whereas the new benchmark standard is ensuring that both vendors showcase the same results. Kinda how I feel about driver updates where, using AMD in this example, the driver likely caps the tessellation factor to 8x or 16x and gives equal performance to the competitor that is still running the original 64x. Of course I still argue 64x is unnecessary at most resolutions below 4K, and may still be unnecessary at it, but it doesn't help when benching for equality's sake.
 
Except check the performance of AMD Gaming Evolved titles compared to Nvidia GameWorks titles - now you know why there aren't shitstorms when AMD does this.

Not really true..

For GameWorks it's up to the dev to optimize it correctly. Look at [H]'s FC4 review: AMD and nVidia had the same perf hits, except IIRC God Rays on ultra hit AMD 10% harder (negligible), probably from them being tessellated.

GameWorks is overblown and easy to shut off.

AMD does more damage to themselves...

If they had kept up their progress with Crossfire I might have gone with AMD. [H] even praised them with their 290X. Then they just started shitting all over their own drivers, the most important thing. I like to speculate that they moved their engineers to Mantle/DX12/VR and hoped to get by a little better than they did on current drivers.
 
I wouldn't even consider CrossFireX or SLI support when buying a new video card. When Windows 10 / Vulkan performs the load balancing for mGPU natively is when I'll pay attention to it again.
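Worth noting that the "native" mGPU path in DX12 is explicit multi-adapter, where the engine itself enumerates every GPU and balances the load rather than the driver doing AFR behind the scenes. Purely as a rough sketch of the first step on the D3D12 side (not from any shipping engine), enumerating the adapters through DXGI looks something like this:

Code:
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Sketch only: lists every GPU the OS exposes. Under DX12 explicit
// multi-adapter, the application would then create a device per adapter
// and do its own load balancing across them.
int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s\n", i, desc.Description);
    }
    return 0;
}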
 
We are aware of performance and stability issues with GeForce GPUs running Tomb Raider with maximum settings. Unfortunately, NVIDIA didn’t receive final game code until this past weekend which substantially decreased stability, image quality and performance over a build we were previously provided. We are working closely with Crystal Dynamics to address and resolve all game issues as quickly as possible.
Please be advised that these issues cannot be completely resolved by an NVIDIA driver. The developer will need to make code changes on their end to fix the issues on GeForce GPUs as well. As a result, we recommend you do not test Tomb Raider until all of the above issues have been resolved.

In the meantime, we would like to apologize to GeForce users that are not able to have a great experience playing Tomb Raider, as they have come to expect with all of their favorite PC games.

How quick they forget...

http://techreport.com/news/24463/nvidia-acknowledges-tomb-raider-performance-issues

:rolleyes:
 
Exactly. Do what every other Nvidia fanboy did. It is the developer who didn't let Nvidia know!!! Don't blame anyone but the developer!

Fanboy? Yeah, the developer should be held accountable for releasing a pile of shit. At the end of the day it's their responsibility that the game runs well for their fan base.
 
How quick they forget...
We are aware of performance and stability issues with GeForce GPUs running Tomb Raider with maximum settings. Unfortunately, NVIDIA didn’t receive final game code until this past weekend which substantially decreased stability, image quality and performance over a build we were previously provided. We are working closely with Crystal Dynamics to address and resolve all game issues as quickly as possible.
Please be advised that these issues cannot be completely resolved by an NVIDIA driver. The developer will need to make code changes on their end to fix the issues on GeForce GPUs as well. As a result, we recommend you do not test Tomb Raider until all of the above issues have been resolved.

In the meantime, we would like to apologize to GeForce users that are not able to have a great experience playing Tomb Raider, as they have come to expect with all of their favorite PC games.

http://techreport.com/news/24463/nvidia-acknowledges-tomb-raider-performance-issues

:rolleyes:

You know what the real irony with nVidia complaining about not getting the final build code is?

One of the more damning statements made by AMD was this:

“Participation in the Gameworks program often precludes the developer from accepting AMD suggestions that would improve performance directly in the game code—the most desirable form of optimization.”

Nvidia’s Cebenoyan responded directly to this during our conversation: “I’ve heard that before from AMD and it’s a little mysterious to me. We don’t and we never have restricted anyone from getting access as part of our agreements. Not with Watch Dogs and not with any other titles. Our agreements focus on interesting things we’re going to do together to improve the experience for all PC gamers and of course for Nvidia customers. We don’t have anything in there restricting anyone from accessing source code or binaries. Developers are free to give builds out to whoever they want. It’s their product.”

Seeking some clarification, I asked if perhaps AMD’s concern was that they’re not allowed to see the game’s source code. Cebenoyan says that a game’s source code isn’t necessary to perform driver optimization. “Thousands of games get released, but we don’t need to look at that source code,” he says. “Most developers don’t give you the source code. You don’t need source code of the game itself to do optimization for those games. AMD’s been saying for awhile that without access to the source code it’s impossible to optimize. That’s crazy.”

As for the Nvidia-specific source code: “The way that it works is we provide separate levels of licensing,” Cebenoyan explains. “We offer game developers source licensing, and it varies whether or not game developers are interested in that. Now, like any other middleware on earth, if you grant someone a source license, you grant it to them. We don’t preclude them from changing anything and making it run better on AMD.”

Can't have your cake and eat it too nVidia.
 
That specific talk was about what? I think it was about extreme tessellation levels and Witcher 3? That was just BS on AMD's part :). AMD bitched about it, then added the tessellation factor override to their drivers. People that complain about something without admitting there is fault on their own end (less performance at doing something than their competitors) are just bitching. No one likes a person that bitches; fix the freakin' problem and move on. GameWorks so far only uses tessellation to nV's benefit as far as I have seen. It is a known limitation in AMD hardware for the past 5 gens, and they have yet to address the true issue, triangle throughput. Why is this? If we follow the logic of people saying AMD had tessellation prior to nV, which is valid, why are they behind now? How are they pushing tech forward when they haven't fixed an issue that has been plainly visible since their first DX11 card? So far with DX12 cards, nV only has one generation with "async" issues; will they wait 5 gens of cards before they fix it?

It depends on where the issues are and whether the optimization can be done in drivers. Doing a shader replacement will take more time than a simple tessellation factor limiter.
 
Fanboy? Yeah, the developer should be held accountable for releasing a pile of shit. At the end of the day it's their responsibility that the game runs well for their fan base.

Not if they are being paid by Nvidia or AMD to not send the competition the source code before a game is released.
 
Didn't really want to see another WHOSE FAULT IS IT thread, but I do have some insight from what I read last night as a good example. The black-box discussion over GameWorks is best explained as: we know what comes out and what goes in, but not what is being done with it while inside. That inside part is what makes driver tweaking easier and profoundly more accurate. Nvidia can just go and release a driver update KNOWING EXACTLY what is being done when the game asks for said operation. AMD must guess, and granted most of it is educated guessing, but they hardly know if it is as exact as Nvidia does. It is like having 5 cards laid down and being asked to pick the king. Nvidia knows the order the cards are in and AMD must guess. Sometimes AMD gets lucky.
 
That specific talk was about what? I think it was about extreme tessellation levels and Witcher 3? That was just BS on AMD's part :). AMD bitched about it, then added the tessellation factor override to their drivers. People that complain about something without admitting there is fault on their own end (less performance at doing something than their competitors) are just bitching. No one likes a person that bitches; fix the freakin' problem and move on. GameWorks so far only uses tessellation to nV's benefit as far as I have seen. It is a known limitation in AMD hardware for the past 5 gens, and they have yet to address the true issue, triangle throughput. Why is this? If we follow the logic of people saying AMD had tessellation prior to nV, which is valid, why are they behind now? How are they pushing tech forward when they haven't fixed an issue that has been plainly visible since their first DX11 card? So far with DX12 cards, nV only has one generation with "async" issues; will they wait 5 gens of cards before they fix it?

It depends on where the issues are and whether the optimization can be done in drivers. Doing a shader replacement will take more time than a simple tessellation factor limiter.

No, it was about Watch Dogs and a complaint about the black-box nature of GameWorks in general.
 
Not if they are being paid buy Nvidia or AMD to not send the competition the source code before a game is released.

That doesn't occur, at least not directly with money changing hands.

If a person benchmarking can see the problem within the time they benchmark, why couldn't AMD see it and address it beforehand? And how long did it take them to get a tessellation limiter into their driver once Witcher 3 came out? It was pretty quick, if I remember correctly.
 
That doesn't occur, at least not directly with money changing hands.

If a person benchmarking can see the problem within the time they benchmark, why couldn't AMD see it and address it beforehand? And how long did it take them to get a tessellation limiter into their driver once Witcher 3 came out? It was pretty quick, if I remember correctly.

There is no way you can say that does not occur. No one does, not even me.

But I wouldn't be shocked to find out that it did happen. I would even say I expect it.
 
There is no way you can say that does not occur. No one does, not even me.

But I wouldn't be shocked to find out that it did happen. I would even say I expect it.

I know of one company that definitely did it in the past, and it didn't start with an N :D, but that was a long time ago. It was for an engine license, as they wanted to show off 3Dc texturing capabilities by making a short demo. But it's few and far between outside of cross-promotional game and card sales.
 
That doesn't occur, at least not directly with money changing hands.

If a person benchmarking can see the problem within the time they benchmark, why couldn't AMD see it and address it beforehand? And how long did it take them to get a tessellation limiter into their driver once Witcher 3 came out? It was pretty quick, if I remember correctly.

TW3 was the first GW fiasco. A user found the tessellation fix because AMD had it in their drivers/software, and then about 3 weeks later AMD released a driver fix.
 
A few thoughts after reading other forums for the past hour and a half.

1. DX12 in this game may not add to frame rate but rather to what you see on screen (hence my benchmarks between DX12 and DX11 in this game are going to focus on image quality/visuals rather than straight-up FPS).

2. A positive to Pascal having async issues: Volta might come quicker.

3. I wonder if anyone can remove GW from a game and bench it to see if GW, even when off, has an adverse effect?

And other things not related closely enough to this thread for discussion.
 
You know what the real irony with nVidia complaining about not getting the final build code is?

Can't have your cake and eat it too nVidia.

They said that a) They didn't have enough time to optimise for the latest build and b) Some things can't be optimised via drivers and have to be changed by the developers.

Both points are valid. So what's the problem here? You think you can optimise a completely new build of a complex piece of software like a game in under a week, without errors? Heck, QA can take more than a week alone.

So that's ONE game where AMD may or may not have used some contracts to block a competitor from receiving final code... I wonder how many times that's happened with Nvidia-sponsored games...

You want to start keeping score?

But but but you can optimise without source code. But but but that means you can optimise for GameWorks.

That means GameWorks being a blackbox is not an excuse for poor performance. You can still optimise for it, like every other case.

What's the next accusation against GameWorks?
 
They said that a) They didn't have enough time to optimise for the latest build and b) Some things can't be optimised via drivers and have to be changed by the developers.

Both points are valid. So what's the problem here? You think you can optimise a completely new build of a complex piece of software like a game in under a week, without errors? Heck, QA can take more than a week alone.

Well you pretty much laid out the "problem" yourself in the next paragraph:

But but but you can optimise without source code. But but but that means you can optimise for GameWorks.

That means GameWorks being a blackbox is not an excuse for poor performance. You can still optimise for it, like every other case.

If you can optimize without source code, why did they need the final build code before they could optimize? If certain things can't be optimized via drivers, how does Gameworks fit into this?

(not trying to be combative here, but I really am curious how one reconciles these apparently contradictory statements made by nVidia)
 
But but but you can optimise without source code. But but but that means you can optimise for GameWorks.

That means GameWorks being a blackbox is not an excuse for poor performance. You can still optimise for it, like every other case.

What's the next accusation against GameWorks?

While quoting me does stroke the more megalomaniacal parts of my ego, I don't understand how what you said is relevant to what I said when you quoted me when I quoted you quoting a news article...
 