Async compute gets a 30% increase in performance. Maxwell doesn't support async.

It's not the type of game; all games can use async for performance benefits.

Lighting algorithms and post-processing that use compute shaders for their calculations are what's being used now, but physics can be done with compute as well.

If you look at VXGI, the GI system, it uses heavy compute; I would say on the order of 500+ instructions... (guessing, I haven't looked too closely at it, not my job to do that).
POM with self-shadowing will use it, and that's around 350 instructions. Flex uses it. It's definitely something that will be used in future games, but to what degree is the main issue for Maxwell 2 based cards. I don't think you will see any game that will really hurt Maxwell 2 in the near future if the developers aren't dumb; 82% of the market is not a small number to ignore.
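For anyone wondering what "using async compute" actually looks like at the API level, here's a minimal D3D12-style sketch (my own hedged illustration, not code from any of the titles or engines mentioned in this thread): the compute work for lighting/post-processing/physics gets recorded on its own compute queue, so the GPU is free to overlap it with the graphics work instead of running everything through the single graphics queue.

```cpp
// Minimal async-compute sketch in D3D12 (illustrative only).
// Assumes an initialized ID3D12Device* and pre-recorded command lists;
// error handling omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> gfxQueue;      // normal graphics queue
ComPtr<ID3D12CommandQueue> computeQueue;  // separate queue for the "async" work
ComPtr<ID3D12Fence>        fence;

void CreateQueues(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    // Second, compute-only queue for lighting/post-process/physics shaders.
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
}

void SubmitFrame(ID3D12CommandList* gfxList, ID3D12CommandList* computeList,
                 UINT64 frameId)
{
    // Graphics pass (G-buffer, shadow maps, ...) on the graphics queue.
    gfxQueue->ExecuteCommandLists(1, &gfxList);

    // Compute pass on its own queue. Whether the GPU actually runs the two
    // concurrently is up to the hardware and driver; the API only expresses
    // that the work is independent.
    computeQueue->ExecuteCommandLists(1, &computeList);

    // GPU-side sync: later graphics work that consumes the compute results
    // waits on the compute queue's fence (no CPU stall).
    computeQueue->Signal(fence.Get(), frameId);
    gfxQueue->Wait(fence.Get(), frameId);
}
```

The potential win is simply that the compute queue can fill shader units the graphics pass leaves idle; how much of that overlap actually happens in hardware is exactly what's being argued about here.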
 

It used to be the consoles that were holding back progress in PC gaming graphics; now it's Nvidia!


OK, time to stop piling on, but it's kind of nice having these small wins for AMD against the ocean of Nvidia wins in sales and mindshare.
 
Well, it's not really holding back the industry; it's all about timing. nV has always seemed to make architectures that fit the timing of developer needs. With the Xbox One, something was missing from the SDK until DX12 arrived, so they didn't need to worry about that. The PS4 does use async code extensively, so with games being ported from it to PC you might see some issues, but again, it takes time to port. And then we have Vulkan, which took a while to get released, so nV is safe on that side too. Vulkan is small in the gaming market, though it will grow and hopefully stem DX's hold on the market.

AMD seems to have better capabilities in this regard above a certain amount of async shader workload, but games that need that workload aren't going to come out yet, because the timing was just a little off.

This happened with the X1800 series: it had better ROP throughput than the 7800 series, and the X1900 series had better ALU throughput than the 7900 series, but games that needed that kind of ALU and ROP throughput didn't show up until the 2900 and 8800 series of cards.
 
I predict that once more DX12 games start coming out, nVidia is going to totally shit-stomp AMD like they always have, and it won't even matter that AMD supports async compute and nVidia doesn't.

Lol, developers develop first for consoles then port to PC... I bet the next Xbox and PS4 games will be taking advantage of async.
 


Pretty much this. If games get ported over that make extensive use of async shaders and the dev is lazy by not optimizing for the market leader, then they may show some weaknesses in current-gen hardware. However, by the time any of this matters, most of us will be on 16 nm Pascal. I'll be ditching my Titan Xs for the next iteration of Pascal Titans when the time comes.
 
Here is a good question: are you sure Pascal will be better for DX12? I mean, it has been in the works for years, likely before any finalization of DX functions/features, so it may still be hampered in async compute if it's used to a high degree.

Also, after reading all the hoopla over this, I am surprised that few if any have drawn the parallel between Bulldozer and GCN as heavily multithreaded architectures. Bulldozer didn't get the software change AMD hoped for, but GCN finally got DX12. They both would have been drawn up at about the same time.
 
It's funny, we've been having this debate back and forth for days now, and this is the first time I've seen that link. It seems incredibly relevant; I'm surprised it took this long to pop up.

I actually put that link up over at Overclockers.net, along with several other titles which will use Async Compute.

Mirror's Edge: http://gearnuke.com/mirrors-edge-catalyst-reach-new-levels-gpu-optimizations-via-async-compute/
Deus Ex: http://gearnuke.com/deus-ex-mankind-divided-use-async-compute-enhance-pure-hair-simulation/
Rise of the Tomb Raider
Fable Legends

And a whole slew of others...

E3 2015 had the Fable Legends dev talking about Async Compute... gushing over it really (18mins in): https://www.youtube.com/watch?v=7MEgJLvoP2U

I know I'm not the most popular guy around these days but there's a reason I made such a big deal out of it. I even have Johan Andersson gushing over it (but then again he's been gushing about it for years).

I think only the Unreal 4 engine is iffy on the topic. Tim Sweeney says he likes it, but that developers need to use it with caution (though he did advocate its use in Unreal 4 on the Xbox One).

As for the performance one can expect? Maybe a larger boost than what we saw on Ashes of the Singularity if the Oxide developer is correct in his statement. But who knows really.
 


Aren't all of these games Q1 of next year for the PC, outside of Fable Legends?

Epic is not against async compute. Unreal 4 was made for indie developers and people who aren't professional game developers, as well as full-size professional studios; the warning you have read in the Unreal Engine 4 guide is for the former, not the latter.
 

They're all for Q1 2016... I didn't look beyond that.

At least I kinda figure I'll be able to play these games before Greenland and Pascal hit. I'm still on old Hawaii cards (290X Crossfire).

Fable Legends is the title I'm looking forward to the most. I get to play with my wife, she likes those games, and there's nothing more amusing than gaming with your wife (less yelling at ya to get off the PC :p ).
 

After the disaster that was Fable 3, I'm surprised you can still look forward to Legends.

Personally, I'm really looking forward to Mirror's Edge and Deus Ex; I really liked their predecessors.

I wonder how much DX12 will be used for performance vs moar pretty frames.
 
Thank you for your responses, TaintedSquirrel and Razor. I always wonder how long it takes for software developers to mass-adopt the new features in each iteration of DX.
 
Yeah, I am only looking forward to Deus Ex and Battlefront. I never was a fan of Fable or Mirror's Edge.

Does anyone know if Star Wars Battlefront will use async?

Edit: OK, who gives a fuck about Battlefront. Matchmaking on the PC? ...yeah, my day just went to shit.

Fuck you EA!!

Back on topic.
 

My guess is yes as to whether it will make use of async.

http://www.gamepur.com/news/18531-s...-10-plus-dx12-minimum-specs-holiday-2016.html

It does not speak specifically to async there, but being aggressive enough to even CONSIDER having DX12 be a minimum spec for the game suggests to me they are targeting more advanced rendering techniques.
 
Well, looks like Johan Andersson is like a kid in a candy store

http://hardforum.com/showpost.php?p=1041723681&postcount=1

If anybody is looking to push development forward, and is capable of it, it's him and his team. I wouldn't be surprised to see async compute being used, along with everything else he can get his hands on. I also wouldn't be surprised if the Frostbite engine ends up capable of unlocking Fiji's, ahem, multiple Fijis', full performance.
 
It is great seeing all the games that are going to be supporting the latest and greatest features. But I have to admit, I try not to look at what is coming out in the future because I have a hard enough time as it is finishing the games I have now. It has taken me almost three months just to log 30 hours in Middle-earth: Shadow of Mordor, and I haven't even finished that game yet.
 
Will Pascal support async?

Sucks for them if it doesn't.
 
Scott Wasson and David Kanter talk briefly about AOTS and async compute:

https://www.youtube.com/watch?v=tTVeZlwn9W8#t=1h18m55s


Like I've said elsewhere, god help anyone who bought Kepler or Maxwell for VR.


The severe VR issues make me think Nvidia would have changed course more for Pascal. I hope they did, anyway; otherwise this could be a disaster for them next year.
 
Well, a smart person probably wouldn't have already bought a setup for VR, and would wait until final hardware is out. This way they'd know what the best hardware would be to use.

In the meantime though, I'm sure everyone (who isn't arguing over which hardware is better...) is just enjoying what they have in games and situations that exist now.

But... then again maybe not.
 
Since nobody has updated on the B3D test: it seems the compute operations in that benchmark are being handled as graphics by Maxwell, so it's effectively processing as graphics + graphics (serial).
Unknown if it's caused by the drivers or the hardware.

Full disclosure: I haven't looked at the thread in days. I pulled this info from elsewhere.
 
It's not fully serial and not fully async either, depending on whether you look at the profiler output or the raw data. Drivers seem to be playing a big role in it, as the compute portion of the shader is being put into the graphics pipeline when it should be in the compute pipeline.

As for async compute on its own, that's fine, it works; it's been there for a while on older-generation cards too.

Forgot to add: the profiler might also be doing something wrong. In essence, it is looking at what is going on from the outside, so it doesn't know exactly what is happening at the hardware level, although it gives some insight.

The API doesn't dictate how the hardware should run; it only states what needs to be done, so how that happens in hardware is totally independent.
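That "looking from the outside" limitation is basically what the B3D test runs into. Here's a rough sketch of the idea (my own illustration, not the actual B3D benchmark; it assumes the queue and fence setup from the sketch earlier in the thread, plus an event handle from CreateEvent): time a graphics workload alone, a compute workload alone, then both submitted to separate queues at once.

```cpp
// Illustrative timing harness, not the real B3D benchmark.
// Assumes queues/fence created as in the earlier sketch and an event handle
// created with CreateEvent(); error handling omitted.
#include <d3d12.h>
#include <windows.h>
#include <chrono>

static UINT64 g_fenceValue = 0;

// Block the CPU until the given queue has drained.
static void Flush(ID3D12CommandQueue* queue, ID3D12Fence* fence, HANDLE evt)
{
    const UINT64 v = ++g_fenceValue;
    queue->Signal(fence, v);
    if (fence->GetCompletedValue() < v)
    {
        fence->SetEventOnCompletion(v, evt);
        WaitForSingleObject(evt, INFINITE);
    }
}

// Wall-clock time (ms) for graphics and compute submitted back to back
// on their separate queues.
static double TimeCombined(ID3D12CommandQueue* gfxQueue,
                           ID3D12CommandQueue* computeQueue,
                           ID3D12CommandList*  gfxList,
                           ID3D12CommandList*  computeList,
                           ID3D12Fence* fence, HANDLE evt)
{
    const auto t0 = std::chrono::high_resolution_clock::now();
    gfxQueue->ExecuteCommandLists(1, &gfxList);
    computeQueue->ExecuteCommandLists(1, &computeList);
    Flush(gfxQueue, fence, evt);
    Flush(computeQueue, fence, evt);
    const auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Compare that against timing each list on its own: a combined time near max(gfx, compute) means the queues overlapped, while something near gfx + compute means they ran serially. Either way it's an external observation; it tells you nothing about why the scheduling happened that way, which is the point about the profiler above.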
 

Fable Legends is coming this fall. Also, Arma 3 is getting DX12 support in Q1 2016.
 

nice find!

Kanter also says that he is not aware of any chip die from TSMC over 400 mm² that is in production with acceptable yield at 16 nm (keep in mind that Pascal is rumored to be 600 mm²).

So if there is any delay in 16 nm production, AMD wins regardless of whatever fixes come with Pascal.


It looks like Pascal needs major architecture changes, or Nvidia is going to be under serious assault in 2016-2017 from AMD and Intel.
 

Oh yes, god help them! :)

However, I think I'll wait for real benchmarks of real games rather than believing an alpha tech demo originally based on Mantle is suddenly the authority on what the next five years of graphics and game development are going to look like. That's just me, though.

By the time actual games designed for DX12 hit their stride, years from now, none of this shit will even matter, regardless of camp.
 
Does anyone know if Pascal is an iteration of Maxwell, or is it completely different? I kinda thought they were similar, but I'm not sure why I thought that.
 
This looks like some amateur hour shit. Not sure what Nvidia was thinking, if all of that is true.
 
Pascal is already in silicon; no async shaders for nV in 2016.

:eek:
 

If I had a nickel for every time I've read "AMD is in the con-souls, so Pee Cee ports will be better 4 AMD cardz" in the past few years. Never happened.

Now simples have latched onto a new buzzword they don't even understand, and it's off to the races again. Utter comedy.
 

It's always off to the races for some people. :D Oh well. I guess if that's how they get what they need to be happy out of hardware and gaming enthusiasm, then so be it. I don't get it, but to quote Lo Pan, I was not brought upon this world to "get it". :p
 
New post from Oxide.
Probably deserves its own thread.

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.

I suppose the B3D thread was mostly accurate. Driver issue.
If that statement had been included with the previous post, this whole situation could have been avoided. Or Nvidia could have just let people know their async wasn't fully implemented when the original benchmarks went live two weeks ago! Smells like some damage control here, but we'll see.
 


Let's see what Hallock has to say about it; he might have to do some damage control on his end too ;)
 
I think I'm more upset with Hallock in this whole situation.
We have one post from Oxide, which clearly states they haven't worked through the issues yet, and a buggy benchmark at B3D which they even acknowledge was unreliable.

And Hallock takes to social media spouting nonsense as fact. If this gets resolved over the next few weeks, I hope the handful of extra sales AMD got will outweigh the damage control they have to do for Hallock. Even if he ends up being right, it's obvious he jumped the gun. Embarrassing and shameful.

They lie to consumers, misrepresent products, to boost their sales. I will never buy another product from AMD. /s
 
Sounds like they're patching in something... I'm curious if nvidia will impose any restrictions on the use of async on their hardware.
 


Well, I do agree he jumped the gun. If he had actually read what we were talking about at B3D, the data was inconclusive beyond "it seems to be working at times and at other times it's not." Instead, he picked bits and pieces that pointed to what he could use, as you stated. It's in his nature, it's his job, but he could be more tactful about things like this from now on; hopefully he will learn a lesson.
 

I'm not really upset at Hallock; from his point of view (operational) he was right.

It's not like he is privy to Nvidia's design specs.

It still remains to be seen anyway.
 