Async compute gives a 30% performance increase, and Maxwell doesn't support async.

I think Nvidia uses a different term for it.
And technically he's right; the Oxide dev said they couldn't get it to work in their specific case. It could have something to do with it being a Mantle game originally, but who knows.

At this point Oxide criticizing Nvidia is the same thing as a GameWorks dev like Ubisoft criticizing AMD. Except when that happens, AMD is holier than thou and GameWorks/Nvidia are filthy liars.

The double standards are ridiculous. I know Nvidia is a scummy company but that doesn't mean we should treat AMD's word like gospel. After hearing all the nonsense that Richard Huddy has spewed over the last few months, I hesitate to trust AMD/Oxide on this one. With Rob's response this is looking more and more like a collaborative effort by AMD and Oxide to team up on Nvidia for publicity. And we don't even know if any of the info is true.

Yes! Oxide and Dan Baker want to piss off everyone who owns an nVidia GPU because he doesn't want them to be excited about his game. ...Wait?... What about all the devs who are going to naturally optimize for nVidia because of their massive market share? We all know AMD is too broke to bribe the developers. How's this happening? Maybe it isn't.
 
Yes! Oxide and Dan Baker want to piss off everyone who owns an nVidia GPU because he doesn't want them to be excited about his game. ...Wait?... What about all the devs who are going to naturally optimize for nVidia because of their massive market share? We all know AMD is too broke to bribe the developers. How's this happening? Maybe it isn't.

Or maybe, you know, he is using a scandal to let the world know about his previously unknown company?

Who had even heard of Oxide and Ashes of the Singularity before Mantle and the latest DX12 debacle?
 
The whole situation seems like a perfectly orchestrated shitstorm.
The AotS benchmarks get the "It's just one game" treatment and people stop caring. Then someone who apparently has a very high interest in GPUs makes a brand-new account across multiple tech forums and posts very lengthy details about how Maxwell doesn't support async compute. Then Oxide responds and AMD responds. All of this happens within about 48 to 72 hours.

For the rumor mill's sake, I hope there's some truth to the allegations. Otherwise it's going to be really fucking embarrassing for everybody involved. PC building communities have effectively put a hold on Maxwell GPUs entirely. And all of this based on unconfirmed information sourcing back to a single AMD Mantle/DX12 tech demo.

If AotS had been a GameWorks game with Nvidia in the lead, everybody would have shit on the 'black boxes' again and that would have been the end of it. Attach AMD's name to the same situation, however, and suddenly it's a hardware-crippling issue.

When nVidia gives all of the IHVs and Microsoft access to all of their source code like Oxide and AMD do, then they will be seen in the same light. As long as they lock it up and require licenses and NDAs, people will question their intentions.
 
Keep dreaming. The new consoles have been out HOW LONG and this fabled "PC gaymes will run better on AMD cardz cuz AMD is in the cunsolez" hasn't materialized.

We didn't have the necessary APIs. DX11 couldn't expose what the consoles and GCN can do.
 
Or maybe you know he is using scandal to let world know about his previously unknown company ?

Who even heard about Oxide and Ashes of Singluarity before Mantle and latest DX12 debacle ?

After reading the whole thread this was my conclusion. Who the hell are these people and why should I care?

If this is true, big-name developers will come out with it and benchmarks will show it. Major review sites will cover it in detail, just like the GTX 970 3.5GB/ROP lie. There will be returns allowed by major retailers.

As far as AMD claiming something about nVidia's hardware, shit-slinging isn't anything new in this industry.

I'll buy a new GPU at least every other year, whether it's nVidia or AMD. Whatever's faster. Fewer rumors and less speculation, more evidence from reputable sources.
 
But then again it's still a while away, and there's a chance NVIDIA is willing to delay Pascal to resolve the issue, so who knows.

They would have to delay it if this turns out to be true... no way will they release a new generation of cards that AMD can use as an example of its DX12 superiority.
 
They would have to delay it if this turns out to be true... no way will they release a new generation of cards that AMD can use as an example of its DX12 superiority.

They will delay it if and only if many games follow this example, and we won't know that for sure.

Another possibility is for them to release the low-end parts early, i.e. GPx06, which won't have the fix, and then have some time to put the fix into enthusiast parts like GPx04 and above.
 
My only concern with AMD's DX12 dominance is that we are going to have 3 years of Hawaii and Fiji refreshes while AMD sits on one hand and pats itself on the back with the other.
 
My only concern with AMD's DX12 dominance is that we are going to have 3 years of Hawaii and Fiji refreshes while AMD sits on one hand and pats itself on the back with the other.
Fiji might return at 16nm, but I guess that's just Tonga. I don't know much about Pascal, but I doubt a 16nm Fiji XT would be able to compete.
As far as I know AMD is rolling out 3 new GPUs in 2016. I wouldn't expect to see any rebrands, especially with HBM on the market.
 
They would have to delay it if this turns out to be true... no way will they release a new generation of cards that AMD can use as an example of its DX12 superiority.
They will delay it if and only if many games follow this example, and we won't know that for sure.

Another possibility is for them to release the low-end parts early, i.e. GPx06, which won't have the fix, and then have some time to put the fix into enthusiast parts like GPx04 and above.


I think you guys above are looking at this wrong in terms of Pascal "resolving" this issue, as that seems to imply there is some erratum in the design requiring a respin to address it.

Both AMD and Nvidia have been iterating their Queue Engines. There isn't really a basis to assume that whatever they release next would be identical to the present regardless of DX12/Async shaders.

They've also had access to software samples long before the public has.

My only concern with AMD's DX12 dominance is that we are going to have 3 years of Hawaii and Fiji refreshes while AMD sits on one hand and pats itself on the back with the other.

You'll likely see some sort of iterative changes at the very least for economic reasons.

Also, AMD doesn't fully support DX12 either, if that is the argument, according to AMD themselves.
 
I think you guys above are looking at this wrong in terms of Pascal "resolving" this issue, as that seems to imply there is some erratum in the design requiring a respin to address it.

Both AMD and Nvidia have been iterating their Queue Engines. There isn't really a basis to assume that whatever they release next would be identical to the present regardless of DX12/Async shaders.

They've also had access to software samples long before the public has.



You'll likely see some sort of iterative changes at the very least for economic reasons.

Also, AMD doesn't fully support DX12 either, if that is the argument, according to AMD themselves.

The problem I see is that the development of DX12 was very swift compared to previous iterations, so it's not unlikely that AMD/NVIDIA got caught by surprise. Of course there'll be improvements; however, whether the problem will go away or just be alleviated is still unknown.
 
The problem I see is that the development of DX12 was very swift compared to previous iterations, so it's not unlikely that AMD/NVIDIA got caught by surprise. Of course there'll be improvements; however, whether the problem will go away or just be alleviated is still unknown.

DX12 was publicly announced in Mar 2014 (oddly, demoing Forza for PC on Nvidia hardware). Using the likely earliest timeframe for next-gen releases (Q2 2016), that would be 2+ years from when DX12 was publicly announced, so it isn't as new as one might think anymore. Coincidentally (or not ;)), Pascal was also first publicly announced in Mar 2014, replacing Volta as the architecture coming after Maxwell in Nvidia's roadmap.

Also, what is available in AMD's hardware and the console hardware has been known publicly for longer than that.

You could even say, suspiciously, that Maxwell seems more "GCN-like" than Kepler was ;)

Aside from that, it seems like it could just be a natural iterative move if you look at some of the trends. Nvidia added compute queues to GK110 (not present in lower Kepler GPUs) but retained them even in the lowest Maxwell GPUs (even though those weren't compute-focused offerings). Maxwell also seems to have better all-around compute shader performance than Kepler.

The other side of this is that async shaders don't actually add raw performance; rather, they let existing hardware resources be leveraged more efficiently via software. In theory you could offset this gap in other ways. In practice, what matters isn't necessarily feature parity but result parity.
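To put that in concrete terms, here is a minimal sketch (my own illustration, not code from Oxide, AMD, or Nvidia) of how async compute is expressed at the D3D12 API level: the application creates a separate compute queue next to the direct (graphics) queue and only synchronizes the two with a fence where one genuinely depends on the other. Whether the two queues actually overlap on the GPU is entirely up to the hardware and driver, which is exactly the feature-parity-versus-result-parity distinction above.

```cpp
// Minimal async-compute setup sketch for D3D12 (illustrative only).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Graphics work is submitted on a DIRECT queue...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...while compute work goes on a separate COMPUTE queue that the
    // GPU *may* run concurrently with graphics, or may serialize.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A fence expresses the only ordering the app actually needs:
    // graphics waits for the compute results right where it uses them.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // After recording/submitting compute command lists (omitted here):
    //   computeQueue->ExecuteCommandLists(...);
    computeQueue->Signal(fence.Get(), 1);  // compute marks completion
    gfxQueue->Wait(fence.Get(), 1);        // graphics waits only at this point

    return 0;
}
```

Nothing in that snippet forces concurrent execution; a driver that simply drains one queue after the other still produces correct results, just without the efficiency win.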
 
2 years is relatively "new" in the scope of GPU development cycles.

My concern is that Nvidia will release Pascal without fixing the context switching and just buy off all the devs to exclude async shaders.

If what Oxide says about the consoles getting 30% performance uplifts is true, it's a loss for all of us.

That's probably a bit tinfoil-hattish, but crazier theories have been put forward.
 
Not so tinfoil-hattish, IMO. Over the course of the next couple of years, more titles that use async shaders will be ported from the consoles. If Nvidia buys out the PC port and they incur a performance penalty, what stops them from disabling async shaders and rewriting part of the code to suit themselves?

The only thing I wonder is whether Microsoft built in safeguards to weed out this kind of shady practice.

Or whether AMD will have access to the original console code to know that something fishy is going on.

As for PC-only GameWorks games like Ark, I think we can be reasonably sure Nvidia won't allow async shader code, especially if it would boost AMD significantly.
 
Unfortunately there are a lot more GameWorks titles than there are Gaming Evolved ones.
 
I don't understand what all the fuss is about.

Even if async compute doesn't work or has a massive performance hit on Nvidia, you can be sure as hell that it won't be supported in any GameWorks, Unity, or UE titles, and that's the majority of upcoming games.

Now IF AMD has the same edge with any of the popular engines under DX12, THEN Nvidia is in trouble.
 
My only concern with AMD's DX12 dominance is that we are going to have 3 years of Hawaii and Fiji refreshes while AMD sits on one hand and pats itself on the back with the other.
You're drinking too much Kool-Aid. AMD has never managed itself properly long enough to stay on top, even if DX12 is a boon for GCN. AMD's biggest enemy has always been itself.
 
Not so tinfoil-hattish, IMO. Over the course of the next couple of years, more titles that use async shaders will be ported from the consoles. If Nvidia buys out the PC port and they incur a performance penalty, what stops them from disabling async shaders and rewriting part of the code to suit themselves?

Well, people wanted closer-to-the-metal coding, right?

So they will get exactly what they asked for: games where the code has been optimized for a certain GPU architecture.
 
Or not. If Nvidia pays everyone off, it will just be one GPU architecture.

lol
 
Console engines are getting optimized for async, and some already are.

So if those optimizations aren't ported over to PCs, it will artificially gimp PC performance and bring consoles closer to, or even beyond, what PCs can do. Take Tomorrow Children with its ultra-fine responsiveness and low lag time, or ultra-low-impact physics calculations, which PCs might struggle badly with if they lack async shaders. It's not all about fps.
 
I don't understand what all the fuss is about.

Even if async compute doesn't work or has a massive performance hit on Nvidia, you can be sure as hell that it won't be supported in any GameWorks, Unity, or UE titles, and that's the majority of upcoming games.

Now IF AMD has the same edge with any of the popular engines under DX12, THEN Nvidia is in trouble.

You're drinking too much Kool-Aid. AMD has never managed itself properly long enough to stay on top, even if DX12 is a boon for GCN. AMD's biggest enemy has always been itself.

There is a lot of hyperbole right now.
(I still remember how Oxide didn't go "full throttle" on DX11 in "Star Swarm" (no DX11 multi-threading for a long time))
.oO(If you wanted to be hyperbolic, you could argue that Oxide held back DX11 performance so NVIDIA's DX11 numbers would look worse against Mantle)
This is the same company now, showing the same thing...

And some technical "mumbo-jumbo" that cannot be overlooked.

DX12 will NOT replace DX11/11.1/11.2 with the flick of a switch.
DX12 coding requires a lot more of the developer, coding-wise... a LOT more.
The most used API for the next 3-4 years will be DX11.3.
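To give a feel for what "a lot more" means in practice, below are two small helper sketches of bookkeeping the application itself must now do and that the DX11 driver used to handle automatically (the function names and surrounding setup are hypothetical, purely for illustration): the app has to transition resources between states itself, and it must not reuse a command allocator until the GPU has finished with it.

```cpp
#include <windows.h>
#include <d3d12.h>

// Hypothetical helper: transition a texture the compute pass wrote (UAV state)
// so the graphics pass can sample it. Forget this barrier and you get the
// "wrench in the system" mentioned later in this post.
void MakeComputeOutputReadable(ID3D12GraphicsCommandList* cmdList,
                               ID3D12Resource* computeOutputTexture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = computeOutputTexture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_UNORDERED_ACCESS;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);
}

// Hypothetical helper: block on the CPU until the GPU has passed `fenceValue`,
// then reset the allocator. Under DX11 the driver tracked this lifetime for you.
void WaitThenResetAllocator(ID3D12Fence* fence, UINT64 fenceValue,
                            HANDLE fenceEvent, ID3D12CommandAllocator* allocator)
{
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
    allocator->Reset();
}
```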

High-level APIs have their place (indie developers, small studios, large target platforms).
Low-level APIs also have their place (large studios).

And then there is the architectural/design philosophy differences.

Looking at AMD vs NVIDIA numbers in DX11, one has to wonder what is holding GCN back under DX11 compared to DX12.

Looking at NVIDIA's DX11 vs DX12 numbers, one has to wonder what stalls/fails in the pipeline with the current code/drivers.

Despite the lower driver overhead under DX12 compared to DX11, the lower numbers indicate a major bug... one that is not only eating up the reduced overhead but also making the pipeline slower than under DX11 by quite a substantial amount.

Looking at those two questions might yield a better understanding of the issue at hand.

And one last thing to add:
You can code with DX12 in a lot of different ways, but you also have to be very diligent, as the "closer to the metal" approach means you can literally toss a wrench into the system, quite unintentionally.

If the DX12 "game" comes down to who has the best developer relations, I got a little feeling which company will go into "full gear" then...looking at past history.

But I would still say (as I have done multiple times) that it is way too early to draw any conclusions yet.

But it will definitely be fun to read this thread again in 6-12 months' time.
Old "fire" threads are always hilarious when gazed upon with history as a lens ;)
 
Console engines are getting optimized for async, and some already are.

So if those optimizations aren't ported over to PCs, it will artificially gimp PC performance and bring consoles closer to, or even beyond, what PCs can do. Take Tomorrow Children with its ultra-fine responsiveness and low lag time, or ultra-low-impact physics calculations, which PCs might struggle badly with if they lack async shaders. It's not all about fps.
Which console engines are being optimized for DX12? Names of games and sources?
 
Most of them are cross platform

Frostbite, UE4, CryEngine. Not sure about Nitrous.

Those are the biggest, and they will all have DX12.
 
They all have DX12 and use async? Source?
Which games have DX12 and are being released this year? I really don't know, so I'm asking.
 
Not sure; a few have been announced. ARK and Fable Legends for sure.

A lot of 2015 releases will get DX12 patches, however.

CDPR announced The Witcher 3 will get one.

Pretty sure Frostbite games will get them too. It's pretty trivial to move from Mantle to DX12.

No one knows how many will use async; we will have to wait and see.

My guess is it will be iterative with multiple patches. They will get core functionality up first, then patch in performance features.
 

What are you talking about? Everything was great because they did have source code. Then they added GameWorks HairWorks which... drum roll... killed their performance, because it's not open source or source-available to AMD, and it wasn't until after release that the massive amount of unneeded tessellation was found to be the cause.

So yes, it runs better now, as noted in the [H] review, because the tessellation was turned way down after release.

It's still shitty on all hardware, though, so I'm not sure it's a very good example of a feature done right.
 
You're drinking too much Kool-Aid. AMD has never managed itself properly long enough to stay on top, even if DX12 is a boon for GCN.

AMD seems to make a hit, then ride on it as long as possible.

- Athlon 64 - They didn't do much other than up clocks and do die shrinks, then fused two together to make the Athlon 64 X2. When Intel made the Core 2 Quad, AMD responded with the FX-73: TWO dual-core processors (based on the same architecture as the original Athlon 64).

- Radeon 5870 - Huge hit; it embarrassed nVidia almost as much as the 4870 did. The 6970 was barely faster, and was also the first hint of AMD artificially increasing prices. We all knew that the 6970 was the true successor to the 5870, but AMD kept selling the idea that it was a "new class" for them to justify the price hike over the 5870. They said that the successor to the 5870 was the slower 6870... Nice, AMD.

AMD's MO appears to be one of two things:

- Make a product that is slower than the competition, charge the same or more and hope for the best.

- Make an amazing product that blows away the competition, then do nothing while your competition meets and then surpasses you in two years' time. Meanwhile AMD acts surprised and "refreshes" its old products in hopes of catching up.

With Hawaii, Fiji, Tonga, and Tahiti, AMD seems to have done both at the same time.
 
- Make a product that is slower than the competition, charge the same or more and hope for the best.
When was this? I've always associated AMD/ATI products with being a bit cheaper than the top competition. At least, all the AMD products (GPU/CPU) I've bought have been cheaper than an equivalent product from the competition.
 
When was this? I've always associated AMD/ATI products with being a bit cheaper than the top competition. At least, all the AMD products (GPU/CPU) I've bought have been cheaper than an equivalent product from the competition.

Does the overpriced FX-9590 refresh your memory? A couple of weeks later the price was cut by more than half, just because it wasn't competitive (AMD was smoking something really "green" with that launch). And without going that far back: the R9 Fury X, priced equal to the 980 Ti but embarrassingly slower than the competition.
 
When was this? I've always associated AMD/ATI products with being a bit cheaper than the top competition. At least, all the AMD products (GPU/CPU) I've bought have been cheaper than an equivalent product from the competition.

The Fury X. Same price as the 980 Ti but slower, except maybe in DX12, which is why this thread exists.
 
Does the overpriced FX-9590 refresh your memory? A couple of weeks later the price was cut by more than half, just because it wasn't competitive (AMD was smoking something really "green" with that launch). And without going that far back: the R9 Fury X, priced equal to the 980 Ti but embarrassingly slower than the competition.

Not to mention the original FX, the original Phenom, the Radeon 2900 XT, and arguably the 7970 (priced like a GTX 680, performed like a GTX 670; AMD improved the drivers though and got a lot of extra life out of it, as when it was rebadged as the 280X it went blow for blow with the GTX 770, itself a rebadged/overclocked GTX 680).
 
I don't know what the big deal is with Nvidia people.
If your card does not support a feature, just admit it.
Not every card supports every feature. I swear, if the Green Team's troll machine wanted to fight this, they should be making up reasons why it's not needed, which some of them are starting to do now, albeit only the intelligent ones.
 
He released a new version of the benchmark and increased the duration, which now shows Nvidia getting lower latency across the board.

https://forum.beyond3d.com/posts/1869416/

I'm struggling to see how Nvidia is failing by any sensible metric when graphics + compute completes in 92ms on the GTX 980 Ti and 444ms on the Fury X, or compute-only, which is 76ms versus 468ms. AMD, whatever it's doing, is just broken.

I don't know what "single command list" means, but it murders Maxwell's results.
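For what it's worth, a queue-latency micro-benchmark of that sort presumably boils down to submitting a batch of command lists, signalling a fence, and timing how long the GPU takes to report completion; "single command list" most likely means recording graphics and compute into one list on the direct queue, which forces them to run back to back. A hedged sketch of the general idea (this is not the actual Beyond3D program):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <chrono>

// Time how long a queue takes to finish a batch of already-recorded command
// lists. The batch could be compute-only, graphics-only, or graphics+compute
// split across queues; the caller decides what to pass in.
double TimeQueueCompletionMs(ID3D12CommandQueue* queue,
                             ID3D12CommandList* const* lists, UINT count,
                             ID3D12Fence* fence, UINT64 fenceValue,
                             HANDLE fenceEvent)
{
    const auto t0 = std::chrono::high_resolution_clock::now();

    queue->ExecuteCommandLists(count, lists);  // kick off the work
    queue->Signal(fence, fenceValue);          // GPU writes fenceValue when done

    fence->SetEventOnCompletion(fenceValue, fenceEvent);
    WaitForSingleObject(fenceEvent, INFINITE); // CPU blocks until the GPU finishes

    const auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```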
 
Those are rather interesting results. Hmm...

I don't know what the big deal is with Nvidia people.
If your card does not support a feature, just admit it.
Not every card supports every feature. I swear, if the Green Team's troll machine wanted to fight this, they should be making up reasons why it's not needed, which some of them are starting to do now, albeit only the intelligent ones.
What kind of weird AMD fanboy post is that? No one really knows what's going on. You aren't making AMD fanboys look any better with useless posts like that.
 
The number of people running unknown software from some guy on a forum who says it's his first DX12 program, and then using it as a benchmark, amazes me. Please tell me there is at least source code for it and people are doing their own compiles first, not just running some exe. I'm not saying it's malicious (I'm not going to download it), but it's just bad practice in general.
 
Robert Hallock has been citing that benchmark to shit on Nvidia for the last 2 days so I assume that makes it a valid test. Otherwise he would certainly keep his mouth shut. AMD is known for a lot of chatter, though... Richard Huddy never stops talking.
 
The number of people running unknown software from some guy on a forum who says it's his first DX12 program, and then using it as a benchmark, amazes me. Please tell me there is at least source code for it and people are doing their own compiles first, not just running some exe. I'm not saying it's malicious (I'm not going to download it), but it's just bad practice in general.

Well, here is the link. Buy the game like other people do, and then you can run the benchmark program.

http://www.ashesofthesingularity.co...FOUNDER&utm_campaign=Ashes+Founders+Benchmark
 