Async compute gets a ~30% increase in performance. Maxwell doesn't support async.
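For anyone wondering what "async compute" actually means at the API level: D3D12 lets a game create a compute queue alongside the normal graphics queue and feed both at once, and GCN's ACEs can run that compute work concurrently with graphics, which is where the claimed gains come from. A minimal sketch of just the queue setup (assuming you already have an ID3D12Device; obviously not a full renderer):

Code:
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch only: create one graphics (direct) queue and one compute queue.
// Whether work on the compute queue actually overlaps with graphics work
// is up to the hardware/driver -- the API just lets you express it.
HRESULT CreateQueues(ID3D12Device* device,
                     ComPtr<ID3D12CommandQueue>& graphicsQueue,
                     ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    HRESULT hr = device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));
    if (FAILED(hr)) return hr;

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    return device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}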

People are now talking about class action lawsuits against nvidia because their 980 Ti only beats the Fury X by a few frames in some game that nobody will play.

*facepalm*

Really beginning to wonder if 99% of forum accounts out there are fake, or if people are just this stupid. Really hoping it's the former.

I'm wondering the same. It all seems manufactured to me; I've never seen something like this blow up so quickly based on the word of a single dev.

Either way, this is going to backfire horribly for AMD if only a select few DX12 games actually end up being faster. People are building up unrealistic expectations now...
 
LOL @ Nvidia fanboys sending Mahigan death threats. Stop attacking Mahigan for Nvidia's shortcomings. I'm an NVDA shareholder, so I stand to lose just as much as anyone else here, probably more, and yet somehow I've managed to keep myself from threatening anyone.
 
It's manufactured outrage. People on the internet love to get angry over nothing, PC gamers especially love to get angry at Nvidia.

These performance improvements will never materialize into anything substantial. There will never be an actual game that people play where AMD performs 30% better than its Nvidia counterparts. I mean, it's barely even true for AotS, and for all we know AMD/Oxide did their best to cripple Nvidia's performance.

Nvidia did nasty shit with tessellation in DX11, this looks like AMD doing nasty shit with Async in DX12.
ARK benchmarks will be released this week, Nvidia will be ahead (It's UE4 after all), and this whole issue will disappear.

People are like robots. This shit is so predictable, it's sad.
 

I can't tell if you are bipolar or what. In one post you are hoping for a refund due to the issue, and then in your next post you are calling those people robots.

How is ANY of this AMD's fault? Nvidia, Microsoft, Intel and AMD have ALL had SOURCE CODE for this game for over a year. Nvidia even gave them optimized code that they used.
 
If a game is written in such a way as to take full advantage of AMD hardware, then no amount of time with the source code would solve Nvidia's problems. AMD knows about this first-hand; they've spent the last 5 years struggling against tessellation.

If a game was originally designed to run under Mantle with the AMD logo securely attached to their marketing material, and then adopted as a DX12 title in Mantle's wake, I'm not too shocked that it runs poorly on Nvidia cards. In this nasty world we live in, it makes perfect sense. In fact if this were Nvidia doing the same thing, it would be just another Monday.

After seeing dozens of GameWorks controversies (eww evil Nvidia) I can't take people seriously when AMD has potentially done the same thing. I'm not going to treat AMD as if they are saints.

As for my refund, it's totally unrelated to this DX12 issue. I would just be piggybacking on the controversy.
 
Oh don't worry about Tainted, he was mad he didn't get a discount price on the 980 Ti he bought because the Fury X did not beat the 980 Ti. Now he is mad that it may have been a bad idea to trade away his AMD card, because it's possible this async shader thing could be a huge problem on his shiny new overpriced NVIDIA card. He will get over it at some point and realize he should have waited to see how this DirectX 12 stuff plays out.
 

Except it is different, because AMD can NEVER get source code for gameworks. Even if the Developers PAY nvidia to try to optimize for AMD, they are barred from working with AMD. Completely different scenario from here.

Our code has been reviewed by Nvidia, Microsoft, AMD and Intel. It has passed the very thorough D3D12 validation system provided by Microsoft specifically designed to validate against incorrect usages. All IHVs have had access to our source code for over a year, and we can confirm that both Nvidia and AMD compile our very latest changes on a daily basis and have been running our application in their labs for months.

http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/

That was written by Dan Baker who helped design D3D9/D3D10 standards at Microsoft.

has over a decade of experience in the game industry. Dan started his career at Microsoft, where he helped design the D3D9 standard while working as a key member of the original High Level Shading Language Team. Dan led the technical development of HLSL for D3D10, which is now an industry standard. While at Firaxis, Dan developed technology to bring Civilization V to market with the world’s first threaded D3D11 engine. Dan is an active member of the GAB (Graphics Advisory Board), has multiple industry-known publications, and has spoken and lectured at conferences such as SIGGRAPH, AFDS, GDC, and I3D.

If you have proof that this game is purposefully gimping Nvidia hardware please post it, because the developers have explicitly stated multiple times that they are creating their engine to work well on all hardware and directly refute your claims.
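For what it's worth, the "very thorough D3D12 validation system" Baker mentions is presumably along the lines of Microsoft's D3D12 debug layer, which flags incorrect API usage at runtime (I'm assuming that's what he means; there's also WHQL-level conformance testing). Turning it on looks roughly like this:

Code:
#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch: enable the D3D12 debug layer before creating the device so that
// incorrect API usage gets flagged at runtime. (Whether this is exactly the
// "validation system" Oxide means, I can't say -- but it's the standard one.)
void EnableD3D12Validation()
{
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();
}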
 
There's an AMD logo attached to it. It started off as a Mantle game.

If that's enough for me to be skeptical of GameWorks games, it's enough for me to be skeptical of AMD for the same reasons. And the "We've been working with them for a year" line is the same trope that GameWorks devs use, but everybody brushes it off. Go back and read some of CDPR's press releases about poor AMD performance in TW3... They used the same rhetoric. If we know GameWorks devs are full of shit then why should we assume any different about Oxide? Do you honestly expect them to post on a public forum saying "Screw Nvidia, we focused on AMD"? Nobody would say that.

I swear these posts remind me of stuff PRIME would say, just with everything flipped to the other side of the aisle. I can't even take it seriously anymore.
Take all of the GameWorks controversies, replace "Nvidia" with "AMD" and "Tessellation" with "Asynchronous Compute", and you're left with this exact discussion.
 
Any multithreading should be used with caution for the same reasons. Does that mean it's bad or will cause issues? No. Please let me know when you want to go back to single-core CPUs ;)

Coding for different architectures has to be done carefully; if it isn't, there will be problems, and the range of problems is dramatic.
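And "carefully" mostly means keeping threads off each other's data. A trivial sketch of the safe pattern I mean (each worker owns its own slot, results combined after the join; just an illustration, nothing to do with any engine's actual code):

Code:
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Each worker writes only to its own slot in 'partial', so no locking is
// needed; the main thread combines results after join(). Unsynchronized
// shared mutable state is where multithreaded code usually goes wrong.
long long ParallelSum(const std::vector<int>& data, unsigned threads)
{
    std::vector<long long> partial(threads, 0);
    std::vector<std::thread> pool;
    const size_t chunk = (data.size() + threads - 1) / threads;

    for (unsigned t = 0; t < threads; ++t) {
        pool.emplace_back([&, t] {
            const size_t begin = t * chunk;
            const size_t end = std::min(data.size(), begin + chunk);
            for (size_t i = begin; i < end; ++i)
                partial[t] += data[i];
        });
    }
    for (auto& th : pool) th.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}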
 
The writing has been on the wall for a long time, as everything AMD has been doing for the past 4 years leads to DX12.

I also believe we have yet to see the bigger picture, which is CrossFire and the way it uses PCI Express without the need of a bridge ribbon. Now we are talking 16 ASC engines with 4 GB + 4 GB = 8 GB of RAM, but does this also mean a 512-bit memory bus x 2? Or does a 295X2 in CrossFire with a 290X mean 24 ASC engines? The scaling, or how it scales, is top secret for now.
 
I think this is more akin to a best-case scenario for AMD.

Remember how in all of their "benchmarks" the FuryX won against the 980Ti? Yeah, same deal here.

Either way, this is the first time I've seen so many care about a game that 99% won't even play.

We'll see how Deus Ex and ARK perform.
 

This is a game I will never play, nor any other RTS game for that matter. But I think the issue really comes down to async compute. Is it a required DX12 feature? And did NV at any point ever say that their GPUs support it (in hardware)?

I also feel that by the time this is widely used, we'll be two generations away from Fiji and maxwell.
 

Ouch compared to PRIME? I guess if I want to be called that I'd have to completely ignore everything you say and just bash Nvidia constantly ;)

I don't care about AMD or Nvidia. I just try to promote FACTS and not FUD. Please point out anything that isn't a FACT in my arguments and I'll happily change my stance. I'm for better gaming for everyone and against creating features that depend on specific hardware, as that brings us backwards, not forwards, and hurts ALL gamers. How would you like it if you could only tessellate on Nvidia hardware, and all VR was limited to AMD? It would be stupid. Let's not go that route.
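That's also why I'd rather see engines key off capability queries than vendor IDs where possible. D3D12 exposes most of its optional features through CheckFeatureSupport, though as far as I know there is no cap bit that says whether compute queues actually execute concurrently, which may be exactly why Oxide fell back to a vendor check. Rough sketch, assuming an existing ID3D12Device:

Code:
#include <d3d12.h>

// Sketch: query optional D3D12 capabilities instead of branching on vendor ID.
// (As far as I know there is no cap bit for whether compute queues actually
// run concurrently with graphics, which is presumably why Oxide ended up
// special-casing by vendor.)
D3D12_RESOURCE_BINDING_TIER QueryBindingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return D3D12_RESOURCE_BINDING_TIER_1;   // conservative fallback
    // Other members of interest: TiledResourcesTier,
    // TypedUAVLoadAdditionalFormats, ConservativeRasterizationTier, ...
    return options.ResourceBindingTier;
}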

Where is Nvidia calling them out, then? They have the source code and run it themselves; why aren't they making a statement, instead of leaving their fans to defend them without anything to back up their argument?

I never said anything about Nvidia failing or being doomed or anything like that. Hell their DX11 performance is great and even the original blog I've linked dozens of times states that the DX12 performance will increase once better drivers are out, and I've stated that multiple times.

So if you have FACTS that contradict anything I say, please state them. But do not claim this is anything like Gameworks as it is very, very different.

Also, it wasn't a Mantle game; it was a DX11 game, and when they hit performance issues they went with Mantle once it was available, and now DX12. If you'd read the blog this would all be spelled out and you'd have FACTS, not FUD ;)
 
PRIME would claim the things he says are facts. You can't just find things you agree with and call them facts... It's more like, I don't know, bias.

Oxide devs' posts? Not facts.
Early async benchmarks @ B3D? Not facts.
Even a lot of Mahigan's conclusions aren't factual.

There is a difference between facts and speculation.
If the things we are discussing in this thread were actually proved to be facts, Nvidia's head would be on a spike and credible sites like AnandTech, TomsHardware, and yes HardOCP would be running stories. Newegg and Amazon would be accepting returns (as of yet, they aren't).

But instead, the only place you will find this story is currently the rumor mill... Because they aren't facts. Well not yet anyway. The situation obviously doesn't look very good for Nvidia right now. I really hate bandwagons -- "Everybody else is mad! You're wrong if you're not mad right now! BE MAD!"
 
Again, prove me wrong then. I'm quoting the developers who have stated they've worked with all companies for a year. How is that not a fact?

I never said Nvidia intentionally gimped their dx12 hardware, or anything about their DX12 performance except that it will probably get better in the future since the benchmark first came out.
 

You can be entirely right or entirely wrong; nobody can actually prove anything, because most of the "FACTS" discussed are really just speculation and "oh, this is happening because I think this is the reason", and a whole conclusion can't be drawn from assumptions alone.
 

Again, please point out anything that I said was a fact that wasn't. I never said anything about Nvidia's async issues, just quoted the dev article that came out when the benchmark was first released, which detailed how they work with Microsoft, Intel, Nvidia and AMD, have provided them with source code the whole time, and are not biasing their game to any one vendor.

They are facts unless you want to call the developers liars and prove them wrong.

Again, everyone please read the blog so I won't have to quote it all the time when people say the opposite out of speculation: http://www.oxidegames.com/2015/08/16/the-birth-of-a-new-api/
 
I can't even recall the last time the pitchfork-wielding manufactured outrage crowd actually got anything right. Everybody went apeshit over the 970's issues, but they never actually caused any real problems. That being said, I would still never buy a 970 as long as I could avoid it.

It seems in this community, the bigger the outrage is, the more wrong it is. :rolleyes: In my head, I associate all this panic and wild speculation with bad information. At least Mahigan presented the info in a calm and (mostly) respectful way. It's just too bad everyone else grabbed it and went wild... I spend too much time on Reddit.
 

You quoted me yet didn't read anything I wrote.

Either provide something to disprove what I've quoted from the developers of the game in question or stop calling it speculation and bad information.
 
Yeah we get it, saying "The developer said <this>" is a fact, since the developer actually did say it. It's not so much that the developers are lying, but rather the things they are saying are inconsequential.

Nobody cares if Nvidia had access to the source code for a year. What exactly are they supposed to do with it? I'm sure AMD had a lot of access to source code involving tessellation and they never fixed it over the course of many years. It was a hardware problem, and you learned to live with it, and it wasn't the end of the world.

Developers can say a lot of things but unless it has any implication in the real world (by way of factual evidence) then it's meaningless. It's a good starting point but Oxide's post tells us nothing conclusive other than the fact that their Mantle code doesn't work with Nvidia's asynch driver feature. People criticized AMD's tessellation issues because there were numerous benchmarks showing obvious problems -- those are facts.
 

Ok so you are saying it's Nvidia hardware issues then, and not software issues? And you are claiming that I'm the one speculating?

The only issue AMD has with tessellation is when games abuse it (Crysis 2, Witcher 3 HairWorks, etc.) so that objects are way, way over-tessellated (multiple polygons per pixel), which slows down AMD (and older Nvidia) hardware a lot since it can't do as much tessellation as the latest Nvidia hardware. No idea why we are talking about it, but let's at least talk reality.

I don't get why you are trying to give Nvidia a pass on the 970 memory issue, if it was AMD everyone would be trying to get every single person fired (hell they already do just over pricing alone).
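Back to the tessellation point for a second, just to put a rough number on "multiple polygons per pixel". A back-of-the-envelope sketch (the 2*F*F triangle count for a quad patch is approximate, and the screen coverage is made up, but the scaling is what matters):

Code:
#include <cstdio>

// Back-of-the-envelope: a quad patch tessellated at factor F yields on the
// order of 2*F*F triangles. If that patch only covers a small area of the
// screen, the triangles-per-pixel ratio explodes -- which is the
// "over-tessellation" complaint in a nutshell. Numbers are illustrative only.
int main()
{
    const double factors[] = {8.0, 16.0, 32.0, 64.0};
    const double coveredPixels = 64.0 * 64.0;   // patch covering ~64x64 pixels

    for (double f : factors) {
        double triangles = 2.0 * f * f;          // approximate
        double trisPerPixel = triangles / coveredPixels;
        std::printf("factor %4.0f -> ~%6.0f tris, ~%.2f tris/pixel\n",
                    f, triangles, trisPerPixel);
    }
    return 0;
}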
 

Ok and here you are making stuff up again by calling it Mantle code. It's DX12 code. They have had EVERYONE involved look at the code, and they all confirmed that it's being written correctly.

I honestly don't understand how you can claim I'm the one speculating when you keep making up shit out of nowhere, yet I can quote the developers to prove you wrong again and again.

So let's do so again:

"Our code has been reviewed by Nvidia, Microsoft, AMD and Intel. It has passed the very thorough D3D12 validation system provided by Microsoft specifically designed to validate against incorrect usages. "

"DirectX 11 vs. DirectX 12 performance

There may also be some cases where D3D11 is faster than D3D12 (it should be a relatively small amount). This may happen under lower CPU load conditions and does not surprise us. First, D3D11 has 5 years of optimizations where D3D12 is brand new. Second, D3D11 has more opportunities for driver intervention. The problem with this driver intervention is that it comes at the cost of extra CPU overhead, and can only be done by the hardware vendor's driver teams. On a closed system, this may not be the best choice if you're burning more power on the CPU to make the GPU faster. It can also lead to instability or visual corruption if the hardware vendor does not keep their optimizations in sync with a game's updates.

While Oxide is showing off D3D12 support, Oxide also is very proud of its DX11 engine. As a team, we were one of the first groups to use DX11 during Sid Meier's Civilization V, so we've been using it longer than almost anyone and know exactly how to get the most performance out of it. However, it took 3 engines and 6 years to get to this point. We believe that Nitrous is one of the fastest, if not the fastest, DX11 engines ever made.

It would have been easy to engineer a game or benchmark that showed D3D12 simply destroying D3D11 in terms of performance, but the truth is that not all players will have access to D3D12, and this benchmark is about yielding real data so that the industry as a whole can learn. We've worked tirelessly over the last years with the IHVs and quite literally seen D3D11 performance more than double in just a few years' time. If you happen to have an older driver laying around, you'll see just that. Still, despite these huge gains in recent years, we're just about out of runway.

Unfortunately, our data is telling us that we are near the absolute limit of what it can do. What we are finding is that if the total dispatch overhead can fit within a single thread, D3D11 performance is solid. But eventually, one core is not enough to handle the rendering. Once that core is saturated, we get no more performance. Unfortunately, the constructs for threading in D3D11 turned out to be not viable. Thus, if we want to get beyond 4 core utilization, D3D12 is critical."
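That last paragraph is the key one: one submission thread saturates under D3D11, while D3D12 lets every core record command lists in parallel. The pattern looks roughly like this (my sketch, not Oxide's code; assumes the device, queue, and all pipeline/resource setup already exist elsewhere):

Code:
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Sketch of D3D12's multithreaded submission model: N threads each record
// their own command list (no sharing, no locks), then a single thread
// submits the whole batch. D3D11's deferred contexts never scaled like this.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workers)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (unsigned i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([&, i] {
            // ... record this thread's share of draw calls into lists[i] ...
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    std::vector<ID3D12CommandList*> batch;
    for (auto& l : lists) batch.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(batch.size()), batch.data());
}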
 
I don't see proof, it's just a wall of text from the developer in question.
The same developer we've already established is potentially untrustworthy. They are no different than a GameWorks developer defending their most recent GameWorks trash heap. AotS has AMD's logo attached to it. Aside from sparking an investigation (which they did), I don't care what Oxide has to say about the issue. They've raised the question; now let independent researchers solve it.

In case this isn't obvious yet, the rational among us (myself included) are looking for some non-affiliated, objective results that prove one way or the other what the Oxide devs are claiming. That proof doesn't exist, so you are literally wasting your time quoting the developers over and over. There's absolutely nothing Oxide could say that would affect the outcome of this, short of "We were wrong, sorry" which would kill the whole debate.
 
Btw, what is the graphical feature difference that you get in the AotS bench using DirectX 12 vs. DirectX 11?

I haven't seen a single screenshot showing the difference. If there is no difference, why would you use the DX12 code path? That is like saying to use the DX11 code path on an AMD card rather than Mantle in games that support it and run faster with it. It seems AMD's performance is so poor it needs DX12 to come close to Nvidia, whereas Nvidia's performance is fine in DX11 and just needs some optimizations for DX12.

Would be great to see some differences in graphics fidelity.
 

Yea, potentially untrustworthy maybe in your little head.

An independent DX12 developer and an AMD rep have flat out said that Maxwell doesn't support async compute. The silence from Nvidia is deafening.
 

It's a developer who's working on an AMD sponsored title, don't you think there might be a little bias there? :rolleyes:

Everybody went apeshit when The Witcher 3 arrived with HairWorks, claiming intentional gimping of AMD cards, which has since been proven false. Back then nobody trusted CDPR; why should you trust Oxide now?

:rolleyes:
 
The source code has been released for review to AMD, Nvidia and Intel for over a year, and it meets Microsoft's specs for DirectX 12.
The developer even tried to help Nvidia fix the issue on their GPUs:

Oxide dev said:
Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that.
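For what it's worth, the "look at the Vendor ID" part he describes is about as blunt as it sounds. Presumably something along these lines, not their actual code (the PCI vendor IDs are the standard ones):

Code:
#include <dxgi.h>

// Sketch of the kind of vendor check the Oxide dev describes: query the
// adapter's PCI vendor ID and turn the async-compute path off on Nvidia
// hardware. (0x10DE = Nvidia, 0x1002 = AMD, 0x8086 = Intel.)
bool ShouldUseAsyncCompute(IDXGIAdapter* adapter)
{
    DXGI_ADAPTER_DESC desc = {};
    if (FAILED(adapter->GetDesc(&desc)))
        return false;                 // play it safe if we can't tell

    const UINT kNvidia = 0x10DE;
    return desc.VendorId != kNvidia;  // disabled on Maxwell per Oxide's post
}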
 

http://za.ign.com/the-witcher-3/90995/news/amd-claims-nvidia-completely-sabotaged-the-witcher

"We've been working with CD Projeckt Red from the beginning," said Huddy. "We've been giving them detailed feedback all the way through. Around two months before release, or thereabouts, the GameWorks code arrived with HairWorks, and it completely sabotaged our performance as far as we're concerned. We were running well before that... it's wrecked our performance, almost as if it was put in to achieve that goal."

Yeah, like that case? Which is actually the opposite?

Performance is equally as slow on NVIDIA and AMD hardware right now. This is the important part, it doesn't matter if you are running the Radeon R9 300 series, the Fury series, or Maxwell series, HairWorks is going to tank your performance no matter what. The way we see it now, HairWorks is an equal opportunity framerate destroyer.

http://www.hardocp.com/article/2015/08/25/witcher_3_wild_hunt_gameplay_performance_review/9#.VeV86ciqpBc

Having source code for a year means nothing. See above.
 
Is this true?

It might be or it might not be. There's no concrete info on Pascal at the moment other than that it will use HBM2, AFAIK. There's a decent possibility that Pascal will not improve on the problem (much), due to how long it takes to do a new GPU stepping.

But then again it's still a while away, and there's a chance NVIDIA is willing to delay Pascal to resolve the issue, so who knows.
 

So a game designed to use the strong points of Maxwell is bad because it works worse on AMD, but a game designed to fully use GCN is great and unbiased?

Ok I think I get your point.
 

I doubt they will delay it; it would be a clear indication that their cards are not what they claimed to be.

But with DX12 titles coming in droves next year, benchmark suites will be updated, and at that point they won't have many options.

Maybe AMD only has 18% of the PC market right now, but it still has a LOT of the integrated and console market. Since the vast majority of games nowadays are developed for those, I bet developers will bend to it.
 