Async compute gets a 30% increase in performance. Maxwell doesn't support async.

When did the graphics card companies take on marketing tactics from the supplement industry?

35% moar gainz with our pill while losing 25% more fat!
 
When did the graphics card companies take on marketing tactics from the supplement industry?

35% moar gainz with our pill while losing 25% more fat!

It's not AMD. It's the Oxide person, and he's talking about PS4 games.
 
When did the graphics card companies take on marketing tactics from the supplement industry?

35% moar gainz with our pill while losing 25% more fat!

Nvidya damage control

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995

"P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."
 
I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally."

This reads so badly it's not even funny. When Oxide didn't capitulate, Nvidia proceeded to try to call them out in public over it.

But sure, everything is OK and everyone is friends...
 
I'm sure he will probably get in trouble for airing that bit of dirty laundry...
 
things could get pretty disruptive in a year when graphics engines built around and optimized for AMD’s GCN architecture start making their way to the PC
Developers are going to build game engines around 18% of the market?
I’ve heard of (console) developers getting 30% more GPU performance by using Async Compute.
So, an unproven rumor then?
 
- Nvidia customers wasted money on their non-DX12-ready GPUs

So... that added performance per watt/dollar/inch/heat that everyone has been experiencing with their 980 and 980 Ti is imaginary? So the fact that the 980 Ti is both absolutely and dollar-for-dollar better than anything AMD can produce is just background noise because it doesn't support this async technology... which isn't available in ANY game at the moment?


Sorry to be 'that guy', and I freaking LOVE AMD and their products, but saying "ya wasted yer moneyz!" about an undisputed and vastly superior product simply because it does not support an as-yet-unused feature is silliness to the Nth degree.
 
Not that I'm saying he's wrong, but... I'll believe it when I see it.
Until then, this is all hot air, as is the case with most AMD rumors.

If Mantle supports async compute (as I believe it does), is there any evidence of massive AMD performance gains under Mantle? Or is it simply the fact that Mantle was never fully utilized?
 
Not that I'm saying he's wrong, but... I'll believe it when I see it.
Until then, this is all hot air, as is the case with most AMD rumors.

If Mantle supports async compute (as I believe it does), is there any evidence of massive AMD performance gains under Mantle? Or is it simply the fact that Mantle was never fully utilized?

The devs of The Tomorrow Children on PS4 stated that by simply using Async Compute they gained nearly 20% more performance.
 
Not that I'm saying he's wrong, but... I'll believe it when I see it.
Until then, this is all hot air, as is the case with most AMD rumors.

If Mantle supports async compute (as I believe it does), is there any evidence of massive AMD performance gains under Mantle? Or is it simply the fact that Mantle was never fully utilized?

Dragon Age: Inquisition, Battlefield 4, Civilization: Beyond Earth, and Thief all use AMD's Mantle. I see an average of around a 25 percent fps increase from using Mantle over DirectX.
 
Dragon Age: Inquisition, Battlefield 4, Civilization: Beyond Earth, and Thief all use AMD's Mantle. I see an average of around a 25 percent fps increase from using Mantle over DirectX.

lol, and with what CPU? Because I get negative performance numbers with Mantle, and this is tested with a 280X and a 390X. Both tend to perform considerably worse, and Thief tends to stutter after ~45 min of gaming. BF4 is even worse. The only game I haven't tested yet is CBE.
 
In Dragon Age: Inquisition I can run the ultra preset with only one card under Mantle, but I have to run in CrossFire under DirectX; otherwise I keep seeing stuttering as it drops from 60 fps down to 30 fps. In Thief I have to run CrossFire under both Mantle and DirectX to run the ultra preset, but under DirectX I get a lot of stuttering as the frames drop. Mantle is able to keep my frames above 60. I just ran the Thief benchmark to get the numbers again: DirectX avg 68.2, min 39.4; Mantle avg 84.8, min 61.2.

Battlefield 4 I no longer have installed on my computer, so I'm not sure how its current performance is, and Beyond Earth I have only played for a total of 22 minutes; I really don't care for the game.

As for my CPU, I am running an FX-8350 at stock speeds, and my 290s are also running at stock speeds.
 
Nowhere is Pascal mentioned.


No one knows if there are uarch improvements in Pascal's design that specifically address async compute.
 
I'm seeing elsewhere that UE4 (ARK) doesn't actually support async.
Not sure if that's true, but if it is, then it's a useless test to some degree.
 
Dragon Age Inquisition, Battlefield 4, Civilization Beyond Earth, and Thief all use AMD's Mantle. I see an average of around 25 percent fps increase from using mantle over direct x

Ditto, I get a 10 fps uplift with Mantle with my DC...

not to mention the change in overall smoothness...

Thief in particular, it's like playing a completely different game...
 
In Dragon Age: Inquisition I can run the ultra preset with only one card under Mantle, but I have to run in CrossFire under DirectX; otherwise I keep seeing stuttering as it drops from 60 fps down to 30 fps. In Thief I have to run CrossFire under both Mantle and DirectX to run the ultra preset, but under DirectX I get a lot of stuttering as the frames drop. Mantle is able to keep my frames above 60. I just ran the Thief benchmark to get the numbers again: DirectX avg 68.2, min 39.4; Mantle avg 84.8, min 61.2.

Battlefield 4 I no longer have installed on my computer, so I'm not sure how its current performance is, and Beyond Earth I have only played for a total of 22 minutes; I really don't care for the game.

As for my CPU, I am running an FX-8350 at stock speeds, and my 290s are also running at stock speeds.

Well, there's your problem. Use an Intel CPU and the Mantle benefits all but disappear. In BF4 I get worse performance using Mantle with a 290X, and Thief nets 2-5 fps using Mantle.

As I've said before, when DX12 games show up and AMD cards are killing it, I'll happily jump back to using AMD. My 980 Ti is the first Nvidia card I've owned in 6 years!

Until then, I will enjoy my 980 Ti.
 
So... that added performance per watt/dollar/inch/heat that everyone has been experiencing with their 980 and 980 Ti is imaginary? So the fact that the 980 Ti is both absolutely and dollar-for-dollar better than anything AMD can produce.

An AMD card that sold for $300 last year will almost best an Nvidia card going for $650 today.

[benchmark chart: 3mPt9sI.png]


ktnxbai.
 
Thread title should be:

"Async compute gets 30% increase in performance in "game x". Maxwell doesn't support async in game X"

Would be fun to bookmark the worst "offenders" and hold them responsible for their postings in 6-12 months...but they will make excuses and not care...some wasted time it would be, sadly.

What is funny is that "Asynchronous" and "ACE" all are AMD "marketing terms"....makes you wonder?
 
Asynchronous = AMD marketing term :confused:
Maybe it would have been better if they said parallel instead of asynchronous, but then again, asynchronous and parallel computing are different things.

ACE just means "Asynchronous Compute Engine", which is an AMD marketing term :p

Someone explained the whole situation really well on reddit:

Think of traffic flow moving from A->B.
NV GPUs: Has 1 road, with 1 lane for Cars (Graphics) and 32 lanes for Trucks (Compute).
But it cannot have both Cars and Trucks on the road at the same time. If the road is being used by Cars, Trucks have to wait in queue until all the Cars are cleared, then they can enter. This is the context switch that programmers refer to. It has a performance penalty.
AMD GCN GPUs: Has 1 Road (CP; Command Processor) with 1 lane for Cars & Trucks. Has an EXTRA 8 Roads (ACEs; Asynchronous Compute Engines) with 8 lanes each (64 total) for Trucks only.
So Cars and Trucks can move freely, at the same time, towards their destination, in parallel, asynchronously: Trucks through the ACEs, Cars through the CP. There is no context switch required.
NV's design is good for DX11, because DX11 can ONLY use 1 Road, period. GCN's ACEs are doing nothing in DX11, the extra roads are inaccessible/closed. DX12 opens all the roads.
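
To make the road analogy concrete, here is a minimal D3D12 sketch (my own illustration, not from any game's actual code; error handling omitted) that creates the two kinds of "roads" an engine can submit to: a direct queue for graphics and a separate compute queue. D3D12 only exposes the queues; whether compute work actually overlaps with graphics is up to the hardware and driver, which is exactly where GCN's ACEs and Maxwell differ.

```cpp
// Sketch: creating separate graphics and compute queues in D3D12.
// Assumes an already-created ID3D12Device*; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The "Cars" road: a direct queue accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // The "Trucks" road: a compute-only queue. The API does not promise that
    // work submitted here runs concurrently with graphics; on GCN the ACEs can
    // feed it in parallel, while other hardware may end up serializing it.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```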

This is what Oxide Dev said:
Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than to not.

And he followed up with this:
Personally, I think one could just as easily make the claim that we were biased toward Nvidia as the only 'vendor' specific code is for Nvidia where we had to shutdown async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path.

Curiously, their driver reported this feature was functional but attempting to use it was an unmitigated disaster in terms of performance and conformance so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute so I don't know why their driver was trying to expose that.
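
For reference, the "look at the Vendor ID" fallback described above maps onto something like the following sketch (hypothetical, not Oxide's actual code): read the adapter's PCI vendor ID through DXGI and gate the async-compute path on it. 0x10DE is Nvidia's PCI vendor ID and 0x1002 is AMD's.

```cpp
// Hypothetical sketch of a vendor-specific fallback: query the PCI vendor ID
// via DXGI and decide whether to use a separate async compute path.
#include <dxgi.h>

bool ShouldUseAsyncCompute(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);

    const UINT kVendorNvidia = 0x10DE;  // PCI vendor IDs
    const UINT kVendorAMD    = 0x1002;

    if (desc.VendorId == kVendorNvidia)
        return false;  // keep everything on the direct queue for this vendor
    if (desc.VendorId == kVendorAMD)
        return true;   // submit compute work to a dedicated compute queue

    return false;      // unknown vendor: take the conservative path
}
```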

And now I'll get my coffee and popcorn ready :D
 
So this thing is going to destroy Nvidia, like Mantle, or AMD's tessellation performance, or TressFX, or any other AMD hype train that doesn't live longer than 3-6 months?

Also, I really didn't buy a 980 Ti for DX12 performance, just like I didn't buy the 9700 Pro for DX9 performance or the 5870 for DX11 performance in the latest games that came out a year after the API release.
 
So this thing is going to destroy Nvidia, like Mantle, or AMD's tessellation performance, or TressFX, or any other AMD hype train that doesn't live longer than 3-6 months?

Also, I really didn't buy a 980 Ti for DX12 performance, just like I didn't buy the 9700 Pro for DX9 performance or the 5870 for DX11 performance in the latest games that came out a year after the API release.


No, what I see is that it brings parity to AMD and NVIDIA (for now... notice I said "for now").
Somehow the fact that AMD and NVIDIA are fighting neck and neck = "destroy NVIDIA"?

All aboard the hyperbole train!!!

The focus is in the wrong place.
The question people should ask themselves is: what the *bleep* have AMD been doing with their DX11 driver all these years? (That is the real kicker and will probably be looked into in 6-12 months... when things have normalized.)

(I expect other DX12 titles to show a different performance delta... people should take a breath... and think a little... things are never static in the GPU world...)
 
Asynchronous = AMD marketing term :confused:
Maybe it would have been better if they said parallel instead of asynchronous, but then again, asynchronous and parallel computing are different things.

ACE just means "Asynchronous Compute Engine", which is an AMD marketing term :p


The whole rhetoric is colored with AMD "jargon"... it's like it was taken from an AMD PR meeting ;)

If you know what terms NVIDIA uses for the same techniques, it becomes very apparent ;)
(The heavy use of AMD "jargon"...)
 
An AMD card that sold for $300 last year will almost best an Nvidia card going for $650 today.

[benchmark chart: 3mPt9sI.png]


ktnxbai.

In one game that's not even a full retail release. We don't even know if this will be the case with other DX12 titles. That same $300 card from a year ago gets destroyed by that 980 Ti everywhere else. I could see this giving you pause about your choice if you're going to buy a GPU now and keep it for two to three years; if not, I would consider this virtually meaningless.

Anyone have any info on the Fury X in this same test?
 
Maxwell wasn't built for compute at all.
I'm not seeing how workstation features translate into gaming performance. They never have.

Test an actual DX12 game.
 
No, what I see is that it brings parity to AMD and NVIDIA (for now... notice I said "for now").
Somehow the fact that AMD and NVIDIA are fighting neck and neck = "destroy NVIDIA"?

All aboard the hyperbole train!!!

The focus is in the wrong place.
The question people should ask themselves is: what the *bleep* have AMD been doing with their DX11 driver all these years? (That is the real kicker and will probably be looked into in 6-12 months... when things have normalized.)

(I expect other DX12 titles to show a different performance delta... people should take a breath... and think a little... things are never static in the GPU world...)

Based upon the various DX12 preliminary results and the console exclusivity, it would appear that what AMD has been up to has been to focus on DX12 performance via Mantle, along with all of the various tweaks/optimizations it can offer. NVIDIA's approach has been to increase their DX11 performance, which also makes sense. AMD probably realized a while ago that they were unable to compete in a head-to-head with NVIDIA because they simply lack the finances necessary to research the architectural improvements that would be necessary (similar to how they realized they were unable to catch Intel on the CPU side of things and have instead focused on an area where they can carve themselves out a niche - APU/HSA).

AMD can only push their GPUs so fast before they are hitting a wall. This seems pretty apparent with the 390/390X and to a lesser extent Fury/Fury X. Eventually, power consumption goes through the roof and you cannot dissipate that much heat from a device in a traditional PC case. On the flip-side, reusing an older architecture and binning chips over the years have given them the ability to sell some of their GPUs at competitive prices, even if they have used a brute force method to achieve some semblance of performance-parity.

With improvements in other areas like Mantle/DX12, they might have a shot at getting reasonably competitive again and any benefits that they derive along the way theoretically could work with their APU/HSA technology, as well as squeaking out just a bit more performance from the hardware-challenged consoles of the current generation. At least I hope this is why they have focused so heavily on DX12 performance. It is possible that they are all just idiots over there at AMD (that is the sense I get from reading these GPU sub-forums).
 
Thread title should be:

"Async compute gets 30% increase in performance in "game x". Maxwell doesn't support async in game X"

Would be fun to bookmark the worst "offenders" and hold them responsible for their postings in 6-12 months...but they will make excuses and not care...some wasted time it would be, sadly.

What is funny is that "Asynchronous" and "ACE" all are AMD "marketing terms"....makes you wonder?

how is the word asynchronous a marketing term

like honestly, are you for real
 
how is the word asynchronous a marketing term

like honestly, are you for real
I think Nvidia uses a different term for it.
And technically he's right, the Oxide dev said they couldn't get it to work in their specific case. Could have something to do with it being a Mantle game originally but who knows.

At this point Oxide criticizing Nvidia is the same thing as a GameWorks dev like Ubisoft criticizing AMD. Except when that happens, AMD is holier than thou and GameWorks/Nvidia are filthy liars.

The double standards are ridiculous. I know Nvidia is a scummy company, but that doesn't mean we should treat AMD's word like gospel. After hearing all the nonsense that Richard Huddy has spewed over the last few months, I hesitate to trust AMD/Oxide on this one. With Rob's response, this is looking more and more like a collaborative effort by AMD and Oxide to team up on Nvidia for publicity. And we don't know if any of the info is even true.
 
An AMD card that sold for $300 last year will almost best an Nvidia card going for $650 today.

When that chart was first posted here I mentioned that it gave the appearance of being a CPU-limited benchmark. And I said that because the results down the line were pretty much identical on both cards, which doesn't make sense.

It has since been demonstrated that in this benchmark a 290X performs the same as a Fury X, which performs the same as a 980 Ti. I think this suggests that one of two things is likely true:

1) The benchmark is CPU limited.
2) There is something broken somewhere and the brokenness is not specific to either nvidia or AMD.
 