Async compute gives a 30% increase in performance. Maxwell doesn't support async.

What are you talking about? Everything was great because they did have the source code. Then they added GameWorks HairWorks, which... drum roll... killed their performance, because it's not open source or source-available to AMD, and it wasn't until after release that massive amounts of unneeded tessellation were found to be the cause.

So yes, it runs better now, as quoted in the [H] review, because the tessellation was turned way down after release.

Now it's still shitty on all hardware, so I'm not sure it's a very good example of a feature done right.

Right. So when AMD cries foul it's trustworthy but when NVIDIA cries foul it's not trustworthy? :rolleyes:

The number of people running unknown software from some guy on a forum who says it's his first DX12 program, and then using it as a benchmark, amazes me. Please tell me there is at least source code for it and people are doing their own compiles first instead of just running some exe. Not saying it's malicious (I'm not going to download it), but it's just bad practice in general.

Yeah? I bet you're also one of those people who advise against running the CUDA program which exposed the 970 non-issue, amirite? :rolleyes:
 
Right. So when AMD cries foul it's trustworthy but when NVIDIA cries foul it's not trustworthy? :rolleyes:
AMD doesn't have a track record of shady shit like this so more people are willing to take their word for it. At this point I wouldn't be surprised if Nvidia has duped us all (again) but it's looking more and more like performance depends on the type of workload on the GPUs. Given AMD's dominance in the console market, where DX12 is going to thrive, it could mean Nvidia will have to push GameWorks into overdrive to get the optimizations they need.

Maybe AMD just decided to turn a new leaf for the new DX12 generation and start acting more aggressively. That would be nice for a change.
 
AMD doesn't have a track record of shady shit like this so more people are willing to take their word for it. At this point I wouldn't be surprised if Nvidia has duped us all (again) but it's looking more and more like performance depends on the type of workload on the GPUs. Given AMD's dominance in the console market, where DX12 is going to thrive, it could mean Nvidia will have to push GameWorks into overdrive to get the optimizations they need.

Maybe AMD just decided to turn a new leaf for the new DX12 generation and start acting more aggressively. That would be nice for a change.
DX12 is going to thrive in the console world? Wait, the PS4 runs DX12? The Xbox One runs DX12? Not an embedded DX11 variant with a few changes for direct command submission, like every console OS has? Last time I checked, consoles run on shared, pooled memory off an APU, nothing that resembles the computers most of us have.

AMD does have a track record of lies and shady crap, just not as much as Nvidia, probably because they've never been as successful.
 
He released a new version of the benchmark and increased the duration, which now shows Nvidia getting lower latency across the board.

https://forum.beyond3d.com/posts/1869416/



Don't know what "single command list" means, but it murders Maxwell's results.


Single command list means everything is synchronized, with no async being done; the test prior to the single-command-list run is the async one.

And that's the point: nV's hardware is capable of async programming routines, unlike what Oxide has stated.

And the latency difference depends on how the program is coded, which I think many of the people here with programming experience have said time and time again when this topic came up. It's not hard to understand; it's not an atypical situation with async programming across different architectures.
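
To put the "one queue vs. separate queues" thing in concrete terms, here's a rough D3D12-style sketch. None of this is from the benchmark's source; it assumes you already have a device and two recorded command lists, and every name in it is made up. The single-command-list path would push both workloads through the one direct queue, while the async path gives the compute work its own queue and only fences where the results are actually needed:

```cpp
// Rough sketch only (error handling omitted): 'device' plus two already
// recorded and closed command lists are assumed to exist; the names here
// are placeholders, not anything from the benchmark's source.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void SubmitWithAsyncCompute(ID3D12Device* device,
                            ID3D12CommandList* gfxList,
                            ID3D12CommandList* computeList)
{
    // Direct (graphics) queue. The "single command list" path would record
    // both workloads into one list and submit it here, forcing them to run
    // back to back.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue -- this is the "async compute" part: the
    // driver/hardware is free to overlap this work with the graphics queue.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Kick off both submissions; nothing here serializes the two queues.
    gfxQueue->ExecuteCommandLists(1, &gfxList);
    computeQueue->ExecuteCommandLists(1, &computeList);

    // Fence only where the graphics side actually needs the compute results,
    // instead of synchronizing after every single submission.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    computeQueue->Signal(fence.Get(), 1);
    gfxQueue->Wait(fence.Get(), 1);
}
```

Whether that overlap actually gets scheduled concurrently is up to the driver and the hardware, which is basically what this whole argument is about.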
 
AMD doesn't have a track record of shady shit like this so more people are willing to take their word for it. At this point I wouldn't be surprised if Nvidia has duped us all (again) but it's looking more and more like performance depends on the type of workload on the GPUs. Given AMD's dominance in the console market, where DX12 is going to thrive, it could mean Nvidia will have to push GameWorks into overdrive to get the optimizations they need.

Maybe AMD just decided to turn a new leaf for the new DX12 generation and start acting more aggressively. That would be nice for a change.

AMD isn't turning over a new leaf; they did it before with HL2, where one of the shaders hurt the 6800s vs. the 9800s, and someone found that out on a forum days after HL2's launch.
 
AMD doesn't have a track record of shady shit like this so more people are willing to take their word for it. At this point I wouldn't be surprised if Nvidia has duped us all (again) but it's looking more and more like performance depends on the type of workload on the GPUs. Given AMD's dominance in the console market, where DX12 is going to thrive, it could mean Nvidia will have to push GameWorks into overdrive to get the optimizations they need.

Maybe AMD just decided to turn a new leaf for the new DX12 generation and start acting more aggressively. That would be nice for a change.

Wait what?

People still believe AMD after Bulldozer? After Fury X? After the GameWorks non-issue? Seriously, give me one site whose benchmarks show GameWorks gimping performance.

Wow. Just wow. I don't know why you'd trust either AMD or NVIDIA at this point, especially considering the amount of FUD both sides have been able to spew out in recent memory.
 
I don't know about the PlayStation 4, but the Xbox One is supposed to be getting Windows 10 and DirectX 12 just like the PC.
 
The PS4 has its own async support through its own API; the Xbox One doesn't yet..... not until the Windows 10 / DX12 update, as you stated.
 
Wait what?

People still believe AMD after Bulldozer? After Fury X? After the GameWorks non-issue? Seriously, give me one site whose benchmarks show GameWorks gimping performance.

Wow. Just wow. I don't know why you'd trust either AMD or NVIDIA at this point, especially considering the amount of FUD both sides have been able to spew out in recent memory.
Out in the real world, AMD are viewed as heroes of the hardware industry. The GPU David vs the Nvidia Goliath.
AMD is known for exaggerating their own hardware, yeah. But not for intentionally going after Nvidia in nefarious ways.
 
Out in the real world, AMD are viewed as heroes of the hardware industry. The GPU David vs the Nvidia Goliath.
AMD is known for exaggerating their own hardware, yeah. But not for intentionally going after Nvidia in nefarious ways.


This is the way AMD markets itself, just the way ATI did in the past too. Neither of these companies is good or evil; they're both in it because they want our money, that's it.
 
Single command list means everything is synchronized, with no async being done; the test prior to the single-command-list run is the async one.

And that's the point: nV's hardware is capable of async programming routines, unlike what Oxide has stated.

And the latency difference depends on how the program is coded, which I think many of the people here with programming experience have said time and time again when this topic came up. It's not hard to understand; it's not an atypical situation with async programming across different architectures.

You really believe that nv hardware is capable of async? Really? If it was, and oxide said what they said, don't you think nv would have come out by now and shot them down?
 
You really believe that nv hardware is capable of async? Really? If it was, and oxide said what they said, don't you think nv would have come out by now and shot them down?


The program shows it is capable of it; that's pretty simple to see, and the CUDA documentation even states it too, so nV doesn't need to say anything. Whoever this Oxide "dev" is should come out and tell us what he does for Oxide and give us the shader profiler data for whatever he claims doesn't work well on nV's cards. And AMD's Hallock should just shut up, because he is now misrepresenting what the program does by looking only at a specific portion of the test results: the forced synchronous path.
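
For reference, the mechanism the CUDA docs describe under "asynchronous concurrent execution" is streams. Here's a bare-bones host-side sketch using nothing but standard CUDA runtime calls; it has nothing to do with the B3D program and only queues copies, but kernels launched into different streams get queued the same way:

```cpp
// Bare-bones CUDA runtime example of two independent streams (queues).
// This is NOT the B3D benchmark; it only queues async copies, but kernels
// launched into different streams are queued and overlapped the same way.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const size_t bytes = 64u * 1024u * 1024u;
    float *hostA, *hostB, *devA, *devB;

    // Pinned host memory, so the copies can actually run asynchronously.
    cudaMallocHost(reinterpret_cast<void**>(&hostA), bytes);
    cudaMallocHost(reinterpret_cast<void**>(&hostB), bytes);
    cudaMalloc(reinterpret_cast<void**>(&devA), bytes);
    cudaMalloc(reinterpret_cast<void**>(&devB), bytes);

    // Two streams == two independent queues of GPU work.
    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Both transfers are queued immediately; neither waits on the other.
    // The "single command list" equivalent would be putting everything
    // into one stream, which forces in-order execution.
    cudaMemcpyAsync(devA, hostA, bytes, cudaMemcpyHostToDevice, s0);
    cudaMemcpyAsync(devB, hostB, bytes, cudaMemcpyHostToDevice, s1);

    // Block only when the results are actually needed.
    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);
    printf("both streams finished\n");

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(devA);
    cudaFree(devB);
    cudaFreeHost(hostA);
    cudaFreeHost(hostB);
    return 0;
}
```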

https://www.reddit.com/r/pcgaming/comments/3j87qg/nvidias_maxwell_gpus_can_do_dx12_async_shading/
This benchmark has now been updated. GPU utilization of Maxwell-based graphics cards is now dropping to 0% under async compute workloads. As the workloads get more aggressive, the application ultimately crashes as the architecture cannot complete the workload before Windows terminates the thread (>3000ms hang)

And he fails to mention why the crash was there: it was a problem on the OS side, which can be due to driver-related issues.
 
The more I follow Mahigan around (seriously he's popping up everywhere) the more I believe he's an AMD employee on a fluff account. Not that there's anything wrong with that.
 
Well, I don't know about his past, but given the information he had, and the way Oxide came out and stated what they said, it's not him; he just didn't have the right information.
 
The program shows it is capable of it; that's pretty simple to see, and the CUDA documentation even states it too, so nV doesn't need to say anything. Whoever this Oxide "dev" is should come out and tell us what he does for Oxide and give us the shader profiler data for whatever he claims doesn't work well on nV's cards. And AMD's Hallock should just shut up, because he is now misrepresenting what the program does by looking only at a specific portion of the test results: the forced synchronous path.

https://www.reddit.com/r/pcgaming/comments/3j87qg/nvidias_maxwell_gpus_can_do_dx12_async_shading/


And he fails to mention why the crash was there: it was a problem on the OS side, which can be due to driver-related issues.

Whether Nvidia has async compute or not is not a hill worth dying on; the question is whether their current implementation of async compute is as robust and effective for the types of workloads game makers will be using. Further, if developers choose between the Nvidia model and the AMD model, which one, objectively measured for performance, throughput, and visuals, produces better results? It sounds like the AMD path is the superior one for DX12, and Nvidia will take a hit. They can still support a version of async compute, just a lesser one.
 
Whether Nvidia has async compute or not is not a hill worth dying on; the question is whether their current implementation of async compute is as robust and effective for the types of workloads game makers will be using. Further, if developers choose between the Nvidia model and the AMD model, which one, objectively measured for performance, throughput, and visuals, produces better results? It sounds like the AMD path is the superior one for DX12, and Nvidia will take a hit. They can still support a version of async compute, just a lesser one.
Sounds like this will be the defense Nvidia uses to weasel out of any wrongdoing. :cool:
They said they have asynchronous shading; they just never said it was good.
 
Whether Nvidia has async compute or not is not a hill worth dying on; the question is whether their current implementation of async compute is as robust and effective for the types of workloads game makers will be using. Further, if developers choose between the Nvidia model and the AMD model, which one, objectively measured for performance, throughput, and visuals, produces better results? It sounds like the AMD path is the superior one for DX12, and Nvidia will take a hit. They can still support a version of async compute, just a lesser one.

Performance should increase because of the multi-threaded nature of DX12 (except when 100% GPU limited).

That isn't what we are seeing, which means that either the developer or the driver is at fault. Even on low settings, NV cards are losing performance compared to DX11.

Given that the engine pretty much requires an overclocked Haswell-E to run decently on either AMD or NV hardware, I am going to put the blame on the developer and wait for further implementations.
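
For anyone wondering what the "multi-threaded nature of DX12" actually buys you: command lists can be recorded on worker threads and then submitted together. A rough sketch below, with made-up names, no real draw calls, and no claim that this is how Oxide's engine does it:

```cpp
// Illustrative sketch of DX12's multi-threaded command recording (error
// handling and actual draw calls omitted; names are made up, this is not
// how any particular engine does it).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

void RecordAndSubmitInParallel(ID3D12Device* device,
                               ID3D12CommandQueue* queue,
                               unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < workerCount; ++i) {
        // One allocator + list per thread; recording is free-threaded as
        // long as no two threads share these objects.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&lists, i] {
            // ... this thread would record its share of draws/dispatches ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // A single submission from the main thread; in DX11 all of the above
    // recording work would have been funneled through one immediate context.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```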
 
Out in the real world, AMD are viewed as heroes of the hardware industry. The GPU David vs the Nvidia Goliath.
AMD is known for exaggerating their own hardware, yeah. But not for intentionally going after Nvidia in nefarious ways.

So they're good because they're the underdogs?

Oh yeah, not going after NVIDIA like this? https://steamcommunity.com/app/261760/discussions/2/620700960748580422/#c620700960792940362

http://techreport.com/review/3089/how-ati-drivers-optimize-quake-iii

Yeah, like AMD haven't tried that in the past.

The only reason NVIDIA is making such moves is because they have a near monopoly on the GPU market, like Intel does. It's not like AMD didn't try when they had parity with NVIDIA.
 
Whether Nvidia has async compute or not is not a hill worth dying on; the question is whether their current implementation of async compute is as robust and effective for the types of workloads game makers will be using. Further, if developers choose between the Nvidia model and the AMD model, which one, objectively measured for performance, throughput, and visuals, produces better results? It sounds like the AMD path is the superior one for DX12, and Nvidia will take a hit. They can still support a version of async compute, just a lesser one.

The performance of the 980 Ti is the same as the Fury X in this benchmark.

What conclusions can we seriously draw here?
 
I wonder if any of those news articles will update their posts to let their readers know Rob's conclusions on the B3D benchmark are completely wrong? Probably not.
 
I haven't read most of this thread because I'm sure it's mostly NVIDIA vs AMD fanboyism, but from what I am seeing I think most people are overlooking where this matters most - the mid-range area around ~$300. Right now both the AMD R9 390 and the GTX 970 are about the same price, give or take $10. With both a fully functional 4GB VRAM and the potential benefits of greater async compute performance, the 390 looks like a much better buy if you are the type of person who buys a new video card every 2-3 years instead of every upgrade cycle. If you are blowing $650 on a 980 Ti then you probably don't care if async compute isn't working or is slow, because by the time it matters you will have upgraded already. But in match ups where AMD and NVIDIA are both price and performance competitive, I think this might tip the scales in AMD's favor. Just my 2 cents.
 
I haven't read most of this thread because I'm sure it's mostly NVIDIA vs AMD fanboyism, but from what I am seeing I think most people are overlooking where this matters most - the mid-range area around ~$300. Right now both the AMD R9 390 and the GTX 970 are about the same price, give or take $10. With both a fully functional 4GB VRAM and the potential benefits of greater async compute performance, the 390 looks like a much better buy if you are the type of person who buys a new video card every 2-3 years instead of every upgrade cycle. If you are blowing $650 on a 980 Ti then you probably don't care if async compute isn't working or is slow, because by the time it matters you will have upgraded already. But in match ups where AMD and NVIDIA are both price and performance competitive, I think this might tip the scales in AMD's favor. Just my 2 cents.

+1
The million dollar question is:

Pascal/Greenland


AMD having $250 cards perform on par with the 980 Ti is simply the best possible hedge against any next-gen delay too, and for the longest time AMD can field a completely full desktop lineup going forward. They can continue to rebrand Hawaii, because its performance against Maxwell will make that possible.


But the real question is:

Is Pascal simply an iterative version of Maxwell that does not take advantage of async, and will AMD have removed their bottleneck in Greenland?

Because every single DX12 benchmark scores the R9 290 very, very close to Fiji.
 
I chuckle a little bit...because somehow "async" is now defined as the way AMD defines "async".
This thread will be VERY interesting to go back to in a while...
 
+1
The million dollar question is:

Pascal/Greenland


AMD having $250 cards perform on par with the 980 Ti is simply the best possible hedge against any next-gen delay too, and for the longest time AMD can field a completely full desktop lineup going forward. They can continue to rebrand Hawaii, because its performance against Maxwell will make that possible.


But the real question is:

Is Pascal simply an iterative version of Maxwell that does not take advantage of async, and will AMD have removed their bottleneck in Greenland?

Because every single DX12 benchmark scores the R9 290 very, very close to Fiji.

I don't think any of the current PC engines are putting enough stress on the shader array in Fiji to make a meaningful difference in performance there. And the Oxide dev said that their implementation of async was mild compared to what console devs are already beginning to do. This is why I'd like AMD or their game-development partners to create a proof of concept that actually goes full throttle, making heavier use of the more massive shader array coupled with more advanced async engine support to handle the varied tasks.

Show something truly next-gen that actually REQUIRES that sort of power and design to run well. Maybe nothing will run well yet and we'll have to wait until next year, but the end goal is to show a real-time demo of something like this decades-old TV footage.

https://www.youtube.com/watch?v=nwkYaxkDO3c


Too impossible to do in real time? TRY !!!!!!!
 
The program shows it is capable of it; that's pretty simple to see, and the CUDA documentation even states it too, so nV doesn't need to say anything. Whoever this Oxide "dev" is should come out and tell us what he does for Oxide and give us the shader profiler data for whatever he claims doesn't work well on nV's cards. And AMD's Hallock should just shut up, because he is now misrepresenting what the program does by looking only at a specific portion of the test results: the forced synchronous path.

https://www.reddit.com/r/pcgaming/comments/3j87qg/nvidias_maxwell_gpus_can_do_dx12_async_shading/


And he fails to mention why the crash was there: it was a problem on the OS side, which can be due to driver-related issues.

Maybe nv is trying to do async in software, causing the gpu usage to drop to zero. Why would nv ask oxide to disable async on nv gpus if nv gpus could run it without problems?
 
So they're good because they're the underdogs?

Oh yeah, not going after NVIDIA like this? https://steamcommunity.com/app/261760/discussions/2/620700960748580422/#c620700960792940362

http://techreport.com/review/3089/how-ati-drivers-optimize-quake-iii

Yeah, like AMD haven't tried that in the past.

The only reason NVIDIA is making such moves is because they have a near monopoly on the GPU market, like Intel does. It's not like AMD didn't try when they had parity with NVIDIA.
The first link is a quote from the dev saying they weren't contracted to offer TressFX, so they didn't. Nowhere is AMD credited with that decision.

The second link is from before AMD purchased ATI. How can you blame AMD?
 
The first link is a quote from the dev saying they weren't contracted to offer TressFX, so they didn't. Nowhere is AMD credited with that decision.

The second link is from before AMD purchased ATI. How can you blame AMD?

:rolleyes:

At least NVIDIA offers GameWorks to their competitors, amirite? Seriously, who else outside of AMD has the power to put such contract restrictions on TressFX? Think for a minute.

AFAIK ATI's management is still there at AMD's GPU division, so... Actually you're right. ATI's management is nowhere near as incompetent as AMD's, I apologize :p
 
Whether Nvidia has async compute or not is not a hill worth dying on; the question is whether their current implementation of async compute is as robust and effective for the types of workloads game makers will be using. Further, if developers choose between the Nvidia model and the AMD model, which one, objectively measured for performance, throughput, and visuals, produces better results? It sounds like the AMD path is the superior one for DX12, and Nvidia will take a hit. They can still support a version of async compute, just a lesser one.


It is capable of it, and it's fast at it too; it all depends on how the code is written.
 
Maybe nv is trying to do async in software, causing the gpu usage to drop to zero. Why would nv ask oxide to disable async on nv gpus if nv gpus could run it without problems?


Because Oxide did something in their code that hurt nV's performance, whatever it is; without a shader profile of the offending shader, no one will know beyond Oxide's word for it, and they are not willing to share that.

Whoever has this game and has both an nV and an AMD card, PM me; I'll walk you through how to set up a shader profiler on your system, and let's see if we can figure it out. This won't be easy either, unless the game has a built-in shader profiler and we can find the console commands to bring it up.
 
:rolleyes:
Seriously, who else outside of AMD has the power to put such contract restrictions on TressFX? Think for a minute.

Well, perhaps the developer did not want to add more debugging for a feature when they weren't getting paid for it.

So it is possible that there was no TressFX on Nvidia (or anything else) simply because they didn't want to spend the time and money?

There's no proof that AMD did this to hinder nVidia, especially since they did not do it with Tomb Raider... just sayin', as an SLI 980 Ti owner.
 
Because Oxide did something in their code that hurt nV's performance, whatever it is; without a shader profile of the offending shader, no one will know beyond Oxide's word for it, and they are not willing to share that.

Whoever has this game and has both an nV and an AMD card, PM me; I'll walk you through how to set up a shader profiler on your system, and let's see if we can figure it out. This won't be easy either, unless the game has a built-in shader profiler and we can find the console commands to bring it up.

You sound like you're on a GameWorks witch hunt. Oh, I mean an Oxide witch hunt. NV has the source code for the game; what more do they need?

Where is your proof? Source? Link?
 
You sound like you're on a GameWorks witch hunt. Oh, I mean an Oxide witch hunt. NV has the source code for the game; what more do they need?

Where is your proof? Source? Link?


If they stated it, I want to see their results. They came onto a public forum, not their own forum, and stated this, so they have to prove their point of view as well. And the B3D program has proven the opposite of what they stated, to a degree.

Don't be pissing around the bush when the bush was put on fire.
 
Because Oxide did something in their code that hurt nV's performance, whatever it is; without a shader profile of the offending shader, no one will know beyond Oxide's word for it, and they are not willing to share that.

Whoever has this game and has both an nV and an AMD card, PM me; I'll walk you through how to set up a shader profiler on your system, and let's see if we can figure it out. This won't be easy either, unless the game has a built-in shader profiler and we can find the console commands to bring it up.

Source?

If they stated it, I want to see their results. They came onto a public forum, not their own forum, and stated this, so they have to prove their point of view as well. And the B3D program has proven the opposite of what they stated, to a degree.

Don't be pissing around the bush when the bush was put on fire.

You're making claims above; where is your proof?
 
Err, start reading things first instead of posting first; it doesn't look good. There are two threads on this topic in this very forum, and it's in the other thread. You're coming into a conversation without knowing what's going on, who said what, or how it was said, and that doesn't do anything for you.
 