Ashes of the Singularity Day 1 Benchmark Preview @ [H]

Async Compute is a feature you can rely on, just not on Nvidia hardware. In fact, on Nvidia hardware it seems reliable in one sense: it consistently slows the graphics hardware down.
You are right: in the ONE GAME in which it has been implemented it has been demonstrably detrimental to performance, therefore it should be turned off.

Overclocking isn't guaranteed, true. But by your logic a 5820k is a stupid buy compared to a 6700k. Not how it works in reality.

It's funny you should mention hardware features you can't make up for, because GCN lacks conservative rasterization, but nobody wants to talk about that, because being vehemently anti-Nvidia is fashionable nowadays thanks to YouTube celebrities and people who make anti-GameWorks videos.
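For anyone who wants to see what that gap looks like in practice, here is a minimal D3D12 sketch (my own illustration, not from any shipping engine) that checks the conservative rasterization tier before opting in; per the thread, Fiji/Hawaii report it as unsupported while Maxwell 2 does not.

#include <d3d12.h>

// Minimal sketch: query conservative rasterization support on the active
// D3D12 device before enabling it in a pipeline state.
bool SupportsConservativeRaster(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;
    return options.ConservativeRasterizationTier !=
           D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
}

// If supported, a PSO can opt in via its rasterizer state:
//   psoDesc.RasterizerState.ConservativeRaster =
//       D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;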

If async is a 10% boost, all you need is a card that's 10% faster to negate the bonus.
 
The issue at the end of the day is Nvidia's architecture, and no one can fix that apart from Nvidia.
Devs will not use async just because of the way it performs on Nvidia hardware; they are not that big.

Async compute is not just about performance gains either; it allows the GPU to work more efficiently and removes workload from the CPU.

Conservative rasterization is in Polaris anyway. You have to remember that most of AMD's current cards are based on three-year-old core designs (Hawaii).
 
You know what, I don't feel like explaining it. I'll let AdoredTV do it for me.


You don't want to explain it because both companies do the same shit if given the chance. Don't think Nvidia and GameWorks is a solitary example; we have seen it with other big-profile games, like HL2's water shader. If the IHVs have too much control over a dev's product, and the dev gives them that control, it will happen. AMD/ATi isn't above this; as mentioned, they have done it in the past, and most likely the code used in the Nitrous engine was developed on Mantle, in which case the developer wouldn't even have had access to Nvidia hardware for such things. So what they did was make sure it ran well on what they had. Is this bad? No. Is this wrong? No; it's up to the developer, what they have access to, and what they are going for. If this falls into the hands of what AMD was doing at the time, it's a good fit. As for end results, don't hold your breath and think it's all about this game.
 
The issue at the end of the day is Nvidia's architecture, and no one can fix that apart from Nvidia.
Devs will not use async just because of the way it performs on Nvidia hardware; they are not that big.

Async compute is not just about performance gains either; it allows the GPU to work more efficiently and removes workload from the CPU.

Conservative rasterization is in Polaris anyway. You have to remember that most of AMD's current cards are based on three-year-old core designs (Hawaii).
They don't need to fix it; Kepler/Maxwell have far less to gain than GCN ever will.

Well then, while all Fiji and Hawaii owners won't have conservative rasterization, Maxwell owners will.
 
The issue at the end of the day is Nvidia's architecture, and no one can fix that apart from Nvidia.
Devs will not use async just because of the way it performs on Nvidia hardware; they are not that big.

Async compute is not just about performance gains either; it allows the GPU to work more efficiently and removes workload from the CPU.

Conservative rasterization is in Polaris anyway. You have to remember that most of AMD's current cards are based on three-year-old core designs (Hawaii).


The part in red (the claim about removing workload from the CPU) is incorrect: async doesn't remove anything from the CPU; that is another part of the API.
 
The part in red (the claim about removing workload from the CPU) is incorrect: async doesn't remove anything from the CPU; that is another part of the API.
Yeah, everyone and their mother mistakenly thinks asynchronous shaders enable some kind of magical performance boost that isn't limited to maximizing utilization.
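To make the distinction concrete, here is a bare-bones D3D12 sketch (illustrative only, error handling omitted, helper name is mine): async compute just means feeding a second, compute-type queue that the GPU may overlap with graphics. Nothing in it takes work off the CPU; the CPU-side savings people talk about come from DX12's multithreaded command-list recording, which is a separate part of the API.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// "Async compute" = a second, compute-only queue the GPU can service
// concurrently with the graphics queue. Whether the two actually overlap is
// up to the hardware/driver scheduler, the part that behaves very differently
// on GCN vs. Maxwell.
void SubmitAsyncCompute(ID3D12Device* device, ID3D12CommandList* computeList)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // The command list was recorded (and closed) earlier, on any CPU thread;
    // here it is simply handed to the compute queue.
    ID3D12CommandList* lists[] = { computeList };
    computeQueue->ExecuteCommandLists(1, lists);
}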
 
They don't need to fix it; Kepler/Maxwell have far less to gain than GCN ever will.

Well then, while all Fiji and Hawaii owners won't have conservative rasterization, Maxwell owners will.

They do need to fix it, as the issues look like they are carrying over to Pascal.

Conservative rasterization does not offer the same benefits as async, though, and it has been available in DX11 since the Maxwell launch, just like their voxel-based shadows, which are only now starting to be used.
 
They do need to fix it, as the issues look like they are carrying over to Pascal.

Conservative rasterization does not offer the same benefits as async, though, and it has been available in DX11 since the Maxwell launch, just like their voxel-based shadows, which are only now starting to be used.

I agree: while async offers a circumstantial boost to performance, conservative rasterization opens up a whole new realm of possibilities in terms of real-time computation.

Again, even if async is totally broken, it's a non-issue if the raw performance can match that of the competition in an ideal (10% boost) async scenario. I've said this so many times.
 
They do need to fix it, as the issues look like they are carrying over to Pascal.

Conservative rasterization does not offer the same benefits as async, though, and it has been available in DX11 since the Maxwell launch, just like their voxel-based shadows, which are only now starting to be used.


Yes, they do have to fix it, and the specific problem is the way their grid management unit is set up. I wouldn't say it's because of the number of ACEs that AMD hardware has; it's just that the capability to schedule is more robust in AMD hardware. Now, having more ACEs might give them more flexibility to do different operations, though, which we haven't seen in games yet.
 
For what it's worth..... AsyncCompute for the 7970/280X seems to not be used, since I actually gain 1 fps by disabling AsyncCompute in the options. lol. I was under the impression it's being used for all GCN cards, but on mine it doesn't seem to do anything. :)
 
Yes, they do have to fix it, and the specific problem is the way their grid management unit is set up. I wouldn't say it's because of the number of ACEs that AMD hardware has; it's just that the capability to schedule is more robust in AMD hardware. Now, having more ACEs might give them more flexibility to do different operations, though, which we haven't seen in games yet.

This is slightly inaccurate; the GMU works fine (CUDA tested). The issue is that under DX12 the barriers would necessarily be on the graphics queue, so it's simply harder to tune and works well in totally different scenarios compared to AMD.

The real advantage of the ACEs is that they are programmable, and thus flexible.
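For reference, the only way those two queues coordinate in D3D12 is through fences, and that sync point is exactly where the per-vendor tuning pain lives. A rough sketch of the pattern (mine, not Oxide's code; names are illustrative):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Cross-queue sync sketch: the graphics queue is told to wait, on the GPU
// timeline, until the compute queue signals the fence. How much real overlap
// you get before that wait is the part that has to be tuned per GPU.
void SyncComputeToGraphics(ID3D12Device* device,
                           ID3D12CommandQueue* computeQueue,
                           ID3D12CommandQueue* graphicsQueue,
                           ID3D12CommandList* computeWork)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    ID3D12CommandList* lists[] = { computeWork };
    computeQueue->ExecuteCommandLists(1, lists);
    computeQueue->Signal(fence.Get(), 1);   // compute marks its work as done

    graphicsQueue->Wait(fence.Get(), 1);    // graphics stalls here until then
    // ...then submit the graphics passes that consume the compute results.
}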
 
Anyone else here love scanning these threads and reading through the fanboy arguments that come out of the woodwork every time one side has a decisive advantage in performance over the other? Holy crap guys. I presume you don't have a personal stake in the financial performance of either company. Call a spade a spade when you have to. AMD absolutely kicks Nvidia's ass in this game. It happens. Get over it. I'm sure another title will come out in the future where Nvidia will win and you all can remind the internet about how superior Nvidia is and why you were justified in purchasing an Nvidia card instead of an AMD one.
 
For what it's worth..... AsyncCompute for the 7970/280X seems to not be used, since I actually gain 1 fps by disabling AsyncCompute in the options. lol. I was under the impression it's being used for all GCN cards, but on mine it doesn't seem to do anything. :)
It's enabled on all GCN GPUs, but like I said, it needs to be tuned for every single one :p

Hence IO Interactive saying it's hard to tune and not worth the effort.
 
This is slightly inaccurate; the GMU works fine (CUDA tested). The issue is that under DX12 the barriers would necessarily be on the graphics queue, so it's simply harder to tune and works well in totally different scenarios compared to AMD.

The real advantage of the ACEs is that they are programmable, and thus flexible.


True, but not all ACEs are created equal; on GCN 1.0 hardware they are not programmable, from what I have seen.
 
Nvidia are going to lose some of the CUDA business too now, as it's been reverse engineered to work on any hardware.
 
Anyone else here love scanning these threads and reading through the fanboy arguments that come out of the woodwork every time one side has a decisive advantage in performance over the other? Holy crap guys. I presume you don't have a personal stake in the financial performance of either company. Call a spade a spade when you have to. AMD absolutely kicks Nvidia's ass in this game. It happens. Get over it. I'm sure another title will come out in the future where Nvidia will win and you all can remind the internet about how superior Nvidia is and why you were justified in purchasing an Nvidia card instead of an AMD one.
I agree with this; it's not a big deal if one game runs better on AMD/Nvidia.

What is a big deal is conflating one datum with a trend spanning the entire DX12 generation.

On top of this, performance on Nvidia is actually quite good. See my benchmark results; I do have an overclocked card, and you can argue it's unfair. But if I match a Fury X when I overclock, then the Fury X's lead whittles down to 10% when you overclock it.
 
I agree with this; it's not a big deal if one game runs better on AMD/Nvidia.

What is a big deal is conflating one datum with a trend spanning the entire DX12 generation.

On top of this, performance on Nvidia is actually quite good. See my benchmark results; I do have an overclocked card, and you can argue it's unfair. But if I match a Fury X when I overclock, then the Fury X's lead whittles down to 10% when you overclock it.

Yes, but these are only the first games to utilise this; who knows what is possible in the future with the devs fine-tuning, async evolving, etc. You know for sure AMD are not going to stop trying to get the most they can out of this, especially if Nvidia's issues carry on into Pascal. There's a whole new architecture around the corner from AMD too.
 
Haha, whoever said the CUDA advantage is gone is silly; AMD is working on some kind of CUDA interpreter. CUDA is a godsend, and the advantage will only grow with Pascal and mixed precision for neural networks.
 
Well, the reason CUDA has done so well is that it has features that competing APIs just didn't have, or didn't have soon enough. DirectCompute with HLSL/Shader Model 6 should cover the feature list, but then again, CUDA is still evolving too.
 
AMD has their own thing for HSA; in any case, that is a different topic, let's stick to this thread.
 
I agree with this; it's not a big deal if one game runs better on AMD/Nvidia.

What is a big deal is conflating one datum with a trend spanning the entire DX12 generation.

On top of this, performance on Nvidia is actually quite good. See my benchmark results; I do have an overclocked card, and you can argue it's unfair. But if I match a Fury X when I overclock, then the Fury X's lead whittles down to 10% when you overclock it.

You are aware you can also overclock an AMD video card, and results may vary? My R9 280X was an excellent overclocker. My Intel i5 CPU couldn't overclock to save its life. You could also buy an Nvidia card that won't overclock worth a crap. It's not a guaranteed result. Arguing that this isn't a potential issue for Nvidia "because you can overclock your card" is silly in my opinion.

You also seem to be discounting the fact that this early benchmark indicates that AMD has a distinct advantage over Nvidia with this particular technology or API, if it does see consistent implementation. If this is an indication of future trends, then it can seriously hurt Nvidia. I'm not sure why you're so reluctant to acknowledge that as a simple fact.

Wanting to believe something is true doesn't make it true.
 
AMD has their own thing for HSA; in any case, that is a different topic, let's stick to this thread.

Yeah. I really don't understand why everyone is arguing with me when I say the performance gain can be had elsewhere.

Assuming performance parity between two hypothetical products, one using async, one not using it, there's no reason to say the one with async is better.


You are aware you can also overclock an AMD video card, and results may vary? My R9 280X was an excellent overclocker. My Intel i5 CPU couldn't overclock to save its life. You could also buy an Nvidia card that won't overclock worth a crap. It's not a guaranteed result. Arguing that this isn't a potential issue for Nvidia "because you can overclock your card" is silly in my opinion.

You also seem to be discounting the fact that this early benchmark indicates that AMD has a distinct advantage over Nvidia with this particular technology or API, if it does see consistent implementation. If this is an indication of future trends, then it can seriously hurt Nvidia. I'm not sure why you're so reluctant to acknowledge that as a simple fact.

Wanting to believe something is true doesn't make it true.

Nothing I said suggested AMD doesn't overclock; I said Maxwell has much bigger headroom, and that is undeniably true. I have never seen a 970/980/980 Ti/Titan X that won't do a 20% OC.

Anyway, you're still totally ignoring the fact that the implementation shouldn't be the same for both IHVs.

AMD's performance boost from async could well be consistent, but that says nothing about the Nvidia implementation, which could well also provide gains.

I don't appreciate your remark about me wanting to believe something is true. I am by no means a gamer; I'm an electronics student with an interest in computational physics, and I only got drawn into this async affair because I use CUDA. It's interesting.

Your argument is totally inconsistent; you frequently cite Hitman and AotS as evidence for your claims, yet the results from one contradict the other.

If this is going to degrade into petty insults and snide remarks, I'm just going to stop replying.
 
For what it's worth..... AsyncCompute for the 7970/280X seems to not be used, since I actually gain 1 fps by disabling AsyncCompute in the options. lol. I was under the impression it's being used for all GCN cards, but on mine it doesn't seem to do anything. :)

Ooo maybe we can make a "Has AMD forgotten Tahiti?" shitter thread right next to the "Has nVidia forgotten Kepler?" shitter thread!!!

Personally I wholeheartedly agree with leldra. It's the end performance, OC vs. OC, that I care about. It's not unusual to me for products to be unique, as long as they still get the job done and the IQ is the same.

Still something to keep my eye on to see if other games have more or less impact.
 
You're still hammering on that the gains are all due to async, but they are not.

I don't see how Hitman and AotS contradict each other; both give good gains switching from DX11 to DX12, and very similar results across the different vendors' hardware.

You've been told why Nvidia don't perform as well many times, and it's something Nvidia cannot easily fix, as it's a hardware issue and/or design choice.
 
You're still hammering on that the gains are all due to async, but they are not.

I don't see how Hitman and AotS contradict each other; both give good gains switching from DX11 to DX12, and very similar results across the different vendors' hardware.
For the love of Christ, I'm talking about async. Async specifically, which is what everyone is rattling on about.

AMD hardware barely gains anything under DX12 in Hitman; the damn thing doesn't even support your claims.

Fury X performs like a 390X under DX12 in Hitman.

Has AMD abandoned Fiji?

Come on.

My argument is perfectly coherent; you're just alternating between underplaying and overplaying the effect of async to suit your argument.
 
Maybe when we get a vendor agnostic DX12 title to test (looking at you unreal engine) we'll find some peace of mind eh?
 
Turn async off for both Nvidia and AMD in AotS and compare the performance gain relative to DX11. It's easy to test.

Most of the gain comes from circumventing the driver overhead for AMD under DX11.
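For anyone who wants to reproduce that themselves: the toggle people in this thread are flipping is the AsyncComputeOff line in the game's settings file (I'm assuming it sits in the same My Games\Ashes of the Singularity folder the benchmark output below points to; double-check the exact path on your install). Run the built-in benchmark once per combination and compare:

; settings file excerpt (location assumed; the flag name is the one quoted later in this thread)
AsyncComputeOff=0    ; async compute enabled (the default)
AsyncComputeOff=1    ; async compute disabled, for the DX12-without-async run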
 
Maybe when we get a vendor agnostic DX12 title to test (looking at you unreal engine) we'll find some peace of mind eh?

Good luck finding a game like that; I don't think we will ever see one. In this day and age, if a product looks like it's going to have an impact on graphics card sales, it gets picked up quickly. Even engines are being made with vendor-specific optimizations, well before a game is being made on them.
 
Good luck finding a game like that; I don't think we will ever see one. In this day and age, if a product looks like it's going to have an impact on graphics card sales, it gets picked up quickly. Even engines are being made with vendor-specific optimizations, well before a game is being made on them.

That's exactly why I'm waiting for Unreal; there will be vendor-dependent rendering paths, and that's exactly what's great about DX12: it allows the developer to do this. Graphics engines are the new middleware API; they are the abstraction layer that DX11 essentially was.
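In practice a vendor-dependent path usually starts with something as blunt as reading the adapter's PCI vendor ID and branching on it. A minimal sketch of the idea (my own illustration; 0x10DE and 0x1002 are the standard Nvidia and AMD PCI vendor codes):

#include <dxgi.h>

// Branch the renderer on the adapter's PCI vendor ID, e.g. so the
// compute-queue path is only taken where it is known to help.
enum class GpuVendor { Nvidia, Amd, Other };

GpuVendor DetectVendor(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    if (desc.VendorId == 0x10DE) return GpuVendor::Nvidia;
    if (desc.VendorId == 0x1002) return GpuVendor::Amd;
    return GpuVendor::Other;
}

// Usage sketch:
//   bool useAsyncCompute = (DetectVendor(adapter) == GpuVendor::Amd);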
 
True. Yeah, the old model of making your own graphics engine for your own game is going the way of the dodo. It takes too much time and effort to create a fully functional engine when you can get one for free now, with full source code.
 
True. Yeah, the old model of making your own graphics engine for your own game is going the way of the dodo. It takes too much time and effort to create a fully functional engine when you can get one for free now, with full source code.
Exactly; this is why I'm befuddled by all the people thinking AotS is a harbinger of doom for Nvidia. It's an example of DX12 done right for only one IHV :p

And people citing Hitman as evidence conveniently ignore statements made by its developer on the subject.
 
AsyncComputeOff=0 has zero effect on my score. Switching between DX11/DX12 has no effect either, except that under DX12 it uses a shit-ton of dynamic VRAM compared to the usual 200 MB. Either AsyncCompute is NOT used on my card, or it's always on regardless of the setting.
 
AsyncComputeOff=0 has zero effect on my score. Switching between DX11/DX12 has no effect either, except that under DX12 it uses a shit-ton of dynamic VRAM compared to the usual 200 MB. Either AsyncCompute is NOT used on my card, or it's always on regardless of the setting.
This is odd; performance has remained identical to the last beta for me, and AnandTech compared async on vs. off in the beta and there was a significant difference.
 
Yeah. I really don't understand why everyone is arguing with me when I say the performance gain can be had elsewhere.

Assuming performance parity between two hypothetical products, one using async, one not using it, there's no reason to say the one with async is better.




Nothing I said suggested AMD doesn't overclock; I said Maxwell has much bigger headroom, and that is undeniably true. I have never seen a 970/980/980 Ti/Titan X that won't do a 20% OC.

Anyway, you're still totally ignoring the fact that the implementation shouldn't be the same for both IHVs.

AMD's performance boost from async could well be consistent, but that says nothing about the Nvidia implementation, which could well also provide gains.

I don't appreciate your remark about me wanting to believe something is true. I am by no means a gamer; I'm an electronics student with an interest in computational physics, and I only got drawn into this async affair because I use CUDA. It's interesting.

Your argument is totally inconsistent; you frequently cite Hitman and AotS as evidence for your claims, yet the results from one contradict the other.

If this is going to degrade into petty insults and snide remarks, I'm just going to stop replying.


Who's making petty insults? Point out one thing in my post that can be interpreted as a "petty insult". I'm saying you're arguing against the results in the game as if they weren't an indication that Nvidia might have a problem with DX12. Both AMD and Nvidia have had the ability to write drivers for this title for a long time, and the results showing a massive performance gap are being dismissed by you, which I find incredibly confusing. You can't argue against results, and your suggestion that overclocking your Nvidia card can correct what is obviously a difference in architecture via brute force is fundamentally flawed, because overclocking results are variable. Not only that, even if you were "guaranteed" a 20% overclock, from the figures in the posted review that still wouldn't be enough to chase down an AMD card at stock clocks with 2GB less VRAM. This doesn't indicate a potential issue to you?

Your comment that it can vary based on Nvidia's "implementation" is also confusing to me, considering the amount of time they've had to work directly with the developer and write a driver for the title. It is the fact that you're ignoring plain numerical results that leads me to suggest you seem to be attempting to create a scenario that isn't happening because you are not satisfied with the result.

I also don't understand your point about Hitman. AMD had an advantage in both games from the benchmarks that I saw posted, unless someone is fudging the numbers.

If you're really not concerned about gaming performance, I'm not sure why you're bothering
 


Even on the Low preset I'm 92% GPU bottlenecked, lol.

==========================================================================
Oxide Games
Ashes Benchmark Test - ©2015
C:\Users\seans\Documents\My Games\Ashes of the Singularity\Benchmarks\Output_16_04_02_1425.txt
Version 1.00.18769
04/02/2016 14:29
==========================================================================

== Hardware Configuration ================================================
GPU 0: AMD Radeon R9 200 Series
CPU: GenuineIntel
Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
Physical Cores: 6
Logical Cores: 12
Physical Memory: 12279 MB
Allocatable Memory: 134217727 MB
==========================================================================


== Configuration =========================================================
API: DirectX 12
==========================================================================
Quality Preset: Low
==========================================================================

Resolution: 2560x1440
Fullscreen: True
Bloom Quality: High
PointLight Quality: Off
Glare Quality: Off
Shading Samples: 4 million
Terrain Shading Samples: 4 million
Shadow Quality: Off
Temporal AA Duration: 0
Temporal AA Time Slice: 0
Multisample Anti-Aliasing: 1x
Texture Rank : 2


== Total Avg Results =================================================
Total Time: 60.004288 ms per frame
Avg Framerate: 56.279987 FPS (17.768305 ms)
Weighted Framerate: 54.296246 FPS (18.417479 ms)
CPU frame rate (estimated if not GPU bound): 98.408936 FPS (10.161679 ms)
Percent GPU Bound: 92.650452 %
Driver throughput (Batches per ms): 4199.374512 Batches
Average Batches per frame: 8208.017578 Batches
 
This is odd; performance has remained identical to the last beta for me, and AnandTech compared async on vs. off in the beta and there was a significant difference.
Well, we're using different cards, is why..... It looks like they haven't put a lot of work toward the older 7970/280X cards, or AsyncCompute is more or less neutral on these cards.
 
For what it's worth..... AsyncCompute for the 7970/280X seems to not be used, since I actually gain 1 fps by disabling AsyncCompute in the options. lol. I was under the impression it's being used for all GCN cards, but on mine it doesn't seem to do anything. :)

I got the opposite of this with my results. If I turn off async I lose just a little bit of performance, 1-3 FPS, that's it. DX11 runs like shit. DX12 without async runs fine; with async it runs a slight bit better.

You're still hammering on that the gains are all due to async, but they are not.

I don't see how Hitman and AotS contradict each other; both give good gains switching from DX11 to DX12, and very similar results across the different vendors' hardware.

You've been told why Nvidia don't perform as well many times, and it's something Nvidia cannot easily fix, as it's a hardware issue and/or design choice.

I completely agree! The difference between DX11 and DX12 is night and day for me in this game, with or without async on! BUT... async does still add an improvement. I tried to make a pic with my results side by side by side: DX11 vs. DX12 vs. DX12 with no async. This is at standard settings, no AA:
[attached image: AotS DX11 vs. DX12 vs. DX12 (no async) comparison]

Edit: also, in DX12 I can bump settings to High (no AA) and it still runs fine; even Extreme (no AA) plays better than the Standard setting in DX11!
 