Ashes of the Singularity Day 1 Benchmark Preview @ [H]

I think generalized statements about DX12 should not be made here; this is the Nitrous engine's performance with DX12, and for this engine AMD is currently the better choice. Drawing any broader conclusions about DX12 as a whole would not reflect the situation once more companies release more engines and more games, in different genres, that use it.

But it is scary to see where the 390X is going ;). Who would have thought that this video card is such a beast...
 
When you do the full review, please get an AMD FX chip for testing. People are dying to see if DX12 is really AMD CPUs' saving grace or if they will still get trounced.
 
Any chance you could try the AMD/nVidia multi-GPU in AoS DX12? Does it really work?
 
Very nice to see you started on this game. I'm looking forward to seeing you test the CPU side as much as the GPU side - in the pre-launch threads I've read here and elsewhere, I've seen the game use many cores. So let's see you test the game on an 18/36 or even 36/72 core monster, and something in between, like 8/16 or 10/20 cores. It would be very useful to learn whether having more cores outweighs a higher CPU clock speed.
 
I don't think it's ignorant to think that DX12 relies on asynchronous compute. After all, it is one of the features that separates DX11 from DX12. AMD calls it hyperthreading for GPUs. I am not here to stir the pot. I almost bought a Titan X a few weeks back.
I'm glad I didn't.

What I'm saying is that everyone is using this game as a demo to show off the benefits of Async Compute, as if this game were the end-all, be-all of Async Compute. This game, however, may not be the best demonstration of Async Compute. There may be other games, and other genres of games, that make better use of Async Compute and exploit its performance benefits even more. We shall see, but Async Compute is just one feature among many in DX12 that can help performance. The developer needs to use it for the game to benefit from it. It is not automatic; a game having DX12 does not mean it benefits all the time.

In this particular game I do not know specifically what, in regard to DX12, is providing the performance increase; I think it is a combination of things, not just one feature. To think only one feature matters in DX12 is to dismiss all the other benefits DX12 can provide.

Granted, Async Compute may be the one feature providing most of the performance benefit.

Granted, Async Compute may not be the one feature providing most of the performance benefit.

I'm open-minded about it until I have solid proof either way, and I don't have it yet. I don't know what, specifically, the performance impact is mostly coming from.
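For anyone who wants to see what Async Compute actually looks like at the API level: in D3D12 it simply means the engine creates a compute-only queue alongside the normal direct queue and submits work to both, and whether the two streams actually overlap is up to the hardware. A minimal sketch, assuming the device, fence, and recorded command lists already exist (the names here are placeholders, not Nitrous code):

[code]
#include <d3d12.h>

// Minimal sketch of async compute: one direct (graphics) queue plus one
// compute-only queue. Error handling, object lifetime, and real fence
// management are omitted; in practice the queues are created once at startup.
void SubmitWithAsyncCompute(ID3D12Device* device, ID3D12Fence* fence,
                            ID3D12CommandList* const* gfxLists,
                            ID3D12CommandList* const* computeLists)
{
    ID3D12CommandQueue* gfxQueue = nullptr;
    ID3D12CommandQueue* computeQueue = nullptr;

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // The submissions are independent; hardware that can run them
    // concurrently will, hardware that can't will serialize them.
    computeQueue->ExecuteCommandLists(1, computeLists);
    gfxQueue->ExecuteCommandLists(1, gfxLists);

    // Synchronize only at the point where graphics consumes compute results.
    computeQueue->Signal(fence, 1);
    gfxQueue->Wait(fence, 1);
}
[/code]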
 

Yes, I understand what you are saying. My point was that, judging by the comparison, Nvidia does not do asynchronous compute as well as the Fury X, if at all. You are right; at the moment this game is the best demonstration of what this feature brings to the gaming world with the current gen of AMD hardware. I think it clearly illustrates, at least for me, where my money is going this weekend. A Fury X will keep me happy till Polaris is available, making good use of my 5930 while gaming. :)
 
So between this game and Hitman we still can't draw any conclusions about async or DX12? I understand there was some degree of involvement by AMD with both games but eventually we have to draw a line in the sand. Plus you have Ubisoft saying W_D 2 will favor AMD (probably due to async compute) and we're a year away from that game's release.

If you ask me, all of the chips are on the table. Nvidia and their customers lost; their hope now rests on Pascal.

Even if the performance gains are from driver efficiencies in DX12/Win10, the outcome is still the same.
 
Just to elaborate on my last post: a long while back, servers shifted from megahertz to cores, and it would be interesting to see if Ashes is the game that brings that shift to the gaming PC.
 
Mr. Bennett, any chance you could run one test with the CPU at default speed and one with it underclocked to examine this? :D
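If anyone wants to poke at the core-count question themselves without a BIOS trip: pinning the game to a subset of logical processors is a rough stand-in for disabling cores. A sketch using the Windows affinity API (it limits which cores the process may use, but does not change clocks or cache, so treat the results as an approximation):

[code]
#include <windows.h>

// Rough sketch: restrict the current process to its first n logical
// processors to approximate a lower core count for CPU-scaling tests.
// Clock speed and cache topology are unchanged, so this only approximates
// actually disabling cores in the BIOS.
BOOL LimitToLogicalProcessors(DWORD n)
{
    DWORD_PTR mask = (n >= sizeof(DWORD_PTR) * 8)
                         ? ~(DWORD_PTR)0               // all processors
                         : (((DWORD_PTR)1 << n) - 1);  // lowest n bits set
    return SetProcessAffinityMask(GetCurrentProcess(), mask);
}
[/code]

The same thing works from a command prompt with start /affinity 0xF game.exe, where the hex mask picks the allowed cores.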


I am doing that right now. But sadly, the Fury X I just went down and purchased at Microcenter failed during the Crimson driver install, and it now black-screens when it gets to the OS, after a nice corruption. And I wonder why I only keep NV cards here in Texas now to CPU test with. Jeez....
 
Plus you have Ubisoft saying W_D 2 will favor AMD (probably due to async compute) and we're a year away from that game's release.

If you ask me, all of the chips are on the table.

I can't operate on "probably" and suppositions. I don't know if Async Compute, that one feature, is causing the performance increase. I cannot say "probably due to async compute"; I need proof of that, I need testing that shows me whether that is true or not. I don't want to dismiss the impact of all the other benefits of DX12.

I do not believe that all the chips are on the table. I think we need a lot more DX12 games under our belt to form a trend.

Async Compute is getting attention like it is the one ring to rule them all, the answer to the ultimate question (42), the holy grail. It's a great feature of DX12, but I don't want to dismiss all the other new features and benefits of DX12 either. Async Compute requires developer support. Time will tell how many take up that mantle.
 
Nvidia isn't DX12 ready, and it never will be with Maxwell. Maybe with Pascal, but the rumor is that Nvidia has been caught with its pants down, with no Async Compute on Pascal. Hence they had nothing to show. Which means Nvidia must be doing a mad dash to implement Async Compute into Pascal.

Right now there are two types of DX12/Vulkan games: ones built properly, like Ashes of the Singularity, and ones built poorly, like Tomb Raider. Hitman is a properly made DX12 game and performs like AoS. The Talos Principle also performs like Tomb Raider with Vulkan, because The Talos Principle is just OpenGL with a quick conversion to Vulkan.
 
AFAIK, Nvidia drivers have AC disabled, so you're not even getting that working to test.

DX12 brings low-level access to hardware, so company A may have feature A to help their chips' performance and company B might have feature B for theirs. The fact that company A's cards can't run feature B is meaningless, as it was never meant to help them.
You're going to see a lot of big swings in DX12 performance in the beginning, IMO, until this gets sorted out to a degree.
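That low-level model is also why engines have to ask each device what it can do instead of assuming one feature set. D3D12 reports per-device capability tiers; notably, there is no single "async compute" bit to query, which is part of why this argument is happening at all. A minimal sketch of the query, assuming an already-created device:

[code]
#include <d3d12.h>
#include <stdio.h>

// Minimal sketch: query per-device capability tiers. D3D12 exposes no
// "async compute" capability bit; whether compute and graphics queues
// overlap on the hardware is a scheduling detail the API does not report.
void PrintCapabilityTiers(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        // Tiers differ by vendor and generation; engines pick code paths per tier.
        printf("Resource binding tier:    %d\n", (int)opts.ResourceBindingTier);
        printf("Tiled resources tier:     %d\n", (int)opts.TiledResourcesTier);
        printf("Conservative raster tier: %d\n", (int)opts.ConservativeRasterizationTier);
        printf("ROVs supported:           %d\n", (int)opts.ROVsSupported);
    }
}
[/code]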
 
This is probably my favorite review done here in a long time. I look forward to the upcoming DX12 reviews.
 
I, for one, am super curious how the 390X would do in this test. I wonder if it would match or beat the 980 Ti with these settings, and if that 8 GB of VRAM is helping at 4K again.
 

I wonder how close the 980 Ti gets, OC vs. OC. 38 vs. 38?

nVidia still went the wrong way.... I wonder if that'll ever be resolved.
 
Right now there are two types of DX12/Vulkan games: ones built properly, like Ashes of the Singularity, and ones built poorly, like Tomb Raider. Hitman is a properly made DX12 game and performs like AoS. The Talos Principle also performs like Tomb Raider with Vulkan, because The Talos Principle is just OpenGL with a quick conversion to Vulkan.
The only reason Hitman is "built properly" (async) is because AMD implemented it themselves.
It should be telling when big studios like Microsoft and Square Enix both don't care enough to do it "properly" without AMD's direct involvement.

A feature or set of features that developers don't care to implement without being prompted by a GPU manufacturer, which runs better on one brand of hardware over another -- sounds a lot like GameWorks to me.

Although it seems you're using the phrase "built properly" in place of "designed for AMD GPUs".
 
If y'all want an FX platform to try out on, I have an 8320 that does 4.5 GHz and a Sabertooth 990FX R2.0 that I would be willing to donate for the cause. A pre-paid label would be nice :D
 
Nvidia isn't DX12 ready, and it never will be with Maxwell. Maybe with Pascal, but the rumor is that Nvidia has been caught with its pants down, with no Async Compute on Pascal. Hence they had nothing to show. Which means Nvidia must be doing a mad dash to implement Async Compute into Pascal.

Right now there are two types of DX12/Vulkan games: ones built properly, like Ashes of the Singularity, and ones built poorly, like Tomb Raider. Hitman is a properly made DX12 game and performs like AoS. The Talos Principle also performs like Tomb Raider with Vulkan, because The Talos Principle is just OpenGL with a quick conversion to Vulkan.


Can't happen with Pascal. The issue here isn't one of software but of hardware; this is an engineering issue. In other words, not only can Pascal not do this, but if they were already past the engineering phase of Volta when the async phenomenon hit, they won't have it for that either.

NVidia could be in deep shit for a while.
 
Interesting review, indeed. The thing that continues to nag me is the difference between the GPU companies. Since DX12 came out, it has been clear that AMD has had a lead, but I have yet to find a decent explanation as to whether this is due to GPU architecture or drivers (oh the irony). I love me my Nvidia GPUs and my experience with them for the last ~10 years or so. But if I'm looking ahead to the next generation of GPUs, I really have to worry about whether sticking with team green is going to provide the best bang/buck if it's a hardware issue.

Is [H] considering doing a DX12 hardware comparison in terms of tech as opposed to performance? If not, can anyone recommend an unbiased article on the matter?


I think it's a little of both. AMD played the long game and Nvidia played the short game; Nvidia worried about beating AMD right at this moment with every card release instead of worrying about 2-5 years down the road, and now it's biting them in the ass. The biggest thing for AMD, though, was how fast they were able to convince Microsoft to release DX12 with their Mantle API push, and I don't know if Nvidia was expecting that. Maybe Nvidia thought it would be another failed attempt by AMD and ignored async compute support, or thought it wasn't going to matter anyway.
 
if they were already past the engineering phase of Volta when the async phenomenon hit, they won't have it for that either.
If Nvidia is so incompetent that they didn't see the benefits of async until async-gate started (~August 2015), then they have no business being in the GPU space at all. Just beyond ridiculous.
I would hope Nvidia is at least slightly more observant than random forum posters (Mahigan) and sites like WCCFTech. How much is their R&D budget, exactly?

And how long was the DX12 API in development for Nvidia to see this coming? Clearly even AMD knew about it circa 2011. Plus we had the same signs with Mantle, back in 2013.
 
Nvidia enjoyed being king of the hill for quite a while in DX11. It wouldn't be the first time a major company ended up getting caught with their pants down due to complacency - see the original AMD vs Intel wars for that.
 
I wonder how close the 980 Ti gets, OC vs. OC. 38 vs. 38?

nVidia still went the wrong way.... I wonder if that'll ever be resolved.
I'm going to venture a guess that this game is extremely hard on overclocks (they're ALREADY getting red screens without overclocking). lol, I guess we'll see. :)
 
Hi there,

Just registered to point out that two German mags have already done some CPU performance testing for DX12.
The ban on links in first posts means I can only mention the mags and headlines:

Computerbase
HITMAN BENCHMARKS: DirectX 12 hebt das CPU-Limit massiv an (DirectX 12 massively raises the CPU limit)
There are no DX12 benchmarks of ROTR, though.


PC Games Hardware
Sadly, they didn't do an update for ROTR benchmarks when the DX12 patch arrived.

Benchmarks for Hitman:
Hitman 2016 PC: DirectX 11 vs. DirectX 12 - Benchmarks: DX12 als Prozessorentlaster [Update 3] (DX12 takes the load off the CPU)
 
Thanks!
 
I'd love to see overclocked results for all the cards involved. Overclocking brings a 980 Ti up to par with a Fury X at stock; not bad for a heavily AMD-biased game.
 
Can't happen with Pascal. The issue here isn't one of software but of hardware; this is an engineering issue. In other words, not only can Pascal not do this, but if they were already past the engineering phase of Volta when the async phenomenon hit, they won't have it for that either.

NVidia could be in deep shit for a while.

I seriously doubt they're in deep shit. They control a much larger portion of the PC gaming market. Think about that for a second. They could easily strong-arm developers into not utilizing the full benefits of Async Compute.
 
Strong-arming only goes so far. They were caught with their hands in the cookie jar twice with Maxwell. Developers don't want to make games that cripple GPUs; they want to sell the games. Remember, if it benefits Maxwell, it cripples Intel, AMD, and Kepler. The type of optimizations done through GameWorks were definitely Maxwell-uarch dependent.
 
I agree. I don't want that to happen. EDIT: But that doesn't mean that it can't. I'm hopeful that it won't. :)
 
The writing was on the wall when Xbox and PlayStation ended up with AMD hardware :)

NVidia shot themselves in the foot when they forced a renegotiation with MS. You can bet your last $ that Sony stood up and took notice of that. AMD (ArtX/ATi) have been with Nintendo since the GameCube days, aka 1998-99ish, when system development started. It is unlikely MS or Nintendo will go elsewhere, as ATi/AMD have proven themselves.
 
I agree. I don't want that to happen. EDIT: But that doesn't mean that it can't. I'm hopeful that it won't. :)
The thing about strong-arming is Microsoft.
If y'all want an FX platform to try out on, I have an 8320 that does 4.5 GHz and a Sabertooth 990FX R2.0 that I would be willing to donate for the cause. A pre-paid label would be nice :D
Any takers? This sums it up nicely: Hitman Benchmarks mit DirectX 12 (Seite 2)
Reddit is raving about this thing.
 
Any chance you could try the AMD/nVidia multi-GPU in AoS DX12? Does it really work?
It does work. I tried it with my Titan X and a Nano and saw a +20 fps increase on high settings @ 1440p. Sadly, Asus is lame when it comes to putting waterblocks on their cards, with the "warranty void" sticker on the screw, so I had to return it. If I had that beast on water I probably would have seen even better performance and not been thermally throttled.
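For anyone wondering how mixing vendors can work at all: DX12's explicit multi-adapter hands GPU enumeration to the engine instead of the driver, so the engine creates an independent device on each card and splits the frame work itself. A minimal sketch of the enumeration side (real engines then have to share results between devices, e.g. via cross-adapter heaps):

[code]
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <stdio.h>
using Microsoft::WRL::ComPtr;

// Minimal sketch of explicit multi-adapter: enumerate every adapter in the
// box and create a D3D12 device on each one that supports it. The engine,
// not the driver, decides how to split work between them, which is why an
// AMD and an Nvidia card can be paired.
int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            wprintf(L"DX12-capable adapter %u: %s\n", i, desc.Description);
    }
    return 0;
}
[/code]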
 