Ashes of the Singularity Day 1 Benchmark Preview @ [H]

FrgMstr
Ashes of the Singularity Day 1 Benchmark Preview - The new Ashes of the Singularity game has finally been released on the PC. The game supports both DX11 and the new DX12 API with advanced features. In this Day 1 Benchmark Preview we will run a few cards through the in-game canned benchmark, comparing DX11 against DX12 performance and NVIDIA against AMD performance.
 
Is it safe to say that DX12 relies on asynchronous compute, and that current-gen NVIDIA is not so hot at doing it?

DX12 is DX12; it does not rely on anything. Async Compute is a feature within DX12 that a game developer can choose to exploit. DX12 has its own benefits besides the one feature of Async Compute, for improving game performance or lessening CPU burdens or bottlenecks. None of the testing I did last night specifically looked at or separated Async Compute from DX12 to examine that one feature. This is the overall DX12 versus DX11 performance utilizing the full version of the game as it ships, with the latest drivers. To say that Async Compute is the one feature making the performance difference would be ignorant. It's one feature out of many, added to the whole, that combined deliver the 9-14% performance advantage we are seeing.

I get the vibe from this game that DX12 is helping most to relieve CPU burdens; that's just the type of game this is. That's why I said in the conclusion this may not be the best demo to show off Async Compute. It may take a game like a first-person shooter, one that is more GPU-dependent, to really show off how Async Compute can benefit performance. Just my opinion.
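To make the CPU angle concrete: the biggest structural change DX12 brings there is multithreaded command-list recording. The engine can build command lists on several worker threads and hand them to the GPU in one batch, where DX11 funnels nearly all draw-call work through a single driver thread. Here is a minimal C++ sketch of that pattern; the device/queue objects, thread count, and helper names are illustrative, and error handling and cleanup are omitted:

// Sketch: record D3D12 command lists on worker threads, then submit them
// in one batch. Assumes a valid ID3D12Device* and ID3D12CommandQueue*
// created elsewhere; HRESULT checks and Release() calls are omitted.
#include <windows.h>
#include <d3d12.h>
#include <thread>
#include <vector>

void RecordChunk(ID3D12Device* device, ID3D12GraphicsCommandList** outList)
{
    ID3D12CommandAllocator* alloc = nullptr;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&alloc));
    ID3D12GraphicsCommandList* list = nullptr;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, alloc,
                              nullptr, IID_PPV_ARGS(&list));
    // ... record this thread's slice of the frame's draw calls here ...
    list->Close();
    *outList = list;
}

void SubmitFrame(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    const int kThreads = 4;  // illustrative; real engines size this to the CPU
    ID3D12GraphicsCommandList* lists[kThreads] = {};
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back(RecordChunk, device, &lists[i]);
    for (auto& t : workers)
        t.join();
    // One submission; the expensive recording work happened in parallel above.
    queue->ExecuteCommandLists(kThreads,
        reinterpret_cast<ID3D12CommandList* const*>(lists));
}

A draw-call-heavy RTS with thousands of units on screen is exactly the workload that pattern helps, which fits the CPU-relief reading above.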
 
Well, it looks like all the rumors some were dismissing about AMD and DX12 are coming true, so far.

Very interesting, and this can only benefit the consumer in the end, if AMD has found a stronghold in DX12 for the time being.
 
I don't think it's ignorant to think that DX12 relies on asynchronous compute. After all, it is a feature that separates DX11 from DX12. AMD calls it Hyper-Threading for GPUs. I am not here to stir the pot. I almost bought a Titan X a few weeks back.
I'm glad I didn't.
 
Yeah, wise choice. It's a fair point of speculation that if this is the type of difference we are going to see from AMD and DX12 with the first run of DX12 games, then as developers become more savvy, we should see further gains over the next two years. I'm hopeful, because at this point it's good for the consumer for NVIDIA to get punched in the mouth and bloodied a little. When companies compete, we win.
 
I think a hot topic of discussion very soon is going to be how NVIDIA's 1070/1080 handle DX12, and whether AMD will be the better choice moving forward. This is a bad showing for NVIDIA, and although I really don't care, since I'll be buying a new card or cards here soon, it does matter for my next buying choice. I tend to keep my GPUs for about 1.5-2 years tops.
 
That is not what he said. You need to read it again: "To say that Async Compute is the one feature making the performance difference would be ignorant."
 
If the rumors bear out, Pascal will not have async compute support, and potentially Volta may not either. NVIDIA is going to try to compensate with brute force and async software emulation, which thus far has had poor results.
 
Actually, it's safe to say that Ashes relies on asynchronous compute. I'm pretty sure NVIDIA-sponsored titles won't support it (unless Pascal does).
 
This is sort of correct, but there are other parts of DX12 that matter. My thinking is that async compute is a core feature of DX12; core in the sense that it's just common-sense utilization of GPUs that's made possible now in DX12. The problem is that NVIDIA happens to only be able to use one queue at a time when it comes to compute and graphics. If both vendors supported it, we might not hear so much about it, or even know when it's being used, just like other parts of DX12.

It's a bit more complicated than that, and not in any way that benefits NVIDIA. AMD has been baking hardware that can correctly process async compute into their cards since, I believe, the 7xxx generation. Up until this point in time, it just sat there, never really utilized.

AMD's first attempt to make use of it, and of other features in their cards that were being neglected, was Mantle. And while Mantle didn't last, it was instrumental: it spun off into Vulkan, and some of it is baked directly into DX12. Add to that the fact that DX12 isn't just DX11 plus extras like all previous DX versions, but an actual attempt at a rewrite of a lot of DX to reduce bloat, and there is a learning curve for DX12 anyway, which means game developers can't just take the lazy approach to making games they've been taking with previous DX versions.

AMD also seized on this, going full open source, letting developers get their hands on what AMD uses for their cards with no strings attached, and they have even been helpful in assisting developers with better utilizing DX12. They have a leg up on NVIDIA here, not only because of the insight they gained from Vulkan/Mantle, but also because they know they have hardware in their cards that NVIDIA does not, and likely will not for two card generations.

If AMD pulls this off and can start slugging it out with NVIDIA, the untold tale that did the trick, the big coup, would be getting Microsoft to integrate part of Vulkan into the DX12 code in the first place. It gave them an "in" with Microsoft, and if you've been paying attention, MS and AMD have been partnering more and more.

Truth be told, what this all tells me is that when AMD spun off their video division into the Radeon Technologies Group, the people they put in charge have so far been handling this brilliantly, waging a very smart guerrilla war that may end up turning the tide in their favor. If that is the case, then the partnership with MS will be the most impactful game-changer.
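On the "one queue at a time" point a few posts up: at the API level, async compute is not a switch a developer flips. It is simply work submitted on a second, compute-type queue, which hardware with independent compute engines (GCN's ACEs) can overlap with the graphics queue. A minimal C++ sketch of the two-queue setup, assuming a device created elsewhere; the command lists and fence value are illustrative and error handling is omitted:

// Sketch: a graphics (DIRECT) queue plus a separate COMPUTE queue, with a
// fence guarding only the pass that consumes the compute results.
// Assumes a valid ID3D12Device*; HRESULT checks are omitted.
#include <windows.h>
#include <d3d12.h>

void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** gfxQueue,
                  ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(gfxQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(computeQueue));
}

void SubmitOverlapped(ID3D12CommandQueue* gfxQueue,
                      ID3D12CommandQueue* computeQueue,
                      ID3D12CommandList* gfxWork,      // independent of compute
                      ID3D12CommandList* computeWork,  // e.g. particles, culling
                      ID3D12CommandList* gfxConsume,   // reads compute output
                      ID3D12Fence* fence, UINT64 value)
{
    // Nothing orders these two submissions against each other, so hardware
    // with independent compute engines is free to run them concurrently.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    gfxQueue->ExecuteCommandLists(1, &gfxWork);

    // Only the pass that consumes the compute results waits, and the wait
    // happens on the GPU timeline; the CPU never blocks here.
    computeQueue->Signal(fence, value);
    gfxQueue->Wait(fence, value);
    gfxQueue->ExecuteCommandLists(1, &gfxConsume);
}

The API always accepts this; the per-vendor question is whether the two queues truly execute concurrently or get serialized by the driver, which is where the GCN-versus-Maxwell results in the review come from.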
 
Brent, are you guys gonna test CPU usage next?
Playing with some of that now. Ran literally HUNDREDS of CPU benchmarks this week with ROTR... and got SHIT. Not sure if the benchmark just sucks that bad, or if the benchmark does nothing to focus on what the devs said was important about DX12 and CPU-bound workloads.

That said, just out of the gate with AotS, it looks like there are some differences to be seen pretty quickly in CPU-bound heavy workloads. Still pulling some things together. I need to get an AMD GPU in here... I have not used those on the CPU/mobo test bench for years now.
 
Alright. Bro-hug time for you three. :D

That's a HUGE difference in performance between the two companies with this title. I hope it can quell some of the "DX12 is only for low-end CPUs" talk. I hope Brent doesn't experience too many resets when playing the full game. Can't wait for the overall game experience article.

Great job as usual. Thanks!
 
As a gamer, DX12 is the only thing making me even think about moving to Windows 10. I know that gamers represent a small share of the total Windows installed base, but we are often key knowledge/advice sources for many people. I wonder if NVIDIA (which has a MUCH larger installed base of dedicated GPUs) having poor DX12 performance will have any noticeable impact on W10 adoption. You would think Microsoft would want to work with NVIDIA to make W10 an obvious performance boost for those with cards from team Green.
 
From what I understand, the current NVIDIA architecture really isn't designed around async compute, whereas AMD's is (GCN has had asynchronous compute engines since the beginning).
 
Haha, not at all. I trust these guys' opinions. This is what brings me here. We all win when there is healthy competition. :)
 
That's exactly it. In some cases you have Radeon 79xx cards keeping up with the low-end NVIDIA offerings from this year.
 
Interesting review, indeed. The thing that continues to nag me is the difference between the GPU companies. Since DX12 came out, it has been clear that AMD has a lead, but I have yet to find a decent explanation of whether this is due to GPU architecture or drivers (oh, the irony). I love my NVIDIA GPUs and my experience with them over the last ~10 years or so. But looking ahead to the next generation of GPUs, I really have to worry about whether sticking with team green will provide the best bang for the buck if it's a hardware issue.

Is [H] considering doing a DX12 hardware comparison in terms of tech as opposed to performance? If not, can anyone recommend an unbiased article on the matter?
 
I think Zion Halcyon nails it a couple of posts above :)
 
It's actually pretty clear it's a hardware thing. Go reread my posts in this thread; I touch on it a few times. AMD has had architecture baked into their cards since the 79xx series that went relatively unused and ignored by game devs, but they managed to strategically partner with MS to the point where, to make full use of DX12 and its drive for low-level APIs that reduce overhead, MS baked some of Vulkan into it, which includes the async shader support for AMD's already-existing hardware. AMD did a very shrewd thing, and if the gains are deemed significant by players, NVIDIA is likely two architectures out from being able to put out a competing architecture. MAYBE Volta, if they delay it, and depending on where it is in the process, but likely not until the next architecture after Volta.
 
So, basically, we're not really seeing much new? The part of me with the 280X is pretty happy, cuz them's results are great for AMD.

At the same time, I have to remind myself that it's April 1, and most of the conclusions posters are drawing on this topic (and in various other threads about async, etc.) stem from that, no? We've got historical results where the 980 Ti (with some of the beta/driver combinations) performs on par with the Fury X. So, something's broken (as so succinctly put in the conclusion of the review). EVEN THAT, though, is a huge win for AMD compared to other games, where there's a notable disparity.
 
I saw those posts, and thank you for the contributions. It's just that, no offense, I was looking for a more official source with some deeper analysis, more of a full-blown article as opposed to a forum post. :)
 
We will provide them as soon as we're on the [H] payroll :D
 
Yes, take 99.999999999% of everything speculated in this thread with a dump truck of salt. Pay close attention to what Brent and Kyle have written.
 
There is a simple way to find out: run these with a few mid-range CPUs :D
 
Of course. Much of what I said is compiled from articles I have read as well. However, I don't think what you are looking for is going to happen. I don't think there will be one "silver bullet" article with enough gravitas to definitively answer your questions. Rather, it is going to be an article here and an article there that, in the end, paint a more complete picture.

For me, ever since this first got on my radar last year with the huge blow-up over the early AotS benchmarks, I've been reading whatever I can on it. Mostly rumors from unnamed sources, granted. However, what I saw here is the first official confirmation of those rumors, as Kyle and Brent have corroborated with their testing much of what was being rumored.

So I guess I am the inverse of you. I have been keeping on top of this stuff all along, taking it all with a grain of salt, and now that I have seen information from a place that has earned my trust as a reader ([H]), it lends far more credibility to everything I've read, which is pretty much what I shared with you.
 
Or buy the game, if you like the genre, and see what sort of performance you can get out of the hardware you presently have. If you run an NVIDIA card, stick with DX11; if you run a GCN card, DX12. Those on older hardware are *likely* going to see the best results on DX11, but YMMV.
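If you would rather have a launcher make that call than eyeball it, DXGI exposes the adapter's PCI vendor ID, which is enough to encode the rule of thumb above. A hedged C++ sketch: 0x1002 is AMD's vendor ID and 0x10DE is NVIDIA's, but the vendor ID alone cannot tell GCN from older pre-GCN Radeons, so treat this as a rough default rather than a guarantee.

// Sketch: pick a default render path by PCI vendor ID, per the rule of
// thumb above (NVIDIA -> DX11, AMD -> DX12). Error handling omitted.
#include <windows.h>
#include <dxgi.h>

enum class RenderPath { DX11, DX12 };

RenderPath PickDefaultPath()
{
    IDXGIFactory1* factory = nullptr;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    RenderPath path = RenderPath::DX11;      // conservative default
    IDXGIAdapter1* adapter = nullptr;
    if (factory->EnumAdapters1(0, &adapter) == S_OK) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.VendorId == 0x1002)         // AMD: favor DX12 in this title
            path = RenderPath::DX12;
        // 0x10DE (NVIDIA): stay on DX11 given the current results
        adapter->Release();
    }
    factory->Release();
    return path;
}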
 
What they have written helps confirm much of what was speculated in this thread, Mr Team Green ;)
 
You had to say a "GCN card" - you couldn't even bring yourself to say "AMD".

You make me chuckle.
 
Maybe I have already done that :)
 
I was being precise: I don't think anyone has benched an older NVIDIA or AMD card, and the likelihood of those benefiting from DX12 at this moment (given how broken the Maxwell path seems to be) is slim. So, yes, I stand by my statement: GCN, go DX12; everything else, go DX11. Also, you read the part about me having a 280X, right? And a 5770 before that. And being very happy with both of those cards in their respective eras. I hate gross misrepresentation of a situation and ignoring prior context, which you seem to enjoy living in. As I wrote, previous versions of the beta, on drivers of the same vintage, had the 980 Ti and the Fury X head-to-head, which is already a much better showing for the Fury X (and GCN cards in general). Please reconcile that. There's clearly something wrong with the NVIDIA path; whether that's NVIDIA's fault or due to how the game was specifically coded, I don't claim to know.

Imhotep, enjoy! How is the game from a playability perspective?

Edit: cuz I let my mouth run more than I should (apologies)
 