Ashes of the Singularity benchmarks.

The reason I say that is, if you look at cageymaru's score under the same settings for, say, low vista...

his Avg Framerate: 33.481995
Average Batches per frame: 19750

whereas the benchmark made my system render more:

my Avg Framerate: 25.6
Average Batches per frame: 37031

So the benchmark made my system render almost 2x the batches for that scene.
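
Rough back-of-the-envelope from the numbers above (just multiplying the figures already quoted): 33.48 FPS x 19,750 batches/frame is roughly 661k batches per second for his run, versus 25.6 FPS x 37,031 batches/frame, roughly 948k batches per second for mine. So even at the lower FPS, my run pushed about 43% more batches per second.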
 
I think my old hexa-core Xeon doesn't hurt one bit. :) Finally a game that uses all my CPU cores.

DX12 is looking good so far. :)

Also noticed you are running version 0.49; 0.50 seems to have better performance as well.

lol prime... wtf, you have it cranked up to 4.8GHz?

4.2GHz I can see...



On the other hand, sadly I don't have the game; it would be good to test how differently a 280X can behave with a quad-core+HT vs. a hexa-core+HT Intel CPU.
 
Also, the debate about the 290X vs. 980 Ti for this benchmark that has everyone in an uproar is really useless until we know the batch counts per scene, i.e. what each card rendered under DX12, as it may be better or worse for Nvidia.
 
Also, the debate about the 290X vs. 980 Ti for this benchmark that has everyone in an uproar is really useless until we know the batch counts per scene, i.e. what each card rendered under DX12, as it may be better or worse for Nvidia.

You know, you have a big point there that maybe no one is taking into consideration. I'll probably just buy the game to test and do some research there.
 
I wanted to ask the others here if they happened to notice that this benchmark stresses the GPU harder in some way that other games can't. I had a couple of occasions where the game just sorta froze and the driver recovered, as if my video card overclock was too high. The odd part is it never did it while doing the benchmarks, which seem to push everything the hardest. Nope, not me... somehow actually playing the game will make the driver recover from too high a clock speed. Dropped my GPU clocks 20MHz and the driver recovery errors completely stopped. :) Something with async compute and DX12 pushes our GPUs in a way they haven't been pushed before.
 
Different workloads will stress the GPU differently. Different parts of the GPU can have different power gating based on design, which can be affected by the frequency and voltage settings.
 
I saw where the game developer replied over at overclockers.com, but I think he took it down. Batches are made up of draw calls and something else (I forgot), but they call them batches, and from what I could understand the benchmark is real-time demand.

Now the game has my system pulling almost 40,000 heavy batches on some scenes, and that is the performance we are looking for, I think, as the avg FPS means little.

We're under NDA, so be careful what you say.
 
Also, our X58 Xeons seem to be very powerful for draw calls, so look at how many batches you're pulling; I have one scene doing 38,000...
 
OK, batching draw calls: what this does is drop the CPU usage. ;) The more individual draw calls you have, the more overhead is created, and this is what batching helps alleviate. So let's say there are 50k individual draw calls; when batched, you may only need to do 10 draw calls.

Even in DX12 you still have to be careful of how many draw calls are done; it's just that DX12 has less API overhead. It doesn't eliminate the overhead, but it reduces it.
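
To put the batching point in concrete terms, here's a tiny C++ sketch. It's purely illustrative (a made-up cost model, not any real graphics API): objects sharing the same mesh/state get grouped into one instanced submission, so the per-call CPU overhead is paid far fewer times.

Code:
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Purely illustrative: each object would normally cost one draw call.
struct Object { std::string mesh; };

int main() {
    // 50k units spread over 10 mesh types, matching the 50k-draw-call example above.
    const char* meshes[10] = {"tank", "bomber", "frigate", "drone", "turret",
                              "cruiser", "fighter", "carrier", "scout", "artillery"};
    std::vector<Object> scene;
    for (int i = 0; i < 50000; ++i) scene.push_back(Object{meshes[i % 10]});

    // Unbatched: one submission per object -> 50,000 calls' worth of CPU overhead.
    const std::size_t unbatchedCalls = scene.size();

    // Batched: group objects by shared mesh/state and issue one instanced
    // submission per group; overhead now scales with the number of groups.
    std::map<std::string, std::size_t> groups;
    for (const Object& o : scene) ++groups[o.mesh];
    const std::size_t batchedCalls = groups.size();

    std::printf("unbatched submissions: %zu\n", unbatchedCalls);
    std::printf("batched submissions:   %zu\n", batchedCalls);
    return 0;
}

DX12 then also charges less per submission than DX11 did, which is the "less API overhead, doesn't eliminate it" part above.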
 
DX12 can manage 600K draw calls – essentially, the number of objects that can feature on-screen at one time – compared to the meagre 6K draw calls that DX9 could render.

The number of objects on the screen at one time is the real measure of performance, not the frames per second, as the benchmark is maxing out the system to render everything at once.

From the game developer:

Regarding batches, we use the term batches just because we are counting both draw calls and dispatch calls. Dispatch calls are compute shaders, draw calls are normal graphics shaders. Though sometimes everyone calls dispatches draw calls, they are different, so we thought we'd avoid the confusion by not calling everything a draw call.
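
As a rough illustration of that definition (this is not Oxide's code; the command list below is made up), "batches" is just the combined count of graphics draws and compute dispatches in a frame:

Code:
#include <cstdio>
#include <vector>

// Toy model of one frame's command stream: each entry is either a graphics
// draw call or a compute dispatch call. Not real engine code.
enum class Cmd { Draw, Dispatch };

int main() {
    // A made-up frame mixing normal graphics work and (async) compute work.
    std::vector<Cmd> frame = {Cmd::Draw, Cmd::Draw, Cmd::Dispatch,
                              Cmd::Draw, Cmd::Dispatch, Cmd::Dispatch};

    std::size_t draws = 0, dispatches = 0;
    for (Cmd c : frame) {
        if (c == Cmd::Draw) ++draws; else ++dispatches;
    }

    // Both kinds feed the "Average Batches per frame" number the benchmark reports.
    std::printf("draws=%zu dispatches=%zu batches=%zu\n",
                draws, dispatches, draws + dispatches);
    return 0;
}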
 
That's a good catch... On a different note, how are you guys enjoying the game? Sorta reminds me of Sins but on land. So far I like it. :)
 
I wanted to ask the others here if they happened to notice that this benchmark stresses the GPU harder in some way that other games can't. I had a couple of occasions where the game just sorta froze and the driver recovered, as if my video card overclock was too high. The odd part is it never did it while doing the benchmarks, which seem to push everything the hardest. Nope, not me... somehow actually playing the game will make the driver recover from too high a clock speed. Dropped my GPU clocks 20MHz and the driver recovery errors completely stopped. :) Something with async compute and DX12 pushes our GPUs in a way they haven't been pushed before.

This is very true. The developer tweeted out that the overclocks that are stable in other benchmarks and games won't hold under DX12. :)
 
Here's a 270x paired with an 8350.

K1c1KgO.png


I'm seeing about a 35% performance increase from DX11 to DX12. Still looks like it's time to upgrade.
 
Latest patch from today with an FX-9370 @ 4.7GHz. Looks like my Average CPU Framerate just doubled, unless the new settings are different from the last, which might be the case. ;) Anyone want to run the benchmark and see what numbers they are getting? MSAA is set to 2x instead of 1x by default now, it seems. That puts me into Xeon territory, unless their frame rate doubled also. ;)

Pastebin link to results.
http://pastebin.com/w5n5f5WH

O5hi4jJ.png
 
The reason I say that is, if you look at cageymaru's score under the same settings for, say, low vista...

his Avg Framerate: 33.481995
Average Batches per frame: 19750

whereas the benchmark made my system render more:

my Avg Framerate: 25.6
Average Batches per frame: 37031

So the benchmark made my system render almost 2x the batches for that scene.

Try and run the benchmark again. The new patch gives me these results for "Low Vista". My CPU frame rate literally doubled in this title from the patch. Hopefully it did the same for your system.

Regular run.

== Shot low vista =========================================
Total Time: 9.993567 ms per frame
Avg Framerate: 29.518990 FPS (33.876499 ms)
Weighted Framerate: 29.384975 FPS (34.030994 ms)
CPU frame rate (estimated if not GPU bound): 65.238136 FPS (15.328458 ms)
Percent GPU Bound: 99.645836 %
Driver throughput (Batches per ms): 7208.254395 Batches
Average Batches per frame: 31946.875000 Batches

Crazy run.

== Shot low vista =========================================
Total Time: 9.971234 ms per frame
Avg Framerate: 18.854237 FPS (53.038479 ms)
Weighted Framerate: 18.763830 FPS (53.294025 ms)
CPU frame rate (estimated if not GPU bound): 52.763870 FPS (18.952362 ms)
Percent GPU Bound: 99.056160 %
Driver throughput (Batches per ms): 6562.497070 Batches
Average Batches per frame: 43616.640625 Batches
 
My CPU FPS went up around 50% on the latest patch.

Also Batch Driver throughput went up around 40% in certain parts (may be the new NV driver as well).
 
Cool! Mind posting some results? Would like to see some Nvidia numbers. :)
 
Cool! Mind posting some results? Would like to see some Nvidia numbers. :)

pt92AUX.jpg


Yes I know the CPU is shit and it will be resolved within the next few months.

== Shot low vista =========================================
Total Time: 10.009574 ms per frame
Avg Framerate: 51.650547 FPS (19.360880 ms)
Weighted Framerate: 51.348885 FPS (19.474621 ms)
CPU frame rate (estimated if not GPU bound): 77.204338 FPS (12.952641 ms)
Percent GPU Bound: 99.770340 %
Driver throughput (Batches per ms): 7733.292969 Batches
Average Batches per frame: 32280.763672 Batches
 
Looks good; seems to be about a 20-30% increase in performance compared to the 40 fps-ish pulled in the initial benchmarks. I'd probably attribute most of that to recent AotS patches; haven't heard anything from Nvidia yet.
 
That's sweet! They really did some nice optimizations with this patch. Wonder what our Xeon brethren are getting. :) For the number of objects (ships, light sources, AI, etc.) being tossed around on the screen, it's amazing the amount of load that DX12 can handle compared to DX11.

Just curious: does your 980 Ti overclock itself much during your run? I guess the best way of asking is, what speed does your card tend to hover around when running the benchmark? I remember the Oxide guys saying that overclocks may not hold up under DX12 due to the stress being placed on the cards. But my R9 290's OC seems to do just fine, although I seem to have the worst one in the world for OC'ing. 1059 / 1350.

Thanks!

@TaintedSquirrel. Yes, they did lots of optimizations. This game runs like butter. I wish other titles ran this nicely. :)
 
Does your 980 Ti overclock itself much during your run? I guess the best way of asking is, what speed does your card tend to hover around when running the benchmark? I remember the Oxide guys saying that overclocks may not hold up under DX12 due to the stress being placed on the cards. But my R9 290's OC seems to do just fine, although I seem to have the worst one in the world for OC'ing. 1059 / 1350

1423 solid on GPU1

I am running pretty conservative OCs with the SLI setup as I am CPU bound in DX11.
 
The thing that makes me laugh about this benchmark is that, due to the inclusion of the DX12 API, GPU performance is getting most of the attention over the vastly more relevant CPU scaling.

It's going to be FAR better having a higher-end processor and a mid-range GPU, regardless of DX12 or DX11. Sim speed is what mostly matters, not FPS as with FPS games. Supreme Commander came to a crawl in its day on huge maps with multiple AIs; in fact, you can still bring most processors to their knees, as the game is poorly multi-threaded.

It's clear Ashes is well multi-threaded, and it's even more clear AMD processors are really struggling due to their lower work done per clock cycle, regardless of the way the DX12 API can access the CPU.


http://www.pcper.com/files/imagecache/article_max_width/review/2015-08-16/ashes-gtx980.png

 
I noticed a lot of reviewers are focused on CPU scaling in both Ashes & Fable benchmarks. Nobody cares. CPU performance hasn't been relevant in gaming for about 3-4 years.

They need to wise up and focus on the things people actually care about. To me it looks like the main tech sites (AnandTech, TechReport, etc) are avoiding async because they don't want to get on Nvidia's bad side, so instead they focus on CPU scaling. It's an embarrassment. WCCF is the only site tackling it so far.
 
I noticed a lot of reviewers are focused on CPU scaling in both Ashes & Fable benchmarks. Nobody cares. CPU performance hasn't been relevant in gaming for about 3-4 years.


ROFL, you did just look at the CPU scaling link above, right? It's about as relevant as ever; this is an RTS game, not an FPS!

I get the Nvidia vs. AMD GPU DX12 API debate thing going on, but when it comes to this game it's mostly the CPU that matters.
 
Yeah. The high-end CPUs actually lost performance... Great.

ummm yea ok

http://www.pcper.com/files/imagecache/article_max_width/review/2015-08-16/ashes-gtx980.png

Once again, this is an RTS game; it makes no difference if it's running 71 FPS or 76 FPS, it's really not much of an issue.

I am looking at the AMDs and i3s running at 30 and 40 FPS respectively. This is only the optimized scripted benchmark; wait till you see huge maps with thousands more units!

It's all CPU, baby!
 
I noticed a lot of reviewers are focused on CPU scaling in both Ashes & Fable benchmarks. Nobody cares. CPU performance hasn't been relevant in gaming for about 3-4 years.

They need to wise up and focus on the things people actually care about. To me it looks like the main tech sites (AnandTech, TechReport, etc) are avoiding async because they don't want to get on Nvidia's bad side, so instead they focus on CPU scaling. It's an embarrassment. WCCF is the only site tackling it so far.

TS are you trolling or what? You seem to be channeling some Prime stupidity with that statement....:eek::eek::eek:

RTS is ALL about the CPU, always has been and I don't see that changing anytime soon..
 
You post some strange comments :rolleyes:; you could just admit you're wrong and save some face, rather than continuing on.
Would it be better if I just complained about Fable? Because this is what I'm talking about:

http://www.anandtech.com/show/9659/fable-legends-directx-12-benchmark-analysis/3
http://techreport.com/review/29090/fable-legends-directx-12-performance-revealed/4

Is it really necessary to test multiple clock speeds & cores across resolutions from 720p to 4K on 4 different video cards when they could be doing what WCCF did? http://wccftech.com/asynchronous-compute-investigated-in-fable-legends-dx12-benchmark/2/

I understand it's important to test CPU scaling in DX12 as it's a big feature of the API, but at the same time, especially in Fable, it's irrelevant. Even more so now that we know async is a problem for Nvidia. I would much rather they ignore CPU scaling entirely if it means freeing up time to test async. Fable doesn't even have a DX11 mode, so you can't even compare CPU results between APIs!

What we really need is an entire article from a major tech site dedicated to async compute, testing both Fable Legends & Ashes across a wide range of hardware. That's what we really need, and we won't get it because it makes Nvidia look bad. Instead they'll spend a few more pages talking about how i3s gained 3 fps at 4K. Fantastic.

But now we're way off-topic for this thread. :cool:
 
You guys need to post the screenshot like this so we can see the batch count rendered. This is the 15.9.1 beta driver.

update%20290x%202.jpg
 
I posted the entire run onto Pastebin. Check my posts. :) And your CPU frame rate really picked up also. They did some nice optimizations.
 
I see... your 290 and my 290X are close in batch count for the average of normal/middle/heavy, with maybe just the difference in SP count. Also, for the Nvidia guys, we need to see the batch count to see what the GPU is rendering, as the FPS means nothing without seeing the batch count.
 
I see... your 290 and my 290X are close in batch count for the average of normal/middle/heavy, with maybe just the difference in SP count. Also, for the Nvidia guys, we need to see the batch count to see what the GPU is rendering, as the FPS means nothing without seeing the batch count.

It is about the same on normal.
It looks like the scenes are capped.
I went ahead and did an insane run, as that will push everything up and show a better comparison of what pushing to max results in.
Here are the results:

iq0z3x0.jpg


It looks like there is some major optimization that still needs to happen, as the driver is either not being fed enough or there is an issue with the driver, since throughput varies a lot (could be CPU-related as well).

== Shot mountain air battle shot 2 =========================================
Total Time: 4.992939 ms per frame
Avg Framerate: 39.856289 FPS (25.090143 ms)
Weighted Framerate: 39.686920 FPS (25.197220 ms)
CPU frame rate (estimated if not GPU bound): 84.403496 FPS (11.847851 ms)
Percent GPU Bound: 100.000000 %
Driver throughput (Batches per ms): 12298.669922 Batches
Average Batches per frame: 36761.410156 Batches

== Shot high vista =========================================
Total Time: 4.990160 ms per frame
Avg Framerate: 27.454033 FPS (36.424522 ms)
Weighted Framerate: 27.341957 FPS (36.573826 ms)
CPU frame rate (estimated if not GPU bound): 56.083302 FPS (17.830620 ms)
Percent GPU Bound: 99.425385 %
Driver throughput (Batches per ms): 8043.416504 Batches
Average Batches per frame: 67268.742188 Batches
 
Without testing on the same platform it's hard to get a fair comparison, as I don't know how much system RAM or PCI Express 2.0/3.0 affects results. My system is almost 6 years old now and even the hard drive is very old, but this is what the 290X does on Crazy settings, which appears to be about the same batch count as your 980 Ti.

Crazy%20batchs.png
 
Also you have to remember that this isn't your typical benchmark. The A.I. changes every run so you will never get the same numbers.
 
Also you have to remember that this isn't your typical benchmark. The A.I. changes every run so you will never get the same numbers.

Which makes it even worse as a GPU benchmark; it already looks like shit for such a demanding game.
 
Which makes it even worse as a GPU benchmark; it already looks like shit for such a demanding game.

You would be wise to read up on what is important on this benchmark, the genre, and what is being tested and improved with the new API.

I am saying this as friendly as possible, because dang your comment makes you sound clueless.
 
You would be wise to read up on what is important on this benchmark, the genre, and what is being tested and improved with the new API.

I am saying this as friendly as possible, because dang your comment makes you sound clueless.

Clueless is the new "black" on tech forums, sadly.
 