Ashes of the Singularity benchmarks.

cageymaru
I just started this thread to see what numbers others are getting from the benchmark. There is a ton more information under the Breakdown tab. The lowest frames were when the map was zoomed out. This benchmark is better than FurMark as far as loading up a video card: I normally set my fan profile to a constant 40%, and I hit 94°C (throttling) with my mild OC of 1050 core on my R9 290. I never see above 75°C in other games. I increased the fan speed and everything was good. Good luck with your GPU OCs in demanding DX12 titles. No first impressions of the gameplay, as I literally just purchased it.

AMD FX-9370 @ 4.7GHz (stock for an FX-9590)
R9 290 @ 1050MHz core and 1350MHz memory
Steam, virus protection, Malwarebytes, GOG launcher, Origin, etc. running in the background, as a normal PC would have.


hKoZOTS.png
 
Could you try these settings and post your results:
sBl64wP.jpg


I'd be curious to see how your AMD CPU performs compared to that Intel CPU with those settings.
 
Different versions (49.11820 vs. 50.12113) can show different results even with the same configuration; take that into consideration.
 
Trying to figure out how to set the Temporal AA to 12 instead of 6; I'm not seeing the setting for it offhand. Also, the terrain shading samples setting is different from mine. Let me read the forums a bit and see.
 
I'm also thinking that PCIe 2.0 might be a bottleneck when using DX12, factoring in the number of draw calls being made. Your GPU should be able to keep up with that CPU of yours.

Your motherboard has a PCIe 2.0 bus. Something tells me that PCIe 2.0 might be a bottleneck. Just a theory thus far... but it could explain why a system running a Core i3 is able to beat a system running an AMD FX 83x0 processor in this benchmark.

Again, just a theory.
 
Different versions (49.11820 vs. 50.12113) can show different results even with the same configuration; take that into consideration.

That's true. But the settings I am requesting ought to bottleneck his GPU further.

I'm curious to see by how much.
 
I'm also thinking that PCIe 2.0 might be a bottleneck when using DX12, factoring in the number of draw calls being made. Your GPU should be able to keep up with that CPU of yours.

Your motherboard has a PCIe 2.0 bus. Something tells me that PCIe 2.0 might be a bottleneck. Just a theory thus far... but it could explain why a system running a Core i3 is able to beat a system running an AMD FX 83x0 processor in this benchmark.

Again, just a theory.

That's a hard thing to believe, for sure... The furthest I can go in thinking about why an i3 can beat an octa-core FX chip is that it may be due to the state of the game; they're probably running some other kind of task, like real-time debugging. I'm expecting to see FX chips competing with quad-core i7s once the game reaches gold status.
 
It seems WCCFTECH ran the benchmark on Crazy with Glare off. :) Let me know if I botched it in some way.

Sp1W6Mw.png
 
I'm also thinking that PCIe 2.0 might be a bottleneck when using DX12, factoring in the number of draw calls being made. Your GPU should be able to keep up with that CPU of yours.

Your motherboard has a PCIe 2.0 bus. Something tells me that PCIe 2.0 might be a bottleneck. Just a theory thus far... but it could explain why a system running a Core i3 is able to beat a system running an AMD FX 83x0 processor in this benchmark.

Again, just a theory.

Oh, my video card also runs at x8 instead of x16, due to my buying the motherboard on day one of its release in 2011. I could send it in to Asus for replacement, but honestly I don't trust Asus repair centers. They have a horrendous RMA process, and I'd rather just deal with what I have.
 
Perfect. You just proved that pcgameshardware has questionable ethics. :p
 
Oh, my video card also runs at x8 instead of x16, due to my buying the motherboard on day one of its release in 2011. I could send it in to Asus for replacement, but honestly I don't trust Asus repair centers. They have a horrendous RMA process, and I'd rather just deal with what I have.

Bingo!

You just advanced my theory about a potential PCIe bottleneck. It seems that we need to verify what PCIe link rate our GPUs are running at (GPU-Z) before running this benchmark. It could be that PCIe 2.0 x16 is plenty fast; we haven't established that it isn't. But we have shown that Ashes of the Singularity does indeed saturate a PCIe 2.0 x8 bus. I wonder what it can do to a PCIe 2.0 x16 bus. Some systems may be running multiple PCIe devices, dropping the PCIe link of their graphics card and producing oddball results too.

Thank you, good sir :)
 
Bingo!

You just advanced my theory about a potential PCIe bottleneck.

Thank you, good sir :)

Also remember that I have antivirus, Malwarebytes, Steam, Origin, GOG, the kitchen sink, all running in the background as well. I doubt it's making a huge difference, though; usually they stay quiet.
 
Also remember that I have antivirus, Malwarebytes, Steam, Origin, GOG, the kitchen sink, all running in the background as well. I doubt it's making a huge difference, though; usually they stay quiet.

Well, your CPU frame rate is quite high; it's your GPU frame rate that is lower than expected (GPU bound). You should be pulling at least 44FPS with your R9 290 @ 1050MHz on an Intel CPU with a PCIe 3.0 bus. Since your CPU frame rate is 42.8FPS, we should see your GPU frame rate at around 42FPS on your current AMD FX CPU. In theory, your GPU should perform nearly the same on any Intel CPU as it does on your AMD CPU.
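To make the reasoning explicit, here's a tiny sketch of the model I'm assuming (my assumption, not anything the benchmark itself computes): the delivered frame rate is roughly capped by the slower of the CPU-bound and GPU-bound rates. The 42.8 figure is from your posted result; 44.0 is my estimate above.

```c
/* Sketch of the assumed model: delivered FPS is capped by the slower
   of the CPU-bound and GPU-bound frame rates. */
#include <stdio.h>

static double delivered_fps(double cpu_fps, double gpu_fps) {
    return cpu_fps < gpu_fps ? cpu_fps : gpu_fps;  /* min of the two */
}

int main(void) {
    /* 42.8 = posted CPU frame rate; 44.0 = estimated rate for an R9 290 @ 1050MHz */
    printf("expected delivered: %.1f FPS\n", delivered_fps(42.8, 44.0));
    return 0;
}
```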

It could also be the HyperTransport link causing the oddball AMD CPU performance: 6.4 GT/s = 12.8 GB/s. For all intents and purposes, in the best-case scenario of a 990-series chipset, an AMD FX processor communicates with the graphics card at a maximum rate of 12.8 GB/s:
BmxQKa7.jpg


I've seen people with lower results. I wonder what AMD chipsets they're using and what speed they're running their HyperTransport links at?

We have to remember that there is far more information being sent through the PCIe bus with DX12; each draw call is a piece of information. When PCIe saturation tests were done, they used DX11 titles to measure performance. HyperTransport, even at version 3.1 speeds, doesn't leave much headroom over what a PCIe 2.0 x16 slot can carry (~8 GB/s per direction), and the link is shared with everything else going through the chipset.

If we look at HyperTransport 3.0, we're looking at 10.4 GB/s at best:

1,800 MHz = 3,600 MT/s = 7,200 MB/s
2,000 MHz = 4,000 MT/s = 8,000 MB/s
2,400 MHz = 4,800 MT/s = 9,600 MB/s
2,600 MHz = 5,200 MT/s = 10,400 MB/s

All theoretical of course.
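If anyone wants to sanity-check those numbers, here's a quick back-of-the-envelope sketch. My assumptions: a 16-bit HT link moving 2 bytes per transfer at double data rate, and PCIe 2.0 delivering ~500 MB/s per lane per direction after 8b/10b encoding.

```c
/* Back-of-the-envelope bandwidth figures for the links discussed above.
   Assumptions: 16-bit HyperTransport link (2 bytes/transfer, DDR so
   MT/s = 2 x clock); PCIe 2.0 at ~500 MB/s per lane per direction. */
#include <stdio.h>

int main(void) {
    const int ht_clocks_mhz[] = { 1800, 2000, 2400, 2600 };
    for (int i = 0; i < 4; i++) {
        int mts = ht_clocks_mhz[i] * 2;  /* DDR: two transfers per clock */
        int mbs = mts * 2;               /* 16-bit link = 2 bytes wide   */
        printf("HT 3.0 @ %d MHz: %d MT/s = %d MB/s per direction\n",
               ht_clocks_mhz[i], mts, mbs);
    }
    printf("PCIe 2.0 x8 : %d MB/s per direction\n",  8 * 500);
    printf("PCIe 2.0 x16: %d MB/s per direction\n", 16 * 500);
    return 0;
}
```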
 
Well, your CPU frame rate is quite high; it's your GPU frame rate that is lower than expected (GPU bound). You should be pulling at least 44FPS with your R9 290 @ 1050MHz on an Intel CPU with a PCIe 3.0 bus. Since your CPU frame rate is 42.8FPS, we should see your GPU frame rate at around 42FPS on your current AMD FX CPU. In theory, your GPU should perform nearly the same on any Intel CPU as it does on your AMD CPU.

I wouldn't take other versions of the benchmark into consideration in comparison to this one; if you check the changelog of v0.50, you'll note the devs list "Performance Optimizations". The best way to compare would be with someone on the same game version, at the same time, using an Intel CPU with a similar GPU.
 
Crazy setting. This is the default Crazy setting, which has Glare Quality on High. Much higher frames than before. Why? Dunno.


lHXm6m3.png
 
I wouldn't take other versions of the benchmark into consideration in comparison to this one; if you check the changelog of v0.50, you'll note the devs list "Performance Optimizations". The best way to compare would be with someone on the same game version, at the same time, using an Intel CPU with a similar GPU.

I agree. I wouldn't make hard assumptions based on two different versions of the game.
 
High setting. Maybe Win 10 was scanning it before? Maybe I changed another setting somewhere that was messing with it before?


YxetFnv.png
 
That's the performance I was looking for :)

Very odd, but that's about where you should be with your GPU. Except the heavy-batches GPU-bound percentage makes me think you should reboot your PC and try again.
 
High setting. Maybe Win 10 was scanning it before? Maybe I changed another setting somewhere that was messing with it before?



Fullscreen, maybe? I know of some benchmarks that run better fullscreen, with more dedicated resources.
 
Just rebooted and got the same higher DX12 results. I think Windows was updating earlier or scanning Ashes. Or maybe I unchecked something important. ;)

DX11 High settings for science. Look at the frame times! R.I.P. DX11. I shall not miss you.

H7iSwh3.png
 
Fullscreen, maybe? I know of some benchmarks that run better fullscreen, with more dedicated resources.

You're right! I did have it in windowed mode because I was going to capture some footage in OBS before I started benching. Good find!

Thx! :)
 
High with 2xMSAA is too much for the little R9 290. Going to try to actually play the game now. :)


loNLCez.png
 
We have to remember that there is far more information being sent through the PCIe bus with DX12; each draw call is a piece of information.

Would that really be true? When we look at the AZDO stuff, the devs store the buffer data in GPU memory, and then draw calls are mostly just manipulating pointers to GPU data, with less frequent updates to the buffer data (if any?). I figured BMDI would involve a lot less data transfer than DX11. Isn't the point of a low-level API that you don't have to send a lot of data to the GPU, but rather you're directly accessing the memory on the GPU (or, more precisely, passing GPU pointers)?

Granted, there is a larger volume of draw calls, but the parameter size for an AZDO call is, what, 20 bytes?
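For what it's worth, if we assume the GL MultiDrawIndirect path that the AZDO talks describe, the 20-byte figure checks out: the indirect draw command consumed from a GPU-side buffer is five 32-bit fields. A minimal sketch:

```c
/* The OpenGL DrawElementsIndirectCommand layout consumed by
   glMultiDrawElementsIndirect(): five 32-bit fields = 20 bytes per draw. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t count;         /* number of indices to draw                 */
    uint32_t instanceCount; /* number of instances                       */
    uint32_t firstIndex;    /* offset into the bound index buffer        */
    int32_t  baseVertex;    /* constant added to each fetched index      */
    uint32_t baseInstance;  /* first instance for instanced attributes   */
} DrawElementsIndirectCommand;

int main(void) {
    printf("bytes per indirect draw: %zu\n",
           sizeof(DrawElementsIndirectCommand));  /* prints 20 */
    return 0;
}
```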
 
Just rebooted and got the same higher DX12 results. I think Windows was updating earlier or scanning Ashes. Or maybe I unchecked something important. ;)

DX11 High settings for science. Look at the frame times! R.I.P. DX11. I shall not miss you.


Frame times are OK, just right where they should be. Just for reference, the correct frame time for 60Hz is 16.6ms, so 60fps@16.6ms would be the right, stutter-free picture; for 120Hz it's 8.33ms, and for 30Hz it's 33ms. So your fps@frame-time numbers are right where the frames should be. Stutter happens when, for example, you are locked at 60Hz with high frame time variance, landing below and above where you should be. That's why some software uses frame time as the FPS measurement: it represents a more real, subjective-feeling experience that can actually be measured. Have you ever played a game at 60fps that just felt wrong, like it was 45fps or less, with heavy input lag? That's the work of bad frame times.
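The conversion used above is just frame_time_ms = 1000 / refresh_hz; a trivial sketch for reference:

```c
/* Refresh rate -> per-frame budget: frame_time_ms = 1000 / refresh_hz. */
#include <stdio.h>

int main(void) {
    const double rates_hz[] = { 30.0, 60.0, 120.0 };
    for (int i = 0; i < 3; i++)
        printf("%.0f Hz -> %.2f ms per frame\n",
               rates_hz[i], 1000.0 / rates_hz[i]);  /* 33.33, 16.67, 8.33 */
    return 0;
}
```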

You're right! I did have it in windowed mode because I was going to capture some footage in OBS before I started benching. Good find!

Thx! :)

NP... you know, I had a little suspicion right when I saw the OP image, and then you just confirmed it later with the same settings. :)

PS: Sorry for any mistakes; I'm on my phone, and for some unknown reason my phone just hates the [H] forum.
 
Tried some High-settings runs with the CPU @ 5.0GHz on air, but it seems Windows is back to scanning for hardware changes, I believe. No idea why I keep getting such a wide range of numbers. Look at the GPU usage numbers between this one and my best run. Anyway, I'm going to play some games instead of benchmarks now. ;) Maybe Windows will magically fix itself tonight.

Ypyu2R5.png
 
I would start with the machine at stock to get a consistent baseline. Taking into consideration your first statement about how hard this benchmark pushes the hardware, we can expect it to also expose unstable behavior that just shouldn't happen at stock settings; the results should be consistent across several runs. Then, with those scores as a baseline, you can start your typical overclock and see how the system reacts to the changes, again running several times until you find a stable and consistent result across each test, and then proceed to overclock further. Also, be sure to check Task Manager before launching the game so you can see if the system is doing something CPU- or disk-intensive.

From personal experience, inconsistent results are a symptom of an unstable overclock.
 
I would start with the machine at stock to get a consistent baseline. Taking into consideration your first statement about how hard this benchmark pushes the hardware, we can expect it to also expose unstable behavior that just shouldn't happen at stock settings; the results should be consistent across several runs. Then, with those scores as a baseline, you can start your typical overclock and see how the system reacts to the changes, again running several times until you find a stable and consistent result across each test, and then proceed to overclock further. Also, be sure to check Task Manager before launching the game so you can see if the system is doing something CPU- or disk-intensive.

From personal experience, inconsistent results are a symptom of an unstable overclock.

Yeah, I was thinking that also. Maybe the CPU was starting to thrash at 5.0GHz; I did have to bump the voltage by a lot to get it to finish the benchmark. At 5.3GHz on air this CPU isn't stable at all, from personal experience. ;)

Worth mentioning that I always have the Turbo off and run all 8 cores flat out.
 
Nice info! I know someone who's having fun ;)

Having too much fun with this. I figured out that a setting called Temporal AA Quality doesn't seem to change in the benchmark when you choose Low, High, Crazy, etc., so turning it off yields the faster results that look like everyone else's on the net. I wonder what the default for it is?

Here is a fun run just for you: an 800MHz FX-9370. Do you remember what Brad Wardell said was the lowest speed the FX-8350 could run at and still yield favorable results? Guess I'll try 2.4GHz next, as I think that's what my old Q6600 ran at. I doubt I'll actually get to playing the game until 3am.


oOAAx7S.png
 
Nice, thanks for all the scaling runs :). Cool to see how it's CPU-bound and how each ~2GHz gives 15fps or so.
 
Nice, thanks for all the scaling runs :). Cool to see how it's CPU-bound and how each ~2GHz gives 15fps or so.

I know, right? Kinda cool! :)

Okay, well, here is a 5.0GHz run. High settings and all cores enabled. Any higher and I'd have to mess around in the BIOS; I'm just doing this from AI Suite II in Windows 10.


BciVKNd.png
 
Okay, well, here is a 5.0GHz run. High settings and all cores enabled. Any higher and I'd have to mess around in the BIOS; I'm just doing this from AI Suite II in Windows 10.

It's amazing that even that CPU is bottlenecking the game so much. Looks like they need to go back to the drawing board on this one.
 
Is there a way to download just the benchmark alone? All I see on the website is stuff for buying the actual game.
 