FrgMstr

Just Plain Mean
Staff member
DX11 vs DX12 AMD CPU Scaling and Gaming Framerate - One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games released to test, we spend a bit of time getting to the bottom of what DX12 might be able to do for you. And a couple of sentences on Async Compute.
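For readers new to the API side of this: the "distribute workloads on the CPU side" point comes from DX12 letting an engine record command lists on many threads and submit them together, instead of funneling nearly everything through one driver thread the way DX11 largely does. The sketch below is not from the article or from any shipping game; it is a minimal, hypothetical D3D12 example (thread count, structure, and the empty command lists are my own choices) that only shows the mechanism.

```cpp
// Minimal sketch: per-thread D3D12 command list recording, submitted together.
// Not the article's benchmark code; command lists here are left empty so the
// example stays self-contained and compilable (link against d3d12.lib).
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter (feature level 11_0 is the minimum).
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // One direct (graphics) queue receives everything the worker threads record.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // Each worker thread gets its own allocator + command list; unlike DX11,
    // recording needs no device-wide serialization, so it scales with cores.
    const int kThreads = 4; // hypothetical thread count for illustration
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(kThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
    for (int i = 0; i < kThreads; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Record in parallel. A real engine would record draw calls here; this
    // sketch just closes each list so it becomes executable.
    std::vector<std::thread> workers;
    for (int i = 0; i < kThreads; ++i)
        workers.emplace_back([&, i] { lists[i]->Close(); });
    for (auto& w : workers) w.join();

    // Submit all per-thread command lists to the GPU in a single call.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());

    // Wait for the GPU to finish before the objects are released.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
    return 0;
}
```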
 
Really liked the article, and I agree it has potential. Now the question is whether developers will be able to extract that performance from it.

The part where you explain that Async isn't the main performance generator for AMD is quite interesting. It makes me wonder whether AMD would have shown us equally strong performance on DX11 with a better driver development budget than they had, but well, hindsight is 20/20.
 
Nice read. The conclusion was a mixed bag. My 2600K does great for my needs, but I'd really like to upgrade. I don't need many of the other platform improvements, so it comes down to gaming speed. I guess I can put more into a nice GPU, though, and upgrade the CPU next year. :) So, it'll be a nice upgrade no matter what.
 
Good read!
DX12 is definitely heading in the right direction. The whole async argument people had/have is pointless; the massive gains are from the changes in DX12 itself. There is very little difference between async on/off, less than 2 FPS or the normal margin of error. AoTS is the only DX12 game I have and it totally changed that game for me. I hope it's the same for all DX12 games and breathes new life into my FX-8120.
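To make the async on/off comparison concrete: at the API level, "async compute" just means submitting compute work to a second, compute-only queue that the GPU can service while the graphics queue leaves shader units idle. A minimal, hypothetical D3D12 sketch (not from the article or AoTS) showing the two queues:

```cpp
// Sketch: a graphics queue plus a separate compute queue, which is all that
// "async compute" means at the D3D12 API level. Hardware that can run the two
// concurrently gains; hardware or drivers that serialize them show little
// difference with async on vs. off.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Normal "direct" queue: draws, copies, and compute are all allowed here.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: work submitted here may overlap graphics work
    // when the hardware supports concurrent execution.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return (gfxQueue && computeQueue) ? 0 : 1;
}
```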
 
Great article. It is really fun to mess around with the core clocks on those processors and see the difference in performance in AoTS. You should have OC'd that FX-8370 to 5.0GHz for kicks and told us how hot your room got! :D For science of course! I didn't think to try it myself, but I wonder if VSR would have allowed you to run the benchmark at 4K? I'll try in a few and let you know.
 
Great article. It is really fun to mess around with the core clocks on those processors and see the difference in performance in AoTS. You should have OC'd that FX-8370 to 5.0GHz for kicks and told us how hot your room got! :D For science of course! I didn't think to try it myself, but I wonder if VSR would have allowed you to run the benchmark at 4K? I'll try in a few and let you know.
I was having trouble getting the system stable so I pussed out at 4.3GHz. But the 2GHz spread showed me an awful lot. The Intel is done at 1.2GHz and 4.5GHz with Fury X, 980Ti, 7970, and 670. Putting the graphs together now.
 
I bet the biggest gains would have been the 7970 on the FX-8370. Too bad the system crapped out!
 
Why didn't you try a 290X2 / 295X2 / 390X2 / 2x290(X) / 2x390(X)?
And what about turning asynchronous shading on/off?
 
Why didn't you try a 290X2 / 295X2 / 390X2 / 2x290(X) / 2x390(X)?
And what about turning asynchronous shading on/off?

I thought you were doing all those cards? Sorry, I guess we miscommunicated.

Reading is fundamental.
 
I mean testing CPU-bound scenarios with multi-adapter and then comparing with the same methodology: underclocked, stock clocks, and overclocked, and then disabling some modules.

Just to see how multi-adapter behaves with DX12 in a CPU-bound game.
I thought you were doing all that work this week!
 
I think you are confused and think I am someone else, or you are telling me it is too much to be published quickly.

But I am only suggesting some scenarios that haven't been tested yet.
Oh, you want me to test with 10 new sets of variables and cards that I do not have here. Got it. :) Not really sure what CF and SLI are supposed to prove in a fully CPU-limited scenario, but I might look at it if I have extra time. I have a set of 980 Ti cards here that I can SLI, but I am not buying another Fury X card.

And if you would READ rather than just look at pictures, you would have seen I talked about Async. As said, reading is fundamental.
 
This should really help XB1 given its slow 8-core CPU. I would be curious how this stacks up to Sony's proprietary API on their hardware.

In the PC market, I really want to see how Pascal GPUs are going to handle the API. I'm not sure why we have such a focus on only async compute. Perhaps it's currently all we have to test, or maybe people like the underdog (AMD) winning. What I'd be interested in knowing, personally, is how something like async compute stacks up against the DX12 options found only on Maxwell GPUs, like conservative rasterization and rasterizer ordered views. If it's not possible to get everything on a reasonably-sized die that doesn't cost $1,000 for a non-Titan, then whichever tech gives us the higher frame rate is what's really going to matter in the end, and is something that both manufacturers should bow to. I suppose to most computer owners, being CPU capped is an issue, but this is more of an enthusiast site after all, and most of us are running CPUs and resolutions that GPU cap us.
 
This should really help XB1 given its slow 8-core CPU. I would be curious how this stacks up to Sony's proprietary API on their hardware.

In the PC market, I really want to see how Pascal GPUs are going to handle the API. I'm not sure why we have such a focus on only async compute. Perhaps it's currently all we have to test, or maybe people like the underdog (AMD) winning. What I'd be interested in knowing, personally, is how something like async compute stacks up against the DX12 options found only on Maxwell GPUs, like conservative rasterization and rasterizer ordered views. If it's not possible to get everything on a reasonably-sized die that doesn't cost $1,000 for a non-Titan, then whichever tech gives us the higher frame rate is what's really going to matter in the end, and is something that both manufacturers should bow to. I suppose to most computer owners, being CPU capped is an issue, but this is more of an enthusiast site after all, and most of us are running CPUs and resolutions that GPU cap us.
Nexus, it looks like AMD is turning the tables. Microsoft has been raking in $ for as long as we have been using their OSes, never giving much thought to graphics innovation. Now it seems like they have taken a look at the gaming industry. Considering that AMD hardware is in all gaming consoles, it looks like we are in for some serious competition $-wise. I am no visionary, but the giant may bleed. No self-driving cars are running our streets anytime soon. :D
 
Nexus, it looks like AMD is turning the tables. Microsoft has been raking in $ for as long as we have been using their OSes, never giving much thought to graphics innovation. Now it seems like they have taken a look at the gaming industry. Considering that AMD hardware is in all gaming consoles, it looks like we are in for some serious competition $-wise. I am no visionary, but the giant may bleed. No self-driving cars are running our streets anytime soon. :D


Time will tell, but I honestly think you may be right. Most devs who are working on their own engines focus on console hardware first and foremost. I don't know what the programmatic implementation of these Nvidia-specific features is like, but if it's too time-consuming, we may not see many, if any, engines taking advantage of them. Despite losing the consoles to AMD, Nvidia still has a glut of the PC market share, so they still have some leverage; however, leverage in only the PC market, given the aforementioned focus the devs typically have, may very well not be enough in the end.

I'm the kind of guy who has no brand loyalty. You have to earn my dollar with every purchase decision I make. If AMD has truly out-stepped Nvidia, then I have a couple of 980 Ti's that will be up for sale. No matter the outcome, this will be interesting to watch as it unfolds. *grabs popcorn*
 
Does anyone know of a discussion on DX12 making its way to Windows 7? Last I heard it wasn't happening.

If this whole Quantum Break situation has proved anything, it makes me wonder whether Microsoft will push more DX12 exclusives to force people onto Windows 10.

I'm interested in getting DX12 to try out the new API. I'm just not sure that I want to deal with Windows 10 along with it.

DX12 is and always will be W10+ only. It's called progress; just make the jump already. It's a good OS and games run fine on it.
Everyone has always bitched about every new version of Windows and then they get used to it. OK, ME and Vista were not that great; I personally never had issues with either, but a lot of people did.

No self-driving cars are running our streets anytime soon.
Yeah! What the hell is that?! Hey, look at our new card! It goes in your car... wtf?!

No matter the outcome, this will be interesting to watch as it unfolds. *grabs popcorn*
very!
 
Holy cow... when was the last time you had an AMD CPU on the test bench?? I had to do a double take when I read the header. The article is VERY interesting though... I am looking forward to the next parts.
 
Great read again. It still doesn't convince me that DX12 is quite there yet, but I'm sold on the progress being made.

Looking forward to the Intel lineup. Too bad you guys couldn't have made one article containing the two!
 
Not sure how much gain one can get with Async Compute turned on when CPU bound in these tests - that may explain the mere 3%. In GPU-bound scenarios I would be interested to see if that changes. Still, since this title is not really all that graphically bound, it may be a moot point as well.

I was able to get Fraps to work with AIDA64 to record FPS in DX12 titles - unless there is something inaccurate about it, that is a means to extract frame rate data from DX12 titles. AIDA records not only the Fraps data but also GPU temperature, CPU usage (even per core), frequencies, etc. It can save it all to a file that opens easily in Excel for making graphs and for editing.

LOL at seeing an AMD CPU being used - actually nice to see, since some of us have had good experiences with them.
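As a rough illustration of what can be done with such a log outside of Excel: a small sketch that computes average and minimum FPS from a CSV export. The two-column "time,fps" layout is an assumption made for illustration, not AIDA64's actual export format; adjust the column index to whatever the tool writes out.

```cpp
// Hypothetical post-processing of an exported FPS trace.
// Assumes a "time,fps" CSV with one header row; the FPS column index is an
// assumption and may need to change for a real AIDA64/Fraps export.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <numeric>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char* argv[])
{
    if (argc < 2) { std::cerr << "usage: fpsavg <log.csv>\n"; return 1; }

    std::ifstream in(argv[1]);
    std::string line;
    std::getline(in, line);                    // skip the header row
    std::vector<double> fps;
    while (std::getline(in, line)) {
        std::stringstream row(line);
        std::string time, value;
        if (std::getline(row, time, ',') && std::getline(row, value, ','))
            fps.push_back(std::stod(value));   // second column assumed to be FPS
    }
    if (fps.empty()) { std::cerr << "no samples found\n"; return 1; }

    const double avg = std::accumulate(fps.begin(), fps.end(), 0.0) / fps.size();
    const double min = *std::min_element(fps.begin(), fps.end());
    std::cout << "samples: " << fps.size()
              << "  avg FPS: " << avg
              << "  min FPS: " << min << "\n";
    return 0;
}
```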
 
TYVM for the hard work.

Now, I wonder if the same thing would happen if we replaced the GPU-bound scenario with an AMD card and the CPU-bound scenario with Nvidia (e.g., 280X vs Titan X)?
 
This was one of the most interesting articles, if not the most interesting, I have read in a while.
The difference between DX11 and 12 was significant. I'm very excited to see what Zen and Polaris can do with DX12.
I think AMD and MS have a winner with DX12, but I hope we will see some Vulkan comparisons in the future; I don't want MS to get too complacent with their new API.
Thanks for putting in the "Hard" work.
 
Thanks for the bit on Async Compute. There is so much misinformation and more than a little color-tinted, opinionated FUD running around that it's hard to sort through it all. I am very much looking forward to some reviews of proper GPU-limited DX12 games. I love my 970 and don't plan on replacing it any time soon; same with my OC'd 2500K. However, this is the first time in a while where DX support has actually differed between team red and green to any meaningful extent. How it plays out in GPU-intensive games is going to be very interesting.
The other question in my mind is: when will Nvidia release cards with an async-capable architecture, and by then, will games actually be able to take advantage of it such that AMD will have had a tech advantage?
 
Well, this answers the question of why Fury X gets an ungodly performance increase when you hit the DX12 button: it's CPU-limited even at 4.3 GHz!

But man, there is a lot of power there to be tapped, if you're willing to put your nose to the grindstone.

I'm not sure Ashes is a fair comparison of the absolute capabilities of both GPU architectures (there's some evidence it started out AMD-centric), but it is a shining statement of what's possible.
 
I'm not sure Ashes is a fair comparison of the absolute capabilities of both GPU architectures (there's some evidence it started out AMD-centric), but it is a shining statement of what's possible.
I would wholeheartedly agree with this statement. DX12 has some exciting stuff going on, at least in some situations.
 
This is exactly in line with my expectations. Good confirming information!

I'd be curious how much of a difference DX12 makes on fewer-core, lower-clocked chips as well, though.

IMHO, the main push behind DX12 has been to squeeze more out of mobile systems, which are lower clocked due to battery and cooling considerations.


Adding a mobile AMD APU, and maybe a laptop with a dual-core mobile Haswell chip and a mobile Nvidia GPU, into the mix would be very interesting IMHO.
 
This is exactly in line with my expectations. Good confirming information!

I'd be curious how much of a difference DX12 makes on fewer-core, lower-clocked chips as well, though.

IMHO, the main push behind DX12 has been to squeeze more out of mobile systems, which are lower clocked due to battery and cooling considerations.


Adding a mobile AMD APU, and maybe a laptop with a dual-core mobile Haswell chip and a mobile Nvidia GPU, into the mix would be very interesting IMHO.

Or alternatively, use desktop Skylake and also Broadwell or Haswell without a discrete GPU.
Broadwell would be interesting because it has eDRAM, and Skylake as well due to its DX12 support.
It gives more of a feel for what they can do, and the future of APU gaming will arrive when more powerful designs replace low-tier discrete GPUs.
Cheers
 
Thanks for the bit on Async Compute. There is so much misinformation and more than a little color-tinted, opinionated FUD running around that it's hard to sort through it all. I am very much looking forward to some reviews of proper GPU-limited DX12 games. I love my 970 and don't plan on replacing it any time soon; same with my OC'd 2500K. However, this is the first time in a while where DX support has actually differed between team red and green to any meaningful extent. How it plays out in GPU-intensive games is going to be very interesting.
The other question in my mind is: when will Nvidia release cards with an async-capable architecture, and by then, will games actually be able to take advantage of it such that AMD will have had a tech advantage?

Don't forget that async is very circumstantial; with higher workloads it will scale better. Async fills in when the shaders are not doing anything, and you can't really anticipate it - there is no rule on how often that happens, just that it is less frequent with smaller workloads and more frequent with bigger ones.

Some people tested async at 4K resolutions and that was a rather scary number... Don't forget that async in this game will not behave the same in other games unless they are using the Nitrous engine. When talking about DX12, this is a generalization rather than fact for everything regarding DX12.
 
This is exactly in line with my expectations. Good confirming information!

I'd be curious how much of a difference DX12 makes on fewer-core, lower-clocked chips as well, though.

IMHO, the main push behind DX12 has been to squeeze more out of mobile systems, which are lower clocked due to battery and cooling considerations.


Adding a mobile AMD APU, and maybe a laptop with a dual-core mobile Haswell chip and a mobile Nvidia GPU, into the mix would be very interesting IMHO.

I had an overclocked Q6600, and Mantle's performance boost was absolutely enormous in BF4 - the framerate flat out doubled or tripled in many cases. On my 6700K, DX11 and Mantle are both about the same. Same GPU.

Some genuinely weak CPUs would be interesting to see.
 
I just remembered I still have my old FX-4100 kicking around. I should throw that back in and test it. Think I'll do that later today or tomorrow. FX-4100 + 280X vs FX-8120 + 280X, maybe stock vs OC too...
 
Great article! Looks to me like DX12 is a go!

....I'd be curious how much of a difference DX12 makes on fewer-core, lower-clocked chips as well, though...

Fewer cores will erase the advantage of DX12, at least the main advantage reported in that FPS comparison of 42 to 60, a 43% jump.

The one thing that scenario might point out, if tested on a Fury X since those are scaling up with DX12, is how much of the performance is coming from the extra cores and how much is coming from elsewhere in DX12.

That actually could be interesting.

Kyle, on that AMD CPU, can you disable half the cores and run the exact same tests? The performance drop will show how much the core-spreading ability of DX12 is helping. That way we can separate out the performance coming from other aspects of DX12.

The Nvidia card dropping in performance on DX12 makes me think their drivers are not properly utilizing DX12 yet. Time will tell.
 