DX12 core scaling on FX-8370 and A10-7850K.

cageymaru

AMD blog explaining how DX12 scaling works. Neat to see AMD video cards stomping Nvidia cards. But of course the drivers from both sides are young, so expect a LOT more optimizations from both camps. The new 3DMark has a new DX12 API Overhead test for you to try. I linked the APU and FX-8370 tests.

Interesting that the FX-8370 capped out at 6 cores. Guess that makes the FX-6300 series of variants relevant especially at less than $100. ;) Then again it is nice to have a couple of cores to run other operations in the background. Wonder if consumers will see the 16 core monster that is coming next year from AMD? That would be like having 6 cores just for graphics and another 10 for other things running on your PC. :D


[Images: DX12 draw-call scaling results for the A10-7850K and the FX-8370]
 
Yeah, this is very important actually. More so with the transition from OpenGL to Vulkan though. AMD's OpenGL driver sucks major ass.
 
ATI GPUs have had crap OpenGL support going back to their RAGE series. Back then it was both crappy hardware and driver support; now it's mostly just a matter of bad drivers.

I've had to deal with their extremely bad immediate-mode rendering support for years, over a range of ATI/AMD cards.
 
The numbers in general are pretty skewed. I'm not sure how this will play out for certain parts of the gaming audience.

Why does the general public need to know what an API does? I mean, will there be a flood of people refusing to play DX12 games? The industry knows what the difference is between DX11 and DX12.

So why is all of this media attention warranted? Is it because a lot of publishers still don't give a crap and prefer to release on DX9, thus creating a serious backlash against the old API?

I mean, the benchmark thing is for total n00bs. Who the hell is going to run a benchmark on driver overhead, and to what purpose? Showing that games can be better? It only shows that, as a developer, you can do a lot more with the GPU than you get out of it now; that does not translate to "better" games.
 
Think about the adoption rates of earlier DX APIs. This is likely a campaign to raise public awareness, among both the ignorant and the informed, to the point where DX12 becomes a selling point over previous APIs. It boosts adoption rates and doubles as some positive Microsoft propaganda.
 
The reason why it's necessary is that the current APIs are made for a kind of hardware architecture that doesn't exist anymore. DX12, Vulkan, and Mantle are designed to function at a lower level: basically they work closer to the GPU's command/instruction level rather than at a high-level function level. All of the current APIs are made for fixed-function hardware, and that hasn't existed in years. Because of that, device drivers have huge overhead and have become way too complicated. The new APIs are going to simplify things a lot.

One of the reasons the Linux graphics drivers for Mesa take so long is exactly that; once Vulkan gets implemented it'll be a whole lot easier and quicker.

EDIT: That's actually the very reason why AMD's OpenGL driver for Catalyst sucks so hard. The API just isn't made for that kind of hardware.
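To put the "function level vs. lower level" point in concrete terms, here's a toy C++ sketch (purely my own illustration, not real driver or D3D12 code): the old-style path pays a thick driver function cost on every draw call, while the new-style path records commands into a buffer once and the driver just replays it.

#include <cstdio>
#include <vector>

struct DrawCmd { int mesh; int material; };

// Old-style ("function level"): every call goes through a thick driver
// function that re-validates and translates state on each draw.
void driver_draw_immediate(const DrawCmd& c) {
    // imagine expensive validation + state translation here, per call
    std::printf("draw mesh %d with material %d\n", c.mesh, c.material);
}

// New-style ("closer to the metal"): the app records a command buffer up
// front; submission is just handing the whole buffer over at once.
struct CommandBuffer { std::vector<DrawCmd> cmds; };

void record(CommandBuffer& cb, DrawCmd c) { cb.cmds.push_back(c); }

void driver_submit(const CommandBuffer& cb) {
    for (const DrawCmd& c : cb.cmds)   // thin loop, no per-call fat
        std::printf("draw mesh %d with material %d\n", c.mesh, c.material);
}

int main() {
    // old path: overhead paid on every single call
    for (int i = 0; i < 3; ++i) driver_draw_immediate({i, 0});

    // new path: record once (can even be reused next frame), submit once
    CommandBuffer cb;
    for (int i = 0; i < 3; ++i) record(cb, {i, 0});
    driver_submit(cb);
}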
 
But if anyone should force people to make DX12 games on Windows, it is Microsoft. Having said that, there is no one in the industry who doubts the advancements being made. But as you have all seen in the past, programmers are too lazy to write decent parallel code and rely on IPC to save their hide.

If you take a company like Blizzard: if they ported their portfolio to DX12/Vulkan, nearly all of their games would be playable on things like laptops, and the simpler games would work very well on tablets.

There is no doubt that DX12/Vulkan is the way to go, but MS has to force companies to comply.
 
But if anyone should force people to make DX12 games on Windows, it is Microsoft. Having said that, there is no one in the industry who doubts the advancements being made. But as you have all seen in the past, programmers are too lazy to write decent parallel code and rely on IPC to save their hide.

You can't magically make serial code parallel. DX/OGL were serial APIs. Most in-game processing is also serial. That's why you have two threads doing about 90% of the total workload, with sometimes hundreds of others doing that last 10%.
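For what it's worth, you can put rough numbers on that with Amdahl's law: if some fraction of a frame has to stay serial, extra cores only speed up the remainder. Quick sketch (the serial fractions below are just illustrative guesses, not measurements from any game):

#include <cstdio>

// Amdahl's law: speedup(n) = 1 / (serial + (1 - serial) / n)
double speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main() {
    // illustrative serial fractions only
    for (double serial : {0.90, 0.20}) {
        std::printf("serial fraction %.0f%%:\n", serial * 100.0);
        for (int cores : {1, 2, 4, 6, 8, 16})
            std::printf("  %2d cores -> %.2fx\n", cores, speedup(serial, cores));
    }
    // 90% serial caps out near 1.1x; even 20% serial flattens past ~6 cores
}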
 
You can't magically make serial code parallel. DX/OGL were serial APIs. Most in-game processing is also serial. That's why you have two threads doing about 90% of the total workload, with sometimes hundreds of others doing that last 10%.

But the question is: to how many of the existing games does that rule apply, and how many developers are just too lazy to try? AMD's compute ability on their GPUs is what I think they hope DX12 will help utilize. Sometimes it takes just a little innovation to show it can be done and just how far it can go. It won't make serialized tasks disappear, but it may help reduce reliance on them.
 
Interesting that the FX-8370 capped out at 6 cores. Guess that makes the FX-6300 series of variants relevant especially at less than $100. ;) Then again it is nice to have a couple of cores to run other operations in the background. Wonder if consumers will see the 16 core monster that is coming next year from AMD? That would be like having 6 cores just for graphics and another 10 for other things running on your PC. :D

I think you'll find that DX12 is basically optimised 'for up to' 6 cores at this point. I think that's a logical step.
 
But the question is: to how many of the existing games does that rule apply, and how many developers are just too lazy to try? AMD's compute ability on their GPUs is what I think they hope DX12 will help utilize. Sometimes it takes just a little innovation to show it can be done and just how far it can go. It won't make serialized tasks disappear, but it may help reduce reliance on them.

About 80-90% of the code was serial, simply due to how the render path worked. You're still going to have the majority of the code serial in DX12, but it will be more parallel than it was before.
 
I think you'll find that DX12 is basically optimised 'for up to' 6 cores at this point. I think that's a logical step.

Which is expected. Even with low-level coding, there are still only so many steps in the rendering process, and there's still an order in which things need to get done. So you're still going to stop scaling after a few cores.
 
You can't magically make serial code parallel. DX/OGL were serial APIs. Most in-game processing is also serial. That's why you have two threads doing about 90% of the total workload, with sometimes hundreds of others doing that last 10%.
And this is where it goes wrong: people are too stuck in their ways of doing things as they are used to. Telling me that you can't magically make serial code parallel is the same as pretending that coding can't take a different approach. If you describe the problem with that ball-and-chain mindset, you might as well do nothing and just admit failure.

Yep, 5-teraflop video cards are useless because we use serial code.

Which is expected. Even with low-level coding, there are still only so many steps in the rendering process, and there's still an order in which things need to get done. So you're still going to stop scaling after a few cores.

Can I remind you of the Oxide demo with Mantle a while back, where they also explained that the same engine would run on DX12? And that demo didn't show a 6-core limitation (it kept scaling with more cores).
 
Game engines also need to be updated a bit for DX12.
The CPU won't be a limitation with DX12, just as with Mantle and Vulkan.
Mantle finally kicked Microsoft into some action.
 
Yep, 5-teraflop video cards are useless because we use serial code.

The process of rendering is an "embarrassingly parallel" problem; that's why it was offloaded to parallel processors (GPUs) decades ago. The stuff that's left? That's mostly serial code. What DX12 does, aside from lowering driver overhead, is allow more than one CPU thread to drive the rendering process, so the GPU can use its resources more efficiently.

*Is a Software Engineer
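Here's a rough sketch of what "more than one CPU thread driving the rendering process" looks like, in plain C++ with made-up types (no real D3D12 calls): several threads each record their own command list for a slice of the scene, and everything still gets submitted in one fixed order.

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct DrawCmd { int object; };
using CommandList = std::vector<DrawCmd>;   // stand-in for a real command list

// each worker records draw commands for its slice of the scene independently
void record_slice(CommandList& out, int first, int count) {
    for (int i = 0; i < count; ++i)
        out.push_back({first + i});
}

int main() {
    const int threads = 4, objects_per_thread = 5;
    std::vector<CommandList> lists(threads);
    std::vector<std::thread> workers;

    // recording happens in parallel -- this is the part DX12 opens up
    for (int t = 0; t < threads; ++t)
        workers.emplace_back(record_slice, std::ref(lists[t]),
                             t * objects_per_thread, objects_per_thread);
    for (auto& w : workers) w.join();

    // submission is still a single, ordered step (one queue, one order)
    for (const CommandList& cl : lists)
        for (const DrawCmd& c : cl)
            std::printf("submit draw for object %d\n", c.object);
}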
 
DX12 doesn't scale to only 6 cores. Draw calls scale to 6 cores. There is a difference.

Pretty sure everything these days is going to be well threaded. Heck, the new Samsung S6 is an octa-core phone. The software industry has just about caught up.
 
The Exynos 7420 you are referring to is not an 8 core design in the sense you think it is. The ARM big.LITTLE configuration used is really 2 sets of 4 cores, one set for high performance and the other set for low power (with low performance), only one set of cores is active at a time. It's a design trying to optimize power efficiency.

Also, the challenge is being able to distribute load evenly (or as evenly as possible) across multiple threads. It isn't a simple matter of programmers just not doing so or being "lazy."
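To illustrate why "just spread it over more threads" isn't trivial, here's a toy C++ sketch (made-up workload, nothing to do with any real engine): per-object update costs vary wildly, so a naive fixed split leaves most threads idle while one grinds; a shared work queue where threads pull the next item keeps the load roughly even.

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Toy "game objects" whose update cost varies wildly -- the usual reason a
// naive even split of work across threads ends up unbalanced.
int update_cost(int object) { return (object % 7 == 0) ? 50 : 1; }

int main() {
    const int objects = 700, threads = 4;

    // a naive static split hands each thread a fixed contiguous chunk, so
    // whichever chunk holds the expensive objects finishes last while the
    // rest sit idle; a shared work queue evens things out instead:
    std::atomic<int> next{0};
    std::vector<long> work_done(threads, 0);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back([&, t] {
            // each thread pulls the next object as soon as it finishes one
            for (int i = next.fetch_add(1); i < objects; i = next.fetch_add(1))
                work_done[t] += update_cost(i);
        });
    for (auto& w : workers) w.join();

    for (int t = 0; t < threads; ++t)
        std::printf("thread %d did %ld units of work\n", t, work_done[t]);
}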
 
The Mobile Devices subforum is a little lower down, guys. But I agree that DX12 is not limited to 6 cores/threads. That would make no sense, especially if it is sticking around for a while.
 
All the 6-8 core CPUs out since like 2010 will now jump up in price. The age of multicore/multi-CPU has begun. Everything will be multi in the future: video cards, cores/processors, teamed VLANs/network cards, multiple monitors, multiple RAID drives. That's the only way to go, as things get too hot when the frequency is pushed too far on one core. Bottlenecks of the past will be gone. It used to be either the CPU or the video card bottlenecking the system. Now it's limitless if they get the multithreading right. ;)
 
DX12 doesn't scale to only 6 cores. Draw calls scale to 6 cores. There is a difference.

Pretty sure everything these days is going to be well threaded. Heck, the new Samsung S6 is an octa-core phone. The software industry has just about caught up.

Well, when you have 8 or 16 cores and performance effectively drops off/plateaus at 6... what's the difference? It doesn't really matter what is or isn't optimised for 6 cores.

DX12: up to 6 cores, effectively; everything past that is diminishing returns. However, I'll take it, as it's a darn sight better than the current 1-2 cores with my other 6 sitting there asleep. Can't expect miracles.

I'd still hold off on the champagne if you are hoping for all games to be fully multicore/multithreaded within the next two years.

This isn't going to happen overnight. We may still not get the full benefit till DX13...
 
The process of rendering is an "embarrassingly parallel" problem; that's why it was offloaded to parallel processors (GPUs) decades ago. The stuff that's left? That's mostly serial code. What DX12 does, aside from lowering driver overhead, is allow more than one CPU thread to drive the rendering process, so the GPU can use its resources more efficiently.

*Is a Software Engineer

How it used to work doesn't matter any more, since Mantle has already made it into different engines.
What DX12 does is allow you to make full use of the GPU instead of restricting it the way DX11/OpenGL do.
 
All the 6-8 core CPUs out since like 2010 will now jump up in price. The age of multicore/multi-CPU has begun. Everything will be multi in the future: video cards, cores/processors, teamed VLANs/network cards, multiple monitors, multiple RAID drives. That's the only way to go, as things get too hot when the frequency is pushed too far on one core. Bottlenecks of the past will be gone. It used to be either the CPU or the video card bottlenecking the system. Now it's limitless if they get the multithreading right. ;)

My body is ready
 
The Exynos 7420 you are referring to is not an 8 core design in the sense you think it is. The ARM big.LITTLE configuration used is really 2 sets of 4 cores, one set for high performance and the other set for low power (with low performance), only one set of cores is active at a time. It's a design trying to optimize power efficiency.

Sorry to derail the thread, but yes, in part you are right. Since the Exynos 5433, though, both sets of cores can be seen and used by the OS as active cores, thanks to Heterogeneous Multi-Processing (HMP). The Exynos 5420 can also utilize both sets of cores, but in a less efficient way: the 5420 can't do HMP, only asymmetrical multiprocessing via Global Task Scheduling.

The Snapdragon 810 also fully supports HMP with 4+4 (A57+A53) cores.
 
I think both those processors you mentioned will be hopelessly outdated by the time we really see much benefit out of DirectX 12. Just because the API is being finalized doesn't mean games optimized for DirectX 12 will follow immediately or that they'll all be written to optimally use 8+ cores.

It's going to take a while before we see anything like that, even if Unreal Engine ships a DX12 rendering pipeline the second DirectX 12 is released. AAA titles don't take two weeks from start to finish, and indie games aren't going to have the budget to write all their rendering code in DX12, especially since there probably won't be any need to.
 
Well, when you have 8 or 16 cores and performance effectively drops off/plateaus at 6... what's the difference? It doesn't really matter what is or isn't optimised for 6 cores.

DX12: up to 6 cores, effectively; everything past that is diminishing returns. However, I'll take it, as it's a darn sight better than the current 1-2 cores with my other 6 sitting there asleep. Can't expect miracles.

I'd still hold off on the champagne if you are hoping for all games to be fully multicore/multithreaded within the next two years.

This isn't going to happen overnight. We may still not get the full benefit till DX13...

It literally says on the slide that the performance plateau is on the application side of things.
 