290/290x and DX12/Vulkan?

jamesgalb

Gawd
Joined
Feb 11, 2015
Messages
565
What is a good source to find out what DX12/Vulkan features the 290/290x will and will not have?

I believe I saw something about it supporting 'some' new DX12/Vulkan features while being hardware-limited from others, and that its large number of compute units will benefit greatly from the new APIs' asynchronous compute... Any good place to check out information on this?
 
Ah... here is some of the info I was looking for.

http://community.amd.com/community/...g/2015/04/22/major-new-features-of-directx-12

Has info on AMD's ACEs (Asynchronous Compute Engines) and how DirectX 12 will finally utilize them.

Now the thing that stuck out to me is that I remember reading somewhere that the 290x has more asynchronous compute engines than a GTX 980. I'd also like to find the Titan X's asynchronous compute engine total, but first I have to find that source for the 980's total... I'm very interested in seeing if this forward-thinking architecture will actually boost the 290x more than the GTX 980 going into the next generation of APIs...

But I also remember reading that AMD's GCN/290x won't utilize all abilities of DirectX 12 because of hardware limitations, so I'm interested in finding more information about that as well.
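For anyone wondering what "utilizing the ACEs" looks like from the programmer's side: in D3D12, async compute isn't an ACE-specific switch, it's just a second command queue of type COMPUTE that the driver is free to schedule alongside graphics work. A minimal sketch, assuming an already-created ID3D12Device named `device` (the function name here is just for illustration):

```cpp
// Minimal sketch: async compute in D3D12 is a second command queue of
// type COMPUTE; the driver can schedule its work on the GPU's compute
// units (AMD's ACEs) alongside work on the graphics queue.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics (DIRECT) queue: accepts draw, compute, and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Compute queue: compute/copy only. Work submitted here may run
    // concurrently with the graphics queue on hardware that supports it.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```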
 
The latest postings say the 290/290x is DX12 Tier 3 (full bindless resources).

That means FULL DX12 support in hardware.
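Once you have Win 10 and a DX12 driver, you can query the binding tier yourself rather than trusting forum posts. A quick sketch, assuming an already-created ID3D12Device (the helper name is made up):

```cpp
// Sketch: query the resource binding tier D3D12 reports for the device.
// Tier 3 is the "full bindless" level discussed above.
#include <d3d12.h>
#include <cstdio>

void PrintBindingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options))))
    {
        switch (options.ResourceBindingTier)
        {
        case D3D12_RESOURCE_BINDING_TIER_1:
            puts("Resource binding: Tier 1"); break;
        case D3D12_RESOURCE_BINDING_TIER_2:
            puts("Resource binding: Tier 2"); break;
        case D3D12_RESOURCE_BINDING_TIER_3:
            puts("Resource binding: Tier 3 (bindless)"); break;
        }
    }
}
```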
 
AMD, during their conference call with investors, mentioned that Win 10 is going to launch in late July, just in time for the back-to-school season. So we will find out then whether the R9 290x supports all of the features of DX12. I read the same thing that The_Mac said some months ago.

From what Nvidia said around the time that they did the DX12 demo with MS last year, their current cards won't support the full DX12 toolset. They will run DX12 games just fine, but a few features won't work on them. Considering that most games don't utilize the entire DX11 toolset, this is understandable. I'm sure that Pascal will support the full DX12 toolset though. Would be crazy of them not to.
 
I read the same thing that The_Mac said some months ago.

From what Nvidia said around the time that they did the DX12 demo with MS last year, their current cards won't support the full DX12 toolset. They will run DX12 games just fine, but a few features won't work on them.

Supposedly, any unsupported features can be emulated in software, so the feature will still work, just slowly.

Although to be honest, I'm not sure how you emulate unlimited bindless resources with limited ones.

lol
 
I'm very interested in seeing if this forward-thinking architecture will actually boost the 290x more than the GTX 980 going into the next generation of APIs...


http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/3


In this one example, with a very beta-stage demo using a very beta-stage DX12 build and very beta-stage drivers, the 290X gets a much larger performance boost from DX12 than the 980 does (417% vs. 150%, respectively).


Still, the 980 is around 55% faster than the 290X when 4 CPU cores are in use under DX12 (222% faster under DX11), while using a bit less power.


The short of the story is that, sadly, the 290X is old news and is starting to show its age. The good news is that the 390/390X should be coming soon. I am really rooting for AMD to bring us a true winner on all the critical fronts: performance, power consumption, heat output, and price tag.
 
http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/3


In this one example, with a very beta-stage demo using a very beta-stage DX12 build and very beta-stage drivers, the 290X gets a much larger performance boost from DX12 than the 980 does (417% vs. 150%, respectively).


Still, the 980 is around 55% faster than the 290X when 4 CPU cores are in use under DX12 (222% faster under DX11), while using a bit less power.


The short of the story is that, sadly, the 290X is old news and is starting to show its age.
The good news is that the 390/390X should be coming soon. I am really rooting for AMD to bring us a true winner on all the critical fronts: performance, power consumption, heat output, and price tag.

It's currently one of the fastest single-GPU cards on the market (the 980 and Titan X beat it, the 970 is arguable, especially at 1440p, and the 780 Ti is no longer sold new).

It stands to gain more performance than any other card on the market over the next year.

It has a full feature set for the new APIs.

How the hell is that 'old news and showing its age'?
 
It's currently one of the fastest single-GPU cards on the market (the 980 and Titan X beat it, the 970 is arguable, especially at 1440p, and the 780 Ti is no longer sold new).

It stands to gain more performance than any other card on the market over the next year.

It has a full feature set for the new APIs.

How the hell is that 'old news and showing its age'?

OK... compare specs. The 980 and Titan X beat it with a narrower memory bus and lower power draw. The 970 is around the same (at 1080p and even into 1440p territory, depending on the game and settings) with half the memory bus and 40% less power draw. But all of that comes at a price, as it's impossible to beat the performance-per-dollar the 290X gives at current pricing.

Don't get me wrong: the 290X is still a fine choice for being older tech, but just as it shows impressive gains with an LLAPI such as DX12, the nVidia parts offer really good gains as well... and the 290X is getting left in the rearview mirror in these early LLAPI tests. It's past time for the 390 and 390X to arrive. The market needs them. We consumers need them.
 
OK... compare specs. The 980 and Titan X beat it with a narrower memory bus and lower power draw. The 970 is around the same (at 1080p and even into 1440p territory, depending on the game and settings) with half the memory bus and 40% less power draw. But all of that comes at a price, as it's impossible to beat the performance-per-dollar the 290X gives at current pricing.

Don't get me wrong: the 290X is still a fine choice for being older tech, but just as it shows impressive gains with an LLAPI such as DX12, the nVidia parts offer really good gains as well... and the 290X is getting left in the rearview mirror in these early LLAPI tests. It's past time for the 390 and 390X to arrive. The market needs them. We consumers need them.

Star Swarm is a single engine test made by Stardock, meant to show how DX12 improves batch submission of API calls. It has almost no relevance to expectations of general performance in next-generation DX12 games with enthusiast-level graphics.

There is no reliable evidence of what DirectX 12 will or won't do for AMD or Nvidia; saying otherwise is mere speculation. Star Swarm is there to demonstrate a specific scenario for a large-scale RTS game that's doing something unique with numbers of units and AI.

These results don't tell us how either will perform in the next Elder Scrolls or GTA 6 or something people will actually play that pushes their graphics hardware.
 
Supposedly, any unsupported features can be emulated in software, so the feature will still work, just slowly.

Although to be honest, I'm not sure how you emulate unlimited bindless resources with limited ones.

lol

You don't; there's no mechanism to just fill in the blanks like that. It would undermine the point of the API anyway.

You may be able to use WARP, but it's all or nothing, and not something you really want people playing your game on.
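For reference, spinning up that all-or-nothing WARP path in D3D12 looks roughly like this. A sketch only, assuming the Windows 10 SDK headers; definitely not something you'd ship a game on:

```cpp
// Sketch: creating a D3D12 device on WARP, the software rasterizer.
// As noted above, this is all-or-nothing software emulation: correct
// and feature-complete, but far too slow for real gaming.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Device> CreateWarpDevice()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    // Ask DXGI for the WARP adapter instead of a hardware GPU.
    ComPtr<IDXGIAdapter> warpAdapter;
    factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&device));
    return device;
}
```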
 
Read a couple of good articles on async shaders and multi-threaded command buffer recording with DX12. AMD has been quite forthcoming about DX12.

http://www.developer-tech.com/news/2015/apr/23/amd-opens-about-directx-12s-key-benefits/



After reading the first article, I found this second article surprisingly VERY interesting. Surprising because I thought I was reading about the Xbox One, but it turns out there is more to the story of DX12 than we first expected:

http://www.developer-tech.com/news/2015/apr/24/directx-12-unlocking-xbox-ones-potential/

Lots of good info here, like the upcoming games The Witcher 3 & Batman: Arkham Knight seemingly getting a DX12 upgrade, which is interesting and VERY worth the wait if it works out.

However, this quote in particular caught my attention :

Wardell said at GDC: "I've had a lot of meetings with Microsoft, AMD, and a little bit of NVIDIA and Intel - they really need to hit home the fact that DirectX 12, Vulkan, and Mantle, allow all of the cores of your CPU to talk to the video card simultaneously".

"But everyone's really iffy about that, because that means acknowledging that for the past several years, only one of your cores was talking to the GPU, and no one wants to go 'You know by the way, you know that multi-core GPU? It was useless for your games.'"

hmm... indeed.
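The quote is about command buffer recording. In D3D12 the pattern is: every worker thread records its own command list against its own allocator, then the main thread submits the whole batch at once, instead of funneling everything through one DX11-style immediate context. A rough sketch, assuming `device` and `queue` already exist and omitting pipeline-state setup and fence synchronization:

```cpp
// Sketch of multi-threaded command recording in D3D12: each worker
// thread records into its own command list (one allocator per thread),
// then the main thread submits all of them to the queue in one batch.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < numThreads; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        // Each thread records its slice of the frame independently --
        // no global driver lock like DX11's immediate context.
        workers.emplace_back([list = lists[i].Get()] {
            // ... set pipeline state / record draw calls here ...
            list->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Single batched submission from the main thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```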
 
Star Swarm is a single engine test made by Stardock, meant to show how DX12 improves batch submission of API calls. It has almost no relevance to expectations of general performance in next-generation DX12 games with enthusiast-level graphics.

There is no reliable evidence of what DirectX 12 will or won't do for AMD or Nvidia; saying otherwise is mere speculation. Star Swarm is there to demonstrate a specific scenario for a large-scale RTS game that's doing something unique with numbers of units and AI.

These results don't tell us how either will perform in the next Elder Scrolls or GTA 6 or something people will actually play that pushes their graphics hardware.

Re-read my initial post... I stated that it was ONE test using very beta, well, everything. You're defending the aging 290X like an LLAPI (such as DX12 or Vulkan) is going to sprinkle some kind of magic fairy dust on it and allow it to kick the asses of the 980 and Titan X. Got news for ya... not happening. Just as the 290X shows admirable gains from an LLAPI, so do the nVidia offerings. The Maxwell architecture is superior in every way, and that is just a fact we all have to accept and live with.

Let's all keep our fingers crossed that the 390/390X will change all that, and hopefully very soon.
 
The 290x probably sees higher gains (percentage-wise) jumping to DX12 because AMD's DX11 driver simply isn't as optimized as nVidia's. Maybe Omega phase 2 will close the gap a bit, but it looks like AMD is banking heavily on DX12's efficient-by-design approach.

They're in a tough spot, optimizing the driver for every game is costly and basically defeats the purpose of a generic API - and it will be better for both camps if software begins to move away from that. It just sucks to see AMD wasting performance with their existing DX11 driver.
 
http://community.amd.com/community/...g/2015/04/22/major-new-features-of-directx-12

Bottom of the page lists all the AMD DX12 "cards".

Before we part ways, you might be interested to know which AMD products are compatible with DirectX® 12. Presuming you’ve installed Windows® 10 Technical Preview Build 10041 (or later) and obtained the latest driver from Windows Update, here’s the list of DirectX® 12-ready AMD components. We think you’ll agree that it’s an excitingly diverse set of products!


AMD Radeon™ R9 Series graphics
AMD Radeon™ R7 Series graphics
AMD Radeon™ R5 240 graphics
AMD Radeon™ HD 8000 Series graphics for OEM systems (HD 8570 and up)
AMD Radeon™ HD 8000M Series graphics for notebooks
AMD Radeon™ HD 7000 Series graphics (HD 7730 and up)
AMD Radeon™ HD 7000M Series graphics for notebooks (HD 7730M and up)
AMD A4/A6/A8/A10-7000 Series APUs (codenamed “Kaveri”)
AMD A6/A8/A10 PRO-7000 Series APUs (codenamed “Kaveri”)
AMD E1/A4/A10 Micro-6000 Series APUs (codenamed “Mullins”)
AMD E1/E2/A4/A6/A8-6000 Series APUs (codenamed “Beema”)
 
The 290x probably sees higher gains (percentage-wise) jumping to DX12 because AMD's DX11 driver simply isn't as optimized as nVidia's. Maybe Omega phase 2 will close the gap a bit, but it looks like AMD is banking heavily on DX12's efficient-by-design approach.

They're in a tough spot, optimizing the driver for every game is costly and basically defeats the purpose of a generic API - and it will be better for both camps if software begins to move away from that. It just sucks to see AMD wasting performance with their existing DX11 driver.

What you write about is highly dependent on which games and which processor are used; not just that, but batch count for games under DX12 is also a major contributing factor. The real winners are people with AMD CPUs, more than anything else.
 
Star Swarm is a single engine test made by Stardock, meant to show how DX12 improves batch submission of API calls. It has almost no relevance to expectations of general performance in next-generation DX12 games with enthusiast-level graphics.

There is no reliable evidence of what DirectX 12 will or won't do for AMD or Nvidia; saying otherwise is mere speculation. Star Swarm is there to demonstrate a specific scenario for a large-scale RTS game that's doing something unique with numbers of units and AI.

These results don't tell us how either will perform in the next Elder Scrolls or GTA 6 or something people will actually play that pushes their graphics hardware.

Their engine is built for Mantle, err, DX12, so the showcase just shows that with current hardware, DX12, and Win 10 you can get a 400-600% improvement in fps (10fps up to 40-60fps).
No need to upgrade for a boost, and if you have AMD you have full support for DX12.
I expect Nvidia has already pressured game developers not to fully support DX12 in their new DX12 engines to disadvantage AMD, as that is their known tactic year after year.
Surprised that Kyle isn't writing much about that.
 
It is easy, guys: DX12 should be the "same" as Mantle, and putting in bloat to appease one vendor or the other defeats the purpose of making a close-to-metal API. So if people are referring to instructions not being supported, that would make the API a pain to use for all programmers.

The premise of a close-to-metal API is that you let the game developers write their own code and have them control all of the actions that would normally happen behind the scenes in DX11 (memory allocation and so on).
You guys see how this would not work if DX12 became bloated and more like DX11?
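To make the memory-allocation point concrete, here is a sketch of what "the developer controls allocation" means in D3D12: the application describes the heap and the resource itself instead of letting the driver decide, which is what DX11 does behind the scenes. Assumes an existing ID3D12Device; the helper name is illustrative:

```cpp
// Sketch: explicit resource allocation in D3D12. The application, not
// the driver, chooses the heap type and resource layout -- the kind of
// control DX11 hides behind its resource-creation calls.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> CreateUploadBuffer(ID3D12Device* device, UINT64 size)
{
    // The app picks the heap: UPLOAD = CPU-writable, GPU-readable memory.
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_UPLOAD;

    // The app describes the resource layout explicitly.
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = size;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ,
                                    nullptr, IID_PPV_ARGS(&buffer));
    return buffer;
}
```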
 