DX12 to be announced on March 20th

heh what about Maxwell ?

and the ARM that will offload the CPU ?:D
Microsoft said it's up to AMD, Intel, and Nvidia to provide drivers that will bring DX12 support to existing hardware.

Quite possible Nvidia will choose to use that ARM core within their implementation on Maxwell-based cards.
 
Oh fun, let's all put out our own APIs instead of working together and making one awesome API? Anybody here remember back in the 90's when Glide, QuickDraw, OpenGL, and D3D/DX made PC gaming a PITA? Now let's all do it again... :(
 
Oh fun, let's all put out our own APIs instead of working together and making one awesome API?
Uh... that's the exact opposite of what just happened here...

Microsoft, Intel, Nvidia, AMD, and Qualcomm all just came together under the DirectX 12 API. Total unification, right there.
 
AMD already stated Mantle could use existing DX paths and optimize them to Mantle, coincidence? ;)
 
Did they say that? The only feature I'm aware of for doing porting work from Direct3D to Mantle is that they're able to use existing SM5.0 shaders.
 
I called it the other day. I said that DX12 will just be DX11 with a "Mantle Type" API...and sure enough...there it is. I also stated that current DX11 NVidia card users will not have to upgrade hardware....NICE CALL FROM MS AND NVIDIA.
 
Glad to see that all of the big players are happy for once. And that Nvidia is taking DirectCompute seriously from now on. I think that Mantle support in Linux will help there tremendously with the SteamOS and doing concurrent Linux / DX12 development. Since it seems that it was just integrated into DX12 it should be really easy to get everything working in the coming years.

I can't think of a company that lost in this announcement.
 
Glad to see that all of the big players are happy for once. And that Nvidia is taking DirectCompute seriously from now on. I think that Mantle support in Linux will help there tremendously with the SteamOS and doing concurrent Linux / DX12 development. Since it seems that it was just integrated into DX12 it should be really easy to get everything working in the coming years.

I can't think of a company that lost in this announcement.

None did, they all came away winners. This will be the first time that I will not have to purchase a new card to get new features.
 
Not in the context of gaming. Unless we're counting iOS.
Mac, iOS, Android and almost all other non-Microsoft platforms (but certainly not excluding Windows). Although many mobile GPUs support Direct3D these days, Microsoft has only a very small share of the actual software (game) base with DirectX, and usage of OpenGL even on the PC is greatly on the rise.

I wasn't even taking into consideration the professional software market.
 
Targeting a late 2015 launch, with 50% of gamers being DX12-capable by then.

Preview edition late 2014. 100% of new GPUs going forward can use it. Big sea change indeed.

Possible win7 support but nothing to announce yet.

No Win 7 support, no care.
 
Here are some quotes and images from the event I noted:

Ryan Shrout: In DX11, one core is doing most of the work; in D3D12, overall CPU utilization is down 50%.

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpbqgzejp1010605.jpg

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpbo6uozp1010606.jpg

"Solved" multi-threaded scalability.

Scott Michaud: Hmm, from ~8ms to ~4. That's an extra 4ms for the GPU to work. 20 GFLOPs for a GeForce Titan.

Multicore Scalability.... Seems like a big deal when you have 6-8 cores!

Josh Walrath: It is a big deal for the CPU guys.

20 GFlops from a Titan? Stock Titan gets around 5 ATM.

Scott Michaud: Titan gets around ~5 Teraflops, actually... if it is fully utilized. I'm saying that an extra 4ms is an extra 20 GFlops per frame.

Lewap Pawel: So 20GFLOPS per frame is 20x60 = 1200GFLOPS/sec? 20% improvement?
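The arithmetic being kicked around above can be checked with a quick back-of-the-envelope calculation. This assumes the commonly cited ~5 TFLOPS single-precision peak for a GeForce Titan and the ~4 ms per-frame saving from the demo; the exact figures are the posters' estimates, not official numbers:

```python
# Back-of-the-envelope check of the quoted figures (assumed values, not benchmarks).
TITAN_FLOPS = 5e12        # ~peak single-precision FLOP/s of a GeForce Titan
SAVED_PER_FRAME = 0.004   # ~4 ms of CPU-side overhead removed per frame

# Extra floating-point work the GPU could do in the reclaimed 4 ms.
flop_per_frame = TITAN_FLOPS * SAVED_PER_FRAME
print(flop_per_frame / 1e9)  # 20.0 GFLOP per frame

# At 60 fps, that budget recurs sixty times a second.
FPS = 60
extra_per_second = flop_per_frame * FPS
print(extra_per_second / 1e9)             # 1200.0 GFLOP/s
print(extra_per_second / TITAN_FLOPS)     # 0.24 -> closer to a 24% gain than 20%
```

So the 20 GFLOP-per-frame and 1200 GFLOP/s numbers check out; against a 5 TFLOPS peak that works out to roughly 24%, a bit above the 20% quoted.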

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpsvgfplp1010612.jpg

Ryan Shrout: AMD has been working very closely with DX12.

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpp7dvmop1010616.jpg

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpothkqcp1010617.jpg

Josh Walrath: DX12 will enhance any modern graphics chip. Driver support from IHVs will be key to enable those features. This is a massive change in how DX addresses the GPU, rather than (so far) the GPU adding features.

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpd2eabip1010621.jpg

Just to reiterate... PS4 utilizes OpenGL, not DX. This change will not affect PS4. Changes to OpenGL will only improve PS4 performance.

Ryan Shrout: NVIDIA will support DX12 on Fermi, Kepler, Maxwell and forward!

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phprzybpup1010628.jpg

http://cdnmo.coveritlive.com/media/image/201403/thumb900_php1kjfoap1010629.jpg

http://cdnmo.coveritlive.com/media/image/201403/thumb900_phpowyncxp1010635.jpg

Portability - bringing titles from the PC to Xbox to mobile platforms will be much easier.

Windows 7 support? Won't be announcing anything today but they understand the request.

Scott Michaud: "a way to target specific GPUs in a system" - this sounds like developers can program their own Crossfire/SLI methods, like OpenCL and Mantle.

Scott Michaud: Apparently NVIDIA's blog says DX12 discussions began more than four years ago "with discussions about reducing resource overhead". They worked for a year to deliver "a working design and implementation of DX12 at GDC".

---

DirectX 12 What's the big deal?
http://blogs.msdn.com/b/directx/archive/2014/03/20/directx-12.aspx

NVidia Blog: DirectX 12: A Major Stride for Gaming
http://blogs.nvidia.com/blog/2014/03/20/directx-12/

Meet the future of PC graphics: Microsoft reveals faster, console-like DirectX 12
http://www.pcworld.com/article/2110...aled-hitting-microsoft-platforms-in-2015.html
 
Sounds like if I invest in a 6-core Haswell-E then I'm set for the next 5 years or more on the CPU side :D
 
You are certainly taking into account markets that were not factored in to the post you quoted.
Then you should have clarified what markets you were referring to in your post. "API wars" is kind of broad, particularly considering AMD isn't "at war" with any APIs constrained to the Windows platform.
 
What I find interesting is the support curve.

Intel has only pledged support on Haswell and later (GPUs from 2013 forward)
AMD has only pledged support on GCN and later (GPUs from 2012 forward)
Nvidia has pledged support on everything going all the way back to Fermi (GPUs from 2010 forward)

Seems Nvidia's architecture has some serious legs, if GPUs that old are supportable. What do Nvidia's older GPUs have that Intel's and AMD's lack?
 
What I find interesting is the support curve.

Intel has only pledged support on Haswell and later (GPUs from 2013 forward)
AMD has only pledged support on GCN and later (GPUs from 2012 forward)
Nvidia has pledged support on everything going all the way back to Fermi (GPUs from 2010 forward)

Seems Nvidia's architecture has some serious legs, if GPUs that old are supportable. What do Nvidia's older GPUs have that Intel's and AMD's lack?

My guess would be that it's because GCN and the Intel 5000 series are significantly different internally from their predecessors, and were built first and foremost as DX11.x parts. Whatever more modern feature sets were enabled on cards before those two were more bolt-on additions to previous DX10 architectures (everything up to the 6970 was a variation on/update to the R600, and Intel was just tweaking what was originally the HD 2000 series).

Fermi, on the other hand, was Nvidia's first offering in the DX11 world and was a brand-new architecture at the time of launch, breaking from the G80 base of their DX10 parts. They've been in the "DX11 from the ground up" game a little longer than the other two, and as such are presumably in a better position to support DX12 across a broader range of products.
 
What I find interesting is the support curve.

Intel has only pledged support on Haswell and later (GPUs from 2013 forward)
AMD has only pledged support on GCN and later (GPUs from 2012 forward)
Nvidia has pledged support on everything going all the way back to Fermi (GPUs from 2010 forward)

Seems Nvidia's architecture has some serious legs, if GPUs that old are supportable. What do Nvidia's older GPUs have that Intel's and AMD's lack?

Supporting 2010 GPUs is only worth bragging about if they actually have the horsepower to play DirectX 12 games at speeds you can enjoy. Otherwise it's like driving a Yugo on the Autobahn.
 
Supporting 2010 GPUs is only worth bragging about if they actually have the horsepower to play DirectX 12 games at speeds you can enjoy. Otherwise it's like driving a Yugo on the Autobahn.
Higher end Fermi cards do have the horsepower. There are plenty of newer lower end cards that are slower.
 
Supporting 2010 GPUs is only worth bragging about if they actually have the horsepower to play DirectX 12 games at speeds you can enjoy. Otherwise it's like driving a Yugo on the Autobahn.

Huh? The GTX 580 is definitely not a slouch. Of course some of the lower-end Fermi cards are showing their age, but across the board the Fermi line isn't bad. Fact of the matter is the 580 (as an example) runs 1080p games pretty damn well these days, even 4 years later. It isn't top of the line of course, but that doesn't make it bad. And no, it won't run Crysis 3 maxed out on ultra at 120 fps @ 1080p. Actually, now that I think of it, most GPUs can't do that, heh. Nonetheless, not everyone does that and it's still a very capable GPU. There is no absolute prerequisite that you must run everything maxed out with ultra settings. That is absurd. Drop a setting here and there in Crysis 3 or Metro: LL and you're good to go with a 580.

It is faster than the 660 Ti in some titles and a bit slower than the 670; depends on the title. The 660 Ti can be about the same, depending. Anyway, the 580 is certainly not "Yugo" status. Fermi had excellent tessellation and DX11 performance across the board at launch - far better than Cayman in fact - and that still benefits the Fermi chips to this day. Of course there will come a day when Fermi becomes worthless, but not quite yet. IMO.
 
They said "support," and now I wonder what "support" means. Does it mean that games with DirectX 12 will run on DirectX 11 hardware with some features unavailable, or will current DirectX 11 hardware FULLY support all DirectX 12 features?
 
DX12 doesn't introduce a new shader model. I mean, DX12 will be on the XB1. I'm fairly certain that current DX11 cards will have no issues with DX12; the primary feature of DX12 to note is the improved efficiency and thinner abstraction layer.
 
Then you should have clarified what markets you were referring to in your post. "API wars" is kind of broad, particularly considering AMD isn't "at war" with any APIs constrained to the Windows platform.

The context was pretty clear both before and after your posts popped up.
 
They didn't really mention a time frame for when we can expect to see this new "update" for our DX11 GPUs.

They mentioned the Xbox One, but what about the Xbox 360? Will it get an "update" too? And when will we begin to see these "updates"???
 
They didn't really mention a time frame for when we can expect to see this new "update" for our DX11 GPUs.

They mentioned the Xbox One, but what about the Xbox 360? Will it get an "update" too? And when will we begin to see these "updates"???

Why on Earth are you even bringing the Xbox 360 into this conversation? Why would ancient hardware that is now obsolete get an update like this? Besides, it is DX9-level hardware and already has its own low-level API.
 
Higher end Fermi cards do have the horsepower. There are plenty of newer lower end cards that are slower.

The Yugo quip was a wee bit of an exaggeration, granted. :) Then again, if Fermi could still handle everything we could throw at it, we wouldn't need stuff like the 780 Ti or the 290X (ok yes, "need" is a relative term). Also keep in mind, DX12 isn't coming out until late 2015. And who knows when we're going to see games that properly take advantage of it. So by then, things could look very different. Who knows, Fermi cards could still do very well in those games. But I'll be curious to know how many gamers will still be using them when that day comes, and how much of a benefit the older generations of cards will get from DX12 compared to the newer ones (that'll make for an interesting read, if some website takes the time to do an exhaustive evaluation like that).

The perception of DX12's benefits could be interesting when it finally comes out too, depending on how well Mantle does in the meantime. If Mantle starts cookin' along, then by the time DX12 games finally come out with the same lower-level gpu interaction, folks who have been playing stuff like Dragon Age: Inquisition, Evolve, and whatever else has been making use of Mantle might look at it and say, "What took you so long?" Again, IF, but it could happen.
 
Preview release this year.

Games expected to launch for holidays 2015.

I'd be interested to see if anyone actually patches support for it into existing titles prior to games just full on making the switch over. If you can do post release Mantle then you might be able to do DX12 post release as well.
 
They didn't really mention a time-frame for when we can expect to see this new "update" for our DX 11 GPU's.

They mentioned the XBox One but, what about the XBox 360? Will it get an "update" too? And when will we begin to see these "updates"???

Seeing as D3D 11 requires shader model 5.x capable hardware, as does DX12, the xbox 360 will not be included. The xbox 360 uses an ATI GPU with shader model 3.x which is capable of DX9 level features. Shader model 5 includes AMD 5000 and beyond, and Nvidia Fermi and beyond. Anything prior to that, which would obviously include the 360 GPU, cannot use either DX11 or 12.
 
Seeing as D3D 11 requires shader model 5.x capable hardware, as does DX12, the xbox 360 will not be included. The xbox 360 uses an ATI GPU with shader model 3.x which is capable of DX9 level features. Shader model 5 includes AMD 5000 and beyond, and Nvidia Fermi and beyond. Anything prior to that, which would obviously include the 360 GPU, cannot use either DX11 or 12.

SM4.0 actually, the 360 had a unified shader pipeline. The shader tech developed for the Xenos was brought in for the R600 (which is kind of amazing considering how much of a spectacular turd the 2900XT was). That being said the SM4.0 stuff aside Xenos had a lot more in common with the R500 parts.
 
I've seen differing accounts for that, with some sources mentioning 3.5 (incorrect sources). Nonetheless thanks for the correction. That doesn't change the outcome though: Either way the 360 isn't capable of DX12 because it isn't capable of DX11. DX12 requires DX11 shader model 5 capable hardware, and tessellation alone requires shader model 5. That would exclude anything from the 2005 xbox 360.

And really... this should be common sense: the 360 was released in 2005, it ain't getting DX12. ;) Seemed like an odd question for anyone to ask. XB1 makes some sense, although I'm not quite sure if the XB1 will use a "forked" version of DX12 or what. I remember the 360 used a version of DX9 which was similar but not entirely the same: it was a forked version of 9 which had specific direct access to the 360's innards. Much like what DX12 is doing on a general basis.
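The compatibility argument above boils down to a shader-model floor. A toy sketch of that logic, using the shader-model numbers as claimed by the posters in this thread (illustrative, not an official capability table):

```python
# Shader-model capability per GPU, as claimed in the posts above (illustrative only).
SHADER_MODEL = {
    "Xbox 360 (Xenos)": 4.0,    # unified shaders, but roughly DX9-class feature set
    "AMD Radeon HD 5000": 5.0,
    "NVIDIA Fermi": 5.0,
}

# Per the thread, DX12 (like D3D11) targets shader model 5.x-capable hardware.
DX12_MIN_SM = 5.0

def dx12_capable(gpu: str) -> bool:
    """Return True if the GPU meets the claimed shader-model floor for DX12."""
    return SHADER_MODEL[gpu] >= DX12_MIN_SM

print(dx12_capable("Xbox 360 (Xenos)"))   # False
print(dx12_capable("NVIDIA Fermi"))       # True
```

Under these assumptions the 360 falls below the floor while anything HD 5000 / Fermi or newer clears it, which matches the conclusion drawn above.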
 
http://www.amd.com/us/press-releases/Pages/amd-demonstrates-2014mar20.aspx

Saw a few people asking the very thing I wondered while my shiny new XFX 280X is being put through some benchmarks and configuration tweaks (using the 14.3 beta driver, which is improved over the 14.2's), and the above link points directly to some very reassuring info. For everyone wondering if their GCN-capable GPUs will be compatible (sorry, 6-series and below -_- dat 6970 doe....), AMD has stated yes to future DX12 support for (select) existing hardware. :cool:
 
Supporting 2010 GPUs is only worth bragging about if they actually have the horsepower to play DirectX 12 games at speeds you can enjoy. Otherwise it's like driving a Yugo on the Autobahn.
A GTX 480 is still significantly faster than the GTX 750 Ti that everyone is ranting and raving about for 1080p gaming... and the new API could make it perform even better.

I'd say Fermi-based cards are still sensible to support.
 
They said "support," and now I wonder what "support" means. Does it mean that games with DirectX 12 will run on DirectX 11 hardware with some features unavailable, or will current DirectX 11 hardware FULLY support all DirectX 12 features?

E.g., Kepler has support for bindless textures but Fermi does not. Fermi does have indirect multi-draw and buffer storage support.

The reality today is that the hardware design in many ways out-paces the API changes, and the APIs are updated post-hoc to better reflect the way the hardware works. In OpenGL the vendor gets around this by providing extensions for hardware features that aren't supported by the core API. For DX you need an act of Congress.

It's kind of amazing to see the snow job AMD pulled with Mantle, but people sure do revel in some good marketing.
 
Up until now my GTX 460 has been fine for me. However, given that consoles have gone next-gen and powerful Steam boxes are on the way, I do feel the specs required to play PC games are soon to take a big jump. I think it's no coincidence DX12 is on the horizon after the Xbox One launched.

Ironically, it's not been games stressing my GPU, it's been the Windows desktop. Aero uses video RAM, and GPU-accelerated apps use video RAM, so if Firefox is GPU-enabled it will chew up 200 MB or so of video RAM in my usage; Internet Explorer completely saturates it. IE with hardware acceleration enabled = laggy Windows due to video RAM saturation. Aero itself usually uses 200-300 MB of VRAM on my system, so Firefox combined with Aero will push it hard, and IE will max it out with ease. So, funny enough, I have been tempted to upgrade my GPU for general desktop performance, not for gaming, but held off. However, I think within 18-24 months, if I play any newly released PC games, I may need to upgrade the GPU for gaming as well.
 
Supporting 2010 GPUs is only worth bragging about if they actually have the horsepower to play DirectX 12 games at speeds you can enjoy. Otherwise it's like driving a Yugo on the Autobahn.

By this logic, AMD using Mantle on APUs is pointless, right?

After all, the GTX 470 and higher are still faster than the fastest GCN APUs.
 
I'd be interested to see if anyone actually patches support for it into existing titles prior to games just full on making the switch over. If you can do post release Mantle then you might be able to do DX12 post release as well.

It makes more sense to look at something like this from a business perspective than a technical perspective. I'd think it's likely if Microsoft (or someone else) wanted to partner with a developer to help push something out as a showcase, otherwise not so much. Just like with AMD and Mantle, a push from a vested party goes a long way.

From a developer's perspective, you have to factor in that they are spending both time and money (even though 4 man-months may not sound like much), and you have to weigh what the payoff is for doing so, particularly for a game that's already been released and has already made most of its sales (most come in the launch period).

Patching support into existing titles wouldn't really be technically prohibitive. You've had DX9 games retroactively updated to using DX10 and DX11, some which had been out for years even.
 
I don't think Mantle is going away just yet. We don't know how fast DX12 will be; Mantle has proven its worth, while DX12 still hasn't shown any benchmarks. Every new DX version in history has shown that first-generation cards for a new DX are usually slow, and only by the 2nd and 3rd generations do they get a lot faster (like the 5870 to the 6900 series, and so on).

I think Mantle will still be faster, and it will be the more up-to-date API in the end. DirectX only gets updated once every 5-8 years, so Mantle will have had more optimizations by the time a DX12 game comes out, and most likely all the engines will support Mantle by then. Mantle will also be closer to the metal than DX12: when you appeal to everyone, you have to make sacrifices, and not every card out there is optimized to run like Mantle does. I bet DX12 will be like Mantle in nature, but it won't perform like Mantle; Mantle will still be more optimized in the end.

Mantle also solves a lot of problems with multi-GPU setups. We don't know if DX12 will waste multi-GPU memory by mirroring one card's memory across all the cards; Mantle solves that problem by allowing each GPU's memory to be addressed individually, so you get more memory in total under Mantle. There's a lot we don't know yet about DX12, but I know for sure there will be sacrifices. Mantle will still reign supreme.
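The multi-GPU memory point above comes down to simple arithmetic: with classic alternate-frame rendering, every card holds a full mirrored copy of the working set, so usable memory doesn't scale with card count, whereas an API that exposes each GPU's pool separately (as the poster claims Mantle does) lets them add up. A sketch with made-up example numbers:

```python
# Usable VRAM under mirrored vs. individually addressed multi-GPU memory.
# Card count and per-card VRAM are arbitrary example values.
CARDS = 2
VRAM_PER_CARD_GB = 3

# Mirrored (classic AFR): every card holds the same copy of all resources.
mirrored_usable = VRAM_PER_CARD_GB

# Individually addressed: each GPU's pool contributes to the total.
separate_usable = CARDS * VRAM_PER_CARD_GB

print(mirrored_usable)   # 3 GB usable
print(separate_usable)   # 6 GB usable
```

In practice some resources (e.g. render targets needed by every GPU) would still be duplicated, so the real gain sits somewhere between the two figures.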
 