Are you disappointed by DX12?

Care to give a specific example of how I am doing that, other than your obscure posts and one singular example so far?

Because you assume DX12 calls are exactly the same, yet we already know they aren't. Intel IGPs are a prime example. If they were, it would just work.
 

Say what? Show me a quote where I said that. Pretty sure in numerous posts I have highlighted the differences. Again, I ask you to give me a specific example of where you're saying DX12 fails because of new hardware.
 

Can we agree that if DX12 were just neutral API calls, it would work anywhere, on anything with DX12 support? In Total War: Warhammer, GCN 1.0 and Kepler cards are not supported in DX12 either, besides the countless examples with Haswell, Broadwell and Skylake IGPs. In 3DMark they all work, though.

You are also assuming that each new release of hardware will have completely different calls for DX12 to handle, which would be the vendors shooting themselves in the foot.
 
You still have not shown me where I said that DX12 calls are the same as DX11's.

Secondly, you are still using singular, obscure examples. These same issues cropped up in previous releases as well.
 
id Software set the more sensible bar with Vulkan for DOOM; now hopefully more developers will wake up to the realization that with Vulkan they can have all the multi-platform, multi-Windows-version upsides of DX12 with none of the technical or financial downsides.

DX12 is more relevant and easier to port to than Vulkan for game developers targeting XBONE. Vulkan would probably get more support if Sony decided to use it as the main API for building games on the PS4 (I just don't see them moving away from their proprietary stuff). The only platform going all-in on Vulkan seems to be the Android ecosystem, but honestly, far fewer mobile games get ported to PC than console titles.

It's nice that Microsoft is promoting the "Play Anywhere" program with DX12. GOW4 was great, along with some of the recent XBONE titles that are also available on the PC.
 

You acted like DX12 functions like a high-level API. It's very clear it's not. And for the rest, then what? #waitandsee? With DX10 we waited, and we got DX11.
 
Again you make a claim with absolutely zero proof. Show me where I said that.
 
Total War: Warhammer may be one of my favourite DX12 implementations. The developer/sponsor completely sabotaged it for the single other IHV they supported, and in-game it offered nothing to anybody. But at least my card won't pull more than 100W or so in DX12 :p

DX11: [screenshot: WDX11-2.png]

DX12: [screenshot: WDX12-2.png]
 
Already did in post 44.

No, you did not. That example did not have anything at all to do with high-level API calls. You're making the assumption that a new release of a video card will have different calls to access its resources than the previous version. That would be the vendor shooting themselves in the foot if they started changing how you request resources from their card. How is that at all an example of a high-level API?
 

With a low-level API like DX12/Vulkan you need to code for the architecture directly. You even have to optimize down to the SKU level. It's nothing new; the developer pretty much has to do all the work the IHV normally would, each and every developer, over and over. With Vulkan it's even worse due to the allowance of extensions.

If the API isn't neutral to begin with, how would you ever imagine that it would just work on newer or radically changed graphics architectures? And if it were neutral, it would just work on Intel IGPs, without needing specific paths for that. It feels like déjà vu of those who at the start thought that DX12 support was as easy as a checkbox in the engine for developers.

And then we don't even have to talk about the threading issues, with an API made for a fixed OS rather than one tuned for a general-purpose, application-oriented OS. Even DICE can't get it right, and Civ6 gets slower AI turns in DX12 as well.

[slide: gdc16.png]
 

I am sorry, but exactly how do you think these APIs work? Different architectures, sure: Nvidia compared to AMD compared to Intel. But why would Nvidia, AMD, or Intel completely change how you call their shaders, for instance? Why would they fundamentally change that from one series of cards to the next? Not to mention that those companies build support into their cards to make them compatible with DX12, so why would they change that from series to series? That is what you are implying. I am saying that they would be shooting themselves in the foot to do that. Now, if there are other features they add to their architecture, that would be an additional call, but it should in no way break previous games; you just wouldn't be using that feature.

So again, please give specific examples of how this would break with Nvidia going to their 1200, 1300 or 1400 series, and specifically because of DX12.

The examples so far you have given are issues with the developer and/or vendor, not DX12.
 
I think part of it is how much DX12 was hyped as this huge performance gain, but all the games that have supported it have had mostly the same (or worse) performance as DX11.

But in theory, it should be better. Maybe we just need to give developers some time. It took quite a while for DX11 to become standard and well supported.
 

Even AMD and Nvidia at GDC said it's IHV-specific to begin with, and that everyone should consider architecture-specific paths. Then you can keep pretending it isn't so. DX12 is a huge mess.

https://developer.nvidia.com/sites/.../GDC16/GDC16_gthomas_adunn_Practical_DX12.pdf
 
I remember the promises of VRAM stacking, mixed GPU setups and so on...

So far SLI and CrossFire have been a bust.

Yes, DX12 is in its infancy, but so far it's a very poor showing.

The VRAM stacking was just a plain lie. Note that examples were never shown, either. And who claimed it, and why? Someone with a memory-limited Fiji and a person called Roy.
 
The only people who get hyped by a new DX are the ones who haven't got the slightest idea what an API is.
 
If it shares history with DX10 then DX12 will be relatively short lived with low adoption. What comes after will be the actual "next gen" API akin to DX11.

To be honest it's likely DX12 was rushed out to tie into and push Win 10 adoption. Once Shader Model 6.0 is finalized we might see a stronger move to start updating more of the back end to take advantage.

Another issue with DX12 perception is that relatively a lot of poor marketing was pushed out towards end users. This has resulted in a lot of confusion and misunderstanding, leading to missed expectations. I'll say the same thing I've said about other related stuff: a lot of these technical back-end advances get marketed to end users, but it really isn't something they should be concerned with; end-user implementation is all that matters.
 
Going back to my foodie example, I just thought this up to help illustrate why DX12 is a problem.

With DX11, you have a trained chef who has many years of experience cooking all kinds of food. You want a cheeseburger, you tell him you want a cheeseburger, he makes it for you. Yumski!

With DX12, you have your 11-year-old son whom you are trying to teach to cook, so he will be independent. You want him to make you a cheeseburger, so you have to stand there and teach him how to make one, showing him which utensils to use, how to use the grill, how to make a patty, how to grill the patty, how to arrange the salad and sauces, etc. Then, just because you taught your 11-year-old son how to do it doesn't mean your 10-year-old daughter knows how to. The son and daughter are the different architectures from one IHV (e.g. Fermi and Maxwell, or GCN 1.0 and 1.2). Or even the new trainee chef (a different IHV). If you never taught them, they will not know how to make the cheeseburger (or render the scene).

Hence my apprehension about DX12. DX was born because we needed hardware abstraction: the IHV is responsible for producing drivers that do what the API calls ask for. DX12 blows that objective out of the water.


ps. Vulkan does not solve this problem, I know. I support it more only because of its platform-agnostic nature rather than the Win10 tie-in.
 

So many people I work with already predict DX12 is the new DX10.

But as to why DX12 was pushed out, we believe it had less to do with Win10 adoption than with pushing UWP adoption: a unified development path for PC and Xbox, which I think Microsoft wants developers to see as the easy option, on the assumption that PC and Xbox development will be similar and thus cheaper.
 
Because you assume DX12 calls are exactly the same, yet we already know they aren't. Intel IGPs are a prime example. If they were, it would just work.

Yes, excluding bugs, they mostly are. The API itself is pretty generic; even Mantle was neutral, as its core API was meant to be platform-independent.

It will work; there are no NVIDIA-only functions (barring extensions), nor are there intended to be. But one method for doing something is not guaranteed to be the absolute fastest on all hardware, no matter what. If you want bleeding-edge performance across all vendors, you may have to factor this in.

The VRAM stacking was just a plain lie. Note that examples were never shown, either. And who claimed it, and why? Someone with a memory-limited Fiji and a person called Roy.

Not really a lie, just not practical.

You can do it if you really want, but how do you actually make a win out of it? If your multi-GPU solution is AFR, both cards need the same resources. Transferring back and forth between cards is suicide.

Let's try something a little more clever: say each card is only tasked with drawing certain things. You maintain a guarantee that each card will have its own unique set of resources and its own list of stuff to draw. Ignoring synchronization and other oddities, how do you keep this efficient in a sane way? Maybe a certain camera angle happens to have mostly objects that one card is meant to handle... and there's all your performance down the drain.

Now what? Do you start rebuilding the list to try and spread load back out or what? That's a whole can of worms in itself.
 

Did you actually read the whole thing? It is about tailoring your build to specific advantages in how AMD or Nvidia process calls. You are performing the same kinds of functions, only now they are broken down into command lists to get maximum performance, and you can issue those command lists in certain ways to take better advantage of the hardware architecture you are going to run them on. Perhaps you should also read the actual DX12 developer how-to and the developer blog.

So the functions you want to perform are essentially the same; however, now you can modify your command lists to take better advantage of how the AMD, Nvidia, or Intel architecture performs those functions. It doesn't prevent those functions from working on different hardware. You could probably also make those command lists generic enough to perform like the older, more abstracted calls from DirectX 11; they just wouldn't be optimized.
 
Frankly, the best way to fix this would be for the GPU makers to come together and back an open API like Vulkan, and be active members in its development in order to ensure smoother transitions from one generation of the API to another.
 
I told y'all from the get-go that this entire "low-level API" idiocy was a big marketing scam.

The sooner the industry moves on the better.

The only good thing about DX12 was that it included all the ordinary API updates for modern hardware, that basically brought it up to spec with OpenGL. But all the "low-level" stuff is nonsense. That goes for Vulkan too, which is a trainwreck.
 
Considering the API has been out for a year, and you have to build support in from the start of development to utilize it properly, it's amazing anything supports it at all. Game development isn't a 10-minute affair, and it will take a while for DX12 support to work its way into the pipeline.

As for the "memory-stacking" thing, I've been trying to explain to people why that idea doesn't represent how DX12 works since it was announced. Just because the API allows addressing RAM on one card from another doesn't mean it's going to make sense from a performance perspective, and the entire thing needs to be written by the developer, who has to choose how it works. It's not just like having a big pool of memory that can be used indiscriminately, which is how people who argue against me constantly seem to understand it to work.
 
A trainwreck? Absolutely not, nor is it a marketing scam. The big issue right now is implementation, which few people have had the time to do properly. Once we have game engines actually written to take advantage of low-level APIs, then we will see how it works out. I would point out, though, that it was game devs in the '90s who wanted abstraction added to APIs to make programming easier.
 
DX12, as an API, is not disappointing to me. What was disappointing (and amusing): All the people jumping on the hype train that it was going to be the savior of a particular IHV "instantly". (That and Windows 10 would "immediately" improve the performance of same IHV just by installing it.)
 
This is the rub:

As we’ve already stated, DX12 is more than Async Compute; a feature that most developers are currently using in their DX12 games. DX12 offers better multi-tasking CPU capabilities, reduces CPU overhead and can handle more draw calls than before.

If Microsoft had released a new version of DirectX 11 with those features, everyone would probably have moved to it. It's funny how the media hyped up how developers were clamoring for a new low-level API but ignored big names in the industry, like John Carmack, who went against the prevailing narrative. But that would not have pushed new hardware and operating systems onto their sponsors' customers.
 
I was trying to explain this to our summer intern tester.

Pretend you want a cheeseburger meal.
I can tell my intern: "I want a cheeseburger meal, go get me one!". That's my API call. No matter whether he goes to In-n-Out or McDonalds, I get a cheeseburger meal. We can nitpick about the cheeseburger from In-n-Out versus McDonalds, like its cheese, or ketchup, or onions, or burger patty (image quality), but it's still a cheeseburger.

That's DX11.

Now with DX12, I have to go buy my own. I can go to either McDonalds or In-n-Out on my own to get the cheeseburger. Now, if I want to, McDonalds or In-n-Out can even let me customize my own cheeseburger! No pickles? No problem! Salad cream instead of ketchup? It's your poison.....

Then, you can walk into a KFC, which doesn't have cheeseburgers.....


Edit: I know it's a terrible analogy, but it was funny when I tried it on my intern. :p
Razor1, you back? I missed the whole you being banned thing, but I'm glad to see your departure was short-lived!
 
DirectX 11 came out in 2009. How many years did it take before it became standard?

but there are no positive signs of progress... according to the article I linked in my earlier post, in 2016 we got 11 DX12 games and 2 Vulkan games (13 total)... in 2017 we got 5 DX12 games and 1 Vulkan game: Forza Motorsport 7, Star Wars Battlefront 2, Sniper Elite 4, Halo Wars 2, Total War: WARHAMMER 2 and Wolfenstein 2: The New Colossus (6 total)... so a more than 50% drop from year to year is not a good sign...

how many DX12/Vulkan games are coming out this year?
 
You do have a point. I'm surprised more developers aren't on it. Surely most of the big studios have at least looked into it and/or attempted to update their engines.

So either it's too much work / too complex (which I don't think should be a problem for AAA studios) or they were successful in porting their engines and there were no gains (or possibly losses).

Vulkan at least makes a little more sense, since Windows 7 and 8 are supported (as well as Linux and Mac, sort of). Whereas DX12 is only Win 10.

Not many developers would release Win 10 only unless there was a measurable benefit (aside from Microsoft Studios, whose benefit is getting you to install the latest Windows version).

So it may be that Microsoft locking DX12 to Win 10 could be a barrier. Steam shows only 24% of users have Win 10 and a DX12 GPU. That's leaving out a LOT of customers by shipping DX12 only. That may be an important factor.
 
Well, it doesn't seem like too many have adopted it yet, and most still think DX11 is fine.
 