DirectX 12 Can Combine Nvidia and AMD Cards

Not having to duplicate the frame buffer and being able to use the full amount of vRAM across all installed GPUs is awesome. But some other things seem weird. I wonder how they will work out.

"...the API combines all the different graphics resources in a system and puts them all into one 'bucket.' It is then left to the game developer to divide the workload up however they see fit, letting different hardware take care of different tasks."

"There is a catch, however. Lots of the optimization work for the spreading of workloads is left to the developers – the game studios. The same went for older APIs, though, and DirectX 12 is intended to be much friendlier. For advanced uses it may be a bit tricky, but according to the source, implementing the SFR should be a relatively simple and painless process for most developers."

With so much being left up to the developers, we're probably gonna end up with yet another potentially highly useful PC-only tech being ignored or barely utilized by the majority of developers, who will continue to focus their development and optimization efforts on consoles. Of course, this tech may not even end up as useful as it sounds. Too early to tell.

"The source said that with binding the multiple GPUs together, DirectX 12 treats the entire graphics subsystem as a single, more powerful graphics card. Thus, users get the robustness of a running a single GPU, but with multiple graphics cards."

Huh, interesting. I assume then that if the devs choose not to manually handle GPU assignment, the OS+DX12 will handle things automatically? Does this then mean that every game will have some basic level of support for this DX12 multi-GPU shiznit, even without specific support on a per-game basis in the GPU drivers (as with SLI and CFX)?

And it looks like SFR will be the order of the day here. Doesn't look like AFR will be an option. SFR is a better fit I guess, cuz of the leeway of being able to assign the rendering of specific portions of the screen to GPUs based on their power.

This is interesting:
"Our source suggested that this technology will significantly reduce latency, and the explanation is simple. With AFR, a number of frames need to be in queue in order to deliver a smooth experience, but what this means is that the image on screen will always be about 4-5 frames behind the user's input actions.

This might deliver a very high framerate, but the latency will still make the game feel much less responsive. With SFR, however, the queue depth is always just one, or arguably even less, as each GPU is working on a different part of the screen. As the queue depth goes down, the framerate should also go up due to freed-up resources."

Is lag with AFR really that bad? I don't have too much experience with SLI and CFX, so I can't say for myself if I've ever noticed such. 4-5 frames behind the user's input? That sounds bad in text, but in real life has this been much of an issue? Well, if it is an issue, it looks like it won't be one for much longer.
(Source of quotes: http://www.tomshardware.com/news/microsoft-directx12-amd-nvidia,28606.html)

I agree with a lot of other people though - even if AMD is down with this, nVidia is sure as hell not gonna be cool with it. They'll do whatever they can to stop their GPUs from playing nice with others. Plus we all know what a barrel of fun it is to install both Radeon and GeForce drivers on the same machine.

But if things do pan out like we hope, it would be awesome to be able to install cards from both sides and utilize their specific technologies. Really though I'm most excited about not having to duplicate frame buffers anymore.

Other questions need to be cleared up as well, such as basic setup and output to specific monitors. What about scenarios where, say, you have a G-Sync monitor hooked up to a GeForce card and a FreeSync monitor hooked up to a Radeon, all in the same system? Or heck even the same situation without G-Sync and FreeSync. Who knows what could get fucked up. And I'm not really sure how well things are gonna work out with outputting a game to multiple monitors (like with Eyefinity and NV Surround) with mixed-vendor GPUs in the system (especially if these monitors are connected to different GPUs). Well I guess it's way too damn early to be speculating and trying to get answers for shit that's not really in place yet. I'll be keeping my eye on this stuff though. When can we start seeing previews of this stuff in action? And I mean previews performed by actual sites such as HardOCP and Anandtech, with actual hardware in hand (and necessary preliminary software/drivers/APIs/etc), not a demonstration at some trade show or event.

In the end, I gotta say this shiznit was hella random, and not expected at all. All the other stuff we've heard about DX12, such as the massively expanded draw calls and lower system overhead and closer-to-the-metal optimizations and whatnot, yeah all pretty much stuff we could expect. But native support in the API and OS for mixed usage of Radeons and GeForces? Yeah, I didn't see that coming. And at this point in time, I remain largely skeptical of how it will work out, while realizing that there isn't enough information available yet. I'll take a wait-and-see approach. I ain't gonna be bankin' on it too hard, however.
 
Nvidia will likely be the only one of the 2 with the balls to take a stand on it. Problem is you've got 2 different GPU architectures, and getting them to work together would take $ and effort. Given AMD's history on effort, they'll likely be expecting Nvidia to do all the work and spend all the $ to do it. So all the AMD fanboyz can stop attacking Nvidia, as they'll be the only ones not going for it. AMD doesn't want to deal with support requests for hardware that isn't theirs; they have had problems providing support for their own hardware as it is. *cough* CF and drivers *cough*

For one, as I have both SLI and Crossfire machines, drivers not working well happens in both camps, buddy; lose the fantasy, it makes you 0 friends.

Secondly, you do realize that aside from DirectX 9.0c, AMD has been the largest contributor of research time and effort for all DX versions in recent history...your inner fanboy is showing right now, and it's really rather ugly. Top that with the fact that AMD has on numerous occasions topped Nvidia....a back and forth ordeal, dude.

You can come back to reality land anytime now, really, it's not that bad here!
 
Sweet, this sounds like they're finally taking multi-GPU into account on the system level. That doesn't just mean new options and combining cards, it means that current multi-GPU systems will perform better too. I may not need to replace my 2 280xs for a while...
 
"The source said that with binding the multiple GPUs together, DirectX 12 treats the entire graphics subsystem as a single, more powerful graphics card. Thus, users get the robustness of a running a single GPU, but with multiple graphics cards."

Huh, interesting. I assume then that if the devs choose not to manually handle GPU assignment, the OS+DX12 will handle things automatically? Does this then mean that every game will have some basic level of support for this DX12 multi-GPU shiznit, even without specific support on a per-game basis in the GPU drivers (as with SLI and CFX)?

Yeah, if this is accurate, it almost sounds too good to be true.

Time will tell I guess.

And it looks like SFR will be the order of the day here. Doesn't look like AFR will be an option. SFR is a better fit I guess, cuz of the leeway of being able to assign the rendering of specific portions of the screen to GPUs based on their power.

This is interesting:
"Our source suggested that this technology will significantly reduce latency, and the explanation is simple. With AFR, a number of frames need to be in queue in order to deliver a smooth experience, but what this means is that the image on screen will always be about 4-5 frames behind the user's input actions.

This might deliver a very high framerate, but the latency will still make the game feel much less responsive. With SFR, however, the queue depth is always just one, or arguably even less, as each GPU is working on a different part of the screen. As the queue depth goes down, the framerate should also go up due to freed-up resources."

Is lag with AFR really that bad? I don't have too much experience with SLI and CFX, so I can't say for myself if I've ever noticed such. 4-5 frames behind the user's input? That sounds bad in text, but in real life has this been much of an issue? Well, if it is an issue, it looks like it won't be one for much longer.
(Source of quotes: http://www.tomshardware.com/news/microsoft-directx12-amd-nvidia,28606.html)

The theory on AFR and lag is pretty straightforward, and undeniable.

An oldie but a goodie that demonstrates it:

[image: classic diagram demonstrating how AFR frame queuing adds lag]


So even if you don't factor a queue depth into the equation, there will inherently be a measurable amount of lag.
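
To put rough numbers on the queue-depth argument from the quote (this is just my own back-of-the-envelope math, assuming a steady 60 fps and the queue depths the article throws around):

```cpp
#include <cstdio>

// Rough estimate of the input-to-display latency added by the frame queue
// alone; ignores game logic, driver, and display lag.
int main() {
    const double frameTimeMs = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 fps
    const int afrQueueDepth = 4;               // AFR: several frames queued up
    const int sfrQueueDepth = 1;               // SFR: every GPU works on the current frame

    std::printf("AFR queue latency: ~%.0f ms\n", afrQueueDepth * frameTimeMs);  // ~67 ms
    std::printf("SFR queue latency: ~%.0f ms\n", sfrQueueDepth * frameTimeMs);  // ~17 ms
    return 0;
}
```

So at 60 fps, a 4-5 frame AFR queue alone is roughly 65-85 ms before anything else in the chain, versus ~17 ms for an SFR-style queue of one.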


Now, how noticeable it is is a completely different question.

There are plenty of scientific papers that deal with human response time and the ability to notice delay. While some suggest we can tell the difference as low as 2ms under some circumstances (the most prominent example is drummers telling if a beat is off), most suggest the human perception threshold is much, much higher than that.

Just like with audiophiles and their sound quality, there will always be those who claim to have super-human abilities to discern lag, which is why there are still people who cling to their aging CRTs and prefer SFR over AFR.

In multi-GPU setups, NVIDIA has historically tended to be better than AMD, but even so, both vendors have had a lot of problems with stutter, compatibility, and a general lack of that buttery smoothness you get with a single strong GPU. How much of that is due to the inherent lag of AFR I couldn't tell you, but it is there.

SFR always seemed like a solution that made more sense to me, ever since 3DFX did it with the Voodoo2, but the drawback has always been that it doesn't scale as well as AFR, and AFR scaling isn't perfect to begin with.

I am optimistic. I did not expect this at all, and it is a positive surprise. We will see what happens.

I'd imagine that, at the very least, in order for this to work, all video cards would have to be of the same DX level, which means that this - if it works as they suggest - won't be all THAT useful out of the box for everyone, but as time goes on, I bet it could be a nice solution.

I agree with a lot of other people though - even if AMD is down with this, nVidia is sure as hell not gonna be cool with it. They'll do whatever they can to stop their GPUs from playing nice with others. Plus we all know what a barrel of fun it is to install both Radeon and GeForce drivers on the same machine.

Probably. It depends on what kind of power Microsoft is willing to wield here.

Microsoft DOES have the ability to take the "comply or else" approach to it, as even though Linux and Apple are making inroads, most gaming still happens on Windows machines, and Microsoft could simply enforce playing nice with the unified GPU framework, or else make the GPUs not work at all, which would effectively end Nvidia's GPU business as we know it.

The question is if they are willing to go to that extent.
 
I can see this maturing and being the way of the future, a standard norm. But I'm sure it will be buggy as hell out of the gate. Pretty exciting news to be honest, API-level multi-GPU rendering.
 
This makes me think of what NVidia just did on the 970s. It might add more memory to be utilized, but throwing in different-speed GPUs and memory will create bottlenecks, as it forces the faster to wait on the slower; think of how using some cards for PhysX slows things down more than just using the one superior card for everything. Plus, if you could make the memory of different GPUs additive, I'm pretty sure SLI and Crossfire would have figured that out and be doing it already, especially Crossfire while using Mantle. So even if they do get it to work, it probably won't be optimal. On top of that, I don't see how NVidia would ever go along with it, and they will definitely sabotage the mixing of NVidia and AMD cards. I could see NVidia allowing it to work as long as it's just mixing different NVidia cards, though.
 
The main thing is that D3d12 will finally bring support for Crossfire/SLI to D3d--for games coded to the D3d12 API. For earlier games (all of the games available at the moment), it will be business as usual, imo. In D3d12-specific games, multi-gpu support will no longer require custom drivers by the card IHVs--as it won't matter to the API, because all the gpus in the system will be seen as one. What will be required to make any of it work, though, are *good* D3d12 drivers by the IHVs, as well as games coded to utilize what D3d12 brings to the table (unknown at the moment what that will require on the part of developers.)

I still think, however, that hybrid situations won't work very well--say a dog-slow Intel gpu paired with a fast, discrete AMD or nVidia gpu--best you're going to see there is performance capped at the level of the slowest gpu. But the question has to be asked: why would anyone running multiple gpus *want* to run a combination of AMD and nVidia gpus? Heh...;) Can't see why I'd want to do that, frankly. [However, it does occur to me that it would be 'sort' of nice in that for those games that work best on a nVidia gpu you'd own one you could use; and for those games that run best on an AMD gpu, ditto...I mean, it's already true that many games aren't coded to run on multiple gpus well if at all...so, at the least it would present some new possibilities.]

But, more importantly, I think, if D3d12 is half of what it's being billed as, it will quickly establish itself as the only API worth using for game development, and the *Windows* PC will firmly establish itself as the only platform worth using for games--which everybody here already knows, no doubt...!
 
But, more importantly, I think, if D3d12 is half of what it's being billed as, it will quickly establish itself as the only API worth using for game development, and the *Windows* PC will firmly establish itself as the only platform worth using for games--which everybody here already knows, no doubt...!

Oh boy.

Someone hasn't heard about GLnext. Don't worry, it's going to be an interesting week of announcements coming from GDC.
 
For one, as I have both SLI and Crossfire machines, drivers not working well happens in both camps, buddy; lose the fantasy, it makes you 0 friends.

Secondly, you do realize that aside from DirectX 9.0c, AMD has been the largest contributor of research time and effort for all DX versions in recent history...your inner fanboy is showing right now, and it's really rather ugly. Top that with the fact that AMD has on numerous occasions topped Nvidia....a back and forth ordeal, dude.

You can come back to reality land anytime now, really, it's not that bad here!

So much this! I'm so tired of people who haven't used an ATI/AMD card since the Rage 128 saying they have driver problems. As someone who has both Nvidia and AMD cards, I don't see much of a difference driver-wise since Catalyst came out.
 
Nvidia will do everything it can at the driver level to make sure it won't work with an AMD card, I would think...

Reminds me of that time years ago that an ATI engineer magically made AA work in Ubisoft games simply by telling the driver an Nvidia branded card was in use.
 
My first thought is, who the hell would do this? Even if you could, why mix and match? Even if this works fantastically at MS' level, I still see it introducing complexity and issues no matter what.

The biggest benefit to this is combining GPU + APU. All Intel CPUs have a graphics chip, and you have AMD with their APUs. Combine that with a graphics card and you have huge benefits. But nobody is going to be combining an AMD with an Nvidia graphics card, if only because AMD and Nvidia will each find a way to make sticking with their own brand the only way to get the benefit.

Ah, this is actually a good one. Would be interesting to see what kind of boost it would give.
 
I still think, however, that hybrid situations won't work very well--say a dog-slow Intel gpu paired with a fast, discrete AMD or nVidia gpu--best you're going to see there is performance capped at the level of the slowest gpu. But the question has to be asked: why would anyone running multiple gpus *want* to run a combination of AMD and nVidia gpus? Heh...;) Can't see why I'd want to do that, frankly.

I can answer that easily. If you're a consumer like me with no brand loyalty, it would be nice to upgrade my PC by dropping in a card from Brand X while leaving in my prior Brand Y card in order to add to rendering horsepower.
 
On the memory pooling, it seems like a really great idea, but how would that work? Would it require that they both have the same type of memory? If not, how will the usage be prioritized? Will it use its own memory first and then default to the other card?
 
What magic would allow no overlap of the two vrams? It seems impossible.

One of two methods, I figure. BTW, this was on the Tek Syndicate a week ago. Either this is done through DirectCompute, or one graphics card handles textures while the other handles physics.
 
So much this! I'm so tired of people who haven't used an ATI/AMD card since the Rage 128 saying they have driver problems. As someone who has both Nvidia and AMD cards, I don't see much of a difference driver-wise since Catalyst came out.

I would disagree.

I've gone back and forth between AMD and Nvidia over the last 6 years.

AMD has improved over the years, no doubt. From a single GPU perspective, I'd say stability and performance are now, and have been for some time, equivalent. The Nvidia driver settings are better organized and more flexible, but from a stability perspective they are the same for single GPUs.

Multi-GPU is a different story. Don't get me wrong, both SLI and Crossfire have issues, but Crossfire - in my experience - is a lot worse. When I was running Crossfire I had frame rates that were good most of the time, but repeated "down spikes" much lower than desirable. They've also had much worse stuttering problems, and issues with stability and compatibility with new titles. (heck, RO2 didn't work properly in Crossfire until 6 months after release!)

I'm pretty GPU agnostic, but I probably wouldn't go multi-gpu from either vendor based on the issues I've had, and I definitely would not go crossfire. That was a living hell.

That being said, my last crossfire experience was with my dual 6970s, and I have heard things have improved since then.
 
One of two methods, I figure. BTW, this was on the Tek Syndicate a week ago. Either this is done through DirectCompute, or one graphics card handles textures while the other handles physics.

While this is possible even without DX12, if you offload texturing to one GPU, that will essentially leave 90% of the GPU doing nothing :p
 
I'm curious as to what the major selling point for this feature is.

Is it so that laptops with discrete, but modest GPUs can also use the CPU's built-in GPU? This is the only really compelling reason I can think of to go to all this trouble. I wouldn't think this would be worth the trouble or the extra CPU overhead required to implement it on the typical real-GPU-equipped gaming PC.
 
So much this! I'm so tired of people who haven't used an ATI/AMD card since the Rage 128 saying they have driver problems. As someone who has both Nvidia and AMD cards, I don't see much of a difference driver-wise since Catalyst came out.

Speak for yourself. I go back and forth, and I finally got tired of waiting a couple of months for new games, especially AAA titles, to run as well as they do for my friends with Nvidia cards.

When you play PC games with 15-20 other people in vent you start to notice the issues and 90% of the time it is something with an AMD GPU and a new release game.

Just go to the Steam forums for any new release and see it for yourself. AMD has shit drivers and their driver release is abysmal. If you play the newest games on or near release you would have to be a fool to go AMD.
 
Zarathustra[H];1041459266 said:
I would disagree.

I've gone back and forth between AMD and Nvidia over the last 6 years.

AMD has improved over the years, no doubt. From a single GPU perspective, I'd say stability and performance are now, and have been for some time, equivalent. The Nvidia driver settings are better organized and more flexible, but from a stability perspective they are the same for single GPUs.

Multi-GPU is a different story. Don't get me wrong, both SLI and Crossfire have issues, but Crossfire - in my experience - is a lot worse. When I was running Crossfire I had frame rates that were good most of the time, but repeated "down spikes" much lower than desirable. They've also had much worse stuttering problems, and issues with stability and compatibility with new titles. (heck, RO2 didn't work properly in Crossfire until 6 months after release!)

I'm pretty GPU agnostic, but I probably wouldn't go multi-gpu from either vendor based on the issues I've had, and I definitely would not go crossfire. That was a living hell.

That being said, my last crossfire experience was with my dual 6970s, and I have heard things have improved since then.

I'm not a fan of multi-GPU solutions in general if I can avoid them, for the reasons you mention here, although Nvidia hasn't always had the advantage over AMD in this regard either. Reviews on this site after the R9 290X was released were actually showing that AMD was getting much better Crossfire scaling than Nvidia was with their 780 SLI solution, which is significant because the situation had been reversed in the past (the 7990 was renowned for stuttering).

To my knowledge, Nvidia has corrected the problem as of now; however, it still irritates me that people who haven't owned an ATI card in 15 years still insist that their drivers are totally crap in comparison to Nvidia's, who, of course, can do no wrong. It's one of the reasons fanboyism is so annoying and stupid.

The sole reason I bought an R9 280X when I did is because the Nvidia card that went toe-to-toe with it in the same budget category was $80 more for essentially the same performance with 1GB less of RAM. I had an Nvidia card prior to that. If I were to buy today, I'd probably get a GTX 970. It's all about price/performance, but the drivers are not that much different anymore.
 
Speak for yourself. I go back and forth, and I finally got tired of waiting a couple of months for new games, especially AAA titles, to run as well as they do for my friends with Nvidia cards.

When you play PC games with 15-20 other people in vent you start to notice the issues and 90% of the time it is something with an AMD GPU and a new release game.

Just go to the Steam forums for any new release and see it for yourself. AMD has shit drivers and their driver release is abysmal. If you play the newest games on or near release you would have to be a fool to go AMD.

I've never had those kinds of issues. Perhaps there is something else going on with your setup. My experience with both brands of card has been incredibly similar from the driver standpoint ever since the Catalyst driver was released like 12 years ago. I'm sure there are exceptions in certain cases. I wonder what those same Steam users were saying about their Nvidia SLI setups when AMD had the better scaling just a few months ago. Probably nothing, because, as we know, admitting we're wrong about something is completely unacceptable, especially if it would put our brand of choice in an inferior light. Both companies have driver issues at different times, neither is perfect, get over it.

Also, in certain titles, you will have optimizations for certain hardware. Nvidia in particular has been renowned for implementing measures that nerf their competition. This has been particularly evident with Ubisoft titles in the past, and there are several examples of this that are too direct to be coincidental. It's entirely possible that certain examples have more to do with that than with the skill of the driver teams for either company.
 
Speak for yourself. I go back and forth, and I finally got tired of waiting a couple of months for new games, especially AAA titles, to run as well as they do for my friends with Nvidia cards.

When you play PC games with 15-20 other people in vent you start to notice the issues and 90% of the time it is something with an AMD GPU and a new release game.

Just go to the Steam forums for any new release and see it for yourself. AMD has shit drivers and their driver release is abysmal. If you play the newest games on or near release you would have to be a fool to go AMD.

Me thinks one would have to be a fool to give [H]ard earned money on or near release of any of the so called "AAA" games that have come out in the last few years. Especially if they come from EA, Ubisoft, or Activision, which all tend to be the companies making the games with the pretty graphics that need special driver enhancements. Plus, I would put the issues with those games on the developers of the game and not the developers of the drivers, whether it be AMD or Nvidia, as the game developers are the ones that say when, where, and how the driver teams can work on them, and they decide whether or not to release a crappy, bug-filled game on or near release.
 
Me thinks one would have to be a fool to give [H]ard earned money on or near release of any of the so called "AAA" games that have come out in the last few years. Especially if they come from EA, Ubisoft, or Activision, which all tend to be the companies making the games with the pretty graphics that need special driver enhancements. Plus, I would put the issues with those games on the developers of the game and not the developers of the drivers, whether it be AMD or Nvidia, as the game developers are the ones that say when, where, and how the driver teams can work on them, and they decide whether or not to release a crappy, bug-filled game on or near release.

+1 on the fool part!
 
I highly doubt the driver for this is to make AMD/NVIDIA play nicely together, but rather is a result of the fundamental changes in DX12. It does grab headlines though.

IMHO, DX12 is all about re-architecting how GPU work is performed. If you think functionally, it may be possible to "stream" the GPUs together. This would also explain the additive VRAM. Very simplistically, if the functional flow is A -> B -> C -> D, under previous versions of DX a single card must perform all the work; i.e. A through D. With SLI/Crossfire each card is performing all the work (A-D) for each frame and simply alternating frames.

However, what if I think about the problem and figure out a way to split up the work so that A and B can be performed on GPU 1, with the results then passed to GPU 2 to do C and D? Suddenly I have both GPUs splitting the work and I'm treating all the GPU resources as a pool. Also, since I would wisely break the functions apart in ways that limit data interdependency to the extent possible, the data needed to perform functions A and B on GPU 1 wouldn't need to be replicated on GPU 2 since it's doing functions C and D. Thus, additive VRAM.

Furthermore, this would allow two cards of various DX specifications and even manufacturers to work together as long as DX12 is architected correctly. Essentially, all GPU 1 knows is that it's been told to do functions A and B and return the results to this memory location. Same for GPU 2 but with functions C and D.

I don't have any special knowledge about DX12, but given the magnitude of the re-architecting, I would not be surprised at all if the GPU workflow changed dramatically (or at least could change if developers decided to use the functionality). Also, I admit the example is extremely simplistic, but I believe the point is still valid. I.e. pool GPU resources and allocate functions of the graphics pipeline across them such that no particular device is actually rendering an entire frame.
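
A minimal toy sketch of that split, just to make the idea concrete - this is plain C++, not real D3D12 code, and the stage names, VRAM sizes, and the two-GPU assignment are all made up for illustration:

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Toy model of the idea above: a frame's work is a chain of stages
// A -> B -> C -> D. GPU 1 owns the resources for A and B, GPU 2 owns the
// resources for C and D, and only the intermediate result crosses over.

struct Stage {
    std::string name;
    std::size_t resourceMB;  // VRAM this stage needs resident on its GPU
};

struct Gpu {
    std::string name;
    std::vector<Stage> stages;

    std::size_t residentMB() const {
        std::size_t total = 0;
        for (const auto& s : stages) total += s.resourceMB;
        return total;
    }

    // "Run" this GPU's half of the pipeline; the return value stands in for
    // the intermediate buffer handed to the next GPU.
    std::string process(const std::string& input) const {
        std::string result = input;
        for (const auto& s : stages) result += " -> " + s.name + "(" + name + ")";
        return result;
    }
};

int main() {
    Gpu gpu1{"GPU1", {{"A:geometry", 1024}, {"B:shadow maps", 512}}};
    Gpu gpu2{"GPU2", {{"C:shading", 1536}, {"D:post-processing", 256}}};

    std::string frame = gpu2.process(gpu1.process("frame N"));
    std::printf("%s\n", frame.c_str());

    // Because neither GPU holds the other's stage resources, VRAM is additive
    // rather than mirrored (unlike AFR, where both GPUs hold everything).
    std::printf("GPU1 resident: %zu MB, GPU2 resident: %zu MB, pool: %zu MB\n",
                gpu1.residentMB(), gpu2.residentMB(),
                gpu1.residentMB() + gpu2.residentMB());
    return 0;
}
```

The catch the toy hides is the cost of shipping that intermediate buffer between cards over PCIe every frame, which is presumably where a lot of the real-world engineering effort would have to go.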
 
Me thinks one would have to be a fool to give [H]ard earned money on or near release of any of the so called "AAA" games that have come out in the last few years.

I would mostly agree with you.

I mostly only buy deeply discounted games on Steam. Every now and then something comes out that I have anticipated enough that I am willing to buy it on launch or even pre-order, but it is rare.

Last two titles for me were Civilization V in 2010 which I preordered and Red Orchestra 2 in 2011 which I bought on release.

There have been other titles I have bought on sale which I deemed would have been worthy of purchase on launch (if not too buggy) but only a handful in the last 5-10 years.

Titles like the S.T.A.L.K.E.R games, Metro 2033, Metro Last Light, Deus Ex: Human Revolution and Counter Strike: Global Offensive are the only ones to come to mind right now.
 
Me thinks one would have to be a fool to give [H]ard earned money on or near release of any of the so called "AAA" games that have come out in the last few years. Especially if they come from EA, Ubisoft, or Activision, which all tend to be the companies making the games with the pretty graphics that need special driver enhancements. Plus, I would put the issues with those games on the developers of the game and not the developers of the drivers, whether it be AMD or Nvidia, as the game developers are the ones that say when, where, and how the driver teams can work on them, and they decide whether or not to release a crappy, bug-filled game on or near release.

Nothing wrong at all with buying AAA games at release, if you have an Nvidia card. I bought ten or so in the last year and had issues with only one game (FC4 stutter). Everything else played and ran perfectly.

But you keep fighting the good fight against the evil AAA publishers man! :rolleyes:
 
What doesn't make sense to me is that DX12 is supposed to reduce overhead, and bring DX closer to the GPU hardware.

Doing something like we are reading about in this thread - while really cool - seems to me like it would require INCREASING overhead, not reducing it, due to all the abstraction between the GPUs and the game needed to make this work.

Something is definitely not right. You can't have your cake and eat it too.

I guess we will see as more official details and specifics about DX12 come out.
 