DirectX 12 Can Combine Nvidia and AMD Cards

It would be interesting if you could install a lesser card simply to gain more VRAM. The memory bus of the card you choose would weigh heavily on the added performance, but it's an interesting possibility nonetheless. Perhaps a memory add-on card could also be a possibility, since it can be addressed separately and only has to shuffle info across the PCIe bus. Just a thought.

With the way storage works, if this "lesser" card has a much slower or higher latency access to the memory it will affect all of your performance.
 
I would guess that each card would render a portion of the screen, say something like every other line. So each card would have its own memory and its own driver but render the game at, say, 1920x540. DX12 would simply coordinate the refresh rate of the combined images and output through the dominant card.
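That interleaved split can be sketched in a few lines. This is a toy model, not real DX12 code; the GPU IDs, resolution, and the idea of a simple line-by-line composite are just assumptions to illustrate the guess above:

```python
# Toy model of interleaved split-frame rendering: two "GPUs" each
# render every other scanline of a 1920x1080 frame, and a final
# composite step interleaves the two 1920x540 halves.

WIDTH, HEIGHT = 1920, 1080

def render_half(gpu_id, width, lines):
    # Stand-in for one GPU rendering its share of scanlines;
    # each pixel just records which GPU produced it.
    return [[gpu_id] * width for _ in range(lines)]

def composite(even_half, odd_half):
    # The "dominant card" reassembles the full frame.
    frame = []
    for even_line, odd_line in zip(even_half, odd_half):
        frame.append(even_line)
        frame.append(odd_line)
    return frame

gpu0 = render_half(0, WIDTH, HEIGHT // 2)  # even scanlines
gpu1 = render_half(1, WIDTH, HEIGHT // 2)  # odd scanlines
frame = composite(gpu0, gpu1)

assert len(frame) == HEIGHT and len(frame[0]) == WIDTH
assert frame[0][0] == 0 and frame[1][0] == 1  # alternating ownership
```

Each card only ever touches its own half of the lines, which is why each could keep its own memory and driver.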
 
If this allows the same company's identical graphics cards to work together in all DX12 games with improved performance, then this would be a huge win for graphics card companies. Right now SLI and CFX don't work in all games, and in some games the performance is worse until a driver update. I tried SLI and CFX a few years ago, and they only seem worth it if you play a AAA game they work in. Otherwise it's a waste of money, and for that reason I will stay far away from having two graphics cards from either AMD or Nvidia.
 
With the way storage works, if this "lesser" card has a much slower or higher latency access to the memory it will affect all of your performance.

Or maybe it will work like the 970, with part of the memory being much much slower :p
 
I would guess that each card would render a portion of the screen, say something like every other line. So each card would have its own memory and its own driver but render the game at, say, 1920x540. DX12 would simply coordinate the refresh rate of the combined images and output through the dominant card.

That's just split-frame rendering. The general idea here is similar in that the final image still has to be assembled by some kind of master thread; however, you can offload specific workloads to certain processors. Your iGPU could work purely on things like shadows, ambient occlusion, and generally tasks that fit inside cache to avoid memory bottlenecks. It would free up the main card to do more geometry and shader crunching.
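A rough sketch of that kind of heterogeneous split. The pass names and the small/heavy labels are hypothetical, not anything DX12 actually defines; it just shows the scheduling idea:

```python
# Hypothetical render passes: cache-friendly passes go to the iGPU,
# bandwidth-heavy geometry/shader work stays on the discrete card.
tasks = {
    "shadow_maps": "small",
    "ambient_occlusion": "small",
    "geometry": "heavy",
    "shading": "heavy",
}

def schedule(tasks):
    # Partition passes between the two devices by workload size.
    plan = {"iGPU": [], "dGPU": []}
    for name, size in tasks.items():
        plan["iGPU" if size == "small" else "dGPU"].append(name)
    return plan

plan = schedule(tasks)
# As with split-frame rendering, a master thread would still have to
# composite both devices' results into the final frame.
```

The point is that the split is by pass type rather than by screen region, so each device runs the work it is best suited for.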
 
You can bet your ass that Nvidia, being the selfish little proprietary shits they are, is going to do everything in their power to stop this from happening. They already block CUDA and GPU PhysX if they detect an AMD card present; hell, even integrated chips have caused problems. Nvidia does not like AMD and will not play ball with them, not easily anyway.
 
If all the rumors about DirectX 12 are true, wow...

Combined VRAM, and Nvidia and AMD cards working in the same machine? Great stuff.

Like others have said, Nvidia will torpedo it into the ground.
 
The company that can't even get the start menu right is going to magically get AMD and Nvidia GPUs working together in a useful way?

DirectX releases are always surrounded by marketing BS, but that's pretty ridiculous.
 
The company that can't even get the start menu right ....

Think about this for a while. Who got the "Start Menu" right before Microsoft? I'm not saying that Microsoft got the Start Screen right or anything, but the debates over the Start Menu exist because, apparently, in Windows 7 Microsoft did get the Start Menu right.

Steven Sinofsky was the Windows lead of both Windows 7 and 8 and I do find it interesting that the same person is behind both efforts. The Start Screen that so many hate and the Start Menu that so many find instrumental to their productivity. Fascinating.
 
I'm not entirely sure how this stuff works, but I think I smell lawyers and lawsuits.
 
You can bet your ass that Nvidia, being the selfish little proprietary shits they are, is going to do everything in their power to stop this from happening. They already block CUDA and GPU PhysX if they detect an AMD card present; hell, even integrated chips have caused problems. Nvidia does not like AMD and will not play ball with them, not easily anyway.

Agreed, they will try something like this, unless the DX12 standard is written in such a way that it forces it, or nothing DX12 works.

I'm not sure Nvidia would be willing to sacrifice DX12 completely.
 
In the ring it's AMD vs. Nvidia, and Microsoft is the referee. I'm getting a bag of popcorn ready for this. :D
 
Nvidia will do everything it can at the driver level so it won't work with an AMD card, I would think...

+100

Exactly what they did with locking out the ability to use an Nvidia card for PhysX when an AMD card is installed.
 
Hell, I would be happy with DX12 just simplifying and debugging things at the driver level for either AMD or Nvidia. Why can't it be that simple?
 
M$ just needs Stevie Wonder and Paul McCartney announcing this with modified lyrics from Ebony and Ivory... Team Green and AMD live together... :D
 
Zarathustra[H];1041457847 said:
Agreed, they will try something like this, unless the DX12 standard is written in such a way that it forces it, or nothing DX12 works.

I'm not sure Nvidia would be willing to sacrifice DX12 completely.

Well, DX12 memory pooling can be done in a completely behind-the-scenes fashion, but Nvidia can do a driver detect and purposely kill off services if they want to. They've done it before; they will do it again.
 
So PhysX + FreeSync + HBM? Hmm, if true I would give a shit.

Hell, it would be nice if my 970 could prioritize the HBM over its own memory. :p ;)
 
Really the more interesting thing here is that the new API will allow memory pooling. No more frame buffer duplication.

Yeah, it's questionable whether it can be done, since for AMD at least, if two cards want to access each other's memory it has to go through the PCIe bus. Oh gee, mass stuttering and lag while that happens, because of the limited speed.

It will work until Nvidia torpedoes it.

Nvidia is likely the only one of the two with the balls to make a stand on it. The problem is you've got two different GPU architectures, and getting them to work together would take money and effort. Likely, given AMD's effort history, they will be expecting Nvidia to do all the work and spend all the money to do it. So all the AMD fanboys can stop attacking Nvidia as the only ones not going for it. AMD doesn't want to deal with support requests for hardware that isn't theirs; they have had problems providing support for their own hardware as it is. *cough* CF and drivers *cough*
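Some back-of-the-envelope numbers on the PCIe bottleneck raised above. The bandwidth figures are rounded assumptions (roughly 16 GB/s for PCIe 3.0 x16 versus roughly 224 GB/s for something like a GTX 970's local GDDR5), not measurements:

```python
# Compare local VRAM bandwidth to the PCIe link a cross-card memory
# read would have to cross. Figures are approximate, for scale only.
PCIE3_X16_GBS = 16.0     # ~16 GB/s per direction, PCIe 3.0 x16
LOCAL_VRAM_GBS = 224.0   # e.g. a GTX 970's rated memory bandwidth

frame_bytes = 1920 * 1080 * 4          # one 32-bit 1080p buffer
frame_gb = frame_bytes / (1024 ** 3)   # ~0.0077 GB

ms_over_pcie = frame_gb / PCIE3_X16_GBS * 1000  # ~0.48 ms per copy
slowdown = LOCAL_VRAM_GBS / PCIE3_X16_GBS       # ~14x

# Copying one finished buffer per frame is cheap, well under a
# millisecond, but continuously streaming textures out of the other
# card's pool would run roughly 14x slower than local VRAM reads.
```

So pooling could plausibly work for occasional transfers, while treating the remote pool like local memory is where the stuttering concern comes from.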
 
I can't think of any reason why I would buy two GPUs from different manufacturers. It's bad enough trying to deal with one driver's issues.
 
Likely, given AMD's effort history, they will be expecting Nvidia to do all the work and spend all the money to do it. So all the AMD fanboys can stop attacking Nvidia as the only ones not going for it. AMD doesn't want to deal with support requests for hardware that isn't theirs; they have had problems providing support for their own hardware as it is. *cough* CF and drivers *cough*

Pretty much. When NVIDIA spends millions on research & development for something to improve their products, and then doesn't immediately hand it all over free to their competition or make it public domain, they're being "greedy assholes with their proprietary crap". Fanboi logic.
 
Pretty much. When NVIDIA spends millions on research & development for something to improve their products, and then doesn't immediately hand it all over free to their competition or make it public domain, they're being "greedy assholes with their proprietary crap". Fanboi logic.

Well a lot of AMD fans do have a welfare mentality of "give me shit for free just because".
 
Pretty much. When NVIDIA spends millions on research & development for something to improve their products, and then doesn't immediately hand it all over free to their competition or make it public domain, they're being "greedy assholes with their proprietary crap". Fanboi logic.

It's more about the lengths they go to to prevent it from working when there is no reason. For example, PhysX shouldn't be locked to Nvidia-only setups.

If someone wants to use an AMD GPU as the main GPU and an Nvidia GPU for PhysX, all that does in the end is make Nvidia more money, because they turned a non-customer into a customer. But they choose to be spiteful and prevent that setup, trying to turn a potential customer into an Nvidia-only customer if you want to touch any of their stuff.

I for one will likely never buy an Nvidia product again. I am tired of their GameWorks crap that usually runs terribly on AMD hardware because they lock out the code, and the PhysX BS of intentionally disabling the ability to use it.

If Nvidia wants to grow up and start advancing gaming tech as a whole, and stop intentionally blocking or overcharging for features that should just be included for free, like G-Sync, they would likely get me back as a customer.

Take G-Sync: there is no reason not to allow it to function on AMD hardware. You pay a premium for the monitor, and part of that premium goes to Nvidia, so they already profit off you regardless of which GPU you use. It's not like you are getting it for free as an AMD user; you still have to pay Nvidia. But since it is an Nvidia-only feature and they still charge you a premium for it, it's just Nvidia double-dipping its customers. It should be a selling point for the GPU alone, not something you need to pay for on top of it. At least when AMD develops stuff, for the most part it is open to anyone who wants to use it. I am pretty sure AMD let Intel play with its Mantle code, and that is one of AMD's biggest rivals, and yet AMD manages to play ball. Then they helped develop FreeSync, etc. It is obvious AMD cares more about their customers, as well as the community as a whole, even though they may lack the budget to execute things properly at times.
 
As many others have said, yup, not going to trust Nvidia to stick to their end of the deal. They are like Sony with the PS3.

Here, have Linux/alternate OS.
Oh gnoes! Someone found out how to enable obscure and difficult offline piracy; sorry, no more alternate OS. But it's okay, you don't have to upgrade if you don't want to.

Release new shiny game: forced upgrade or no play.


Nvidia did this with PhysX in two stages, from memory. You could initially pick up a cheap ex-workstation Quadro with the minimum CUDA cores needed for Nvidia to allow you to use overblown PhysX. A few driver updates later they locked those out; it was a few years back, but I'm pretty sure they made it PhysX for GeForce only, or similar. Then a few updates later you couldn't use this trick at all with any of their cards alongside an AMD/ATI card. Plenty more stories like that from Nvidia too.

So, thanks but no thanks. I'll stick with the often equally crap AMD drivers; at least they don't completely screw customers over every time possible.
 
The VRAM doubling is the huge news IMO; I cannot wait for this. Looking forward to seeing how it works in the real world. Hoping this will allow same-brand CrossFire and SLI to also double VRAM. If so, this is a huge step for multi-GPU performance on the PC!
 
The biggest benefit to this is combining GPU + APU. All Intel CPUs have a graphics chip, and AMD has their APUs. Combine that with a graphics card and you have huge benefits. But nobody is going to be combining an AMD card with an Nvidia graphics card, because AMD and Nvidia will each find a way to keep you with their brand to gain some benefit.

Also, in other news, Khronos named their new API Vulkan. With all the rumors flying around and Valve about to do its presentation soon, I wonder if this is Microsoft trying to take some spotlight away from it. Oh well, good news is good news.
 
This will never pan out well. One of them will ruin this. They both despise giving options to their customers.
 
What magic would allow no overlap of the two VRAMs? It seems impossible.
 
The main application I see as far as cross-branding goes would be an AMD APU with an Nvidia GPU. There aren't many other cases where it would make sense, if that one even does.
The excitement for me lies in the combining of GPU RAM, and of course DX12's/Mantle's highly threaded nature, where cheap current 8-core CPUs combined with two or three mid-range cards could/should destroy on future DX12 and low-level API console/Mantle ports.
For the first time in a long time I am excited and optimistic about PC gaming!
 
Wouldn't this affect input latency? I feel input latency has slowly been going up and up as technology progresses... Sure, your screen could be ultra low, but so much happens from the time you move your mouse to what appears on the screen as it is... To this day no game does input latency like Quake on a CRT with my old 1 GHz Athlon + GeForce 3 setup.
 