DrezKill (Gawd, joined Mar 11, 2007, 769 messages)
Not having to duplicate the frame buffer and being able to use the full amount of vRAM across all installed GPUs is awesome. But some other things seem weird. I wonder how they will work out.
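Rough back-of-the-envelope on what dropping the duplicated frame buffer means for usable memory (the card sizes below are made-up example numbers, just to illustrate mirrored AFR-style memory vs a pooled DX12-style setup):

```python
# With SLI/CrossFire AFR, each GPU mirrors the same resources, so
# usable VRAM is effectively the size of one card, not the sum.
# With DX12-style pooling, each GPU can hold distinct resources,
# so capacities roughly add up.

def usable_vram_mirrored(cards_gb):
    # every card holds a full copy -> limited by the smallest card
    return min(cards_gb)

def usable_vram_pooled(cards_gb):
    # each card holds different resources -> capacities add
    return sum(cards_gb)

cards = [4, 4]  # e.g. two hypothetical 4 GB cards
print(usable_vram_mirrored(cards))  # 4
print(usable_vram_pooled(cards))    # 8
```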
"...the API combines all the different graphics resources in a system and puts them all into one 'bucket.' It is then left to the game developer to divide the workload up however they see fit, letting different hardware take care of different tasks."
"There is a catch, however. Lots of the optimization work for the spreading of workloads is left to the developers – the game studios. The same went for older APIs, though, and DirectX 12 is intended to be much friendlier. For advanced uses it may be a bit tricky, but according to the source, implementing the SFR should be a relatively simple and painless process for most developers."
With so much being left up to the developers, we're probably gonna end up with another situation where we have another potentially highly-useful PC-only tech being ignored or barely utilized by the majority of developers, who will continue to focus their development and optimization efforts on consoles. Of course, this tech may not even end up as useful as it sounds. Too early to tell.
"The source said that with binding the multiple GPUs together, DirectX 12 treats the entire graphics subsystem as a single, more powerful graphics card. Thus, users get the robustness of running a single GPU, but with multiple graphics cards."
Huh, interesting. I assume then that if the devs choose not to manually handle GPU assignment, the OS+DX12 will handle things automatically? Does this then mean that every game will have some basic level of support for this DX12 multi-GPU shiznit, even without specific support on a per-game basis in the GPU drivers (as with SLI and CFX)?
And it looks like SFR will be the order of the day here. Doesn't look like AFR will be an option. SFR is a better fit I guess, cuz of the leeway to assign the rendering of specific portions of the screen to GPUs based on their power.
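A toy sketch of that "assign screen portions by GPU power" idea behind SFR (the performance weights here are made-up numbers, not anything from the article):

```python
# Split a frame's scanlines among GPUs proportionally to a relative
# performance weight, so a faster card renders a taller slice.

def sfr_split(height, weights):
    total = sum(weights)
    slices, top = [], 0
    for i, w in enumerate(weights):
        # last GPU takes the remainder so rounding never drops rows
        if i == len(weights) - 1:
            bottom = height
        else:
            bottom = top + round(height * w / total)
        slices.append((top, bottom))
        top = bottom
    return slices

# e.g. a fast card paired with one half as fast, on a 1080-row frame
print(sfr_split(1080, [2, 1]))  # [(0, 720), (720, 1080)]
```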
This is interesting:
"Our source suggested that this technology will significantly reduce latency, and the explanation is simple. With AFR, a number of frames need to be in queue in order to deliver a smooth experience, but what this means is that the image on screen will always be about 4-5 frames behind the user's input actions.
This might deliver a very high framerate, but the latency will still make the game feel much less responsive. With SFR, however, the queue depth is always just one, or arguably even less, as each GPU is working on a different part of the screen. As the queue depth goes down, the framerate should also go up due to freed-up resources."
Is lag with AFR really that bad? I don't have too much experience with SLI and CFX, so I can't say for myself if I've ever noticed such. 4-5 frames behind the user's input? That sounds bad in text, but in real life has this been much of an issue? Well, if it is an issue, it looks like it won't be one for much longer.
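Quick arithmetic on the quote's latency claim (the queue depths come from the quote; the framerate is an assumed example):

```python
# Input-to-screen lag contributed by the frame queue is roughly
# queue_depth / framerate. AFR keeps several frames in flight;
# SFR keeps about one.

def queue_latency_ms(queue_depth, fps):
    return 1000.0 * queue_depth / fps

print(round(queue_latency_ms(4, 60), 1))  # AFR, 4-deep queue @ 60 fps -> 66.7 ms
print(round(queue_latency_ms(1, 60), 1))  # SFR, 1-deep queue @ 60 fps -> 16.7 ms
```

So at 60 fps, a 4-frame queue alone is pushing 67 ms of lag before display and input-device delays even enter the picture, which is in the range people usually describe as noticeably sluggish.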
(Source of quotes: http://www.tomshardware.com/news/microsoft-directx12-amd-nvidia,28606.html)
I agree with a lot of other people though - even if AMD is down with this, nVidia is sure as hell not gonna be cool with it. They'll do whatever they can to stop their GPUs from playing nice with others. Plus we all know what a barrel of fun it is to install both Radeon and GeForce drivers on the same machine.
But if things do pan out like we hope, it would be awesome to be able to install cards from both sides and utilize their specific technologies. Really though I'm most excited about not having to duplicate frame buffers anymore.
Other questions need to be cleared up as well, such as basic setup and output to specific monitors. What about scenarios where, say, you have a G-Sync monitor hooked up to a GeForce card and a FreeSync monitor hooked up to a Radeon, all in the same system? Or heck even the same situation without G-Sync and FreeSync. Who knows what could get fucked up. And I'm not really sure how well things are gonna work out with outputting a game to multiple monitors (like with Eyefinity and NV Surround) with mixed-vendor GPUs in the system (especially if these monitors are connected to different GPUs).

Well I guess it's way too damn early to be speculating and trying to get answers for shit that's not really in place yet. I'll be keeping my eye on this stuff though. When can we start seeing previews of this stuff in action? And I mean previews performed by actual sites such as HardOCP and Anandtech, with actual hardware in hand (and necessary preliminary software/drivers/APIs/etc), not a demonstration at some trade show or event.
In the end, I gotta say this shiznit was hella random, and not expected at all. All the other stuff we've heard about DX12, such as the massively expanded draw calls and lower system overhead and closer-to-the-metal optimizations and whatnot, yeah all pretty much stuff we could expect. But native support in the API and OS for mixed usage of Radeons and GeForces? Yeah, I didn't see that coming. And at this point in time, I remain largely skeptical of how it will work out, while realizing that there isn't enough information available yet. I'll take a wait-and-see approach. I ain't gonna be bankin' on it too hard, however.