Is SLI faster on Zen 2 (3700X, etc.)?

For the most part, all older games that supported SLI are still supported - in those cases performance with 2x 1070 Ti will indeed increase. If one has a good backlog of older games that one will play, SLI might be a rather smart option. If all you play are new games, then SLI is probably a crappy choice unless you play the very few newer titles that work well with it.

For games that do not work well or just plain don't work with SLI, just turn off SLI. The game will then run just as well as it would with a single card. Why the hell do people bring up degraded performance with SLI as if one has to suffer with it in a particular game? TURN IT OFF.

Buying the single fastest card does not mean the fastest performance in all cases. Case in point: my 2x 1080 Ti will rip any single 2080 Ti in Shadow of the Tomb Raider with the over-95% scaling that game engine delivers. Overall I would say the 2080 Ti would be the better, more consistent performer across a broad selection of games. And of course, if one wants the feel of lower performance with a 2080 Ti, forcing you to use lower resolutions for good frame rates, just use RT in the few titles that support it ;)
 
The only time you should choose an SLI setup is if you have literally no other choice.
Naaa, there are cases where some folks play only a few particular games, sometimes for years, and if SLI works well with those games it may benefit them greatly. Everyone is different, and some are way different. It is up to the person to research, ask questions, and figure out their best solution. The answer for SLI could be yes, maybe, or no, in my view.
 
Honestly, I've never gone SLI, have no intention of ever doing so, and quite frankly every time I see a post about it breaking or what-not, I quietly, and very smugly, smile to myself.
 
SLI was useful for many years, particularly in the G80 days when Nvidia invested more resources into getting games working better with it.

My guess is that it will make a comeback, because if AMD/Nvidia go forward with chiplet designs for GPUs, they will require SLI-like technology to run properly.
 
SLI was useful for many years, particularly in the G80 days when Nvidia invested more resources into getting games working better with it.

My guess is that it will make a comeback, because if AMD/Nvidia go forward with chiplet designs for GPUs, they will require SLI-like technology to run properly.
Instead of Nvidia selling you two cards to make more money, Nvidia just charges double the price to make even more money from you. ;)

It would be nice if it made a comeback, but the new price structure alone makes it very prohibitive overall.
 
For the most part, all older games that supported SLI are still supported - in those cases performance with 2x 1070 Ti will indeed increase. If one has a good backlog of older games that one will play, SLI might be a rather smart option. If all you play are new games, then SLI is probably a crappy choice unless you play the very few newer titles that work well with it.

For games that do not work well or just plain don't work with SLI, just turn off SLI. The game will then run just as well as it would with a single card. Why the hell do people bring up degraded performance with SLI as if one has to suffer with it in a particular game? TURN IT OFF.

Buying the single fastest card does not mean the fastest performance in all cases. Case in point: my 2x 1080 Ti will rip any single 2080 Ti in Shadow of the Tomb Raider with the over-95% scaling that game engine delivers. Overall I would say the 2080 Ti would be the better, more consistent performer across a broad selection of games. And of course, if one wants the feel of lower performance with a 2080 Ti, forcing you to use lower resolutions for good frame rates, just use RT in the few titles that support it ;)

I agree that SLI has always been the enthusiast's trial-and-error gaming setup of choice. It's virtually the opposite of the set-it-and-forget-it experience that a lot of entry-level PC users prefer. With SLI you are constantly searching for that magic alignment where everything just clicks and you get the performance you were expecting from your investment. Unless you want to spend as much time tweaking settings and reading forums to figure out what is actually working for people as you do playing, it shouldn't even be a consideration for more than 99% of the gaming public any longer.

Naaa, there are cases where some folks play only a few particular games, sometimes for years, and if SLI works well with those games it may benefit them greatly. Everyone is different, and some are way different. It is up to the person to research, ask questions, and figure out their best solution. The answer for SLI could be yes, maybe, or no, in my view.

When I used to run a 5970-class GPU years ago, the drivers were so borked that I would literally have to install a different set of drivers depending on what game I was playing; it was that much of a difference. Some games were choppy and unplayable with certain drivers, while others would get literally 3x the fps with newer drivers. It was a PITA, but it just forced me to invest more time in a single title for longer periods instead of hopping into whatever tickled my fancy at will (or depending on my kill streaks).
 
I usually concentrate on one game, so I did not need to rotate drivers. FC5 and CF didn't work, then worked great, and then didn't work??? At least I finished it before it broke. That was with 2x Vega FEs, HDR, and FreeSync (what a beauty). Then SLI with 2x 1080 Tis in Shadow of the Tomb Raider, the best multi-GPU game I've ever played with 95%+ scaling; that was using HDR, rendering at a higher resolution and downscaling - the most beautiful game I've ever played. That was before Nvidia supported FreeSync, unfortunately. I still have a backlog of games that SLI well, so SLI, at least for the 1080 Tis, is still useful. I don't think I would buy two new cards for CFX/SLI. As for multi-GPU DX12/Vulkan? I will wait for more games to support that effectively; only Shadow of the Tomb Raider does it extremely well with Nvidia, and it did not work with AMD - not sure about Rise of the Tomb Raider. Also, if VR starts implementing multi-GPU: only the Serious Sam VR series does that well, with AMD cards, not so much with Nvidia cards last I tested.
 
I ran SLI and/or CrossFire for years. It was GREAT in games like BF3/BF4, etc. But as Dan said, the devs don't give a shit anymore. Just not worth it.
 
From another point of view, a question: would CF/SLI work faster on a Zen 2 PCIe 4.0 platform?

SLI would see little to no benefit at all, since SLI relies on the SLI bridge for intercommunication.

CF would likely gain performance from faster PCIe transfers, but intercommunication through the CPU would unfortunately still be a necessity, mainly because CF uses the PCIe lanes and the CPU to communicate between cards. One could argue that a 5 GHz Intel chip would be faster by minimizing CPU latency, but that may not necessarily translate into better fps or performance. With Zen 2 you would have greater bandwidth while suffering slightly (not that much) worse latency. One would have to analyze deeply whether latency is the problem, and what kind of data is being communicated between the GPUs.
Though I think latency is a big issue, and the reason why NV is sticking with its SLI bridges.

Common to both: if a graphics card can be bottlenecked by PCIe 3.0 x8 or x4, we will see a gain in performance by going to PCIe 4.0, as we practically double our bandwidth on a Zen 2 based platform.
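
Rough math on that, for reference (theoretical per-direction numbers only, ignoring protocol overhead):

```python
# Rough theoretical PCIe bandwidth math (per direction), just to put numbers
# on the "practically double" claim. Line-encoding overhead only.
GTS = {"3.0": 8.0, "4.0": 16.0}      # transfer rate in GT/s per lane
ENCODING = 128 / 130                  # 128b/130b line encoding used by gen3/gen4

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Approximate one-way link bandwidth in GB/s."""
    return GTS[gen] * ENCODING / 8 * lanes

for gen, lanes in [("3.0", 16), ("3.0", 8), ("4.0", 8), ("4.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.1f} GB/s")
# PCIe 3.0 x16: ~15.8 GB/s, 3.0 x8: ~7.9 GB/s, 4.0 x8: ~15.8 GB/s, 4.0 x16: ~31.5 GB/s
```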



In today's use:
The shrinkage of multi-GPU usage in games is directly related to how physics effects are handled - one GPU lacks the precision required to render certain parts the same way as the other GPU.

It is also a problem with 'GameWorks' effects, as they lack the ability to be executed simultaneously on multiple GPUs (mainly in how the shader is created - the second buffer is never filled to render the effect). It's something NV is currently working on for their tile-based multi-GPU rendering, so there is hope. Then we'll have to get around the problem of non-uniform precision between the rendering and the compute - say, fluid dynamics or something else.
Problem: one GPU states (within its fp8 precision) that water should flow to the left, while the other GPU decides it should flow to the right - what do we get then? We'll likely get a couple of frames where water flows both ways before there's a sync, unless there isn't one. Another option would be reserving certain GPUs for certain workloads only, such as: you deal with RT compute, you deal with physics, and so on - just like it's possible (or at least used to be) to offload PhysX to another card.
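
To make the desync idea concrete, a purely illustrative sketch (not engine code): the same "forces" accumulated at low precision, in a different order on each "GPU", already disagree slightly, and near zero that difference can mean a different flow direction.

```python
import numpy as np

# Illustrative only: two GPUs computing "the same" physics step at reduced
# precision and/or in a different order won't necessarily agree bit-for-bit.
rng = np.random.default_rng(0)
forces = rng.standard_normal(10_000).astype(np.float32)

# "GPU A": accumulate in float16, front to back.
vel_a = np.float16(0.0)
for f in forces.astype(np.float16):
    vel_a = np.float16(vel_a + f)

# "GPU B": same forces, same nominal math, but accumulated back to front.
vel_b = np.float16(0.0)
for f in forces.astype(np.float16)[::-1]:
    vel_b = np.float16(vel_b + f)

print(vel_a, vel_b)                      # the two results generally differ
print(np.sign(vel_a) == np.sign(vel_b))  # near zero, even the sign (flow direction) can flip
```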

Endgame?
Whether devs want it or not, ultimately it won't be their choice when GPUs become MCMs. They will have to work with it and make it work.
It's very likely that the PS6, or a 5.2/5.5 refresh, will have the first MCM from AMD, and would thus mark a comeback for multi-GPU render support in games. (Unfortunately, I think consoles will have to lead the way.)
 
So really - and thanks for all the erudite responses - it is hit or miss, but still, in a lot of games two 1070 Tis can bring a decent performance boost.

Let's say I got an extra 1070 Ti for free - is it worth it then?

I think your interpretation of what everyone said is quite optimistic. If you get one for free, you're still better off selling both for a single better card. Unless you simply want the thrill of seeing two GPUs in your case with only one doing the work most of the time...
 
When I used to run a 5970-class GPU years ago, the drivers were so borked that I would literally have to install a different set of drivers depending on what game I was playing; it was that much of a difference. Some games were choppy and unplayable with certain drivers, while others would get literally 3x the fps with newer drivers.

I ran 2x 4870X2 quad-CrossFire during that same era and seem to have had a totally different experience. I was actually surprised by how many games not only supported CrossFire but supported all 4 GPUs. Maybe we played different games.
 
I think your interpretation of what everyone said is quite optimistic. If you get one for free, you're still better off selling both for a single better card. Unless you simply want the thrill of seeing two GPUs in your case with only one doing the work most of the time...

Nah, I get the vox populi; I can clearly see that the industry isn't going to keep supporting SLI, so I am not going to go that route.
My initial impulse for even asking was an article I read that said SLI might work better on multicore boxes, but if the developers are not going to code for it, as has been pointed out in this thread, then fuck it.
 
I can clearly see that the industry isn't going to keep supporting SLI, so I am not going to go that route.

Here's the fun part: the engine developers are supporting mGPU.

I'm thinking that while there's very little return for it today, keeping the basic support updated -- even if games shipping with said engines don't use it -- is worth it with higher resolutions and refresh rates, VR, and ray tracing all on the horizon.

4k per eye with < 12ms frametimes? With ray tracing? That's well more than two of whatever is coming out next year.
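
Rough arithmetic on what that would take (assuming "4k per eye" means 3840x2160 per eye; actual headsets and render targets will differ):

```python
# Rough pixel-rate arithmetic for "4K per eye with < 12 ms frametimes".
# Assumes 3840x2160 per eye; real headsets/render targets will differ.
width, height, eyes = 3840, 2160, 2
frametime_ms = 12
fps = 1000 / frametime_ms                      # ~83 fps minimum
pixels_per_frame = width * height * eyes       # ~16.6 million pixels
pixels_per_second = pixels_per_frame * fps     # ~1.4 billion shaded pixels/s

print(f"{fps:.0f} fps, {pixels_per_frame/1e6:.1f} MPix/frame, "
      f"{pixels_per_second/1e9:.2f} GPix/s before any ray tracing work")
```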
 
Here's the fun part: the engine developers are supporting mGPU.

I'm thinking that while there's very little return for it today, keeping the basic support updated -- even if games shipping with said engines don't use it -- is worth it with higher resolutions and refresh rates, VR, and ray tracing all on the horizon.

4k per eye with < 12ms frametimes? With ray tracing? That's well more than two of whatever is coming out next year.

Not only engine devs - Nvidia has continued to work on mGPU and new tech for it as well. NV is still working with AFR but has put more effort into Checkerboard Frame Rendering (CFR) for NVLink-capable GPUs. CFR is still scaling at about 50%, compared to 90% for AFR, in Crysis, but there is no micro stutter with CFR. CFR is available in the drivers but requires activation in the inspector.
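
To put those scaling numbers in perspective (illustrative only, using an assumed 60 fps single-GPU baseline, not a measured one):

```python
# Illustrative only: what "50% vs 90% scaling" means for two-GPU frame rates.
single_gpu_fps = 60.0          # assumed baseline for illustration

def two_gpu_fps(baseline: float, scaling: float) -> float:
    """fps when the second GPU contributes `scaling` of an extra GPU's worth of work."""
    return baseline * (1 + scaling)

print(f"AFR @ 90% scaling: ~{two_gpu_fps(single_gpu_fps, 0.90):.0f} fps (but with micro stutter risk)")
print(f"CFR @ 50% scaling: ~{two_gpu_fps(single_gpu_fps, 0.50):.0f} fps (reported without micro stutter)")
```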

I was reading that RDR2's SLI seems to work really well, and the scaling is impressive despite all the other issues with the game.

Here is 3DCenter's info on CFR, and their forums have a ton more. It is in German, just FYI.

http://www.3dcenter.org/news/nvidia-wiederbelebt-multigpu-rendering-mittels-neuem-cfr-modus

With regard to VR, I would gladly pick up another GPU if mGPU were implemented in VR games. Unfortunately, the devs of the VR games I play, mainly DCS/P3D, have said they have little interest in mGPU. It is a shame.
 
Not only engine devs - Nvidia has continued to work on mGPU and new tech for it as well. NV is still working with AFR but has put more effort into Checkerboard Frame Rendering (CFR) for NVLink-capable GPUs. CFR is still scaling at about 50%, compared to 90% for AFR, in Crysis, but there is no micro stutter with CFR. CFR is available in the drivers but requires activation in the inspector.

I was reading that RDR2's SLI seems to work really well, and the scaling is impressive despite all the other issues with the game.

Here is 3DCenter's info on CFR, and their forums have a ton more. It is in German, just FYI.

http://www.3dcenter.org/news/nvidia-wiederbelebt-multigpu-rendering-mittels-neuem-cfr-modus

With regard to VR, I would gladly pick up another GPU if mGPU were implemented in VR games. Unfortunately, the devs of the VR games I play, mainly DCS/P3D, have said they have little interest in mGPU. It is a shame.

Does CFR "just work"? I am fine with a true 50% increase in FPS if there's zero additional micro stutter and it just works all the time. From your last line, it sounds like it still needs developer support.

I personally won't dabble in mGPU if it's not as easy as a single GPU. I am OK with enabling it in the inspector, but it has to "just work" after that.
 
With regard to VR, I would gladly pick up another GPU if mGPU were implemented in VR games.

I'm assuming that this will come pretty quickly.

One of the bigger challenges is getting PC engines, and the games built on them, to support DX12 well. With that done, and with ray tracing and mGPU in mind, this stuff should be easy going forward. I think the latest Tomb Raider is an example of a decent ray tracing and SLI implementation, and that was a backport.
 
Does CFR "just work"? I am fine with a true 50% increase in FPS if there's zero additional micro stutter and it just works all the time. From your last line, it sounds like it still needs developer support.

I personally won't dabble in mGPU if it's not as easy as a single GPU. I am OK with enabling it in the inspector, but it has to "just work" after that.

I believe CFR just needs mGPU support in the game engine, and it is NV's drivers that use CFR rather than AFR. In the inspector you select how the GPUs render in SLI: either AFR or CFR. Everything I read on 3DCenter made it sound like it works now with zero stutter, but the scaling isn't as high as AFR's. I personally won't be testing anytime soon.

My last line was mainly regarding VR, since most VR titles use custom engines and the dev teams won't spend time on mGPU at this point.
 
My last line was mainly regarding VR, since most VR titles use custom engines and the dev teams won't spend time on mGPU at this point.

Ah, so that's the problem. Nvidia has had a 'one GPU per eye' solution available for quite some time, but you'd probably have to license the Unreal Engine to get baked-in support.

But couple that with SLI support and, say, four 3080 Tis... you could probably pull off 4k per eye at 120Hz with ray tracing!

Someone will build it, if there's support :D

[maybe Tech Jesus?]
 
I thought one of the major issues with multi-card and VR is latency.
VR is so sensitive to uneven framerate and lag that it's difficult to solve both.
Perhaps the new chiplet designs that share memory between cores can help.
 
I thought one of the major issues with multi-card and VR is latency.
VR is so sensitive to uneven framerate and lag that it's difficult to solve both.
Perhaps the new chiplet designs that share memory between cores can help.

That's actually the thing they'd be trying to solve: use one GPU per eye. At most there'd need to be a simple bit of logic to make sure that the GPUs were synced, but otherwise not a big deal.
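
A minimal sketch of what that "simple bit of sync logic" could look like, with plain Python threads standing in for the two GPUs (the render work here is purely hypothetical, not any real VR API):

```python
import threading, time

# Purely illustrative stand-in for one-GPU-per-eye rendering: two workers
# "render" their eye independently, then meet at a barrier before "present",
# so neither eye's image is shown until both are ready.
present_barrier = threading.Barrier(2)

def render_eye(eye: str, gpu_id: int, frames: int = 3) -> None:
    for frame in range(frames):
        # Hypothetical per-GPU render work; a real engine would submit to GPU `gpu_id`.
        time.sleep(0.01 + 0.005 * gpu_id)   # pretend the two GPUs take slightly different times
        present_barrier.wait()              # sync point: both eyes finished this frame
        print(f"frame {frame}: {eye} eye (GPU {gpu_id}) ready to present")

threads = [threading.Thread(target=render_eye, args=("left", 0)),
           threading.Thread(target=render_eye, args=("right", 1))]
for t in threads: t.start()
for t in threads: t.join()
```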
 
SLI was useful for many years, particularly in the G80 days when Nvidia invested more resources into getting games working better with it.

My guess is that it will make a comeback, because if AMD/Nvidia go forward with chiplet designs for GPUs, they will require SLI-like technology to run properly.
Isn't the current rumor that Nvidia is looking at a chiplet design for their next GPU architecture in a couple of years?

Regardless, I think I'd probably forego SLI simply on the grounds that the money spent on a second 10 series card could instead be spent on getting a 20 series card that is that much better, or getting a better monitor or something. I would think a 1070 Ti should still be fine for most reasonable monitor gaming.

The exception to this being that if what you really want is just to screw around with an SLI rig now that you don't have to sell both kidneys to try it out, then by all means, go for it.

That's actually the thing they'd be trying to solve: use one GPU per eye. At most there'd need to be a simple bit of logic to make sure that the GPUs were synced, but otherwise not a big deal.

I wonder what the scaling for that would actually look like. It seems like such an obvious solution, but I have a feeling it's not as great as one might expect. There is a lot of work in rendering each frame that is shared between both eyes in VR, such as shadow maps, and if you split the rendering between two GPUs, you'd either have to have a way of sharing the shadow maps between the GPUs or render them independently on each one, which would undermine the benefit of using two or more.
 
That's actually the thing they'd be trying to solve: use one GPU per eye. At most there'd need to be a simple bit of logic to make sure that the GPUs were synced, but otherwise not a big deal.
I heard they were doing this but haven't seen any results.

It should work great with wireless (assuming the bandwidth isn't an issue), but perhaps not so well with a single cable: raw video data would need to be sent from the daughter card to the one connected to the VR headset with minimal delay.
It may need a dual cable to be most effective.

Unless...
they create a non-standard link that connects two cards down one cable.
We would be at their mercy for extensions.
 
I wonder what the scaling for that would actually look like. It seems like such an obvious solution, but I have a feeling it's not as great as one might expect. There is a lot of work in rendering each frame that is shared between both eyes in VR, such as shadow maps, and if you split the rendering between two GPUs, you'd either have to have a way of sharing the shadow maps between the GPUs or render them independently on each one, which would undermine the benefit of using two or more.

The basic issue is that the FOVs for each eye are necessarily different, so at the very least you have to render pixels separately. Obviously there's plenty of room for culling and resource sharing, but that would only be useful for VR when it doesn't affect frametimes.
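
Roughly what that looks like in practice (a simplified NumPy sketch assuming a made-up 64 mm IPD, and ignoring the asymmetric projection frusta real headsets use):

```python
import numpy as np

# Illustrative stereo-view sketch: the two eyes share a head pose but are
# offset by half the interpupillary distance (IPD) along the head's right axis,
# so every pixel is rendered from a slightly different viewpoint per eye.
IPD = 0.064  # metres; made-up typical value for illustration

def eye_view_matrix(head_view: np.ndarray, eye: str) -> np.ndarray:
    """Return a per-eye view matrix by translating the head view along local X."""
    sign = -1.0 if eye == "left" else 1.0
    offset = np.eye(4)
    offset[0, 3] = -sign * IPD / 2.0   # shift the world opposite to the eye's offset
    return offset @ head_view

head_view = np.eye(4)                   # head at the origin, looking down -Z
left, right = eye_view_matrix(head_view, "left"), eye_view_matrix(head_view, "right")
print(left[0, 3], right[0, 3])          # 0.032 and -0.032: a different view transform per eye
```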
 
Unless...
they create a non-standard link that connects two cards down one cable.
We would be at their mercy for extensions.

I don't see why binding two cables together wouldn't work, honestly. Obviously the cable would be a little bulkier and the headset would need to be able to accept the extra signal lines, but that's the easy stuff.
 
I don't see why binding two cables together wouldn't work, honestly. Obviously the cable would be a little bulkier and the headset would need to be able to accept the extra signal lines, but that's the easy stuff.
It's the turning around, twisting the cables around each other.
It adds more movement resistance and is more likely to damage a cable.
 
It's the turning around, twisting the cables around each other.
It adds more movement resistance and is more likely to damage a cable.

Oh, no doubt -- they'd want to do a custom cable rather than glue two together, to prevent that as much as possible.
 