Near-Perfect mGPU Scaling in Strange Brigade

Are you running all settings at low or something? My GPU is maxed out going between a high of ~60fps and a low of ~48fps.

And that's when running a custom ultrawide resolution of 3840x1646, letterboxed, in order to increase framerates.

I'm not willing to turn down the settings.

No, only Shadows turned down from very high to high, and no Ambient Occlusion. Even at max settings with the enhanced HBAO+ mod, I never drop below 45fps when GPU limited. Shadow distance is kept at very high.
 
What I've never found explained in detail is why SFR can't work properly. After all, it's the only mGPU approach that makes sense, and it even seems like it should be natural for multi-monitor setups.

They always like to talk about parallelization, and they do market segmentation in part by adding or removing cores, so why not full GPUs? What is preventing this?

By the way, RT looks like the perfect application for SFR, since they are doing backwards ray tracing.
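To spell out why backwards (eye-based) ray tracing seems so natural for SFR: each pixel's primary ray is an independent computation, so in principle a frame splits cleanly along any screen boundary. A toy Python sketch (hypothetical stand-in for a real tracer) of that independence:

```python
# Toy sketch: in eye-based ray tracing, each pixel is an independent
# computation, so a frame can be split between two "GPUs" at any screen
# boundary and recombined with no shared state. trace_pixel is a
# hypothetical stand-in for a real ray trace.

WIDTH, HEIGHT = 8, 4  # toy framebuffer

def trace_pixel(x, y):
    # Any pure function of (x, y) works here, because no pixel
    # depends on another pixel's result.
    return (x * 31 + y * 17) % 256

def render_region(x0, x1):
    """One 'GPU' renders columns x0..x1-1."""
    return {(x, y): trace_pixel(x, y)
            for x in range(x0, x1) for y in range(HEIGHT)}

# Split-frame rendering: each half is computed independently.
left = render_region(0, WIDTH // 2)
right = render_region(WIDTH // 2, WIDTH)

combined = {**left, **right}
assert combined == render_region(0, WIDTH)  # identical to single-GPU render
```

Of course, real renderers share state between pixels (shadow maps, post-processing, denoising), which is where this clean picture breaks down.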
 
I hate this generalization. There are so many modern games that are easily maxed out at 4k on current GPUs.

I've been gaming at 4k since 2014!

I've been on 4k since 2016 or so and exclusively used SLI the entire time.

980s to 1080s
 
If SFR comes back with a vengeance, I'll definitely consider going mGPU again. If not, I'll probably keep buying a single very expensive GPU. I just don't want to deal with AFR.

SFR could be really useful for VR, where you end up having to render things twice anyway.
 

Because it is very hard to balance the workload for optimal results. The left half can have significantly less to render than the right half when standing next to a wall, for example.
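A toy sketch of that balancing problem: a driver can move the split line each frame based on how long each GPU took, but the scene can shift faster than the feedback loop converges. Everything here (the `rebalance` function, the gain, the timings) is hypothetical, just to illustrate the mechanism:

```python
# Hypothetical sketch of adaptive SFR load balancing: move the vertical
# split line toward the slower half, proportional to the measured
# imbalance from the previous frame.

def rebalance(split, t_left, t_right, width=3840, gain=0.5):
    """Return a new split-line x position given last frame's per-half
    render times (ms). Positive imbalance means the right half was slower,
    so the split moves right to shrink the right GPU's share."""
    imbalance = (t_right - t_left) / (t_left + t_right)
    split += int(gain * imbalance * width)
    return max(1, min(width - 1, split))

# Frame N: standing next to a wall, the left half is nearly empty.
split = rebalance(1920, t_left=2.0, t_right=14.0)
print(split)  # split moves well to the right

# Frame N+1: the player turns and the costly geometry is now on the left;
# the stale correction from frame N makes the imbalance worse before the
# loop recovers — which is why per-frame balancing is so hard in practice.
```

The feedback always lags by at least a frame, so any fast camera motion produces exactly the stutter SFR was supposed to avoid.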
 
I would take higher minimums and lower averages over lower minimums and higher averages.

That's not the problem. Using SFR outright kills the purpose of going with a multi-GPU setup in the first place. And there is no solution to this problem even after years of SFR's existence. AFR has its shortcomings, but there are solutions to reduce them.
 

No, it doesn't. The only reason AFR gained dominance over SFR is that it is much easier to implement at the driver level, while SFR is nearly impossible to optimize at the driver level. SFR would have to be done at the developer/engine level for best results, and few developers are willing to put in the effort. Also, who would want a maximum of 250 FPS when your lows are 15 FPS? Lows of 30 FPS with highs of 100 FPS would be preferable.
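The averages-vs-lows point is easy to make concrete. With two hypothetical frame-time traces (the numbers are illustrative, not measurements), the spiky profile wins on average FPS yet feels far worse:

```python
# Illustrative sketch: average FPS can hide terrible lows. Both traces
# below are made-up frame times (ms) for ten frames.

def stats(frame_times_ms):
    """Return (average FPS, worst-frame FPS) for a list of frame times."""
    fps = sorted(1000.0 / t for t in frame_times_ms)
    avg = sum(fps) / len(fps)
    worst = fps[0]
    return round(avg, 1), round(worst, 1)

spiky = [4.0] * 9 + [66.7]    # mostly 250 FPS, with one ~15 FPS hitch
steady = [10.0] * 9 + [33.3]  # mostly 100 FPS, worst frame ~30 FPS

print(stats(spiky))   # much higher average, far worse low
print(stats(steady))  # lower average, but the worst frame is twice as fast
```

A benchmark chart built on averages ranks the spiky config first; anyone actually playing would pick the steady one.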
 

The Civ devs tried SFR with Civ: BE before, and they still didn't solve the main issue with SFR. If there were a solution, it would be used. Needs tighter integration at the game-engine level? If that's the case, the likes of nvidia would probably already be working on making it a reality. Remember, they have big influence on game engine developers. They will push a technology even if the majority of game developers aren't interested in it (just like their PhysX).
 

There is a solution, just very difficult to implement. AFR is significantly easier to implement and produces higher averages and maximums.

nVidia and AMD have not been pushing multi-GPU for a long time. Especially nVidia, they would rather keep increasing the price on their flagship cards and spend their money on draconian NDAs.
 

Difficult means it is not impossible. If it is possible, then we should at least see some demo showing how the problem can be solved, even if the demo is tuned specifically to make it work (in other words, not really applicable to how real games work, but still doable). I still remember when AMD said game AI could be "accelerated" using GPU compute. It might not have been applicable to how AI really works in games, but back then AMD at least had a demo to show the public. With SFR there is none at all.

And yes, in the past multi-GPU was one way for AMD and nvidia to increase revenue, because back then the majority of revenue came from mid-range hardware. How about spending less than what a flagship would cost and still getting significantly better performance, even when the scaling isn't that good? GTX 460 SLI was on average about 20% faster than a GTX 480, and two of them cost much less than a single GTX 480.

But over time, multi-GPU in the mid-range lost its appeal, because two mid-range cards could no longer beat the flagship. This has been the case since the GTX 960. They said nvidia was afraid the 1060 would eat into 1080 sales if SLI were enabled on the 1060, but the reality is that to match a 1080, the 1060 would at minimum need perfect scaling, and that's simply impossible. This is actually what happened with the GTX 960: how many people actually went for GTX 960 SLI? So ditching SLI support on the 1060 was the right move for nvidia.
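The arithmetic behind that shift is simple. Using rough, hypothetical relative-performance numbers (a GTX 460 at ~60% of a GTX 480, a GTX 1060 at ~50% of a GTX 1080; real reviews varied by game), the required SLI scaling factor tells the story:

```python
# Back-of-the-envelope sketch: what SLI scaling two mid-range cards need
# to merely MATCH the flagship. All performance numbers are rough
# assumptions for illustration, not benchmark results.

def scaling_needed(midrange_perf, flagship_perf):
    """SLI scaling factor (1.0 = perfect) at which two mid-range cards
    equal one flagship."""
    return flagship_perf / (2 * midrange_perf)

# Fermi era: GTX 460 at ~60% of a GTX 480 -> only ~83% scaling needed,
# which real games routinely exceeded, hence "460 SLI beats a 480".
fermi = scaling_needed(midrange_perf=60, flagship_perf=100)

# Pascal era: GTX 1060 at ~50% of a GTX 1080 -> 100% (perfect) scaling
# needed just to tie, which no real game delivers.
pascal = scaling_needed(midrange_perf=50, flagship_perf=100)

print(round(fermi, 2), round(pascal, 2))
```

Once the flagship is more than twice the mid-range card, mid-range SLI can't win even in theory, so dropping support costs the vendor almost nothing.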
 

As I understand it (but I could be wrong), with SFR a lot of data needs to move between the GPUs. This was manageable during the Voodoo days, when we all gamed at 640x480 or 800x600.

Pump up the resolution to 1440p or 4k and you have a completely different scale of a problem.

It would seem to me that AMD's infinity fabric tech gives them a way to do this though.

I'd absolutely love to see a new scalable SFR tech, but I'm not holding my breath because it would take efforts and man hours, and we already have AFR.

AFR has many drawbacks, but I'm not sure the cost/benefit calculation for either AMD or Nvidia makes sense, especially since most customers seem content with the status quo, and seem to be blissfully ignorant of the issues with AFR.

If customers start expressing interest - however - this might change.

At some point it will have to if we want to see continued performance increases. With each die shrink becoming increasingly difficult and costly, the only way to improve performance at some point will be to split the load between multiple chips, and splitting the frame is the best way to accomplish this.

If we can get to the point where the interconnects are fast enough that you can just string rendering cores on separate dies together modularly, this would be awesome. They could then even present themselves to the host PC as a single GPU.

I mean, a GPU is already a large collection of parallel rendering cores. The hard part is already done. Why should these cores have to reside on the same die?
 