GTX 1080 ..... is SLI useless at 1080p?

Normally, when building or upgrading a gaming rig, I try to get the best single card I can afford, then about a year or so later buy a second one used cheap to SLI/Crossfire it, so I can keep the system going for a few years more until the next single card can outperform that SLI setup.

Currently I have two GTX 670s in SLI, and am looking to upgrade. The 900 series cards were close, but barely beat out my SLI setup, unless you were looking at TITAN cards or (at the time) $800 Ti versions.

Needless to say, the 1080 obliterates it, and I am considering an upgrade.

Thing is, my monitor is 1080p, and I am not really interested in upgrading to 4K, mainly because it is a 144Hz monitor that supports Nvidia 3D Vision (yes, I actually use the 3D feature), and pushing 4K at 120 FPS would be a challenge even for two 1080 cards..... not to mention I don't think any monitor even supports 120+Hz and 3D Vision at 4K... and if one does, it would cost a fortune right now.

However..... I looked at benchmarks of GTX 1080 cards in SLI and the results were..... disheartening.

www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_SLI/

They performed well at 4K; practically every game they tested saw a clear FPS boost in SLI at 4K.... but it's the 1080p results that worry me. The majority of the games saw practically no boost at all, or worse, a few of them actually ran SLOWER than a single card would!

Granted, this benchmark was made a while ago, so that's why I am asking here. Has this situation improved at all? Was it a driver or lack-of-support issue? Or do two GTX 1080s still barely bring any performance increase at 1080p?

Admittedly, I didn't realize games needed to have an SLI profile, otherwise they would run on a single card. This wasn't really an issue back when the 670s were new, as SLI profiles were being updated in Nvidia's drivers for pretty much all major games all the time, but lately they seem to be putting far less effort into SLI support. (Strange, you would think Nvidia would WANT people to buy more of their hardware.)

As I said though, I am not going to just buy two 1080s right off the bat. I am going to replace my dual 670s with a single 1080, and if a second 1080 isn't a pointless waste, add one in later and stick with that setup for a few years instead of upgrading to the 1180 when it comes out. I don't upgrade my cards every time a new one is out.
 
A single 1080 is enough for 1080p.

A single 1080 is enough for regular 1080P, but the OP mentions in his post that he actually uses the NVidia 3D vision feature of his 1080P monitor. Basically, he needs to be able to push 120+ FPS consistently, not unlike a VR setup.

I would guess that the 1080 is still adequate for this, but probably not the massive overkill a lot of us would think. I say start with one. Get another if you're not satisfied. I suspect that VR will probably stimulate interest in using a discrete GPU for each eye, at which point the scaling of multi-gpu setups will likely improve, at least for up to two cards in stereo applications like the OP's.
 
A GTX 1080 is around 20% faster than a Titan X, which pushes around 80 FPS in a lot of recent games at 2560x1440 with in-game settings at ultra or a mix of ultra and high. 2560x1440 is 78% more pixels than 1920x1080. I think a single 1080 will have no problem getting a solid 120 FPS in a lot of games. If not, then turn the image quality settings down to get there. But I agree that it really isn't "overkill" as many people seem to think. There is still an overwhelming opinion that 60 FPS is the standard, and I say that is short-sighted.

Beyond that, SLI is really on life support at the moment. The hit you take to frame times probably outweighs the increase you'd get in raw framerate anyway, especially in VR and/or 3D.
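
To put rough numbers on that, here's a quick back-of-envelope sketch in Python (my own assumption that FPS scales roughly inversely with pixel count in GPU-bound games, which real titles only approximate):

# Rough estimate only: assumes FPS scales inversely with pixel count,
# which real games only approximate (CPU limits, engine caps, etc.).
titan_x_fps_1440p = 80            # the ballpark "around 80 FPS" figure above
gtx_1080_speedup = 1.20           # "around 20% faster" than a Titan X

pixels_1440p = 2560 * 1440        # 3,686,400
pixels_1080p = 1920 * 1080        # 2,073,600 (so 1440p is ~78% more pixels)

fps_1440p = titan_x_fps_1440p * gtx_1080_speedup       # ~96 FPS
fps_1080p = fps_1440p * (pixels_1440p / pixels_1080p)  # ~171 FPS, best case

print(round(fps_1440p), round(fps_1080p))

So even with plenty of slack for games that don't scale that cleanly, 120 FPS at 1080p looks reachable for a single card.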
 
60FPS would be fine if it could be consistently rendered. You get into trouble when you start talking about averaging 60FPS, though.

It's also important to note that the CPU has to do some work in order to render every frame, so while the CPU's workload does not necessarily scale linearly with the framerate, it does increase, and at some point the CPU becomes the bottleneck regardless of how fast the GPU is. I suspect this happens more often than some of us think, especially when pairing a really fast graphics card with a relatively low-res display like the OP's. I know it happened to me when I was using a GTX 970 and a 2500K. At 1080P, the 970 would be idling while the CPU struggled to produce 60 FPS. Upgrading to a 3770 on that same machine got me like 10-15 FPS in some cases.
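
A toy way to picture that bottleneck (just my own simplification, assuming the CPU and GPU stages overlap so whichever is slower sets the frame time):

# Toy model: in a pipelined renderer, the slower of the CPU or GPU per-frame
# cost sets the pace, so a faster GPU can't push past what the CPU can feed it.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=20.0, gpu_ms=8.0))   # 50.0  -> CPU-bound, the GPU mostly idles
print(fps(cpu_ms=12.0, gpu_ms=8.0))   # ~83.3 -> faster CPU, same GPU, big gain

The cpu_ms and gpu_ms numbers are made up; the point is just that once you're CPU-bound, only a CPU upgrade moves the framerate.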
 
The hit you take to frame times probably outweighs the increase you'd get in raw framerate anyway, especially in VR and/or 3D.

Forgive me for being ignorant, but what is the difference between frame times and raw framerate?
 

Frame time is the amount of time it takes a GPU to draw the next frame. This needs to be consistent. When I play a game called Warframe, I get sub-6ms frame times. This makes the game super smooth, as the GPU is very consistently drawing the next frame for me. It is a glass-like experience. Kyle was testing a VR game called Raw Data and it was at 11ms+ for some of the cards. 11ms is what I consider to be the upper limit of tolerance for me. When you get to 17ms and 33ms spikes, you start getting fatigue from staring at your monitor.

I've also played games where the raw frame rate was high, but the frame times were inconsistent. This made my eyes hurt. Assassin's Creed Unity when it first came out was terrible with this. You would have 60 fps, but the game would deliver them in chunks. So I would get nothing, then say 4 frames clumped together, then another pause, then another frame dump. It made me want to throw up and the game bugged out. Eventually this was fixed of course.

When dealing with CrossfireX and SLI this happens all the time. Basically, there is latency between the GPUs trying to coordinate to draw scenes. In well-designed games with a low-level API like Mantle, the developers would use techniques like SFR that did not increase the frame rate but lowered the frame times. This made the game smoother than glass. With conventional AFR in SLI and CrossfireX, you get wildly inconsistent frame times compared to a single GPU. But the raw frame rate will be double, and consumers like to see big numbers even if it will make them want to puke due to the inconsistency.

So what is the cure? Either use SFR for multi-GPU systems and lower the frame times for a glass-like presentation, or increase the bandwidth and lower the latency of PCIe connections on motherboards. Since the motherboard consortium can't seem to agree on PCIe 4, it is up to consumers to say that they prefer SFR's lower frame times over AFR's greater frame rate. But consumers are fickle and will always look for a bigger number instead of a better experience. So in the end we're going to be forever stuck with big numbers for frame rate, but crap experiences due to wildly fluctuating frame times.

Big numbers sell video cards, not favorable experiences. People would go to their graves with an aneurysm if they could get one more frame per second.

Now another thing to remember is that powerhouse single cards have excess horsepower which allows them to be underutilized to a degree when completing simplistic tasks. This excess power in reserve allows them to output terrific frame times. When a GPU is being stressed is when the frame times collapse into crap. That's why you always buy the biggest GPU that fits your budget. Always. 100% of the time.
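
If it helps, here's a quick sketch with made-up frame times showing why the average frame rate can look fine while the spikes are what you actually feel:

# Made-up frame times (in ms) for two runs with almost the same average FPS.
smooth = [8.3] * 120                   # steady ~120 FPS
spiky = [6.0] * 110 + [33.0] * 10      # mostly fast, with ugly spikes

def avg_fps(times_ms):
    return 1000.0 * len(times_ms) / sum(times_ms)

def p99(times_ms):
    ordered = sorted(times_ms)
    return ordered[int(0.99 * (len(ordered) - 1))]

for name, run in (("smooth", smooth), ("spiky", spiky)):
    print(name, round(avg_fps(run)), "FPS average,", p99(run), "ms 99th-percentile frame time")

Both runs report roughly the same average FPS, but the spiky one hits 33ms frames, which is exactly the kind of thing that makes you want to puke.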
 
Frame time is the time it takes (usually measured in milliseconds) the system to process the game logic and render a single frame. Framerate is the number of frames the system can render per unit time. With a single GPU, frame time is the inverse of framerate (at 60 FPS, 1 second / 60 frames ≈ 0.0167 seconds, or about 16.7 ms per frame).

Frame time starts to become a better measure of performance once you start getting into very large frame sizes (4K, for instance) and the use of multiple GPUs to render them. It takes the GPU longer to render a large frame than a small one - a large frame time. As frame time grows, the time between the user's providing input, say pressing the "fire" button, and the frame depicting that action appearing on the display also grows. When the frame time gets long enough, the user starts to experience it as a lag between input and result.

This is relevant because many games that support SLI do so using an "alternate frame rendering" scheme, where the GPUs take turns rendering whole frames, mostly independently. This increases the total number of frames rendered (the frame rate), but because each GPU is rendering entire frames, it has no effect on frame time. As a result, even though the action appears smooth, the user may notice a lag between input and result.
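
Put as numbers (a simplified example of my own, assuming ideal AFR scaling and ignoring driver overhead):

# Simplified: one GPU takes 1/60 s (~16.7 ms) per frame. With ideal alternate
# frame rendering, two GPUs take turns, so finished frames arrive twice as
# often, but each individual frame still took ~16.7 ms to draw, so the delay
# between input and that frame appearing does not shrink.
single_frame_ms = 1000.0 / 60.0             # ~16.7 ms per frame

afr_fps = 2 * (1000.0 / single_frame_ms)    # ~120 frames delivered per second
afr_latency_ms = single_frame_ms            # still ~16.7 ms to render any one frame

print(round(afr_fps), "FPS,", round(afr_latency_ms, 1), "ms per frame")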
 
I am not really sure I am getting it.

The first explanation makes it sound like it's frames per second, but the game doesn't deliver all of them evenly within that second.

Yet the second explanation makes it sound more like latency and input lag.

Also..... someone mentioned the Asus PG278Q monitor, and now I am not sure if I want to upgrade. It would be nice to have a monitor that supports 3D Vision 2 and G-Sync (though it can't do both at the same time), but since it's WQHD, according to that benchmark I would need two 1080s to get 120 FPS in some of the newer games at 2560x1440.... a few games seem to even have trouble hitting 60 FPS at that resolution with a single 1080.... but two GTX 1080s AND that monitor are.... very very expensive.

Not sure if it's worth it considering that 4K monitors will likely drop in price in the next few years, but then again, a 4K 120 or 144Hz monitor that supports 3D Vision and G-Sync would likely still be even more expensive when the prices do drop...

(Plus there is the fact that my current monitor is 24 inches while that one is 27, so I am not sure if it would even fit. Would have been nice to have HDMI too so I can connect other devices to it, but eh, I am just going to be using my PC on it 99.99% of the time anyway..... hmm, I am guessing trying to find a used one on eBay would be a bad idea?)
 
Hmm. The best way to describe it is like this: you want to watch a movie, so you turn on Netflix. There are two scenarios that play out. The X's are frames; think of them as flashes of light that dazzle your eyes. Would you rather have:

X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X_X

or

X___XX_X_X______X_X__XXX_XXXX_________XX_XX_XX___X

Where the ___ are the pauses before the next frame is flashed into your eye. Sometimes you get dropped frames, where you are just missing information. That might be where you were trying to make a perfect turn with your car in a game and suddenly find yourself against the wall, because the video card feed got behind and dropped some frames to catch back up. Now imagine having screens inches from your eyes, like with a VR headset, and them stuttering and dropping frames.

That's why after you meet the minimum frame rate for comfortable viewing, frame time is king over additional frame rate. Well frame time is king regardless. :)
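
Same idea with made-up numbers, if it helps: both timelines below deliver the same number of frames in one second, but the worst gap between frames is what your eyes actually notice.

# Two hypothetical ways to deliver 10 frames in one second (timestamps in ms).
even = [i * 100 for i in range(10)]                      # 0, 100, 200, ... 900
clumpy = [0, 20, 30, 350, 360, 370, 380, 800, 810, 990]  # same count, bunched up

def worst_gap(timestamps):
    return max(later - earlier for earlier, later in zip(timestamps, timestamps[1:]))

print(worst_gap(even), "ms worst gap when evenly paced")   # 100 ms
print(worst_gap(clumpy), "ms worst gap when clumped")      # 420 ms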
 
Wouldn't something like V-Sync help with that, then? Although it sounds like G-Sync would actually be less beneficial in this case.
 