G-Sync Reduces SLI Performance

AlphaAtlas

Some Nvidia forum posters claim that turning on G-Sync reduces performance when using SLI. The folks over at ExtremeTech grabbed a pair of 1080s to put that theory to the test, and the results were interesting. Deus Ex: Mankind Divided, for example, dropped from 49.5 FPS to 44.6 FPS when turning G-Sync on in DX11 mode, but it had absolutely no effect in DX12 mode. Meanwhile, Far Cry 5 seemed to suffer more of a G-Sync deficit as anti-aliasing was cranked down. It seems that G-Sync does affect SLI performance in certain games, but the reason for that drop is still unclear.

Nonetheless, the pattern here is clear. Turning G-Sync on and using SLI is not guaranteed to tank your frame rate. We also tested Metro Last Light Redux with a forced AFR2 rendering profile, and that game showed no performance drop at all between G-Sync enabled and disabled. Deus Ex Mankind Divided reported a small penalty that vanished in DX12, while Hitman takes too much of a performance hit in that mode when using SLI to ever justify it. Three games, three different performance models. It's always dicey to try and test forum reports, not because forum commenters are liars, but because most don't provide sufficient technical information to be sure you're reproducing a problem correctly. I don't have an explanation for this issue at the moment and I realize that "timing problem" is extremely vague. It's a theory that happens to fit some common-sense facts. Our results collectively suggest that the issue is real and that the performance gaps could be as large as Nvidia users say, particularly if they continue to worsen as frame rate increases.
 
I wonder if the same thing happens with nvlink on the rtx cards. It shouldn't since nvlink has so much bandwidth.
 
On a related note, I'd love to see ULMB performance numbers too, since it's a related technology to G-Sync.
 
Interesting.


I wonder if there is an additional CPU load involved? That might explain why it didn't impact DX12 as much.

Would be interesting to see how this changes with different CPU speeds.

Back when I ran dual 6970s in CrossFire on my Phenom II 1090T, I found that I definitely got CPU limited WAY sooner with two GPUs than I did with one.

That was a long time ago with different hardware, though.
 
On Pascal and pre-Pascal cards, the PCI-E bus is used for much of the card-to-card transfer traffic. On systems where each card doesn't have a dedicated x16 link, PCI-E bandwidth can bottleneck SLI scaling. For some reason, when G-Sync is active, that card-to-card PCI-E traffic increases significantly. Why? I don't know. In some titles, even a full x16/x16 configuration can bottleneck SLI scaling.

AFAIK, Turing doesn't suffer from the SLI G-Sync performance penalty; this is likely due to the dedicated high-bandwidth NVLink bus. According to the Turing whitepaper, the PCI-E bus isn't used anymore for card-to-card transfers as was the case on Pascal and previous architectures.

Hopefully Nvidia can address this issue, though, for all of the 10-series owners out there.
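
To put rough numbers on why the link matters, here's a quick back-of-the-envelope sketch in Python. The frame size, frame rates, and effective link bandwidths are assumed round figures for illustration only, not measurements from the ExtremeTech article.

Code:
# Back-of-envelope estimate of AFR frame hand-off cost over the card-to-card link.
# All figures are illustrative assumptions, not measured values.

BYTES_PER_PIXEL = 4                # assume a 32-bit RGBA back buffer
PIXELS_4K = 3840 * 2160            # pixels in one 4K frame

# Rough effective bandwidth in GB/s after protocol overhead (assumed).
LINKS = {
    "PCIe 3.0 x8":      7.9,
    "PCIe 3.0 x16":     15.8,
    "NVLink (RTX SLI)": 50.0,
}

def frame_mb(pixels=PIXELS_4K, bpp=BYTES_PER_PIXEL):
    """Size of one uncompressed frame in megabytes."""
    return pixels * bpp / 1e6

def transfer_ms(link_gb_s, pixels=PIXELS_4K):
    """Time to move one finished frame from the slave GPU to the master GPU."""
    return frame_mb(pixels) / (link_gb_s * 1000.0) * 1000.0

def required_gb_s(fps, pixels=PIXELS_4K):
    """Bandwidth needed just for the frame hand-off at a given frame rate."""
    return frame_mb(pixels) * fps / 1000.0

if __name__ == "__main__":
    print(f"One 4K frame: ~{frame_mb():.0f} MB")
    for fps in (60, 120):
        print(f"AFR hand-off at {fps} fps needs ~{required_gb_s(fps):.1f} GB/s")
    for name, bw in LINKS.items():
        print(f"{name}: ~{transfer_ms(bw):.1f} ms per frame")

Even when the raw bandwidth looks sufficient, the per-frame transfer time still eats into each refresh window, which is where any extra G-Sync related traffic could start to hurt.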
 
I don't see an issue even if there is a hit. The whole beauty of adaptive sync is that rendering and presentation are smooth and perfectly playable even at frame rates far below what would be considered playable without it. I understand faster is always better and that some people are numbers obsessed, but really this is not something I'd be worried about.
 
SLI? I had to check the year this thread was started, thinking it was necro'd.

Yeah, I wouldn't use SLI today. Too many drawbacks.

Before I'd ever consider doing SLI, I'd go all the way up to the fastest single GPU on the market, currently the 2080 Ti (or maybe the Titan V? I haven't seen direct comparisons), and only if that isn't fast enough for my application would I consider SLI, and even then, I'd be reluctant.

SLI and CrossFire just have too many problems.
 
I don't see an issue even if there is a hit. The whole beauty of adaptive sync is that rendering and presentation are smooth and perfectly playable even at frame rates far below what would be considered playable without it. I understand faster is always better and that some people are numbers obsessed, but really this is not something I'd be worried about.

The whole thing about adaptive sync is that it's a patch for low FPS. If you have enough FPS, adaptive sync does nothing,
so in this case the cure is worsening the disease.

Adaptive sync, aka slowing down the monitor to fit your slower FPS, is unneeded when your FPS is not slow to begin with.

And then there is the added input lag from G-Sync communication, even at the same FPS/Hz.



Not saying adaptive sync is not a nice improvement when it's applicable, but it's for specific situations.
 
I wonder if the same thing happens with nvlink on the rtx cards. It shouldn't since nvlink has so much bandwidth.

It's not a bandwidth problem, it's a latency problem. No matter how much NVLink can transfer, there's still a minimum transfer time involved, and that's what's causing the problem.

This is really no different from games that are a stuttery mess with SLI even at high FPS; what matters isn't how many frames you produce, but rather meeting the monitor's refresh window.
 
The whole thing about adaptive sync is that it's a patch for low FPS. If you have enough FPS, adaptive sync does nothing,

Not necessarily true.


There is an inherent input lag associated with vsync, as the frame sits in the framebuffer and isn't delivered to the monitor until the next refresh.

With adaptive sync technology, the frame is delivered to the screen as soon as it is ready, eliminating this input lag.

So, the function of adaptive sync is twofold:

1.) It eliminates tearing by allowing you to sync frames to the screen regardless of how fast they render.

2.) It eliminates the portion of input lag that is due to waiting for vsync.
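
To make point 2 concrete, here's a tiny Python sketch comparing when a finished frame actually reaches the screen with plain vsync versus an adaptive sync display. The 60Hz refresh and the render-finish times are made-up numbers chosen purely to show the waiting effect.

Code:
# Fixed-refresh vsync vs adaptive sync: how long a finished frame waits before scan-out.
# Refresh rate and finish times are assumed values for demonstration only.
import math

REFRESH_HZ = 60
REFRESH_MS = 1000.0 / REFRESH_HZ       # 16.67 ms between fixed refresh ticks

def vsync_hold_ms(finish_ms):
    """With vsync, a finished frame sits in the buffer until the next refresh tick."""
    next_tick = math.ceil(finish_ms / REFRESH_MS) * REFRESH_MS
    return next_tick - finish_ms

def adaptive_hold_ms(finish_ms):
    """With adaptive sync (inside the supported range), scan-out starts right away."""
    return 0.0

if __name__ == "__main__":
    for finish in (1.0, 9.0, 15.0):    # ms after the last refresh that the frame completes
        print(f"frame ready {finish:4.1f} ms into the interval: "
              f"vsync holds it {vsync_hold_ms(finish):5.2f} ms, "
              f"adaptive sync holds it {adaptive_hold_ms(finish):.1f} ms")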
 
The whole thing about adaptive sync is that it's a patch for low FPS. If you have enough FPS, adaptive sync does nothing,
so in this case the cure is worsening the disease.

Adaptive sync, aka slowing down the monitor to fit your slower FPS, is unneeded when your FPS is not slow to begin with.

And then there is the added input lag from G-Sync communication, even at the same FPS/Hz.



Not saying adaptive sync is not a nice improvement when it's applicable, but it's for specific situations.


That's not true at all. Adaptive sync is simply using the right element of the system with which to properly sync video output. If adaptive sync had been the default method developed at the earliest point in PC gaming, we wouldn't even be having this discussion, or hardly any of the other discussions about frame rates, because instead of 30 years of making a choice between stuttering or tearing, we would have had neither.

I suppose you remember the days before "V-Sync OFF" was even possible? V-Sync was always enabled and we all lived with stuttering (repeated frames) when our cards couldn't keep up with the refresh rate. If adaptive sync had been around back then, we might never have had this need for getting the highest refresh rate possible. Even tech like SLI and CrossFire might not have come along without a real need for higher frame rates; only larger displays and the drive toward 4K resolutions would have provided the push for them.

What would Doom 2, Unreal Tournament, Medal of Honor, and all the rest have been like if we had always had adaptive sync?

I'm not saying that higher frame rates aren't great, or that input lag isn't an issue. But adaptive sync isn't just a nice improvement; it's how synchronization always should have been done. And had it come along far earlier, there is no telling where research and development would have taken us over the years, as the emphasis would likely have been on different areas of development.
 
Not necessarily true.


There is an inherent input lag associated with vsync, as the frame sits in the framebuffer and isn't delivered to the monitor until the next refresh.

With adaptive sync technology, the frame is delivered to the screen as soon as it is ready, eliminating this input lag.

So, the function of adaptive sync is twofold:

1.) It eliminates tearing by allowing you to sync frames to the screen regardless of how fast they render.

2.) It eliminates the portion of input lag that is due to waiting for vsync.

Wait up, I think I either completely misunderstand this issue, or maybe you do.

Without adaptive sync, video cards always deliver frames to the monitor in sync with the monitor's refresh rate. If a monitor is running at 85Hz, the video card will send a frame at that rate, whether it is the same frame as the previous one (V-Sync ON) or partial frames (V-Sync OFF). Most games that suffered from these issues were either first-person or third-person titles; the mouse controlled the "view", which was essentially the composition of the frame. So it didn't matter whether you had input lag or not; the frame being repeated made it immaterial. But V-Sync OFF freed things up, and no input lag could be perceived as long as frame rates remained high enough.

Now we have adaptive sync and the display doesn't suffer as before from lower frame rates, but because of the variable nature of the frame posting, input lag is now a new issue. So my understanding is that it's the variable frame/refresh rates that create what we call input lag.
 
You guys are wasting your time arguing with him about Adaptive sync..He hates it for some reason...:shrug:
 
You guys are wasting your time arguing with him about Adaptive sync..He hates it for some reason...:shrug:

A twitch gamer needs a level of responsiveness that others do not. I understand this and I think SvenBent (Assuming this is who you meant by "he"), would admit it as well if pressed. I don't think his comments were intended to be applicable to someone who plays less demanding titles like Skyrim or League of Legends.

That being said, I also need to bring up that I don't use any 4K displays and went instead for the 21:9 ultrawide 2K display format. Therefore I can't say with authority that my experiences are not a little different from other people's.
 
Yeah, I wouldn't use SLI today. Too many drawbacks.

Before I'd ever consider doing SLI, I'd go all the way up to the fastest single GPU on the market, currently the 2080 Ti (or maybe the Titan V? I haven't seen direct comparisons), and only if that isn't fast enough for my application would I consider SLI, and even then, I'd be reluctant.

SLI and CrossFire just have too many problems.

SLI/CrossFire only made sense back in the 3dfx days, when there were no GPUs. I know most of you guys don't play fast-paced games, but damn, how do you NOT notice all the damn lag? What's worse, you people use vsync and derivatives on top of that. :/
 
Wait up, I think I either completely misunderstand this issue, or maybe you do.

Without adaptive sync, video cards always deliver frames to the monitor in sync with the monitor's refresh rate. If a monitor is running at 85Hz, the video card will send a frame at that rate, whether it is the same frame as the previous one (V-Sync ON) or partial frames (V-Sync OFF). Most games that suffered from these issues were either first-person or third-person titles; the mouse controlled the "view", which was essentially the composition of the frame. So it didn't matter whether you had input lag or not; the frame being repeated made it immaterial. But V-Sync OFF freed things up, and no input lag could be perceived as long as frame rates remained high enough.

Now we have adaptive sync and the display doesn't suffer as before from lower frame rates, but because of the variable nature of the frame posting, input lag is now a new issue. So my understanding is that it's the variable frame/refresh rates that create what we call input lag.

Nope. It goes something like this:

Without vsync:

The frame is rendered as fast as the GPU can muster and is delivered to the monitor whenever it is finished rendering, regardless of where the screen is in its refresh cycle. This often results in tearing (those horizontal lines that move across the screen when two adjacent frames are on the screen at the same time).

With Vsync:

The GPU renders the frame, and when it is done keeps it in framebuffer memory until the screen is about to initialize its next refresh, and then sends it. This has the benefit of eliminating tearing, but has downsides, including:
- Caps framerate at the screen refresh rate
- Only delivers frames at even divisors of the screen refresh rate. So let's say you have a 60Hz screen: if you can render 60fps, that is great, but if you drop even a fraction of a frame below 60fps, you are suddenly rendering at 30fps. Drop below that, now you are rendering at 20fps, then 15fps, etc.
- Introduces a small amount of input lag due to waiting for the screen to refresh

Vsync operates in two different modes, either double buffered or triple buffered. In double buffered mode the GPU - once it completes a frame - waits to render the next frame until that frame has been delivered to the screen.

In triple buffered mode, the GPU continues rendering another frame while waiting for the first frame to be delivered, and if it completes it in time, uses that frame instead of the previously rendered one, increasing the GPU load but reducing the average input lag and the stutter associated with vsync.


Adaptive Vsync:

This is a hybrid between vsync and no vsync. If your GPU renders at or above the refresh of the screen, it syncs to it, but when it fails to do so recognizes that momentary tearing is better than dropping the framerate by half, and thus unsyncs it until the GPU can keep up again.


Adaptive Refresh (like G-Sync and Freesync):

The GPU renders as fast as it can, and as long as it stays within the available refresh range of the monitor, the monitor adapts to the GPU instead of the other way around. The frame is delivered to the screen and syncs perfectly as soon as it is done, without any input lag.


Fast Sync:

In this mode the GPU renders as fast as it can, and at refresh time the monitor gets the most recent fully rendered frame.

No tearing and minimal input lag.

This is only typically useful if your GPU is capable of rendering MUCH faster than the fixed refresh rate of your screen. For everyone else it just results in a hot and loud GPU without any real benefit, and may even increase input lag too much.


For most people the following is true:

- If you have a G-Sync or Freesync screen and a compatible GPU, use it. Set Vsync to on or off or something else to determine what happens if you render outside of the compatible refresh rate range of your screen.

- If you have a fixed refresh screen, Adaptive Vsync is probably best. You only get tearing when your GPU falls short of rendering at the screen refresh rate, and you avoid your framerate randomly halving. The added benefit is that when your GPU is keeping up with the refresh rate, it caps the framerate and keeps your GPU cooler, so that it has more headroom to boost clocks and minimize the impact of temporarily more complex scenes, reducing how often you have dips in the framerate.
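
If anyone wants to see the "framerate snaps down in steps" behaviour from the vsync section above, here's a minimal Python sketch. It assumes an idealized double-buffered vsync on a fixed-refresh screen; real drivers and triple buffering smooth this out somewhat.

Code:
# Idealized double-buffered vsync: a frame can only be shown on a refresh tick,
# so the delivered frame rate snaps to refresh_hz / n for some whole number n.
import math

def delivered_fps(render_fps, refresh_hz=60):
    """Frame rate actually shown on screen under this simplified vsync model."""
    if render_fps >= refresh_hz:
        return float(refresh_hz)                  # capped at the refresh rate
    render_ms = 1000.0 / render_fps               # time to render one frame
    tick_ms = 1000.0 / refresh_hz                 # time between refreshes
    ticks_needed = math.ceil(render_ms / tick_ms) # refreshes until the frame is ready
    return refresh_hz / ticks_needed

if __name__ == "__main__":
    for fps in (75, 60, 59, 45, 31, 29, 21, 19):
        print(f"GPU renders {fps:3d} fps -> screen shows {delivered_fps(fps):5.1f} fps")

With G-Sync/Freesync there is no such quantization: within the supported range the screen simply refreshes whenever the frame is done.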
 
SLI/CrossFire only made sense back in the 3dfx days, when there were no GPUs. I know most of you guys don't play fast-paced games, but damn, how do you NOT notice all the damn lag? What's worse, you people use vsync and derivatives on top of that. :/


I tend to think of it as total pipeline input lag. There are lots of different things that add input lag. Everything from the monitor, vsync, SLI, game engine, net code, mouse hardware, mouse driver, etc. etc. contribute.

Studies show that in the general population, anything 100ms or less is perceived as more or less instantaneous. Those of us who play FPS games with a mouse and keyboard are probably more sensitive to it, but I have never seen any data on the subject, so let's just make a wild guess and say it's 50ms for us.

If your total pipeline is below this level of input lag you don't notice it. If you happen to have a very fast CPU, great mouse and drivers, great USB hardware and drivers, and a monitor with a very low amount of input lag, this can make up for the slight amount of input lag something like Vsync or SLI causes, or maybe even both at the same time.

With Vsync we are usually not rendering more than a third of the refresh rate faster than the screen, so your input lag here, if you game at 60Hz, is (1/60)/3 s, so we are talking about 5.5ms. A great low-input-lag screen can more than make up for this.

With SLI/Crossfire, the input lag is greater. With two GPUs rendering in AFR mode, your added input lag will equal an entire frame, so at 60Hz this is 1/60 s, or 16.67ms.

Now with combined input lag due to SLI/Crossfire in AFR mode and vsync, we have added a total of 22.22ms at 60hz.

If my guess of a 50ms system total above for when input lag becomes noticeable to a typical PC gamer is accurate, then the rest of your system (including monitor) better be pretty snappy and responsive, or chances are it will be noticeable.

As with everything else in life, some systems and game engines are laggier than others, and some people are more sensitive to the lag than others. In some setups to some people in some titles the SLI+Vsync input lag may be really apparent, in others not.

If they could just make a multi-GPU SFR mode that works and scales well and abandon AFR altogether, we wouldn't be talking about this input lag problem due to mGPU anymore.
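
Putting the arithmetic above into a small script makes the budget easier to play with. The 50ms threshold is just the guess from above, and the base-system figure below is an arbitrary placeholder, not a measurement.

Code:
# Rough added-latency arithmetic for vsync and AFR SLI/CrossFire, as described above.
# The noticeability threshold and base pipeline lag are assumptions, not measurements.

def vsync_lag_ms(refresh_hz):
    """Approximate vsync wait, taken here as about a third of a refresh interval."""
    return (1000.0 / refresh_hz) / 3.0

def afr_lag_ms(refresh_hz, gpus=2):
    """AFR adds roughly one extra frame of latency per additional GPU."""
    return (gpus - 1) * (1000.0 / refresh_hz)

def total_lag_ms(base_ms, refresh_hz, use_vsync=True, use_afr=True):
    """Base pipeline lag plus whatever vsync and AFR add on top."""
    total = base_ms
    if use_vsync:
        total += vsync_lag_ms(refresh_hz)
    if use_afr:
        total += afr_lag_ms(refresh_hz)
    return total

if __name__ == "__main__":
    THRESHOLD_MS = 50.0   # wild guess for when a PC gamer starts to notice
    BASE_MS = 30.0        # placeholder for mouse + engine + display pipeline
    for hz in (60, 120):
        lag = total_lag_ms(BASE_MS, hz)
        verdict = "probably noticeable" if lag > THRESHOLD_MS else "probably fine"
        print(f"{hz:3d} Hz: vsync +{vsync_lag_ms(hz):.1f} ms, AFR +{afr_lag_ms(hz):.1f} ms, "
              f"total ~{lag:.1f} ms -> {verdict}")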
 
I think we have a semantics thing going here.

No matter your buffering mode, if V-Sync is enabled, the video card will always send a frame to the monitor in sync with the refresh rate of the monitor. The frame might be a new frame, or it might be the same frame that was previously sent. The result is "stuttering", and an FPS counter shows the "halving" drops in frame rate you mentioned above. Say a monitor has a refresh rate of 60Hz: as soon as the card can't keep up and starts repeating frames, it reports it as a halving, or more, of the nominal monitor refresh rate. The only point I am trying to make is that the card is still sending a frame to the monitor every 1/60th of a second at 60Hz, even if it's a repeat. The tie-in to SvenBent's comment about input lag is that because the image isn't changing, it's not evident. You actually have a bigger problem in that situation; input lag is irrelevant at that point.

Your description of V-Sync Disabled, I also have a problem with. The card doesn't render a frame and send it when it's finished. It sends what it has, in time with the monitor's refresh rate. It is able to do this because whereas with V-Sync Enabled the frame buffer gets fully cleared before rebuilding the next frame, with V-Sync disabled the frame is not cleared, it's simply overwritten with the new information. If the frame isn't finished when the refresh cycle hits, it goes anyway, part of the old image and part of the new: "tearing". This is most evident in a first-person view when you swing the camera left or right, so that the image is partially at one angle and partially at another, like tearing a picture in half and sliding it to the side a fraction.

This is what I have understood since the days when V-Sync Disable first appeared as an option and was being explained to the uninitiated like myself.

For the most part, this is an argument for people like us, the rest probably don't care much how things work exactly, just what they should do for their own best results and your recommendations are as good as any I have seen.
 
Your description of V-Sync Disabled, I also have a problem with. The card doesn't render a frame and send it when it's finished. It sends what it has, in time with the monitor's refresh rate. It is able to do this because whereas with V-Sync Enabled the frame buffer gets fully cleared before rebuilding the next frame, with V-Sync disabled the frame is not cleared, it's simply overwritten with the new information. If the frame isn't finished when the refresh cycle hits, it goes anyway, part of the old image and part of the new: "tearing". This is most evident in a first-person view when you swing the camera left or right, so that the image is partially at one angle and partially at another, like tearing a picture in half and sliding it to the side a fraction.

This probably is the case. Having it render to the frame buffer, and whatever is in the frame buffer is sent at the time of refresh. Makes sense.

The outcome is the same either way.
 
This probably is the case. Having it render to the frame buffer, and whatever is in the frame buffer is sent at the time of refresh. Makes sense.

The outcome is the same either way.

Agreed. It's the practical outcome that matters to most. For my part I have issues with my Bethesda titles, and I don't think I'm alone. I think I have G-Sync up and running with them, but I frequently will be running along fine and then get a sudden hitch, not lag but almost a freeze, like it's going to lock up; sometimes it crashes completely. Neither here nor there with our discussion, other than that not all titles and game engines are the same. Some run well, others not so much. Sometimes it's hard to know what could be an issue: video lag, input lag, game optimization, drivers. Complex systems have a multitude of potential problems. Sometimes the simplest things can become a real issue with one title and not affect any other a person owns.
 
...without any input lag...

You mostly got it right, but there is always some input lag. Perhaps you meant "without adding any undue input lag".

For a 60Hz refresh rate, the screen is updating every 16.67ms. There is basically at least that much input lag, which is unavoidable. You move the mouse halfway through the current refresh, so we are 8ms into this frame and the new frame is being calculated. The game engine might not even catch the new input in this frame, so you get a new frame on the screen for 16.67ms; now we are at 25ms from when the mouse was first moved, before the third frame, which has incorporated it, is finally displayed.

G-Sync/freesync are good at reducing input lag, since the GPU can tell the display how fast to display frames. So if the gpu and display (max) refresh rate is fast enough, the display can speed up how fast it refreshes, and sync up to the GPU output, and some of the above input lag is eliminated. But it cannot eliminate all of it.

A better explanation is on blur busters, it's worth a read. According to that research, having a 5ms faster display (than your opponent) = 7% greater chance to win (57% chance to win vs 50%) (page 3, the quickdraw simulation) https://www.blurbusters.com/human-reflex-input-lag-and-the-limits-of-human-reaction-time/4/

It's a long but interesting article. This one is also good, covering G-Sync specifically: https://www.blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/3/

Limiting the frame rate can further reduce input lag, which is really interesting and counter-intuitive. That's a few pages in. The general rule to get the best input lag reduction is G-Sync on, and limit FPS to 3 below the display's max refresh rate. So set a 57 fps limit on a 60Hz display, 97 fps on a 100Hz display, etc. I think the fps limits are set with some third party tool; not sure, I still need to read the rest of the article.
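
The "cap a few fps below max refresh" rule from that article is simple enough to write down; the sketch below just computes the cap and the resulting frame time, using the 3 fps margin quoted above as an assumption.

Code:
# Helper for the G-Sync rule of thumb quoted above: cap the frame rate a few fps
# below the display's maximum refresh so frames stay inside the G-Sync range.

def gsync_fps_cap(max_refresh_hz, margin_fps=3):
    """Suggested FPS limit for a G-Sync display (rule of thumb, not an official spec)."""
    return max_refresh_hz - margin_fps

def frame_time_ms(fps):
    """Milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

if __name__ == "__main__":
    for hz in (60, 100, 144, 240):
        cap = gsync_fps_cap(hz)
        print(f"{hz:3d} Hz display -> cap at {cap:3d} fps "
              f"({frame_time_ms(cap):.2f} ms/frame vs {frame_time_ms(hz):.2f} ms refresh)")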
 
You mostly got it right, but there is always some input lag. Perhaps you meant "without adding any undue input lag".

For a 60Hz refresh rate, the screen is updating every 16.67ms. There is basically at least that much input lag, which is unavoidable. You move the mouse halfway through the current refresh, so we are 8ms into this frame and the new frame is being calculated. The game engine might not even catch the new input in this frame, so you get a new frame on the screen for 16.67ms; now we are at 25ms from when the mouse was first moved, before the third frame, which has incorporated it, is finally displayed.

G-Sync/freesync are good at reducing input lag, since the GPU can tell the display how fast to display frames. So if the gpu and display (max) refresh rate is fast enough, the display can speed up how fast it refreshes, and sync up to the GPU output, and some of the above input lag is eliminated. But it cannot eliminate all of it.

A better explanation is on blur busters, it's worth a read. According to that research, having a 5ms faster display (than your opponent) = 7% greater chance to win (57% chance to win vs 50%) (page 3, the quickdraw simulation) https://www.blurbusters.com/human-reflex-input-lag-and-the-limits-of-human-reaction-time/4/

It's a long but interesting article. This one is also good, covering G-Sync specifically: https://www.blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/3/

Limiting the frame rate can further reduce input lag, which is really interesting and counter-intuitive. That's a few pages in. The general rule to get the best input lag reduction is G-Sync on, and limit FPS to 3 below the display's max refresh rate. So set a 57 fps limit on a 60Hz display, 97 fps on a 100Hz display, etc. I think the fps limits are set with some third party tool; not sure, I still need to read the rest of the article.


You got it. I was referring specifically to the process of transferring the frame from the framebuffer to the screen, so "without adding any input lag" would have been clearer, but I thought that was clear since I had just finished discussing the input lag of the whole system :p
 
I tend to think of it as total pipeline input lag. There are lots of different things that add input lag. Everything from the monitor, vsync, SLI, game engine, net code, mouse hardware, mouse driver, etc. etc. contribute.

Studies show that in the general population, anything 100ms or less is perceived as more or less instantaneous. Those of us who play FPS games with a mouse and keyboard are probably more sensitive to it, but I have never seen any data on the subject, so let's just make a wild guess and say it's 50ms for us.

If your total pipeline is below this level of input lag you don't notice it. If you happen to have a very fast CPU, great mouse and drivers, great USB hardware and drivers, and a monitor with a very low amount of input lag, this can make up for the slight amount of input lag something like Vsync or SLI causes, or maybe even both at the same time.

With Vsync we are usually not rendering more than a third of the refresh rate faster than the screen, so your input lag here, if you game at 60Hz, is (1/60)/3 s, so we are talking about 5.5ms. A great low-input-lag screen can more than make up for this.

With SLI/Crossfire, the input lag is greater. With two GPUs rendering in AFR mode, your added input lag will equal an entire frame, so at 60Hz this is 1/60 s, or 16.67ms.

Now with combined input lag due to SLI/Crossfire in AFR mode and vsync, we have added a total of 22.22ms at 60hz.

If my guess of a 50ms system total above for when input lag becomes noticeable to a typical PC gamer is accurate, then the rest of your system (including monitor) better be pretty snappy and responsive, or chances are it will be noticeable.

As with everything else in life, some systems and game engines are laggier than others, and some people are more sensitive to the lag than others. In some setups to some people in some titles the SLI+Vsync input lag may be really apparent, in others not.

If they could just make a multi-GPU SFR mode that works and scales well and abandon AFR altogether, we wouldn't be talking about this input lag problem due to mGPU anymore.

I can't be trusted with the math, so I cannot comment on it. Still, I'd rather start with 120 Hz as a baseline. I've only had one 60 Hz panel between my old 120Hz LCD and my last CRT, and it was a miserable experience because I could no longer enjoy playing Quake, UT or anything like that.

So, ~50ms (depending on game and setup) local + ~50ms ping to a server, isn't that 100ms? That's not even taking into account possible side effects.

I can't wait for cheap, big 4K 120 Hz panels. I am saying this because in reality all you have to do is move your mouse in ANY game that's rendering fast enough to notice the difference.

Meanwhile we have this
https://hardforum.com/threads/black...-have-been-lowered-from-60hz-to-20hz.1970098/ going on hahah
 
Side note:

A cool sync mode I would like to see would be some sort of AI assisted render delay.

An AI that statistically monitors the frame content and when Vsync is enabled intelligently delays rendering the frame so it finishes rendering just in time for the next screen refresh.

This way the snapshot in time you are rendering to would be a lot more current, especially at low GPU loads on low refresh screens.

You'd need to build in some sort of statistical buffer, because if you screw up and run over, the penalty would be stutter, but I think this could drastically decrease input lag on even very low refresh rate screens.
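
Just to sketch the scheduling part of that idea (leaving out the AI), a simple worst-case predictor with a safety margin would look something like this in Python. This is purely hypothetical; no current driver exposes anything like it, and the numbers are made up.

Code:
# Hypothetical "just-in-time" vsync scheduling: delay the start of rendering so the
# frame finishes right before the next refresh, based on predicted render time.
from collections import deque

class JitScheduler:
    def __init__(self, refresh_hz=60, margin_ms=2.0, history=30):
        self.tick_ms = 1000.0 / refresh_hz
        self.margin_ms = margin_ms                 # safety buffer against overruns
        self.render_times = deque(maxlen=history)  # recent measured render times

    def record(self, render_ms):
        """Feed in the measured render time of the last frame."""
        self.render_times.append(render_ms)

    def predicted_render_ms(self):
        """Conservative prediction: the worst render time seen recently."""
        if not self.render_times:
            return self.tick_ms                    # pessimistic default: a full frame
        return max(self.render_times)

    def delay_before_render_ms(self, ms_since_last_refresh):
        """How long to wait before starting the next frame, never less than zero."""
        time_left = self.tick_ms - ms_since_last_refresh
        budget = self.predicted_render_ms() + self.margin_ms
        return max(0.0, time_left - budget)

if __name__ == "__main__":
    sched = JitScheduler()
    for t in (5.0, 6.0, 5.5, 4.8):                 # pretend render times in ms
        sched.record(t)
    # If we are 1 ms into the current refresh interval, wait this long before rendering:
    print(f"delay: {sched.delay_before_render_ms(1.0):.2f} ms")

Run over the budget and the penalty is a stutter, which is why the margin (and, in the original idea, a smarter predictor) matters.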
 
I can't be trusted with the math, so I cannot comment on it. Still, I'd rather start with 120 Hz as a baseline. I've only had one 60 Hz panel between my old 120Hz LCD and my last CRT, and it was a miserable experience because I could no longer enjoy playing Quake, UT or anything like that.

So, ~50ms (depending on game and setup) local + ~50ms ping to a server, isn't that 100ms? That's not even taking into account possible side effects.

I can't wait for cheap, big 4K 120 Hz panels. I am saying this because in reality all you have to do is move your mouse in ANY game that's rendering fast enough to notice the difference.

Meanwhile we have this
https://hardforum.com/threads/black...-have-been-lowered-from-60hz-to-20hz.1970098/ going on hahah


I don't think you can just add the network lag to the input lag. They contribute to each other in some respects, but the network lag doesn't impact mouse response, and most games use fancy predictive netcode these days to minimize the impact of network lag.

I used 60hz, because since the CRT era when I would play vsynced to 100hz at 1600x1200, I have only had 60hz panels, and I think that is true for most people. That said, it scales linearly. If 120fps is your target, just halve my input lag numbers in my post above, as the frame times are half as long.

If I were playing competitively in twitchy online games, I'd get something with a higher refresh, but I don't, so I prefer resolution over high refresh.

Personally I've never really enjoyed Quake/UT style games. I mean, I did back in the 90's when there wasn't anything better than twitchy fast paced deathmatch games, but ever since I have found that style of gameplay desperately boring. It depends too much on twitchy fast luck of the draw, and not enough on strategic planning.

I actually never played any UT game, but I did play Doom, Quake and Quake 2 back in the day.

I didn't get Q3A until a couple of years after launch, and by that time I had played Half-Life, Deus Ex and Counter-Strike and there was no going back. Counter-Strike was great and required more strategy and teamwork, but I never quite enjoyed its fast, twitchy run-and-gun nature. I preferred something with a bit more of a deliberate pace and careful planning and strategy.

When I discovered Red Orchestra (and its mods, Darkest Hour and Mare Nostrum) I found what I like. I essentially consider it the greatest game of all time, and the sequel, Red Orchestra 2 to be quite up there as well, but only if played in Classic or Realism modes.

IMHO 60Hz is serviceable, but you can still get a noticeable improvement above it, though the diminishing returns set in rather quickly. I'd argue that above 90fps there is no point, really. At that point you are just creating needless heat and noise and would be better served by a higher resolution.

I would jump for an adaptive refresh 120hz 4k screen at about 43" though. I'd love that.
 
I don't think you can just add the network lag to the input lag. They contribute to each other in some respects, but the network lag doesn't impact mouse response, and most games use fancy predictive netcode these days to minimize the impact of network lag.

I used 60hz, because since the CRT era when I would play vsynced to 100hz at 1600x1200, I have only had 60hz panels, and I think that is true for most people. That said, it scales linearly. If 120fps is your target, just halve my input lag numbers in my post above, as the frame times are half as long.

If I were playing competitively in twitchy online games, I'd get something with a higher refresh, but I don't, so I prefer resolution over high refresh.

Personally I've never really enjoyed Quake/UT style games. I mean, I did back in the 90's when there wasn't anything better than twitchy fast paced deathmatch games, but ever since I have found that style of gameplay desperately boring. It depends too much on twitchy fast luck of the draw, and not enough on strategic planning.

I actually never played any UT game, but I did play Doom, Quake and Quake 2 back in the day.

I didn't get Q3A until a couple of years after launch, and by that time I had played Half-Life, Deus Ex and Counter-Strike and there was no going back. Counter-Strike was great and required more strategy and teamwork, but I never quite enjoyed its fast, twitchy run-and-gun nature. I preferred something with a bit more of a deliberate pace and careful planning and strategy.

When I discovered Red Orchestra (and its mods, Darkest Hour and Mare Nostrum) I found what I like. I essentially consider it the greatest game of all time, and the sequel, Red Orchestra 2 to be quite up there as well, but only if played in Classic or Realism modes.

IMHO 60Hz is serviceable, but you can still get a noticeable improvement above it, though the diminishing returns set in rather quickly. I'd argue that above 90fps there is no point, really. At that point you are just creating needless heat and noise and would be better served by a higher resolution.

I would jump for an adaptive refresh 120hz 4k screen at about 43" though. I'd love that.

Well, I mentioned net lag because it adds to the effective lag that you get to experience. I am not giving up my multiplayer angle. As for great predictive netcode, care to share any names? Because I see 20 Hz servers in CoD, and have you actually played any modern game like that? PUBG is horrible for the very same reason. What good is a predictive backend when you're running at 20 Hz on the server, and for sure not much more on the client (updates, not actual frames)? Things are not as good as you imagine. I actually installed QuakeLive because I simply had to know for sure that I was not simply remembering it wrong. No.

It's objectively better in every technical respect, rendering quality and gameplay aside. That's all I am arguing.

I remember Red Orchestra. Wasn't it free at some point? Looked impressive. Too slow, though. 90fps was enough because the old UT engine sadly didn't allow for more.

Talk to you again after you make the jump to a nice and fast panel. I mean, there are some "120 Hz" panels out there. I've looked into a Sony model that had ~18ms input lag running in "gaming" 120Hz mode; twice that in 60Hz mode. Current TVs are not that great for (the) games (that I play) :)
 
Well, I mentioned net lag because it adds to the effective lag that you get to experience. I am not giving up my multiplayer angle. As for great predictive netcode, care to share any names? Because I see 20 Hz servers in CoD, and have you actually played any modern game like that? PUBG is horrible for the very same reason. What good is a predictive backend when you're running at 20 Hz on the server, and for sure not much more on the client (updates, not actual frames)? Things are not as good as you imagine. I actually installed QuakeLive because I simply had to know for sure that I was not simply remembering it wrong. No.

It's objectively better in every technical respect, rendering quality and gameplay aside. That's all I am arguing.

I remember Red Orchestra. Wasn't it free at some point? Looked impressive. Too slow, though. 90fps was enough because the old UT engine sadly didn't allow for more.

Talk to you again after you make the jump to a nice and fast panel. I mean, there are some "120 Hz" panels out there. I've looked into a Sony model that had ~18ms input lag running in "gaming" 120Hz mode; twice that in 60Hz mode. Current TVs are not that great for (the) games (that I play) :)


Either way it's a ways off. There isn't a single GPU out there that can handle modern games at 4K 120Hz. Heck, my Pascal Titan X, overclocked to 2080MHz and +700 on the RAM on water, can only barely handle most games made in the last 5 years at 60fps at 4K. Newer, more demanding titles like Fallout 4 or Deus Ex: Mankind Divided are frequently in the 45fps range at 4K. Even some older titles, like the 8-year-old Metro 2033, really struggle at 4K.

So, I'd love a 43" 4K 120Hz adaptive refresh screen, but short term my gaming wouldn't benefit much from it, because even if the damned 2080 Tis ever come back into stock, I'm not getting that kind of framerate.
 
After reading that article on ExtremeTech and looking at the graphs... I don't think that G-Sync is causing performance loss. I think that G-Sync has to work within the max refresh of the display, and it is going to lower the fps to never be greater than the display's max refresh rate. Basically, the higher fps is going to have torn screens (the tests with G-Sync off)... the issues that vsync + G-Sync fix.

My theory, anyway. He does mention that, but then proceeds to test on a 60Hz max refresh G-Sync display?

Do the test again with a 120hz or better G-Sync display.

G-Sync was doing exactly what it is supposed to do. All it really points out is that SLI is overkill unless you've got a really fast refresh display.
 
Either way it's a ways off. There isn't a single GPU out there that can handle modern games at 4K 120Hz. Heck, my Pascal Titan X, overclocked to 2080MHz and +700 on the RAM on water, can only barely handle most games made in the last 5 years at 60fps at 4K. Newer, more demanding titles like Fallout 4 or Deus Ex: Mankind Divided are frequently in the 45fps range at 4K. Even some older titles, like the 8-year-old Metro 2033, really struggle at 4K.

So, I'd love a 43" 4K 120Hz adaptive refresh screen, but short term my gaming wouldn't benefit much from it, because even if the damned 2080 Tis ever come back into stock, I'm not getting that kind of framerate.


I greatly enjoy my 34" Acer Predator display at 85Hz. It's 2K, not the same as 4K at all, but I love the ultrawide format. You could argue that the extra width isn't useful, but I'd say it feels natural, like my peripheral vision: if I didn't have it, I'd feel like something was missing.

I wish there was more acceptance and demand of this display format.
 
Either way it's a ways off. There isn't a single GPU out there that can handle modern games at 4K 120Hz. Heck, my Pascal Titan X, overclocked to 2080MHz and +700 on the RAM on water, can only barely handle most games made in the last 5 years at 60fps at 4K. Newer, more demanding titles like Fallout 4 or Deus Ex: Mankind Divided are frequently in the 45fps range at 4K. Even some older titles, like the 8-year-old Metro 2033, really struggle at 4K.

So, I'd love a 43" 4K 120Hz adaptive refresh screen, but short term my gaming wouldn't benefit much from it, because even if the damned 2080 Tis ever come back into stock, I'm not getting that kind of framerate.

There's a lot to be desired from Mankind Divided. I know they're simulating a fairly large area, but it absolutely doesn't have to be that way. There's a lot wrong with both new Deus Ex games. They kind of work for me but run like shit. I am getting ~40-100 fps depending on where I am and what's going on. At first I thought it was probably my 2500K that was holding me back, or my 1070, but after looking at reviews and YouTube videos I learned there's no fixing it. Great shame.

As for Metro, I don't know why everyone loved this game. The atmosphere was good, I get that. I had a 560 Ti back in 2011 and gave up on the game in a couple of hours. Honestly, the graphics weren't anything special, but it ran like absolute crap. I mean low 20s. I want to support these independent developers, but they are making it really hard to justify the encouragement. Hard Reset was kind of like that too. Great game, great concept. It's not all mindless action at all. But it ran like shit on my 560 Ti. I swear there was at least half a second delay between me pressing space and the character actually jumping. Completely different game on modern hardware.
 
Yeah, I wouldn't use SLI today. Too many drawbacks.

Before I'd ever consider doing SLI, I'd go all the way up to the fastest single GPU on the market, currently the 2080 Ti (or maybe the Titan V? I haven't seen direct comparisons), and only if that isn't fast enough for my application would I consider SLI, and even then, I'd be reluctant.

SLI and CrossFire just have too many problems.

Yeah, hard to disagree with that. However, the Titan V is $3K, which is likely more than a complete high-end rig costs. It basically just went into Quadro/Tesla territory. SLI still works in some games, even those that seem not to support it, like ACO or RoTR and SoTR, as is evident with my SLI setup (1080 Ti). However, I think that is also what accounts for game crashes from time to time. I haven't had the time nor the desire to experiment, and based on my experience, Deus Ex: Mankind Divided started this crashing trend (I gave up on that game, as I'm unwilling to pull one of the cards).
 
I know I'm coming out of far left field with this one. Recently MS did another API update, and around the time NV's 4xx drivers came out I started seeing some issues on both my rigs (non-SLI 1080 Ti / G-Sync & 3D, and 1080 SLI / V-Sync HDR). I noticed the problems with Shadow of the Tomb Raider. I know the game has a crash-happy reputation, but while I kept both rigs at 399.24 it was much, much more stable. I also use that game as an example because performance- and feature-wise there are significant differences between DX11 and DX12. DX11 allowed 3D, but I had to use DX12 for everything else (SLI, HDR, etc.). It wouldn't be the first time a new API without correct drivers created a new problem that someone only noticed when re-benching an older title. I took this ride once before with the Fall Creators Update, and those issues took nearly six months of NV drivers to correct.
 