How VSync works, and why people loathe it

No, not at all. In page flipping triple buffering (what you described), input lag will be reduced if your FPS is much higher than your refresh rate, yes, but unfortunately that isn't the method that is used. DirectX's triple buffering isn't page flipping, but render ahead (a queue of completed frames). Completed frames are shown in the order they were completed, regardless of whether or not there is a newer completed frame to be shown. That will result in *INCREASED* input lag if your FPS is higher than your refresh rate - at least one frame, and more depending on how big the render ahead queue actually is (from 0 to 8 frames, with 3 being the default). So the default triple buffering in DirectX, combined with FPS higher than your refresh rate, can easily result in ~32ms of additional input lag. And some games do actually use a larger render ahead queue, adding even more input lag if your FPS exceeds your refresh rate.
I didn't know this either. I have to ask: what the heck is the point of this? It seems totally and utterly useless (well okay it deals with the integral fraction framerates I guess...). The page flipping method just seems so much more obvious, easy to implement and beneficial, there must be a reason they chose this method instead.
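To make sure I'm following the math above, here's a toy sketch of the extra lag a saturated render-ahead queue would add over page flipping (a made-up model, not any real DirectX API; the queue depths are just the values mentioned above):

Code:
#include <cstdio>

// Toy model: when the GPU renders faster than the display refreshes, a
// render-ahead queue of depth N stays full, so a newly completed frame sits
// behind N-1 older ones before it is scanned out. Page flipping would show
// the newest frame instead, so the difference is roughly (N-1) refreshes.
int main() {
    const double refresh_hz = 60.0;
    const double frame_ms   = 1000.0 / refresh_hz;   // ~16.7 ms per refresh
    const int depths[]      = {1, 2, 3, 8};          // 3 is the default mentioned above

    for (int depth : depths) {
        double extra_lag_ms = (depth - 1) * frame_ms;
        std::printf("queue depth %d -> ~%.0f ms extra input lag vs. page flipping\n",
                    depth, extra_lag_ms);
    }
    return 0;
}

With a depth of 3 that works out to roughly 33 ms, which lines up with the ~32 ms figure above.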

It's true that, if you waited until the last frame to do the blending, you'd have the same input lag as v-sync (though with much smoother looking movement). The trick is, you don't have to wait for all the frames to start displaying the output.
Interesting idea, and I think it might be possible to make it work with some effort. Possibly without even modifying the render pipeline... Still, your simplified description seems like it's probably masking some serious practical problems, and I think the technique would still only be usable with framerates at least a couple times higher than the refresh rate.
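To be fair, the incremental part on its own does look simple; a running average like the sketch below is all it would take in principle (just a toy on plain pixel arrays, ignoring gamma, weighting choices, and the fact that a real pipeline would do this on the GPU):

Code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Running equal-weight blend of the frames completed so far in the current
// refresh interval. Because the blend is updated as each frame arrives,
// output can begin before the final frame of the interval even exists.
struct FrameAccumulator {
    std::vector<float> acc;   // accumulated image, one float per channel
    int frames = 0;

    void add(const std::vector<std::uint8_t>& frame) {
        if (acc.empty()) acc.assign(frame.size(), 0.0f);
        ++frames;
        const float w = 1.0f / frames;                 // weight of the newest frame
        for (std::size_t i = 0; i < frame.size(); ++i)
            acc[i] = acc[i] * (1.0f - w) + frame[i] * w;
    }

    std::vector<std::uint8_t> blended() const {        // current result, valid at any point
        std::vector<std::uint8_t> out(acc.size());
        for (std::size_t i = 0; i < acc.size(); ++i)
            out[i] = static_cast<std::uint8_t>(acc[i] + 0.5f);
        return out;
    }

    void reset() { acc.clear(); frames = 0; }          // call once per refresh interval
};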

Fair enough, but at the same time that situation really doesn't happen in most games - at least it doesn't in games that are designed to not give you a seizure.
No, but it's pretty clear to me that the brain can process information at these kinds of time scales. For example, consider MPAA tracking dots in theaters these days (if you haven't noticed them, here's an example). They last one frame, are often present in busy action sequences with lots of visual stimuli, or just before a cut, and are of moderate contrast. They're not at all hard to notice, and you can clearly discern the structure and probably transcribe the dots from memory shortly afterward. Yes it's 40ms, but the time scale is similar and not outside the realm of possibility.

Plus, and far more importantly IMO, we're not talking about interpreting a unique stimulus, but a feedback loop that's already set up and is basically 'subconscious'. The problem is that your brain, in aiming at the enemy, overshoots the mark because the feedback is delayed. In this continuous loop context, my gut feeling is that the brain's response is very different than any reaction or perception test would indicate. In poking around looking for a paper I thought I read about a similar situation, I found a different paper citing that an update rate of 1000Hz is necessary for haptic feedback in simulated surgery (for training; no delays in this example) or surgeon accuracy is reduced. Not suggesting that's necessary with visual feedback, but certainly feedback systems like this are very sensitive.
 
I didn't know this either. I have to ask: what the heck is the point of this? It seems totally and utterly useless (well okay it deals with the integral fraction framerates I guess...). The page flipping method just seems so much more obvious, easy to implement and beneficial, there must be a reason they chose this method instead.

It does a couple of things:

1) Input lag really isn't a problem except for an extremely small minority of people, so reducing input lag isn't a high priority and doesn't take precedence over a conflicting approach.

2) It can help create smoother FPS. Look at microstutter - rapidly changing FPS can make things look less smooth. A render-ahead queue can iron out those small variations in render time. This really helps on consoles, as they can then vsync at 30fps and keep the vsync lock even if the occasional frame takes longer to render.
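As a toy illustration of that smoothing effect (made-up frame times, nothing measured from a real console title): even with one 42 ms spike, a small backlog of completed frames keeps a fresh frame ready at every 30 fps vblank.

Code:
#include <cstdio>
#include <queue>

int main() {
    const double T = 1000.0 / 30.0;                              // 30 fps vsync: ~33.3 ms per refresh
    const double render_ms[] = {30, 30, 30, 42, 25, 30, 30, 30}; // one frame blows the budget
    const int n = sizeof(render_ms) / sizeof(render_ms[0]);

    // The GPU renders continuously into a render-ahead queue, so completion
    // times are just the running sum of the render times in this toy model.
    std::queue<double> ready;
    double t = 0;
    for (int i = 0; i < n; ++i) { t += render_ms[i]; ready.push(t); }

    // The display consumes at most one completed frame per vblank.
    int shown = 0;
    for (int k = 1; shown < n; ++k) {
        const double vblank = k * T;
        if (!ready.empty() && ready.front() <= vblank) {
            ready.pop(); ++shown;
            std::printf("vblank %2d: new frame shown\n", k);
        } else {
            std::printf("vblank %2d: previous frame repeated (stutter)\n", k);
        }
    }
    return 0;
}

With strict double buffering the 42 ms frame would miss its vblank and the rate would momentarily drop; here the frames that finished early cover for it.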


Yes it's 40ms, but the time scale is similar and not outside the realm of possibility.

No, the time scale is very important. We're talking 24fps vs. 60fps. We know that 24fps without motion blur doesn't look smooth to our eyes. For many, even just bumping that to 35fps can make it look smooth. Our eyes suck, you give them far too much credit.

Plus, and far more importantly IMO, we're not talking about interpreting a unique stimulus, but a feedback loop that's already set up and is basically 'subconscious'. The problem is that your brain, in aiming at the enemy, overshoots the mark because the feedback is delayed. In this continuous loop context, my gut feeling is that the brain's response is very different than any reaction or perception test would indicate. In poking around looking for a paper I thought I read about a similar situation, I found a different paper citing that an update rate of 1000Hz is necessary for haptic feedback in simulated surgery (for training; no delays in this example) or surgeon accuracy is reduced. Not suggesting that's necessary with visual feedback, but certainly feedback systems like this are very sensitive.

Interesting - by any chance do you still have that paper?
 
I notice it even on CRTs at 120Hz. But the more I read, the more I think that some people can see it and others can't.

Even with scientific testing these guys said they overall couldn't detect it on this monitor:

Input lag on AW2310 is also very low. We measured input lag to insignificant levels and our measurements seldom showed anything at all. This is very good and important for gaming. You can use our inputlagTest software here to evaluate input lag on all monitors and TVs.
http://www.flatpanelshd.com/review.php?subaction=showfull&id=1275899900

So the OptX was essentially matching the CRT they were using. I've been playing Portal again in Surround. Portal is a game where I think it's easy to detect lag, and I've noticed it there plenty. Turned v-sync on, frame rate LOCKED at 120, not a SINGLE dip below that ever, and the game is just totally quick. No lagging, no tearing, no anything other than 120 FPS of beautiful 2D Surround.

But I do agree with you, some notice more than others, and not only that, its detection will change from game to game even with the same individual.
 
Even with scientific testing these guys said they overall couldn't detect it on this monitor:

http://www.flatpanelshd.com/review.php?subaction=showfull&id=1275899900

I don't think you can measure the input lag that is being discussed here; it's just a feeling that things aren't quite right with your mouse control.

The scientific testing done at the link is basically running a game on two monitors, one CRT and one LCD, and then seeing how far behind the LCD is. So all they are testing is how far behind the Alienware monitor is compared to a CRT.

It's been the standard test for gaming monitors for years now.
 
I don't think you can measure the input lag that is being discussed here; it's just a feeling that things aren't quite right with your mouse control.

The scientific testing done at the link is basically running a game on two monitors, one CRT and one LCD, and then seeing how far behind the LCD is. So all they are testing is how far behind the Alienware monitor is compared to a CRT.

It's been the standard test for gaming monitors for years now.

I did say that it was just a comparison to the CRT they were using. At any rate, if these Alienwares aren't fast enough then there's nothing much else left, and if 1 in 1000 could detect the lag I'd be shocked. As I said, I replicated a condition in which I'd noticed it, and it was just not there. And this was in Surround, which will amplify the problem. When you move your mouse it's really easy to notice 5 ft of screen not following.
 
My opinion is vsync is aesthetically ideal, and when twitch response isn't needed it seems reasonable enough to have on...
 
I don't think you can measure the input lag that is being discussed here; it's just a feeling that things aren't quite right with your mouse control.

The scientific testing done at the link is basically running a game on two monitors, one CRT and one LCD, and then seeing how far behind the LCD is. So all they are testing is how far behind the Alienware monitor is compared to a CRT.

It's been the standard test for gaming monitors for years now.

If the LCD is as fast as the fastest CRT, what is being argued here? If there is nothing faster than the LCD, then it is the best, and therefore the best monitor to buy, correct? If someone were to complain that no monitor in the world is fast enough to keep up with their eyes, that sounds like a personal, likely mind-over-matter, problem.
 
If the LCD is as fast as the fastest CRT, what is being argued here? If there is nothing faster than the LCD, then it is the best, and therefore the best monitor to buy, correct? If someone were to complain that no monitor in the world is fast enough to keep up with their eyes, that sounds like a personal, likely mind-over-matter, problem.

Nothing is really being argued here. I was just letting heatlesssun know that I notice the problem on a CRT, and that the scientific method used is just comparing the LCD to a CRT - so if you notice the lag caused by enabling vsync on a CRT, you are going to notice it on any LCD.
 
For games with varying frame rates, the triple buffering scheme can restrict processing in the same way that double buffering does, since it is feasible for processing to fill both back buffers before the vertical scan is finished. In other words, whereas double buffering blocks when you can render an entire frame while waiting for the vsync, triple buffering will cause the same block when you can render two. This means you can feasibly extend the number of buffers as far as you like.

A technical problem which can occur in systems using the triple buffer scheme is that, since it is best used to smooth things out when the number of frames that can be computed per screen refresh varies between one and two buffer updates per screen, it will almost certainly lead to judder even if screen tearing is eliminated, because the rate at which the buffers are switched will vary: sometimes switching every screen, sometimes every other screen, and then back again.

Double buffering with vsync is an 'ideal' solution. This requires the programmer to ensure a complete back buffer can be computed exactly once for every screen refresh, keeping both front and back buffers refreshing at the same rate.
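In loop form, that 'ideal' double-buffered scheme looks roughly like the sketch below; render_frame() and wait_for_vblank() are hypothetical stand-ins (stubbed out here), not any particular platform's API.

Code:
#include <cstdio>
#include <utility>   // std::swap

struct Buffer { const char* name; };                 // stands in for a full framebuffer

void render_frame(Buffer* target) { std::printf("rendering into %s\n", target->name); }
void wait_for_vblank()            { std::puts("...vblank..."); }

int main() {
    Buffer a{"buffer A"}, b{"buffer B"};
    Buffer* front = &a;                              // buffer being scanned out
    Buffer* back  = &b;                              // buffer being drawn into

    for (int refresh = 0; refresh < 4; ++refresh) {  // a few refreshes for illustration
        render_frame(back);                          // must finish within one refresh interval
        wait_for_vblank();                           // constrain the swap to the blanking period
        std::swap(front, back);                      // swap roles: a pointer exchange, no pixel copy
        std::printf("refresh %d: %s is now on screen\n", refresh, front->name);
    }
    return 0;
}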

Flight simulators used to use a scheme in which they would guarantee precisely 60Hz screen transport by dynamically changing geometry loads so every new frame could be computed during one screen scan.

One drawback not mentioned with schemes that use more than two buffers is that when screen compute times plummet, a noticeable drop in responsiveness can be detected, since the motion of the user is being computed for the new screen that is at the end of the buffer queue. In other words, if you move a controller while the third screen is being drawn, your move will be drawn to the following screen, four screens ahead in the queue.
 
A quad necro (2005, 2008, 2010, and now 2011) for a first post. Awesome!!!
 
Which is the nicest thing about 120Hz monitors: you don't have to, unless you consider 120 FPS a performance hit.

Yup. I can't stand a 60fps limit. My first PC in 1998 was a Pentium 3 with a Voodoo 3 on a 17" Trinitron that natively did 85Hz refresh, and it kicked ass in HLDM. Now I run 120Hz on a projector at an 80 inch screen size - da friggin' bomb if you're into FPS games. I play TF2 on it and have a blast (I can consistently snipe pretty well with the Sydney Sleeper; some people think it's underpowered, but it covers the enemy in jarate, which leads to mini-crits, hehe). Anyhow, smooth as butter when 90+ fps is steady, imho (I can't stand how every site thinks 60 fps is smooth... ugh, are they blind?).

I'll only use 60fps when it comes to single-player games with 3D Vision - 60fps per eye ;) My kit arrives Saturday or Monday with the Duke Nukem download ticket; not that I'm dying to play that title, but it's free, so it goes on my Steam account. 3D is fun in Oblivion, SF4 (or Arcade Edition), or in racers like GRID, so I'm hoping it makes Skyrim that much better - since it's a DX9 title I need something extra to enhance the graphics. ;) Or it will just feel like 2007 for me, hehe... which isn't necessarily a bad thing... 60 hertz IS, though. ;)
 
For games with varying frame rates, the triple buffering scheme can restrict processing in the same way that double buffering does, since it is feasible for processing to fill both back buffers before the vertical scan is finished. In other words, whereas double buffering blocks when you can render an entire frame while waiting for the vsync, triple buffering will cause the same block when you can render two. This means you can feasibly extend the number of buffers as far as you like.

A technical problem which can occur in systems using the triple buffer scheme is that, since it is best used to smooth things out when the number of frames that can be computed per screen refresh varies between one and two buffer updates per screen, it will almost certainly lead to judder even if screen tearing is eliminated, because the rate at which the buffers are switched will vary: sometimes switching every screen, sometimes every other screen, and then back again.

Double buffering with vsync is an 'ideal' solution. This requires the programmer to ensure a complete back buffer can be computed exactly once for every screen refresh, keeping both front and back buffers refreshing at the same rate.

Flight simulators used to use a scheme in which they would guarantee precisely 60Hz screen transport by dynamically changing geometry loads so every new frame could be computed during one screen scan.

One drawback not mentioned with schemes that use more than two buffers is that when screen compute times plummet, a noticeable drop in responsiveness can be detected, since the motion of the user is being computed for the new screen that is at the end of the buffer queue. In other words, if you move a controller while the third screen is being drawn, your move will be drawn to the following screen, four screens ahead in the queue.

Bad necroposter, bad! *hits newbie with rolled up newspaper* :mad:
 
Nice read.
Interestingly, nvidia solved the lower-than-refresh framerate issue with their adaptive vsync, which only applies vsync when the FPS is above the refresh rate.
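Roughly, the behaviour is: present with vsync when the last frame fit within the refresh budget, and without it when it didn't, so vsync never drags you down to half the refresh rate. A hand-rolled approximation of the concept (not NVIDIA's actual driver logic; the two present calls are made-up stubs):

Code:
#include <cstdio>

// Hypothetical stand-ins for "swap at the next vblank" vs. "swap immediately".
void present_synced()    { std::puts("present: wait for vblank (no tearing)"); }
void present_immediate() { std::puts("present: swap now (may tear)"); }

// Adaptive-vsync-style choice: keep vsync only while we can hold the refresh rate.
void present_adaptive(double last_frame_ms, double refresh_hz) {
    const double budget_ms = 1000.0 / refresh_hz;
    if (last_frame_ms <= budget_ms)
        present_synced();       // fast enough: keep vsync
    else
        present_immediate();    // too slow: drop vsync instead of halving the frame rate
}

int main() {
    present_adaptive(12.0, 60.0);   // under the ~16.7 ms budget -> synced
    present_adaptive(22.0, 60.0);   // over budget -> immediate (this is where tearing can return)
    return 0;
}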
 
Nice read.
Interestingly, nvidia solved the lower-than-refresh framerate issue with their adaptive vsync, which only applies vsync when the FPS is above the refresh rate.

They didn't really solve it, as you still get some tearing with adaptive vsync enabled.

LucidLogix actually did solve it with their Virtu MVP tech.
http://www.lucidlogix.com/product-virtu-mvp.shtml

In titles it's compatible with, it really does fix the tearing and responsiveness problems with vsync.
 
I always turn off VSYNC. Take Team Fortress 2: it's always in the high 200s in frames per second, so there would be no problem keeping up with a 60fps VSYNC cap. However, I think it feels and looks smoother to run without VSYNC.
 
This was excellent.
Thanks for helping me stay awake in class! Very interesting read.
Worth necro'ing. Kudos to the OP
Nice read.
Too bad it's wrong on nearly every important point. That's not what causes tearing. That's not what vsync does. That's not how multiple buffering works. That's not what double buffering is for. And single buffering hasn't even existed for decades.

I cringe every time I see this bumped, because it seems to leave people feeling enlightened, when they've really just been misinformed...
 
I always turn off VSYNC. Take Team Fortress 2: it's always in the high 200s in frames per second, so there would be no problem keeping up with a 60fps VSYNC cap. However, I think it feels and looks smoother to run without VSYNC.

And that makes sense. But if you are like me, running a midrange rig and trying to play BF3 with high settings and AA, I only get around 45fps anyway. This is below my screen's refresh rate of 60, and tearing occurs. As mentioned a lot in this thread, it all comes down to personal preference. Some people would rather have a smooth experience. Some people would prefer to have a higher frame rate but choppy screen performance.
 
Too bad it's wrong on nearly every important point. That's not what causes tearing. That's not what vsync does. That's not how multiple buffering works. That's not what double buffering is for. And single buffering hasn't even existed for decades.

I cringe every time I see this bumped, because it seems to leave people feeling enlightened, when they've really just been misinformed...

Then please enlighten us. This was my first time reading it and it seems to match up with my understanding of vsync and what causes screen tearing. If this is wrong, then please correct it. I'd love to see your take.
 
Too bad it's wrong on nearly every important point. That's not what causes tearing. That's not what vsync does. That's not how multiple buffering works. That's not what double buffering is for. And single buffering hasn't even existed for decades.

I cringe every time I see this bumped, because it seems to leave people feeling enlightened, when they've really just been misinformed...

I found this thread an interesting read, but now I'm confused after your post.
 
Too bad it's wrong on nearly every important point. That's not what causes tearing. That's not what vsync does. That's not how multiple buffering works. That's not what double buffering is for. And single buffering hasn't even existed for decades.

I cringe every time I see this bumped, because it seems to leave people feeling enlightened, when they've really just been misinformed...


Please explain your position. I thought it was a good read, but if it's not correct then add a bit more substance to your reply or stfu, because you are not helping anyone by pushing that opinion as fact with no evidence to the contrary.
 
...add a bit more substance to your reply or stfu...
Sorry, I should have. A pretty dickish response on my part. I've just rebutted this goddamn thing too many times already... But at least I've got something to copy-paste :)

This is what is stated, implicitly or explicitly, in the first half of that post:
  • Unsynchronised rendering uses a single frame buffer
  • Renderers output the final pixels for the displayed frame one by one, row by row, top to bottom
  • Frame transmission to the monitor is instantaneous
  • Tearing happens when frame transmission occurs part-way through the rendering of the current frame
  • Double buffering exists to reduce tearing
  • Double buffering involves the physical copy of frames between front and back buffers
  • Tearing in double-buffered rendering happens when frame transmission occurs part-way through the copy process
All of those points are incorrect. The truth is that:
  • All 3D rendering is (at least) double-buffered.
  • A half-complete frame does not look like half of a complete frame. Renderers build the final image over multiple passes. The intermediate stages may look nothing like the final result. That is why we have double-buffering - so the monitor always has a completed frame to display while the other contains a half-drawn mess.
  • Swapping buffers does not involve a physical copy. It's as simple as flipping a switch; it's instantaneous. The front buffer becomes the back buffer, and the back buffer becomes the front buffer. It's essentially just a renaming.
  • Frame transmission takes a finite amount of time, potentially as much as the entire refresh interval (for a CRT, it's exactly the refresh interval minus the vblank time; on an LCD, it could be anywhere between the refresh interval and the maximum allowed by the DVI/HDMI/DP bandwidth).
  • Tearing occurs when this frame transmission is interrupted by a buffer swap. In between sending pixel n and pixel n+1, it's suddenly retrieving these pixels from the other buffer, and the rest of the frame is drawn from a different source image. At high framerates, you even see multiple swaps during a single transmission, leading to multiple tears in a single displayed frame.
The OP's description of the vsync mechanism itself was more or less on the right track, just built on a faulty premise - rather than delaying frame copies until after a screen refresh, it's constraining buffer swaps to occur between frame transmissions. So the details of the examples are off, though the outcome is the same.
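If it helps, here's a toy model of those two points - the swap being nothing more than a pointer exchange, and a tear being a swap that lands mid-transmission. Nothing here is meant to model real hardware; it's just the bookkeeping:

Code:
#include <cstdio>

int main() {
    // Two "images": one filled with 'A', one with 'B'. The display transmits
    // rows one at a time from whichever buffer the front pointer names.
    const int rows = 8;
    char frame_a[rows], frame_b[rows];
    for (int r = 0; r < rows; ++r) { frame_a[r] = 'A'; frame_b[r] = 'B'; }

    char* front = frame_a;   // buffer currently being transmitted
    char* back  = frame_b;   // buffer the renderer would be drawing into

    for (int r = 0; r < rows; ++r) {
        if (r == 3) {                                        // unsynchronised swap mid-transmission
            char* tmp = front; front = back; back = tmp;     // just a pointer flip, no copy
        }
        std::printf("row %d comes from image %c\n", r, front[r]);
    }
    // Rows 0-2 come from image A, rows 3-7 from image B: a tear.
    return 0;
}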

The triple buffering explanation is wrong for the same reasons. But beyond that - at least, from what I've heard - the general approach (one front buffer, two back buffers) only applies to OpenGL (which accounts for approximately 0% of commercial releases these days...), and is impossible to accomplish in Direct3D. The closest you can get with D3D is a three-buffer queue, which solves the framerate-halving issue, but comes at the expense of increasing latency (the queue is longer, so there's potentially a longer delay between adding a frame and displaying it).

A few more points worth making about triple buffering:
  • There's a checkbox in your control panel, but it only applies to OpenGL
  • Contrary to what the OP said, you can force triple buffering (or at least, the nearest D3D equivalent, outlined above) in any DirectX title, using D3DOverrider (a standalone app bundled with RivaTuner)
  • I don't think triple buffering is necessary in SLI. I have no clue how many buffers there are or how they're laid out, but there appear to be enough of them to eliminate the classic double-buffering framerate issues. (No idea whether or not this is the case in Crossfire.)
 