How VSync works, and why people loathe it

The buffers don't flip at a fixed rate matched to the monitor's refresh; they flip once the frame has been drawn. The tearing comes from the buffer flipping while the monitor is mid-refresh, not from the buffer flipping while the graphics card is drawing to it.
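
To make that concrete, here is a minimal, purely hypothetical simulation (plain C++, no real graphics API) of a scanout reading the front buffer row by row while an unsynchronised flip lands mid-refresh; the row where the flip lands is where the tear appears.

// Minimal, hypothetical simulation of how a mid-refresh buffer flip produces a tear.
// No real graphics API is used; "scanout" just copies rows from whichever buffer is front.
#include <iostream>
#include <vector>

int main() {
    const int rows = 10;                 // pretend the screen is 10 scanlines tall
    std::vector<int> bufferA(rows, 1);   // frame 1 sits in buffer A
    std::vector<int> bufferB(rows, 2);   // frame 2 sits in buffer B
    const std::vector<int>* front = &bufferA;

    std::vector<int> screen(rows);
    for (int row = 0; row < rows; ++row) {
        if (row == 6) front = &bufferB;  // unsynchronised flip arrives mid-refresh
        screen[row] = (*front)[row];     // scanout reads whatever is front *right now*
    }

    // Rows 0-5 show frame 1, rows 6-9 show frame 2: the boundary at row 6 is the tear.
    for (int row = 0; row < rows; ++row)
        std::cout << "scanline " << row << " -> frame " << screen[row] << '\n';
}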

I think there is a minor delay between refreshes on the screen. With CRTs there was a small gap while the electron beam repositioned from the bottom of the screen back to the top, ready to start the new draw, but I believe it takes most of the 16.7ms to actually draw the frame. If a 60Hz monitor could draw the entire screen in, say, 8ms instead of 16.7ms, they'd sell it as a 120Hz monitor.

But fundamentally it's possible for the buffer to flip while the screen is between refreshes, in which case you don't get a tear; that's just very unlikely.

I think he was referring to the time it takes to send the data over DVI/HDMI/whatever. Remember, the buffer gets sent over a cable to the monitor, and the monitor draws from that data - it doesn't draw directly from the buffer. But afaik DVI isn't set up like that, so it still takes 16.7ms to "draw" from the buffer, even though in this case draw actually means transmit over a cable.

You don't perceive it frame by frame, but the overall movement of objects through the scene is going to seem smoother because you're seeing the scene over a period of time rather than just a snapshot. It's a bit like real motion blur captured by cameras with long exposure times: film at 24-25fps seems smooth because each frame captures information about the world over the duration of that frame, which shows up visually as motion blur, whereas traditional rendering shows instantaneous snapshots and needs a higher frame rate to seem smooth.

But what I'm saying is that it doesn't appear like that. It still looks like a single snapshot of time. There isn't any added blur as a result, so it won't be any smoother.

Yes, you still only get 60 complete frames/refreshes, but you see more information than a single frame is capable of conveying. As I said, you can infer the direction and speed of objects moving in the scene from one frame. That's useful for fast-moving targets or a rapidly changing viewport direction.

No, you can't. You can't infer direction or speed at all from one frame, even if that complete frame is composed of multiple frames, at least not without some serious analysis of the scene (and depending on where the tears are, it won't be possible at all), and I dispute that you will be able to analyze and figure that out in 1/60th of a second.
 
I think he was referring to the time it takes to send the data over DVI/HDMI/whatever. Remember, the buffer gets sent over a cable to the monitor, and the monitor draws from that data - it doesn't draw directly from the buffer. But afaik DVI isn't set up like that, so it still takes 16.7ms to "draw" from the buffer, even though in this case draw actually means transmit over a cable.

I don't think there's any significant delay in transmitting the data across the cable. We ideally need some actual numbers on the matter to discuss it, though; I have no idea off the top of my head.

But what I'm saying is that it doesn't appear like that. It still looks like a single snapshot of time. There isn't any added blur as a result, so it won't be any smoother.

There isn't any added motion blur; that was an analogy. The reason I used it is that it's a good example of how you can capture more information in a single image if you capture it over a period of time rather than taking an instantaneous snapshot.

No, you can't. You can't infer direction or speed at all from one frame, even if that complete frame is composed of multiple frames, at least not without some serious analysis of the scene (and depending on where the tears are, it won't be possible at all), and I dispute that you will be able to analyze and figure that out in 1/60th of a second.

Yes, you absolutely can. If a tear occurs straight through the middle of a fast-moving object, we can infer its direction of movement by looking at the displacement between the two halves of the object above and below the tear line.

Your brain does NOT analyse each frame one at a time in 1/60th of a second, but it does take more information from the scene if it's composed of more than one image.

Don't take my word for it, try it yourself, and this is something you can ALL try.

Download Fraps, install it and set up the FPS counter in your games. Now play a first-person shooter on low settings so the frame rate is very high; above 100fps will help you see the effect. Notice how smooth that is, not only how smooth it looks but how smooth it feels, how responsive the mouse is when you move it left/right.

Now turn on vsync and make sure you're constantly capped to 60fps, and see how smooth that is... not as smooth, huh? Definitely not as responsive; in fact, many people here already acknowledge that there is "input lag", and most of you should be able to feel this delay.

Now what is the difference here? One is a scene displayed at 60Hz where each refresh is many frames from a period of time stitched together into one large frame, and the other is 60fps exactly synced with the refresh rate.

Both are displaying at 60Hz, 60 evenly spaced refreshes. If tearing doesn't add more information to the scene, then how do we explain that one is not only smoother but clearly more responsive as well?
 
I don't think there's any significant delay in transmitting the data across the cable. We ideally need some actual numbers on the matter to discuss it, though; I have no idea off the top of my head.

Of course there is. If it didn't take long to transmit the image then the cables would be capable of ridiculously high bandwidths - which they aren't. Remember, 60Hz at 1920x1200 is the limit of single-link DVI - which means it is sending data basically non-stop if you are using a 1920x1200 display.
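
For anyone who wants rough numbers, here is a small back-of-envelope calculation. The 165 MHz single-link DVI pixel clock is the published spec limit; the rest is simple arithmetic that ignores blanking overhead.

// Back-of-envelope numbers for the "the cable is busy almost the whole frame" point.
// Blanking overhead is ignored; the 165 MHz single-link DVI pixel clock is the spec limit.
#include <cstdio>

int main() {
    const double width = 1920, height = 1200, refresh = 60;
    const double active_pixels_per_s = width * height * refresh;      // visible pixels only
    const double bits_per_pixel = 24;

    std::printf("active pixel rate : %.1f Mpixels/s\n", active_pixels_per_s / 1e6);
    std::printf("active video data : %.2f Gbit/s\n", active_pixels_per_s * bits_per_pixel / 1e9);
    std::printf("single-link DVI   : 165 Mpixels/s max pixel clock (~3.96 Gbit/s of pixel data)\n");
    // ~138 Mpixels/s of visible data plus blanking overhead sits very close to the
    // 165 Mpixels/s ceiling, so the link really is streaming for most of each 16.7 ms frame.
}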

Monitors running below the limit of the cable could theoretically have their data transmitted in bursts and buffered on the monitor side, effectively lowering the "draw time" and reducing the chances of a tear, but I don't think it works like that.

There isn't any added motion blur; that was an analogy. The reason I used it is that it's a good example of how you can capture more information in a single image if you capture it over a period of time rather than taking an instantaneous snapshot.

But your analogy is fundamentally flawed. Motion blur works because of the blur, NOT because of the multiple instants of time in the frame. Games running at over 60fps do not get blur, so they are not any smoother.

Yes, you absolutely can. If a tear occurs straight through the middle of a fast-moving object, we can infer its direction of movement by looking at the displacement between the two halves of the object above and below the tear line.

And if it tears above or below the object? Then you don't have a clue where it is going.

Also, if it's moving fast enough to move a significant amount in less than 1/60th of a second, then you are going to perceive little more than a sudden streak across the monitor anyway.

Your brain does NOT analyse each frame one at a time in 1/60th of a second, but it does take more information from the scene if it's composed of more than one image.

And I dispute that.

Download Fraps, install it and set up the FPS counter in your games. Now play a first-person shooter on low settings so the frame rate is very high; above 100fps will help you see the effect. Notice how smooth that is, not only how smooth it looks but how smooth it feels, how responsive the mouse is when you move it left/right.

No, actually, it felt choppy due to the shit-ton of tearing. Mouse may have been more responsive, but I couldn't stand looking at the screen to tell. (playing L4D2)

Now turn on vsync and make sure you're constantly capped to 60fps, and see how smooth that is... not as smooth, huh? Definitely not as responsive; in fact, many people here already acknowledge that there is "input lag", and most of you should be able to feel this delay.

Mouse isn't quite as responsive, but the image is much, *much* smoother. It's like liquid, it's crazy smooth :p (again, playing L4D2)
 
Ugh, I can't stand playing shooters with Vsync on. It's as if the view continues moving even after you've stopped your mouse, and there's also a delay before it responds to mouse movement in the first place. It should be crisp and instantly responsive for fast-reaction games, and the only way you get that is with vsync off (though triple buffering does help a bit with vsync on).
 
I suspect that the data isn't sent in one large packet; rather, the monitor reads the data in chunks straight from the front buffer, each chunk being very small, probably just one horizontal row of pixels. I don't see how this would lower tearing.

But your analogy is fundamentally flawed. Motion blur works because of the blur, NOT because of the multiple instants of time in the frame. Games running at over 60fps do not get blur, so they are not any smoother.

No, it's not flawed; you're not paying attention. I'm not saying the effect is literally motion blur, I'm drawing a comparison between traits that tearing and motion blur have in common.

Motion blur (in traditional film) is a result of long shutter speeds: light is captured over a period of time rather than in an instant snapshot, which means the information captured in one frame is more than just one snapshot in time. This is comparable to (not the same as) tearing, where we essentially have information from multiple points in time over the course of that frame.

http://img195.imageshack.us/img195/2369/motionblur.jpg

See this picture from 100fps.com, which shows motion blur: we can infer the direction of the drumstick in a single frame because the frame was captured over a length of time.

Because we're (potentially) rendering many frames in the span of one refresh, we end up with one refresh made up of many samples over a period of time, which means that, just like with motion blur, we're capturing more information in that one refresh than is available from just one frame (one instantaneous snapshot in time).

And if it tears above or below the object? Then you don't have a clue where it is going.

Correct, if you're concerned with just one object and the tear line doesn't pass through that object, you're none the wiser. However, we're not usually concerned with just one object; we perceive the entire scene, especially considering it's normal in games to rotate the viewport about a point frequently, which means the entire scene is moving through our viewport and the whole thing tears.

Also, if it's moving fast enough to move a significant amount in less than 1/60th of a second, then you are going to perceive little more than a sudden streak across the monitor anyway.

Correct, if you're focusing on one object then it's going to be hard to see; my example with a single object was simply to illustrate the concept. Again, we're not usually focusing on just one object, we're interested in the entire scene, and the entire scene tears as we rotate our viewpoint through it. This is very noticeable.

No, actually, it felt choppy due to the shit-ton of tearing. Mouse may have been more responsive, but I couldn't stand looking at the screen to tell. (playing L4D2)

We have to be careful here: how we perceive things and describe them is going to be a problem. Sure, I can see tearing; it doesn't ever bother me that much, so I'm OK with it, and I do recognise that the scene looks choppy in the sense that each refresh looks like it's been sliced to pieces. But the overall smoothness is better to me, the scene flows much better.

Mouse isn't quite as responsive, but the image is much, *much* smoother. It's like liquid, it's crazy smooth :p (again, playing L4D2)

The input lag is undeniable. I mean, we agree here: responsiveness goes down, and lots of gamers understand the tradeoff with "input lag".

My question is: if we cannot perceive more than 60fps on a 60Hz monitor, why do we notice the delay in mouse movements? If we're capped at 60fps with vsync and 100+ fps feels more responsive, how can that be?

I think you'll find we can see a benefit from more than 60fps on a display that can only refresh at 60Hz. Just like with motion blur in traditional shutter-based cameras, we can capture information over a period of time and our brains perceive this. It's not the same mechanism and the results are slightly different, but fundamentally the principle of getting more information into one image (refresh/frame) is clearly possible.

You dispute my claim, but do you offer any other reason why we perceive input lag on vsynced output? Surely if I'm wrong there is some other mechanism causing us to feel this difference?
 
I always prefer having V-sync/triple buffering on. I just can't stand screen tearing, it totally ruins the game for me.
 
You're wrong, I don't see tearing at all and I own a 120 Hz LCD.

Tearing occurs at all frame rates, just in different amounts. If your FPS is less than your refresh rate then you won't tear every frame; if your FPS and refresh rate are approximately equal then you'll tear roughly every frame; and if your FPS is higher than your refresh rate you'll start tearing more than once per refresh.
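
As a rough illustration of that relationship, here is a tiny estimate of expected tear lines per refresh, under the simplifying assumption that frames complete at a perfectly steady rate (real frame pacing is never that clean):

// Rough estimate of tear lines per refresh, assuming perfectly even frame pacing
// (real frame times jitter, so these are averages at best).
#include <cstdio>

int main() {
    const double refresh_hz = 60.0;
    const double fps_values[] = {45.0, 60.0, 120.0, 300.0};

    for (double fps : fps_values) {
        // Roughly speaking, every buffer flip that lands during scanout produces one tear
        // line, so you expect about fps / refresh flips (tears) per refresh on average.
        double tears_per_refresh = fps / refresh_hz;
        std::printf("%6.0f fps on a %2.0f Hz panel -> ~%.2f tear lines per refresh\n",
                    fps, refresh_hz, tears_per_refresh);
    }
}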
 
I always prefer having V-sync/triple buffering on. I just can't stand screen tearing, it totally ruins the game for me.

It depends on the game for me. There are games in which it's much more noticeable. Sometimes having V-sync on makes the game feel "less responsive", which bothers me almost as much as tearing does.
 
I see no tearing with my monitor with any game...Metro, BC2, MOH and so on.
 
I see no tearing with my monitor with any game...Metro, BC2, MOH and so on.
Your monitor isn't magic and you do get tearing with vsync off. With a 120Hz monitor the noticeable tearing is probably greatly reduced, though.
 
I suspect that the data isn't sent in one large packet; rather, the monitor reads the data in chunks straight from the front buffer, each chunk being very small, probably just one horizontal row of pixels. I don't see how this would lower tearing.

I believe what he originally meant was: imagine if the monitor had its own buffer. When the monitor wants to do a refresh, it asks for the buffer on the PC side. If that data can be sent faster than the refresh time (just for kicks, let's say 4ms), then tearing is going to be less likely, because it no longer matters how long the monitor takes to draw it - it is no longer drawing from the video card's buffers but from its own buffer. On the PC side there would then effectively be a 4ms tear window followed by a 12ms window in which the buffers can be swapped without tearing.

It doesn't actually work like this, but potentially it could.
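
Just to put illustrative numbers on that hypothetical scheme (the 4ms burst time is invented, as above):

// Sketch of the hypothetical "burst into a monitor-side buffer" scheme discussed above.
// The 4 ms transfer time is an invented number purely for illustration.
#include <cstdio>

int main() {
    const double refresh_period_ms = 1000.0 / 60.0;  // ~16.7 ms per refresh
    const double burst_transfer_ms = 4.0;            // hypothetical fast transfer into the monitor

    // While the burst is in flight the PC-side front buffer must not flip (tear risk);
    // once the monitor has its own copy, the PC is free to flip for the rest of the period.
    double tear_window_ms = burst_transfer_ms;
    double safe_flip_window_ms = refresh_period_ms - burst_transfer_ms;

    std::printf("refresh period   : %.1f ms\n", refresh_period_ms);
    std::printf("tear-risk window : %.1f ms\n", tear_window_ms);
    std::printf("safe flip window : %.1f ms\n", safe_flip_window_ms);
}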

No, it's not flawed; you're not paying attention. I'm not saying the effect is literally motion blur, I'm drawing a comparison between traits that tearing and motion blur have in common.

Motion blur (in traditional film) is a result of long shutter speeds: light is captured over a period of time rather than in an instant snapshot, which means the information captured in one frame is more than just one snapshot in time. This is comparable to (not the same as) tearing, where we essentially have information from multiple points in time over the course of that frame.

http://img195.imageshack.us/img195/2369/motionblur.jpg

See this picture from 100fps.com, which shows motion blur: we can infer the direction of the drumstick in a single frame because the frame was captured over a length of time.

Because we're (potentially) rendering many frames in the span of one refresh, we end up with one refresh made up of many samples over a period of time, which means that, just like with motion blur, we're capturing more information in that one refresh than is available from just one frame (one instantaneous snapshot in time).

Yes, but what I'm saying is that the difference (lack of blur) is the important part. It is the blur over the *entire* image that makes it appear smooth to our eyes, NOT the multiple times part. I'm saying the similarities are irrelevant because it is the core difference that actually matters to our eyes.

The input lag is undeniable. I mean, we agree here: responsiveness goes down, and lots of gamers understand the tradeoff with "input lag".

I agree that there is input lag, but to me it isn't significant. I honestly can't tell a difference in input lag between vsync on and off, I just don't notice it. Possibly because the rest of my system has less than average input lag to begin with.

My question is: if we cannot perceive more than 60fps on a 60Hz monitor, why do we notice the delay in mouse movements? If we're capped at 60fps with vsync and 100+ fps feels more responsive, how can that be?

Because of how vsync and games work. The input lag has nothing to do with our eyes perceiving multiple instants of time; it comes from extra delays added into the process as a result of vsync. And really, it isn't vsync itself adding the input lag, but rather the triple buffering used to solve the FPS problem vsync introduces. To fix the problem of rapidly stuttering framerates under vsync, we use triple buffering. Triple buffering results in a *minimum* additional input lag of one frame (so, 16ms). THAT is where the input lag problem comes from. This is on top of the one-frame input lag that is always going to be there (games update your movement and then draw the scene, so you will always have at least the time it takes to draw the scene as input lag).

All the various lags begin to add up to something we can perceive. For some, the additional lag added by triple buffering will result in something they can perceive. For others, the extra lag won't be noticeable.

You seem to notice input lag with vsync + triple buffering. I assume you are gaming on an LCD. If so, if you switch to an LCD with 16ms lower input lag, you would then be able to enable vsync + triple buffering and have a net difference of 0. It isn't that we can perceive 16ms of input lag (and anyone who says they can is either a super hero or lying), but that the entire chain has lag throughout it. Making one part worse can result in input lag becoming noticeable, but you can also reduce lag in other areas to compensate.
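
To illustrate the "whole chain" point with some made-up but plausible numbers (none of these are measurements):

// Rough input-lag budget under illustrative numbers; every figure here is an example,
// not a measurement, and real chains vary widely.
#include <cstdio>

int main() {
    const double frame_ms            = 1000.0 / 60.0;  // one 60 Hz refresh
    const double game_sim_to_draw_ms = frame_ms;        // input sampled, then the frame is rendered
    const double render_ahead_ms     = frame_ms;        // extra queued frame when buffering ahead
    const double display_lag_ms      = 16.0;            // example LCD processing lag

    double total = game_sim_to_draw_ms + render_ahead_ms + display_lag_ms;
    std::printf("baseline (sim->draw) : %.1f ms\n", game_sim_to_draw_ms);
    std::printf("extra buffered frame : %.1f ms\n", render_ahead_ms);
    std::printf("display processing   : %.1f ms\n", display_lag_ms);
    std::printf("total chain          : %.1f ms\n", total);
    // Swapping to a display with ~16 ms less lag roughly cancels the extra buffered
    // frame, which is the trade-off the post above is describing.
}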
 
It doesn't actually work like this, but potentially it could.

OK, well... I don't really see how this is relevant.

Yes, but what I'm saying is that the difference (lack of blur) is the important part. It is the blur over the *entire* image that makes it appear smooth to our eyes, NOT the multiple times part. I'm saying the similarities are irrelevant because it is the core difference that actually matters to our eyes.

The purpose of the analogy was to illustrate the fact that we can store more information in one refresh than is available in just one frame. Again, I am not equating blurring and tearing; that's not what analogies are for.

If the basic principle of storing more information when it's captured over time (or at multiple points in time) is valid, then the argument that "having more than 60fps is useless" isn't necessarily true.

I agree that there is input lag, but to me it isn't significant. I honestly can't tell a difference in input lag between vsync on and off, I just don't notice it. Possibly because the rest of my system has less than average input lag to begin with.

I have a pretty responsive system: a Q9450 @ 3.6GHz and a 5970 @ 950/1200, which pushes games like CSS to 300+fps even on my very high settings. It also has Razer hardware, which by default pushes the USB ports to their maximum 1000Hz polling rate, and while I do run an LCD it has no significant electronics inside it, no scaler or anything like that, so there is very little display latency.

The significance of the lag isn't really what's in question here; some people feel it more than others, and that's understandable.

Because of how vsync and games work. The input lag has nothing to do with our eyes perceiving multiple instants of time; it comes from extra delays added into the process as a result of vsync. And really, it isn't vsync itself adding the input lag, but rather the triple buffering used to solve the FPS problem vsync introduces.

But it doesn't matter if vsync adds a delay. I don't happen to think it does, but for the sake of argument I'll entertain the idea. It doesn't matter because when using double buffering and getting 60fps there can be no delay in rendering: if you're seeing 60 unique frames per second then you're maxing out your monitor's refresh rate, so you know each frame must have been drawn within the past 1/60th of a second.

Triple buffering is another thing entirely, but I'm not using triple buffering, and most of us aren't. There is no easy way to turn it on in DX games without installing third-party tools etc. Normal vsync just uses double buffering.

You seem to notice input lag with vsync + triple buffering. I assume you are gaming on an LCD. If so, if you switch to an LCD with 16ms lower input lag, you would then be able to enable vsync + triple buffering and have a net difference of 0. It isn't that we can perceive 16ms of input lag (and anyone who says they can is either a super hero or lying), but that the entire chain has lag throughout it. Making one part worse can result in input lag becoming noticeable, but you can also reduce lag in other areas to compensate.

This is all rather moot by this point. Sure, there are other delays and those add up, and some people can notice this and some can't. I agree with all of that; it doesn't invalidate what I'm saying.

My point from the beginning is that I disagree with claims that having more than 60fps is a waste; we can definitely see an improvement in both fluidity and responsiveness in our games when we run at more than 60fps. If we can only draw 60 refreshes per second then how can this be? There has to be some mechanism that allows us to perceive 300fps as more responsive than 60fps.

That mechanism is tearing, and we can see it at work when we enable vsync and effectively kill tearing while maintaining the same number, and the same frequency, of images being displayed by the screen.
 
I agree that there is input lag, but to me it isn't significant. I honestly can't tell a difference in input lag between vsync on and off, I just don't notice it. Possibly because the rest of my system has less than average input lag to begin with.

Because of how vsync and games work. The input lag has nothing to do with our eyes perceiving multiple instants of time; it comes from extra delays added into the process as a result of vsync. And really, it isn't vsync itself adding the input lag, but rather the triple buffering used to solve the FPS problem vsync introduces. To fix the problem of rapidly stuttering framerates under vsync, we use triple buffering. Triple buffering results in a *minimum* additional input lag of one frame (so, 16ms). THAT is where the input lag problem comes from. This is on top of the one-frame input lag that is always going to be there (games update your movement and then draw the scene, so you will always have at least the time it takes to draw the scene as input lag).

All the various lags begin to add up to something we can perceive. For some, the additional lag added by triple buffering will result in something they can perceive. For others, the extra lag won't be noticeable.

You seem to notice input lag with vsync + triple buffering. I assume you are gaming on an LCD. If so, if you switch to an LCD with 16ms lower input lag, you would then be able to enable vsync + triple buffering and have a net difference of 0. It isn't that we can perceive 16ms of input lag (and anyone who says they can is either a super hero or lying), but that the entire chain has lag throughout it. Making one part worse can result in input lag becoming noticeable, but you can also reduce lag in other areas to compensate.


No, you don't have to be superhuman to notice input lag. If you google mouse lag with vsync you will see a lot of people with the problem. Maybe people who don't notice just don't have fast reactions?

This has been a major talking point in FPS games for years: vsync and mouse lag. Of nearly all the people I have asked about it (on CS: Source, friends on Steam, etc.), the ones playing with vsync on to stop tearing who don't notice the mouse lag tended to be on the lower half of the scoreboard all the time.

So maybe it's just like phosphor trails on plasma TVs: some people are just more susceptible to seeing them, and some people can't see them at all.

As for your argument about playing on an LCD, well, I noticed it even when I played on a CRT running at 100Hz at 1024x768 with triple buffering on. I currently play on a Pioneer LX5090 with a lightning-fast response time and I can still notice the input lag.

So, for me, I notice input lag with vsync on. It's very, very smooth, I can't argue there, but it's like playing in molasses: I always seem to be waiting for the mouse cursor to catch up to my actual mouse movements. It sucks and makes FPS games completely unplayable.
 
No, you don't have to be superhuman to notice input lag. If you google mouse lag with vsync you will see a lot of people with the problem. Maybe people who don't notice just don't have fast reactions?

No, you misunderstood. I was saying that just the delay from vsync is unnoticeable in and of itself. There are many delays between your mouse and your screen, which all add up. Human reaction times are in the neighborhood of 130-200ms - we are damn slow.

As for your argument about playing on an LCD, well, I noticed it even when I played on a CRT running at 100Hz at 1024x768 with triple buffering on.

Triple buffering can add a significant delay. Triple buffering actually adds more of a delay than a low input lag LCD does. You can easily find LCDs with input lag in the 10ms or lower range - well below the lag from triple buffering.

The purpose of the analogy was to illustrate the fact that we can store more information in one refresh than is available in just one frame. Again, I am not equating blurring and tearing; that's not what analogies are for.

But you aren't showing more information, it is the exact same amount of information, just with parts of the screen occupying different instants of time - and I dispute that your brain is able to make that distinction.

If the basic principle of storing more information when it's captured over time (or at multiple points in time) is valid, then the argument that "having more than 60fps is useless" isn't necessarily true.

But it isn't over time at all. A camera captures a slice of time, a game doesn't. Even with tearing, you still aren't capturing a slice of time at all - just parts of several instants.

But it doesn't matter if vsync adds a delay. I don't happen to think it does, but for the sake of argument I'll entertain the idea. It doesn't matter because when using double buffering and getting 60fps there can be no delay in rendering: if you're seeing 60 unique frames per second then you're maxing out your monitor's refresh rate, so you know each frame must have been drawn within the past 1/60th of a second.

In which case vsync will add no input lag whatsoever, right up until it misses a refresh, at which point it has to wait another frame before it can process input data.

Triple buffering is another thing entirely, but I'm not using triple buffering, and most of us aren't. There is no easy way to turn it on in DX games without installing third-party tools etc. Normal vsync just uses double buffering.

Unless you are enabling vsync in game, and that game is smart enough to enable triple buffering at the same time.

My point from the beginning is that I disagree with claims that having more than 60fps is a waste; we can definitely see an improvement in both fluidity and responsiveness in our games when we run at more than 60fps. If we can only draw 60 refreshes per second then how can this be? There has to be some mechanism that allows us to perceive 300fps as more responsive than 60fps.

No, there is only an improvement in responsiveness. There is a *loss* of smoothness with vsync off.
 
Tearing occurs at all frame rates, just in different amounts. If your FPS is less than your refresh rate then you won't tear every frame; if your FPS and refresh rate are approximately equal then you'll tear roughly every frame; and if your FPS is higher than your refresh rate you'll start tearing more than once per refresh.

PrincessFrosty, correct me if I'm wrong. When Vsync is enabled, game fps _cannot_ go above the refresh rate, is that correct?


Let me know if I misunderstood it.
 
PrincessFrosty, correct me if I'm wrong. When Vsync is enabled, game fps _cannot_ go above the refresh rate, is that correct?


Let me know if I misunderstood it.

That is correct. By the nature of how VSYNC works, your FPS cannot exceed your monitor's refresh rate.
 
OK, well, let's just get right down to it. Ignoring everything else, which is mostly a semantics argument at this point, we have both agreed that there is a drop in responsiveness.

How do we explain 300fps being more responsive than being vsynced to 60fps if our monitor can only display 60Hz? You reject my claim that a torn frame can store information that is greater than the sum of its parts, so in what other possible way could we perceive greater responsiveness?

It's not triple buffering, because I'm not using that.
It's not other latencies, because I'm using the same hardware for each test.
It cannot be latencies involved with vsync: if our frame rate is maxed at our refresh rate, then by definition the frame we're seeing has to have been drawn in the previous 1/60th of a second.

PrincessFrosty, correct me if I'm wrong. When Vsync is enabled, game fps _cannot_ go above the refresh rate, is that correct?

Let me know if I misunderstood it.

Correct, vsync stops the video card from flipping the buffers until your monitor has finished drawing the previous frame. In such a system it's impossible for your frame rate to exceed your monitor's refresh rate.
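
A minimal sketch of that behaviour in plain C++ (pseudo-timing only, no real graphics API): the present blocks until the next simulated vertical blank, so the loop can never outrun the refresh rate.

// Minimal sketch of a double-buffered vsync'd render loop (pseudo-timing, no real API):
// the flip blocks until the next vertical blank, so the loop can never complete more
// iterations per second than the panel has refreshes.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto refresh_period = std::chrono::microseconds(16667);   // 60 Hz
    auto next_vblank = clock::now() + refresh_period;

    int frames = 0;
    const auto start = clock::now();
    while (clock::now() - start < std::chrono::seconds(1)) {
        // render_into_back_buffer();  // hypothetical; assume it finishes well under 16.7 ms
        std::this_thread::sleep_until(next_vblank);   // vsync: wait for the blank, then flip
        next_vblank += refresh_period;
        // swap_buffers();             // hypothetical flip during the blanking interval
        ++frames;
    }
    std::printf("frames presented in one second: %d (capped at the refresh rate)\n", frames);
}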
 
Download Fraps, install it and set up the FPS counter in your games. Now play a first-person shooter on low settings so the frame rate is very high; above 100fps will help you see the effect. Notice how smooth that is, not only how smooth it looks but how smooth it feels, how responsive the mouse is when you move it left/right.

Now turn on vsync and make sure you're constantly capped to 60fps, and see how smooth that is... not as smooth, huh? Definitely not as responsive; in fact, many people here already acknowledge that there is "input lag", and most of you should be able to feel this delay.

Now what is the difference here? One is a scene displayed at 60Hz where each refresh is many frames from a period of time stitched together into one large frame, and the other is 60fps exactly synced with the refresh rate.

Both are displaying at 60Hz, 60 evenly spaced refreshes. If tearing doesn't add more information to the scene, then how do we explain that one is not only smoother but clearly more responsive as well?

Thank you! I am glad I'm not the only one who can understand/experience this. People can argue vSync all day long; at the end of the day (especially in an FPS), with vSync on "input lag" does occur. Moving the mouse to look left/right is a perfect example, it feels sluggish with vSync on. Sure you can't "see" that 100+ FPS, but you can feel it.

Of course this is more prevalent in FPS games than any other genre, due to the precision involved in taking aim at a target.
 
Triple buffering can add a significant delay. Triple buffering actually adds more of a delay than a low input lag LCD does. You can easily find LCDs with input lag in the 10ms or lower range - well below the lag from triple buffering.

With this comment, I actually decided to go back and read some of your posts more closely. You don't really have a clue how triple buffering works, do you? Triple buffering is designed to reduce the problem of input lag and to keep your framerate higher than 60 (or whatever your monitor's refresh rate is), so that with vsync enabled, frames that take longer than 16ms to draw don't drop you down to running at 30fps.

Because of how vsync and games work. The input lag has nothing to do with our eyes perceiving multiple instants of time; it comes from extra delays added into the process as a result of vsync. And really, it isn't vsync itself adding the input lag, but rather the triple buffering used to solve the FPS problem vsync introduces. To fix the problem of rapidly stuttering framerates under vsync, we use triple buffering. Triple buffering results in a *minimum* additional input lag of one frame (so, 16ms). THAT is where the input lag problem comes from. This is on top of the one-frame input lag that is always going to be there (games update your movement and then draw the scene, so you will always have at least the time it takes to draw the scene as input lag).

Again, triple buffering does not work like this; it reduces lag. With vsync off there are two buffers, the front and the back buffer: the front buffer is sent to the screen, the back buffer is swapped to become the front buffer, and the new back buffer is then drawn with the new data. But if the swap happens while the old front buffer is still being sent to the screen, this can cause tearing if there are large enough differences between the old front buffer and the new one.

To combat this tearing, vsync was introduced. This means that the front buffer is locked until it has finished being drawn on the screen. Then the back buffer and front buffer are swapped and the new back buffer begins being drawn. The problem is that this causes lag, because if something happens it can't be shown until the next swap of the buffers. It also causes frame rate drops if a frame takes longer than 16ms to draw.

Now, triple buffering is a solution that tries to address these two problems. It has two back buffers and one front buffer. The front buffer is still locked and can't be changed until it has finished being drawn on the screen, but now the card can produce frames as fast as it likes, because it has two back buffers and can promote the most up-to-date one to the front buffer at each refresh.

There is no additional lag introduced by triple buffering; the frame rate lag you are talking about happens with vsync on and triple buffering off.

Well, that's the theory of triple buffering anyway, but for me, even with it on I notice the input lag, and it's too noticeable for me to play FPS games with.
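
For what it's worth, here is a small sketch (plain C++, buffer indices only, no real API) of the page-flip scheme described above, where the newest completed back buffer is the one promoted at each vblank and older pending frames are simply dropped. As the reply below points out, this is not what DirectX's render-ahead actually does; it is only the "classic" variant being described here.

// Sketch of page-flip style triple buffering as described above: the GPU keeps rendering,
// older pending frames get overwritten, and the newest completed frame is shown at each
// vblank. Buffer ids and the tick/vblank timing are invented for illustration only.
#include <cstdio>

int main() {
    // "front" is being scanned out, "drawing" is being rendered into,
    // "spare" is either free or holds the newest completed-but-not-yet-shown frame.
    int front = 0, drawing = 1, spare = 2;
    bool spare_has_new_frame = false;
    int frame_counter = 0;

    for (int tick = 0; tick < 12; ++tick) {
        // GPU finishes a frame on every tick (i.e. faster than the refresh rate).
        ++frame_counter;
        std::printf("tick %2d: finished frame %d in buffer %d\n", tick, frame_counter, drawing);
        int finished = drawing;
        drawing = spare;            // reuse the old pending/spare buffer for the next frame,
        spare = finished;           // so the previous pending frame is simply dropped (no queue)
        spare_has_new_frame = true;

        if (tick % 3 == 2) {        // pretend every third tick is a vertical blank
            if (spare_has_new_frame) {
                int old_front = front;
                front = spare;      // present the newest completed frame
                spare = old_front;  // the old front buffer becomes spare
                spare_has_new_frame = false;
            }
            std::printf("        vblank: scanning out buffer %d\n", front);
        }
    }
}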
 
OK, well, let's just get right down to it. Ignoring everything else, which is mostly a semantics argument at this point, we have both agreed that there is a drop in responsiveness.

How do we explain 300fps being more responsive than being vsynced to 60fps if our monitor can only display 60Hz? You reject my claim that a torn frame can store information that is greater than the sum of its parts, so in what other possible way could we perceive greater responsiveness?

It's not triple buffering, because I'm not using that.
It's not other latencies, because I'm using the same hardware for each test.
It cannot be latencies involved with vsync: if our frame rate is maxed at our refresh rate, then by definition the frame we're seeing has to have been drawn in the previous 1/60th of a second.

Because you see the tear. If your FPS is higher than your refresh rate, when you move your mouse you will get feedback from that action quicker, in the form of a tear. Tears are fairly large visual artifacts that jump out. That is NOT the same as smoothness, nor does that mean that you are actually able to process and benefit from newer information, just that you see *something* happen very quickly from your action, but that doesn't mean you understand *what* happened.

With this comment, I actually decided to go back and read some of your posts more closely. You don't really have a clue how triple buffering works, do you? Triple buffering is designed to reduce the problem of input lag and to keep your framerate higher than 60 (or whatever your monitor's refresh rate is), so that with vsync enabled, frames that take longer than 16ms to draw don't drop you down to running at 30fps.



Again, triple buffering does not work like this; it reduces lag. With vsync off there are two buffers, the front and the back buffer: the front buffer is sent to the screen, the back buffer is swapped to become the front buffer, and the new back buffer is then drawn with the new data. But if the swap happens while the old front buffer is still being sent to the screen, this can cause tearing if there are large enough differences between the old front buffer and the new one.

To combat this tearing, vsync was introduced. This means that the front buffer is locked until it has finished being drawn on the screen. Then the back buffer and front buffer are swapped and the new back buffer begins being drawn. The problem is that this causes lag, because if something happens it can't be shown until the next swap of the buffers. It also causes frame rate drops if a frame takes longer than 16ms to draw.

Now, triple buffering is a solution that tries to address these two problems. It has two back buffers and one front buffer. The front buffer is still locked and can't be changed until it has finished being drawn on the screen, but now the card can produce frames as fast as it likes, because it has two back buffers and can promote the most up-to-date one to the front buffer at each refresh.

There is no additional lag introduced by triple buffering; the frame rate lag you are talking about happens with vsync on and triple buffering off.

Well, that's the theory of triple buffering anyway, but for me, even with it on I notice the input lag, and it's too noticeable for me to play FPS games with.

No, not at all. In page-flipping triple buffering (what you described), input lag will be reduced if your FPS is much higher than your refresh rate, yes, but unfortunately that isn't the method that is used. DirectX's triple buffering isn't page flipping but render-ahead (a queue of completed frames). Completed frames are shown in the order they were completed, regardless of whether or not there is a newer completed frame to be shown. That results in *INCREASED* input lag if your FPS is higher than your refresh rate - at least one frame, depending on how big the render-ahead queue actually is (from 0 to 8 frames, with 3 being the default). So the default triple buffering in DirectX combined with a higher FPS than refresh rate can easily result in ~32ms of additional input lag. And some games actually use a larger render-ahead queue, adding even more input lag if your FPS exceeds your refresh rate.

Also, even with page-flipping triple buffering, input lag is only reduced if your FPS is significantly faster than your refresh rate. Your FPS would need to average at least twice your refresh rate to get consistently lower input lag, actually. If your FPS is less than that but still exceeds your refresh rate, then input lag will bounce between being the same as double buffering + vsync and being the same as double buffering without vsync.
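
Some rough arithmetic for the render-ahead case being described (queue depths and the one-refresh-per-queued-frame figure are illustrative, and exactly how much counts as "extra" depends on the baseline you compare against):

// Rough arithmetic for the render-ahead ("flip queue") case: completed frames wait in a
// FIFO, so once the GPU outruns the refresh rate each frame sitting in the queue costs
// roughly one refresh of waiting. Queue depths here are examples, not measurements.
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;   // one 60 Hz refresh, ~16.7 ms
    for (int queued_frames : {1, 2, 3}) {
        std::printf("%d queued frame(s) -> ~%.0f ms spent waiting in the queue\n",
                    queued_frames, queued_frames * refresh_ms);
    }
}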
 
What he's saying does make perfect sense. For the sake of an easy example, let's say you're rendering at a steady 600FPS to a monitor running at 60Hz.

Each displayed frame will be a composite of the last 10 frames rendered by the graphics card, meaning you will see 600 (partial) frames every second, but you're only seeing a one-tenth slice of each of those frames. This is the effect you end up with while panning your camera without vertical sync (Note, the tearing here is exaggerated):




Keeping the above example (600FPS going to a 60Hz monitor): if you enable v-sync, you get only every 10th frame delivered to the monitor, but it's the whole frame (not a composite). The other 9 frames are effectively thrown away. This means no visible tears in the image, but it also means you don't see what might have happened in one of those 9 other slices during that 1/60th of a second.

What I'm wondering is, why don't they simply implement Frame Blending? Instead of tossing out the extra 9 frames, blend them into the 10th frame. You could effectively get 10-sample motion blur FOR FREE using this method, like so (Note, the blur is also exaggerated):



Best part is, since this is still a composite (blended instead of mosaic), you still get to see information from all 10 frames in the final frame, and there's NO TEARING.
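
A toy CPU sketch of the proposed blend (a real implementation would presumably be a per-pixel shader pass; the pixel values and frame counts here are made up):

// CPU sketch of the proposed frame blending: instead of discarding the extra frames
// rendered during one refresh interval, average them into the frame that gets shown.
// Real implementations would do this per pixel on the GPU; values here are invented.
#include <cstdio>
#include <vector>

int main() {
    const int pixels = 4;                         // a tiny "image" of 4 pixels
    const int frames_per_refresh = 10;            // e.g. 600 fps feeding a 60 Hz panel

    std::vector<float> blended(pixels, 0.0f);
    for (int f = 0; f < frames_per_refresh; ++f) {
        // Pretend each intermediate frame shifts a bright object one pixel to the right
        // (wrapping around this tiny image).
        std::vector<float> frame(pixels, 0.0f);
        frame[f % pixels] = 1.0f;
        for (int p = 0; p < pixels; ++p)
            blended[p] += frame[p] / frames_per_refresh;   // equal-weight accumulation
    }

    // The output frame contains a smear across every position the object occupied,
    // i.e. motion blur built from information that plain vsync would have thrown away.
    for (int p = 0; p < pixels; ++p)
        std::printf("pixel %d: %.2f\n", p, blended[p]);
}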
 
DirectX's triple buffering isn't page flipping but render-ahead (a queue of completed frames). Completed frames are shown in the order they were completed, regardless of whether or not there is a newer completed frame to be shown. That results in *INCREASED* input lag if your FPS is higher than your refresh rate.

Does this mean that if you use V-sync with triple buffering and keep your FPS below your refresh rate, you can get a smooth experience without the input lag?
 
Does this mean that if you use V-sync with triple buffering and keep your FPS below your refresh rate, you can get a smooth experience without the input lag?

You'll still get the input lag from vsync, but you won't get the fps stutter problem.

What I'm wondering is, why don't they simply implement Frame Blending? Instead of tossing out the extra 9 frames, blend them into the 10th frame. You could effectively get 10-sample motion blur FOR FREE using this method, like so (Note, the blur is also exaggerated):

There are a couple of reasons.

1) Input lag isn't even close to a problem or noticeable for 99% of gamers. Hence why so many people can enjoy gaming on a console, where games usually run at 30fps with vsync on, on TVs that have crazy input lag. Thus, vsync + render-ahead is perfectly acceptable.

2) Games typically don't render at 600fps ;) Once you start getting into the low 100s too many bottlenecks emerge.

3) What you propose would be rather expensive and hard to implement, and doesn't help the input lag problem at all. 60fps with vsync on is perfectly smooth, so the extra motion blur is rather wasted.
 
You'll still get the input lag from vsync, but you won't get the fps stutter problem.



There are a couple of reasons.

1) Input lag isn't even close to a problem or noticeable for 99% of gamers. Hence why so many people can enjoy gaming on a console, where games usually run at 30fps with vsync on, on TVs that have crazy input lag. Thus, vsync + render-ahead is perfectly acceptable.

2) Games typically don't render at 600fps ;) Once you start getting into the low 100s too many bottlenecks emerge.

3) What you propose would be rather expensive and hard to implement, and doesn't help the input lag problem at all. 60fps with vsync on is perfectly smooth, so the extra motion blur is rather wasted.
Most PS3/360 console games certainly don't have vsync on, from what I have seen. I even watch those head-to-head comparisons over on lensoftruth.com, and they always say which system had more tearing in a given game.
 
Thanks for the info. But I don't understand why the FPS you can get with double-buffered vsync would be refresh rate / N (for an integer N)...
 
Also, is there a way to adjust the refresh rate on an LCD monitor? My Dell S2309W says it is at 60Hz; is that the only level it can be at? It seems quite low for a good monitor.
 
1) Input lag isn't even close to a problem or noticeable for 99% of gamers. Hence why so many people can enjoy gaming on a console, where games usually run at 30fps with vsync on, on TVs that have crazy input lag. Thus, vsync + render-ahead is perfectly acceptable.
You're forgetting the fact that gamepad input (used with consoles) masks input lag BIG TIME. That's one of the major reasons you don't see a lot of keyboard/mouse support for consoles, because people would really start to notice the problem.

2) Games typically don't render at 600fps ;) Once you start getting into the low 100s too many bottlenecks emerge.
Not a problem, you can still frame blend with only 100FPS, you just get 1.6 samples instead of 10 samples. You still eliminate tearing, you still get a nicely blended image, and you still see information from the "tween" frames that would otherwise be discarded.

3) What you propose would be rather expensive and hard to implement, and doesn't help the input lag problem at all. 60fps with vsync on is perfectly smooth, so the extra motion blur is rather wasted.
Actually, it's not difficult to implement at all. Failing a driver-side implementation, it's very easy to write a post-processing pixel shader that would handle this.

It could also, in fact, be used to eliminate any input lag you get from normal v-sync. Like I said before, it's still a composite image (just like without v-sync), except this is a blended composite instead of a mosaic composite.

It's true that if you waited until the last frame to do the blending, you'd have the same input lag as v-sync (though with much smoother-looking movement). The trick is, you don't have to wait for all the frames before you start displaying output.

* Finish Frame 1, start sending it to the monitor.
* Finish Frame 2 before Frame 1 finishes drawing.
. . - Leave what's already drawn of Frame 1 on the monitor alone.
. . - Blend Frames 1 and 2 and continue drawing.
* Frame 3 finishes before the Frame 1+2 composite finishes drawing.
. . - Leave what's already drawn of the Frame 1 / Frame 1+2 composite alone.
. . - Blend the Frame 1+2 composite with Frame 3.

And so on and so forth. The end result is blended tearing with partial motion blur.
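
Here's a rough sketch of that incremental idea in plain C++ (row counts, blend weights and frame timing are all invented): rows already sent to the monitor are left alone, and each newly finished frame is blended only into the rows that haven't been scanned out yet.

// Sketch of the incremental variant described in the steps above: rows already sent to
// the monitor stay untouched, and each newly finished frame is blended only into the
// rows that have not been scanned out yet. All numbers are invented for illustration.
#include <cstdio>
#include <vector>

int main() {
    const int rows = 12;
    std::vector<float> composite(rows, 1.0f);   // start with frame 1 everywhere
    int scanned_up_to = 0;                      // rows [0, scanned_up_to) are already on screen

    for (int new_frame = 2; new_frame <= 4; ++new_frame) {
        scanned_up_to += 4;                     // pretend 4 more rows were scanned out meanwhile
        for (int r = scanned_up_to; r < rows; ++r)
            composite[r] = 0.5f * composite[r] + 0.5f * new_frame;  // blend instead of replacing
    }

    // Instead of a hard tear line, the lower rows hold a weighted mix of the newer frames.
    for (int r = 0; r < rows; ++r)
        std::printf("row %2d: %.2f\n", r, composite[r]);
}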



You have none of the input lag from v-sync since you aren't waiting for the last frame, the tearing isn't as noticeable since it's blended, you aren't tossing out any information from the tween-frames, AND you get some motion blur out of it.

The tears won't always happen in the same place, and since each image only lasts 1/60th of a second, it should all blend together nicely when in motion.

60fps with vsync on is perfectly smooth, so the extra motion blur is rather wasted.
That is your opinion and/or a failing of your own eyesight. I can clearly tell the difference between 120FPS content and 60FPS content on a 120Hz screen (hint: the 120FPS content looks smoother). Tween-frame motion blur, using either of the methods explained above (full frame or blended mosaic), would really help out at 60Hz / 60FPS.
 
You're forgetting the fact that gamepad input (used with consoles) masks input lag BIG TIME. That's one of the major reasons you don't see a lot of keyboard/mouse support for consoles, because people would really start to notice the problem.

How do you figure? Wired mouse/keyboard input lag is extremely small, whereas console controllers use wireless. If anything, gamepad input will have *increased* input lag.

Not a problem, you can still frame blend with only 100FPS, you just get 1.6 samples instead of 10 samples.

So, what, the bottom half will be blurred and the top sharp? That would... suck.

Actually, it's not difficult to implement at all. Failing a driver-side implementation, it's very easy to write a post-processing pixel shader that would handle this.

And games already do motion blurring on their own.

That is your opinion and/or a failing of your own eyesight. I can clearly tell the difference between 120FPS content and 60FPS content on a 120Hz screen (hint: the 120FPS content looks smoother). Tween-frame motion blur, using either of the methods explained above (full frame or blended mosaic), would really help out at 60Hz / 60FPS.

Sorry, but I'm not buying that. People make all sorts of claims that they can hear/see differences between things, which blind studies then find they can't.
 
How do you figure? Wired mouse/keyboard input lag is extremely small, whereas console controllers use wireless. If anything, gamepad input will have *increased* input lag.
The analog stick control input system masks input lag. You'll feel input lag far less with a gamepad than with a mouse.

That's a big reason why consoles stick to gamepads. It's also why OnLive used a gamepad for their streaming gaming service demonstrations (to mask the input lag).


So, what, the bottom half will be blurred and the top sharp? That would... suck.
You don't seem to understand that this effect would only show up while things are MOVING on screen. There are a lot of variables that help out the effect while it's in motion.

The frame blending will tend to be more prevalent near the bottom of the screen, but the point at which the screen tears dictates which parts of the screen get more samples. The screen will not tear in the same place from one frame to the next, which, when combined with the tween-frame blending applied to each individual frame, will smooth the tears right out of the image.

The difference between the frames that make up each blended composite should be fairly small since they're all occurring within 1/60th of a second, which means the motion blur will be fairly subtle. Just enough to smooth out the tearing while adding some additional motion smoothness.

And games already do motion blurring on their own.
Motion blur in most games is done after-the-fact with a pixel shader using a directional blur. Using actual tween-frames would be superior (but only works when your framerate is higher than your refresh rate), while in this case also helping to eliminate tearing without the pitfalls of v-sync.

Sorry, but I'm not buying that. People make all sorts of claims that they can hear/see differences between things, which blind studies then find they can't.
First of all, I'd love to see you actually link to such a study. Everything I've read shows that the response time of human vision is contrast dependent. The larger the difference between two images, the faster you can flip between the two and still tell that you're flickering between two different images. The human eye was able to pick up a flicker at something astonishing like 250Hz when flickering between Black and Bright Green.

Second, if the human eye really weren't able to perceive much beyond 60Hz, then persistence of vision would make tearing invisible on a 120Hz display. This is not the case.
 
It seems like a lot of these details are making the issue more complicated than it really is. It's not that complicated to try it on/off/with triple buffering, and decide for yourself which option you like the best.
 
No, not at all. In page-flipping triple buffering (what you described), input lag will be reduced if your FPS is much higher than your refresh rate, yes, but unfortunately that isn't the method that is used. DirectX's triple buffering isn't page flipping but render-ahead (a queue of completed frames). Completed frames are shown in the order they were completed, regardless of whether or not there is a newer completed frame to be shown. That results in *INCREASED* input lag if your FPS is higher than your refresh rate - at least one frame, depending on how big the render-ahead queue actually is (from 0 to 8 frames, with 3 being the default). So the default triple buffering in DirectX combined with a higher FPS than refresh rate can easily result in ~32ms of additional input lag. And some games actually use a larger render-ahead queue, adding even more input lag if your FPS exceeds your refresh rate.

Also, even with page-flipping triple buffering, input lag is only reduced if your FPS is significantly faster than your refresh rate. Your FPS would need to average at least twice your refresh rate to get consistently lower input lag, actually. If your FPS is less than that but still exceeds your refresh rate, then input lag will bounce between being the same as double buffering + vsync and being the same as double buffering without vsync.

OK, thanks for that info, I never knew most of that. What I understood about flip queue and render-ahead was that one was the ATI name and one the NVIDIA name for the same thing, and that you could change the value of these to try and reduce lag (which I have also tried).

But now I understand that it's a DirectX issue and that there is a lot of misunderstanding out there about the naming.
 
Poll:
Do you guys prefer:
1. Vsync on - Triple Buffering with 120Hz cap + Max Render Frame = 1 OR
2. Vsync off at 120Hz + Max Render Frame = 0 (FPS will always be above Refresh Rate)

State reason.
 
Normally I'd complain about a necro, but I actually remember this article. Good read.
 
It seems like a lot of these details are making the issue more complicated than it really is. It's not that complicated to try it on/off/with triple buffering, and decide for yourself which option you like the best.

Yes, but that doesn't lead to a 10 page discussion, now does it? :p

The analog stick control input system masks input lag. You'll feel input lag far less with a gamepad than with a mouse.

That's a big reason why consoles stick to gamepads. It's also why OnLive used a gamepad for their streaming gaming service demonstrations (to mask the input lag).

Consoles use gamepads because you can't sit on a couch with a mouse and keyboard. Although now they are doing the whole motion crap, which is only going to add even more input lag, lol

Also, analog sticks would only slightly mask their own input lag (and even then not much) - button presses would be every bit as susceptible to noticeable input lag as a keyboard's.

You don't seem to understand that this effect would only show up while things are MOVING on screen. There are a lot of variables that help out the effect while it's in motion.

The frame blending will tend to be more prevalent near the bottom of the screen, but the point at which the screen tears dictates which parts of the screen get more samples. The screen will not tear in the same place from one frame to the next, which, when combined with the tween-frame blending applied to each individual frame, will smooth the tears right out of the image.

The difference between the frames that make up each blended composite should be fairly small since they're all occurring within 1/60th of a second, which means the motion blur will be fairly subtle. Just enough to smooth out the tearing while adding some additional motion smoothness.

Possibly, but you would still need a very high FPS to see much/any of a difference.

First of all, I'd love to see you actually link to such a study. Everything I've read shows that the response time of human vision is contrast dependent. The larger the difference between two images, the faster you can flip between the two and still tell that you're flickering between two different images. The human eye was able to pick up a flicker at something astonishing like 250Hz when flickering between Black and Bright Green.

Second, if the human eye really weren't able to perceive much beyond 60Hz, then persistence of vision would make tearing invisible on a 120Hz display. This is not the case.

Fair enough, but at the same time that situation really doesn't happen in most games - at least it doesn't in games that are designed to not give you a seizure ;)
 
What he's saying does make perfect sense. For the sake of an easy example, let's say you're rendering at a steady 600FPS to a monitor running at 60Hz.

Each displayed frame will be a composite of the last 10 frames rendered by the graphics card, meaning you will see 600 (partial) frames every second, but you're only seeing a one-tenth slice of each of those frames. This is the effect you end up with while panning your camera without vertical sync (Note, the tearing here is exaggerated):


This is a good example; I have done this in the past to illustrate tearing but couldn't find the image. Basically, each of those tears, 1 through 10, is a newer frame the further down the image you go.

Tear 1 is nearly 15ms old, whereas tear 10 could be as little as about 2ms old, and all the tears in between are going to be samples from points in between.
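
Quick arithmetic behind those ages, assuming perfectly even frame pacing at 600fps into a 60Hz scanout (real pacing will shift the exact numbers a little):

// Ages of the slices in one torn refresh, assuming perfectly even pacing: 600 fps into a
// 60 Hz scanout means ~10 slices per refresh, each roughly 1.67 ms apart.
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;   // ~16.7 ms of scanout
    const int slices = 10;                     // 600 fps -> ~10 new frames per refresh
    const double slice_ms = refresh_ms / slices;

    for (int i = 1; i <= slices; ++i) {
        // The frame shown in slice i flipped in roughly (i-1) slice-intervals into the
        // refresh, so by the time the refresh completes its content is at least
        // (slices - i) intervals old, plus however long that frame itself took to render.
        double min_age_ms = (slices - i) * slice_ms;
        std::printf("slice %2d: content at least ~%4.1f ms old by the end of the refresh\n",
                    i, min_age_ms);
    }
}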

If you're running vsync then you get one perfectly synced refresh, but the whole refresh is going to be the equivalent of just frame 1 of the torn refresh, the oldest part.

What happens if we're panning right in that scene and we stop panning mid-refresh? You get tear lines from frames 1 to about 5, but frames 6-10 are more or less aligned because the viewport isn't moving. Then 50% of the scene is giving us feedback that we've stopped moving the mouse, whereas with vsync on we're shown only the oldest information in the scene and have to wait for the next refresh to see that we've stopped moving the mouse.

This isn't so bad for, say, pressing a button, firing a gun and seeing the feedback - don't get me wrong, it's still noticeable to a degree, but it's bearable. However, mouse movement when aiming and tracking a target is not the same thing: when you track a target you give continual input to the mouse, and you get continual feedback from the screen telling you how best to adjust what you're doing with the mouse.

This continual feedback only works in low-latency scenarios where the feedback feels instant. When you have perceivable lag you end up misinformed about where you're really aiming, and most of the time overshooting. Someone else in this thread mentioned that with vsync it felt as if they stopped moving their mouse but the view only stopped on screen a bit later. That's because you're basing your decision to stop moving the mouse on old information.
 
Everyone has their own opinion, obviously...but I don't get how anyone can stand to game with screen tearing. It completely, 100% ruins the experience for me. What's the point of playing a game at triple-digit frames per second if shit is going to be flickering and looking weird?
 
Trying to sort through this thread for what it boils down to, once one understands the technicalities of V-sync and triple buffering:

User experience when using an LCD:
1. Response time prioritised over eye candy and visuals - online FPS like Battlefield BC2: no Vsync, no triple buffering
2. Immersion and visuals prioritised in single-player games like Dragon Age and Mass Effect - Vsync + triple buffering (either both, or just Vsync on)

Is that about right, in a nutshell?
 