How did Mass Effect 3 achieve Vsync without mouse lag?

tzhu07

This is one of the most immediate things I noticed. In pretty much all other games, when there is vsync, the mouse lags like crazy.

But in Mass Effect 3, vsync appears to be enabled by default (and I see no image tearing at all). I also turned off mouse dampening. When looking around, the mouse is super responsive, as if there was no vsync. I think the coders for this game should be commended for that, since it's a long-standing issue in PC gaming.

Are other game developers missing something? Am I missing something?
 
Maybe the game by default forces triple buffering? This is a frame queuing technique that combats the input lag from vsync.
 
Maybe the game by default forces triple buffering? This is a frame queuing technique that combats the input lag from vsync.

It's weird you should mention that. I thought of Portal 2 and how it has triple-buffering, but the mouse in that game still noticeably lags with vsync on.
 
No, ME3 doesn't use triple buffering. It does, however, use a framerate cap. Go to:

\Users\[username]\My Documents\BioWare\Mass Effect 3\BIOGame\Config\GamerSettings.ini

At the bottom of the [SystemSettings] section, add these three lines:

UseVSync=True
SmoothFrameRate=True
MaxSmoothedFrameRate=62

These are the default settings, but these entries don't appear in the .ini until you put them there.

As has been observed in many threads in the Video Card forums here, if you run with VSync on and cap the framerate to within two FPS of the monitor's refresh rate (for a 60Hz monitor, that means between 58 and 62), you get VSync without the lag and without needing to force triple buffering.
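For what it's worth, the cap itself is simple to picture. Here's a minimal frame-limiter sketch in C++ (purely illustrative, not ME3's actual code): the loop sleeps so it never outruns a target just above a 60Hz refresh, which keeps the vsync back-buffer queue from filling with stale frames.

// Minimal frame-cap sketch (illustrative only, not ME3's code).
#include <chrono>
#include <thread>

int main() {
    using namespace std::chrono;
    const double targetFps = 62.0;  // ~2fps above a 60Hz refresh
    const auto frameBudget = microseconds(static_cast<long long>(1000000.0 / targetFps));

    auto nextDeadline = steady_clock::now() + frameBudget;
    for (int frame = 0; frame < 600; ++frame) {       // stand-in for the real game loop
        // updateAndRender();                         // game work would happen here
        std::this_thread::sleep_until(nextDeadline);  // burn off whatever budget is left
        nextDeadline += frameBudget;
    }
    return 0;
}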
 
modern console devs, successfully lowering the standards of gamers everywhere.

No, ME3 doesn't use triple buffering. It does, however, use a framerate cap.

it's actually a combination of both, the way ue3 does it. not traditional triple buffering, but sort of a "fake" triple buffering through the directx api. instead of a strict three-buffer system in sync with your refresh rate, they just render a few frames ahead and swap them out of a queue.
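For context, that "render a few frames ahead" behaviour is the driver/DXGI frame-latency setting, which defaults to three queued frames. A rough D3D11 sketch of where that knob sits (my own illustration, not UE3 or ME3 source):

// Sketch only: shows the DXGI frame-latency ("render ahead") setting a D3D11
// app can adjust. Lowering it shrinks the queue that adds input lag under vsync.
#include <d3d11.h>
#include <dxgi.h>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;

    // Create a bare D3D11 device; no swap chain is needed for this illustration.
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, nullptr, &context);
    if (FAILED(hr)) return 1;

    // The render-ahead queue depth is exposed on the DXGI device (default is 3).
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice)))) {
        dxgiDevice->SetMaximumFrameLatency(1);  // queue at most 1 pre-rendered frame
        dxgiDevice->Release();
    }

    context->Release();
    device->Release();
    return 0;
}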
 
It's weird you should mention that. I thought of Portal 2 and how it has triple-buffering, but the mouse in that game still noticeably lags with vsync on.

Triple buffering helps with mouse lag but doesn't eliminate it entirely.

What you have to understand is that vsync's job is SPECIFICALLY to delay the display of screen updates (by repeating old ones) if they are not 100% ready to be displayed, so triple buffering or not, there is always some additional latency between input and output with vsync on.

As has been observed in many threads in the Video Card forums here, if you run with VSync on and cap the framerate to within two FPS of the monitor's refresh rate (for a 60Hz monitor, that means between 58 and 62), you get VSync without the lag and without needing to force triple buffering.

I'm interested in this idea. I haven't observed that myself in these forums, and I can't understand why a frame rate cap would help. I suspect what you mean is specifically the Unreal Engine 3 smooth frame rate feature, which most games on that engine seem to use by default. It's something I constantly have to turn off in Unreal games because I find it makes them far less playable, and that was before I moved to a 120hz monitor.
 
Triple buffering helps with mouse lag but doesn't eliminate it entirely.

What you have to understand is that vsync's job is SPECIFICALLY to delay the display of screen updates (by repeating old ones) if they are not 100% ready to be displayed, so triple buffering or not, there is always some additional latency between input and output with vsync on.

not really true... triple buffering is capable of getting rid of it entirely when done right, that's what it's for, and also why ogl games don't have to deal with this. the whole point of having a third buffer is so that you always have a fresh frame ready for the next update. the problem lies with the directx api, not vsync. basically all dx games fake triple buffering by just rendering a few frames ahead, with no way to drop old frames. this is where your input lag comes from, not really vsync, which is supposed to be used in combination with real triple buffering.

the design purpose of this api, and of engines like unreal that use it, is to create the illusion of a smooth framerate to stretch old hardware out and sync to standard displays/controllers with less rendering work. so instead of maintaining strict timing, detail is reduced (objects popping in, missing/rescaled textures, etc), and it can still look smooth at the price of input lag. since when you're designing games for thumbsticks, who really gives a shit.

I think me3 feels so smooth simply because it was ported efficiently, and you're running it with modern hardware, on tight claustrophobic maps that were tuned for consoles. so when this is done properly, and they stay well within their resource budgets, all ports like this should fly on the pc. sad part about this is most devs don't, so we always end up with laggy fucked up trainwrecks any time you try to use vsync with mkb.

nice article on this here, skip ahead to the last page in the update section that explains the difference:


In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default).

The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed).

In order to maintain smoothness and reduce lag, it is possible to hold on to a limited number of frames in case they are needed but to drop them if they are not (if they get too old). This requires a little more intelligent management of already rendered frames and goes a bit beyond the scope of this article.

Some game developers implement a short render ahead queue and call it triple buffering (because it uses three total buffers). They certainly cannot be faulted for this, as there has been a lot of confusion on the subject and under certain circumstances this setup will perform the same as triple buffering as we have described it (but definitely not when framerate is higher than refresh rate).

Both techniques allow the graphics card to continue doing work while waiting for a vertical refresh when one frame is already completed. When using double buffering (and no render queue), while vertical sync is enabled, after one frame is completed nothing else can be rendered out which can cause stalling and degrade actual performance.

When vsync is not enabled, nothing more than double buffering is needed for performance, but a render queue can still be used to smooth framerate if it requires a few old frames to be kept around. This can keep instantaneous framerate from dipping in some cases, but will (even with double buffering and vsync disabled) add lag and input latency. Even without vsync, render ahead is required for multiGPU systems to work efficiently.

So, this article is as much for gamers as it is for developers. If you are implementing render ahead (aka a flip queue), please don't call it "triple buffering," as that should be reserved for the technique we've described here in order to cut down on the confusion. There are games out there that list triple buffering as an option when the technique used is actually a short render queue. We do realize that this can cause confusion, and we very much hope that this article and discussion help to alleviate this problem.
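To make the article's distinction concrete, here is a toy simulation (my own sketch, not from the article): both sides render three frames per 60Hz refresh, but the flip queue has to show its oldest frame, while "real" triple buffering always flips to the newest and drops the rest.

// Toy comparison of a render-ahead flip queue vs. triple buffering with frame
// dropping. Frames are just integer IDs; the "renderer" produces them faster
// than the 60Hz "display" consumes them.
#include <cstdio>
#include <deque>

int main() {
    const int framesRenderedPerRefresh = 3;   // e.g. a ~180fps game on a 60Hz display
    const int refreshes = 4;
    int nextFrameId = 0;

    // Render-ahead (flip queue): FIFO, old frames must eventually be shown.
    std::deque<int> flipQueue;
    std::printf("render-ahead shows:  ");
    for (int r = 0; r < refreshes; ++r) {
        for (int i = 0; i < framesRenderedPerRefresh; ++i)
            if (flipQueue.size() < 3) flipQueue.push_back(nextFrameId++);  // renderer stalls when the queue is full
        std::printf("%d ", flipQueue.front());  // must display the OLDEST queued frame
        flipQueue.pop_front();
    }

    // "Real" triple buffering: keep rendering, always flip to the NEWEST frame.
    nextFrameId = 0;
    int newestCompleted = -1;
    std::printf("\ntriple-buffer shows: ");
    for (int r = 0; r < refreshes; ++r) {
        for (int i = 0; i < framesRenderedPerRefresh; ++i)
            newestCompleted = nextFrameId++;   // older completed frames are simply dropped
        std::printf("%d ", newestCompleted);
    }
    std::printf("\n");
    return 0;
}

With those numbers it shows frames 0 1 2 3 for the flip queue against 2 5 8 11 for triple buffering, which is the staleness gap the article is describing.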
 
not really true... triple buffering is capable of getting rid of it entirely when done right, that's what it's for, and also why ogl games don't have to deal with this. the whole point of having a third buffer is so that you always have a fresh frame ready for the next update. the problem lies with the directx api, not vsync. basically all dx games fake triple buffering by just rendering a few frames ahead, with no way to drop old frames. this is where your input lag comes from, not really vsync, which is supposed to be used in combination with real triple buffering.

It is true. In "normal" rendering, updates are sent to the monitor as soon as they are finished, so you get only a minimal amount of input latency. There is always some small amount; no processing is instant.

The specific function of vsync, from a timing point of view, is to artificially hold rendered frames back from the screen until the next physical refresh occurs. That is quite literally what vsync was designed for, and it ALWAYS adds latency between device input and output on the screen compared to vsync off, even when using triple buffering.

This is partly because the brain can interpret the tearing on screen and gather more information from one torn refresh (an amalgamation of many world updates) than from a single non-torn refresh. Tearing essentially allows you to recognise changes to the world (such as your orientation) mid-refresh, whereas with any kind of vsync-enabled rendering the user sees at most one world update per refresh.

It's trivial to test: launch a 3D game you can render at an arbitrarily high frame rate, maybe >300fps, and try some sharp mouse movements. Then turn vsync on with proper triple buffering and you'll FEEL the difference in responsiveness.
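As a rough sanity check on the numbers (illustrative arithmetic only, not measurements from any particular game): at 60Hz a vsynced frame can wait up to a full refresh interval before it is shown, while an uncapped 300fps game keeps pushing much fresher frames onto the screen mid-refresh.

// Back-of-the-envelope latency comparison (illustrative numbers only).
#include <cstdio>

int main() {
    const double refreshHz = 60.0;
    const double uncappedFps = 300.0;

    const double refreshMs = 1000.0 / refreshHz;    // ~16.7ms between refreshes
    const double frameMs   = 1000.0 / uncappedFps;  // ~3.3ms between rendered frames

    // With vsync, a finished frame waits for the next scanout: 0ms best case,
    // one full refresh interval worst case.
    std::printf("vsync wait:    0 .. %.1f ms (avg %.1f ms)\n", refreshMs, refreshMs / 2.0);

    // Without vsync, the newest frame starts reaching the screen mid-refresh,
    // so the extra wait is bounded by the much shorter frame time instead.
    std::printf("no-vsync wait: 0 .. %.1f ms (avg %.1f ms)\n", frameMs, frameMs / 2.0);
    return 0;
}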
 
hm yes, obviously a technical combination of both, but I still believe it's possible to lower the perceivable lag to a degree that's not really noticeable @ 60+ hz. point being, it's the rendering implementation getting in the way of this; vsync itself isn't the main contributor to the majority of the lag you experience.
 
Well, I've used triple buffering in OpenGL games before and experienced the implementations in Source and Unreal engine games, and while it's an improvement over regular vsync, it's still distracting for me personally.

But then I tend to run my games at a very high frame rate, often in the hundreds. I come from a background of old-school shooters like Quake/UT, where high frame rates and smooth input were essential to competitive play; there's nothing really that fast these days outside of Quakelive and Tribes Ascend.

I recently upgraded to a 120hz monitor and wow, what a difference. I'd used my LCD panel for so long I'd forgotten just how laggy they are; it's easy to become complacent... when you're talking about latency on the scale of 1/60th of a second it doesn't seem like much, but when you use a 120hz panel the difference is massive, so it's really relative to what you're used to...
 
This is one of the most immediate things I noticed. In pretty much all other games, when there is vsync, the mouse lags like crazy.

But in Mass Effect 3, vsync appears to be enabled by default (and I see no image tearing at all). I also turned off mouse dampening. When looking around, the mouse is super responsive, as if there was no vsync. I think the coders for this game should be commended for that, since it's a long-standing issue in PC gaming.

Are other game developers missing something? Am I missing something?
You're over-thinking this.

1. Not all games with Vsync on have mouse lag
2. Not all games without Vsync have screen tearing
 
It should also be noted that ME3 is one of the least demanding big-name titles around graphically....thanks in part to those awful textures.

Heck I can completely max out ME3 with RadeonPro and *.ini tweaks...and still be running at the framerate cap, and still have lots of VRAM to spare...at 5300x1050 Eyefinity.
 
Which one? I'm not sure I could settle for a TN monitor of any kind but I'm still interested to see what you went for.

I know right... I was running (and still have) a Dell 3007WFP-HC, which is a lovely IPS panel with amazing colours and viewing angles, so the move was a daunting one.

I picked the BenQ XL2420T. Part of the move to 120hz was for smoother and more responsive 2D gaming, but also to give 3D a shot. I wanted the best 3D experience possible, and this is pretty much the only Nvidia 3D Vision-ready monitor in the UK/EU right now, so choice was limited anyway.

It's a TN panel, but so are all the 120hz ones, so no choice there. The colours aren't as bad as I was expecting once it's calibrated; out of the box they're awful, but easily fixed. Vertical viewing angles are terrible, as you'd expect; horizontal ones are surprisingly good. I moved away from watching media on my PC monitors to a projector anyway, so I'm never seated off-center and viewing angles don't bother me.

It's a really well-made monitor with loads of features/inputs/options; some of them are gimmicky, but that doesn't detract from the overall quality in my opinion. Compared to a 30" IPS monitor it's a relatively cheap investment anyway, and something I don't regret thus far. I just need to work out a good way of getting use out of both monitors in the workspace I have.
 