> I didn't know this either. I have to ask: what the heck is the point of this? It seems totally and utterly useless (well, okay, it deals with the integral-fraction framerates, I guess...). The page flipping method just seems so much more obvious, easy to implement, and beneficial; there must be a reason they chose this method instead.

No, not at all. In page-flipping triple buffering (what you described), input lag will indeed be reduced if your FPS is much higher than your refresh rate, but unfortunately that isn't the method that is used. DirectX's triple buffering isn't page flipping but render-ahead: a queue of completed frames, shown in the order they were completed, regardless of whether or not there is a newer completed frame to be shown. That results in *increased* input lag if your FPS is higher than your refresh rate - at least one extra frame, and more depending on how big the render-ahead queue actually is (from 0 to 8 frames, with 3 being the default). So the default triple buffering in DirectX, combined with FPS higher than the refresh rate, can easily add ~32ms of input lag (roughly two extra queued frames at 60Hz). And some games do actually use a larger render-ahead queue, adding even more input lag if your FPS exceeds your refresh rate.
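To make the queueing difference concrete, here's a toy latency model (hypothetical code, not any real DirectX API; REFRESH_HZ, RENDER_FPS, and QUEUE_DEPTH are illustrative parameters) that tracks how old a frame is when it reaches the screen under a FIFO render-ahead queue versus page-flip ("mailbox") triple buffering:

```python
# Toy model: frame age at scanout for a FIFO render-ahead queue
# (the DirectX behavior described above) vs. page-flip triple
# buffering, where the newest completed frame always wins.
from collections import deque

REFRESH_HZ = 60.0    # display refresh rate
RENDER_FPS = 120.0   # renderer running well above the refresh rate
QUEUE_DEPTH = 3      # the default render-ahead depth cited above

def simulate(mode, seconds=2.0):
    scan_dt, render_dt = 1.0 / REFRESH_HZ, 1.0 / RENDER_FPS
    queue = deque()   # completed frames, stored by render-start time
    free_at = 0.0     # when the renderer can begin its next frame
    lags, t = [], 0.0
    while t < seconds:
        blocked = False
        # Complete every frame whose render work fits before this scanout.
        while free_at + render_dt <= t + 1e-9:
            if mode == "fifo" and len(queue) >= QUEUE_DEPTH:
                blocked = True     # queue full: the renderer stalls
                break
            queue.append(free_at)  # input was sampled at render start
            free_at += render_dt
        if queue:
            # FIFO shows the oldest frame; mailbox shows the newest
            # and discards the stale ones.
            shown = queue.popleft() if mode == "fifo" else queue[-1]
            if mode == "mailbox":
                queue.clear()
            lags.append((t - shown) * 1000.0)  # frame age in ms
            if blocked:
                free_at = t  # a slot just freed, renderer resumes now
        t += scan_dt
    return sum(lags) / len(lags)

for mode in ("fifo", "mailbox"):
    print(f"{mode:8s} average frame age at scanout: {simulate(mode):5.1f} ms")
```

Run it and the FIFO queue settles around three refresh intervals of age while the mailbox stays near one render interval; increase QUEUE_DEPTH and the gap grows, which is exactly the render-ahead penalty described above.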
> Interesting idea, and I think it might be possible to make it work with some effort - possibly without even modifying the render pipeline... Still, your simplified description seems like it's probably masking some serious practical problems, and I think the technique would still only be usable with framerates at least a couple of times higher than the refresh rate.

It's true that if you waited until the last frame to do the blending, you'd have the same input lag as v-sync (though with much smoother-looking movement). The trick is that you don't have to wait for all the frames to start displaying the output.
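The property that makes the early start possible is that a running average is a complete, displayable image after every sub-frame, not just after the last one. Here's a minimal sketch of that idea (all names are illustrative stand-ins, not real graphics API code):

```python
# Incremental blending: keep a running mean of the sub-frames rendered
# so far, so scanout can begin from it at any point instead of waiting
# for the final sub-frame of the refresh interval.
import numpy as np

N = 4          # sub-frames rendered per refresh interval
H, W = 4, 4    # tiny stand-in framebuffer for the demo

def submit_subframe(accum, frame, i):
    """Fold sub-frame i (0-based) into the running mean, in place.

    After each call, accum equals the mean of the i+1 frames seen so
    far, so it is always a valid image to start scanning out.
    """
    accum += (frame - accum) / (i + 1)  # incremental mean update
    return accum

accum = np.zeros((H, W), dtype=np.float32)
rng = np.random.default_rng(0)
for i in range(N):
    sub = rng.random((H, W), dtype=np.float32)  # stand-in for a rendered frame
    blended = submit_subframe(accum, sub, i)
# blended is now the mean of all N sub-frames, but every intermediate
# accum was already displayable - that head start is the lag recovery.
```

A real implementation would probably weight newer sub-frames more heavily to bias the blend toward fresh input, but a plain running mean is enough to show why the blend can start shipping to the display before the last sub-frame lands.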
> No, but it's pretty clear to me that the brain can process information at these kinds of time scales. For example, consider the MPAA tracking dots in theaters these days (if you haven't noticed them, see this example). They last one frame, are often present in busy action sequences with lots of visual stimuli or just before a cut, and are of moderate contrast. They're not at all hard to notice, and you can clearly discern their structure and probably transcribe the dots from memory shortly afterward. Yes, it's 40ms, but the time scale is similar and not outside the realm of possibility.

Fair enough, but at the same time that situation really doesn't happen in most games - at least not in games that are designed not to give you a seizure.
Plus, and far more importantly IMO, we're not talking about interpreting a unique stimulus, but about a feedback loop that's already set up and is basically 'subconscious'. The problem is that your brain, in aiming at the enemy, overshoots the mark because the feedback is delayed. In this continuous-loop context, my gut feeling is that the brain's response is very different from what any reaction or perception test would indicate. While poking around for a paper I thought I'd read about a similar situation, I found a different paper citing that an update rate of 1000Hz is necessary for haptic feedback in simulated surgery (for training; no delays in this example) or surgeon accuracy is reduced. I'm not suggesting that's necessary with visual feedback, but feedback systems like this are certainly very sensitive.