nVidia has significantly more input lag than AMD

That is not the same as what you write.

The connotations and point behind them are. Do not assume that your understanding of the English language is authoritative.

Well, you found one good reason to ignore this result: the sample size. And I would argue that when it comes to technical things, when you test a mechanism, it would be odd for it to show up in one test with one sample and not in others, provided there is no reason to doubt the testing methodology (you know, not trusting websites like PCPer, which royally fucked up their FreeSync testing when the panel was at fault).

If I were 'ignoring' the result, then I wouldn't be posting. I'd like to see the result proven one way or another. And again, the application of the most basic critical thinking results in a big question mark here given the sample size of one. If you have paid attention to the evolution of graphics at all, you know that there are dozens of external factors that can cause tiny issues. There are plenty that can cause large, game-breaking issues. We're not talking about one variable here; we're talking about hundreds, and yet, we have a sample of one.

Technically, that's wholly inadequate to prove anything.
 
Why isn't there a bigger fuss about this?

Probably because it means absolutely bugger all in real-world performance.

More AMD fanbois grasping at straws for the few weeks that the company may be ahead of its rivals...
 
Obviously there are framebuffers involved. I get that, but maybe not everyone here does. Why not share a bit?

It's not just involved; it is the very part of the process that makes... well, things stay in sync.


To begin with: what the Hades is a framebuffer?
A framebuffer is, well, a buffer that contains a frame.
It's easiest to picture it as a canvas for a painter. This buffer is an area in the video RAM that holds the binary values of the image being shown on screen or being created right now.

Games can run with different numbers of framebuffers, with different benefits depending on the setup.

Single buffering (pretty old, and I'm unsure if this has ever really existed, but I believe I have seen a few games running in single buffer)
In a single-buffer system, the GPU (I am going to use "GPU" for any level of GPU, even before the chips became GPUs and were just simple pixel-pushing pipelines) draws the finished picture into the same area of RAM that is also being read from to send the picture to the screen.
This is the simplest and cheapest approach memory-wise, but unfinished pictures would often be sent to the screen, creating flickering (the background was drawn but not your character, so he disappears for a moment).

This has not been in use for decades and is pretty irrelevant for today's talk about framebuffers.
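To make the failure mode concrete, here is a minimal Python sketch (purely illustrative, not any real graphics API) of a single shared buffer being scanned out while it is still being drawn:

```python
# Illustrative single-buffer model (hypothetical, not a real graphics API):
# the GPU draws into the very same memory the display scans out from.
WIDTH, HEIGHT = 320, 240
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]  # one shared pixel area in "VRAM"

def render_background(buf):
    for row in buf:
        for x in range(WIDTH):
            row[x] = 0x202020            # dark grey background

def render_character(buf):
    buf[HEIGHT // 2][WIDTH // 2] = 0xFFFFFF   # our "character", drawn last

def scanout(buf):
    # The monitor reads this same buffer pixel by pixel, top to bottom.
    # If a refresh lands between the two render calls below, the character
    # simply is not in the picture for that refresh -> visible flicker.
    return [row[:] for row in buf]

render_background(framebuffer)
# <- a refresh happening here shows a frame without the character
render_character(framebuffer)
```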


Next step: double buffering
This was the next step, to avoid the drawbacks of single buffering. In this case the GPU has two framebuffers: a front and a back buffer.
If we go back to the painter analogy, it works like the painter having one canvas to paint on in the back of his store (the back buffer) and another one showing his latest painting in his front window (the front buffer).
When the painter is done, he goes out, puts his new painting in the front window, moves the old painting to the back of his store, and paints another picture on that canvas.
This is what we call a buffer swap on the GPU. If the screen was previously receiving from buffer A, it now receives the new picture from buffer B, and the GPU changes from drawing in buffer B to drawing in buffer A.
The front buffer is always what is being shown; the back buffer is always the one being drawn/rendered into.

A problem quickly arises with this method: a viewer of the painter's work who wasn't done looking when the painter swapped only got to see part of the picture before a new one went up.
The same happens with our monitor, except our monitor receives the picture in a pixel-by-pixel stream from top to bottom.
If the buffer swap happens while the monitor is only halfway through drawing the screen, it continues from where it is currently updating, but with the new picture instead.
The top half would be one picture from one framebuffer, and the bottom half would be another picture from the other buffer.

Any movement or difference between these pictures creates a visible border where things are mismatched.
This is what we call tearing, because the picture looks like it has been torn apart and then put back together slightly wrong.
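Here is a rough Python sketch of the swap (again purely illustrative, with made-up names rather than a real API), including why a swap in the middle of scanout produces a tear line:

```python
# Rough double-buffering model (hypothetical names, no real API).
buffer_a = {"frame": 1}     # currently on screen
buffer_b = {"frame": 0}     # being rendered into
front, back = buffer_a, buffer_b

def render(buf, frame_number):
    buf["frame"] = frame_number      # "paint" the next picture in the back buffer

def swap():
    global front, back
    front, back = back, front        # the buffer swap: the two roles are exchanged

def scanout(total_lines=240, swap_at_line=None):
    # The monitor reads the *current* front buffer line by line, top to bottom.
    shown = []
    for line in range(total_lines):
        if line == swap_at_line:
            swap()                   # without vsync the swap can land mid-scanout
        shown.append(front["frame"])
    return shown

render(back, 2)                      # GPU finishes frame 2 in the back buffer
lines = scanout(swap_at_line=120)    # swap happens while the monitor is halfway down
print(set(lines[:120]), set(lines[120:]))   # {1} on top, {2} below -> a visible tear
```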



Very Valuable VSync:

VSync comes in to fix the tearing issue.
Under vsync, the buffer swap (aka changing the canvas) is not done until the viewer/monitor is done with the current picture. This way a full picture is always shown. Better picture quality, but now a new picture has to wait before it starts being shown, and we have a bit more delay from render to screen, which adds to the total input-to-output delay.
And since we have to wait for the screen to start a new refresh, we can never make more buffer swaps than refreshes, and thereby never show more frames than refreshes; aka the FPS is now capped to the Hz of the monitor (not to 60, as many people for some weird reason think; it just happens that most monitors were 60 Hz for the longest time).
Another issue is introduced if your frame rate is too slow.

Let's say you run at 50 fps on a 60 Hz monitor.
That's a 20 ms render time and a 16.6 ms refresh interval.

You will not always have a frame ready when the monitor starts a refresh, and then you have to wait for the next refresh interval to make the buffer swap.
And in between, the GPU can't do anything. Buffer A is the front buffer holding the picture being displayed, and buffer B is the back buffer holding the newer picture that has to go up at the next refresh interval.
There is no framebuffer left to draw another picture into.

This means that instead of swapping buffers every 20 ms, we are doomed to do it only every 33.3 ms (every other refresh interval), and our frame rate drops from 50 fps to 30 fps by enabling vsync.
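The arithmetic behind that drop, spelled out in a few lines of Python (same example numbers as above):

```python
import math

# Double-buffered vsync arithmetic for the 50 fps / 60 Hz example above.
refresh_hz = 60
render_ms = 20.0                            # 50 fps worth of GPU work per frame
refresh_interval = 1000.0 / refresh_hz      # ~16.7 ms between possible swaps

# A swap can only happen on a refresh boundary, and with only two buffers the
# GPU stalls once the back buffer is full. A 20 ms frame misses the first
# boundary, so it has to wait for the second one.
intervals_per_frame = math.ceil(render_ms / refresh_interval)     # -> 2
effective_frame_time = intervals_per_frame * refresh_interval     # -> ~33.3 ms
effective_fps = 1000.0 / effective_frame_time                     # -> 30.0 fps
print(effective_frame_time, effective_fps)
```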



Turbo triple buffering
In comes triple buffering.
Triple buffering tries to take on the vsync slowdowns
by having a third buffer: the GPU can continue rendering into the third buffer even while we are waiting for a new refresh cycle with a picture already finished.

It sounds really nice, except that DX has a very linear approach, meaning frames are always shown in order, so you now increase latency:
you will be doing a buffer swap to drop frame 1 and show frame 2, even though frame 3 is already ready.

OpenGL skips to the newest frame and thereby should have tremendously better latency than DX under triple buffering.

Both DX and OpenGL triple buffering stop rendering when all three buffers are full.

So in case your GPU can do 180 fps, you are still only getting 60 fps out on your monitor with vsync/triple buffering, and there might be small delays from your GPU waiting for your monitor to catch up.
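A toy model of the two behaviours described above, just to make the ordering difference visible (the names are made up; this is not how either API is actually programmed):

```python
from collections import deque

# Two finished frames are queued behind the front buffer; frame 2 is the newest.
finished_frames = ["frame 1", "frame 2"]

# DX-style triple buffering: a strict FIFO queue, frames are shown in order,
# so the older queued frame adds a whole frame of extra latency.
dx_queue = deque(finished_frames)
next_shown_dx = dx_queue.popleft()        # -> "frame 1"

# OpenGL-style triple buffering: at swap time, take whatever finished most
# recently and quietly drop the older one -> lower latency.
next_shown_gl = finished_frames[-1]       # -> "frame 2"

print(next_shown_dx, next_shown_gl)
```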



G-Sync/FreeSync
G-Sync/FreeSync works in reverse, and is only really needed for fps < Hz.
What G-Sync/FreeSync does is delay the refresh cycle.
If your fastest refresh cycle is 16.6 ms (aka 60 Hz) and your render time is 20 ms (aka 50 fps),
then instead of doing the refresh cycle every 16.6 ms, and thereby having to wait for the second one arriving at 33.3 ms before buffer swapping,
G-Sync/FreeSync checks whether there is a new back buffer ready for a swap, and if not, it delays the refresh cycle.
Once the back buffer is ready at the 20 ms mark,
the refresh and buffer swap are initiated.

This is JUST a delay of the monitor's refresh, but it works nicely for low-fps smoothness, as we now get the rendered frames in a more stable manner:

aka every 20 ms instead of 16.6, 33, 16.6, 33, so it reduces some jitter compared to triple-buffered vsync.


Now, G-Sync and FreeSync do not do a vsync on their own, so they are capable of doing a buffer swap in between refresh cycles if you render frames faster than your monitor can show them.

Aka, if you do 120 fps (aka 8.3 ms render times) and your monitor's fastest refresh interval is 16.6 ms, you will still get two framebuffer swaps per refresh cycle, and thereby of course tearing again when your fps goes above Hz.


To fix the tearing you can re-enable vsync and run vsync + FreeSync/G-Sync for optimal display quality.

But again, remember that under double buffering with vsync, that will cause a bit of input delay compared to being able to use all those nice frames.
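To illustrate the timing side of this, here is a small Python sketch of a variable-refresh timeline for the 50 fps / 60 Hz example (all numbers and names are illustrative; real VRR hardware is obviously more involved):

```python
# Toy VRR timeline: the panel waits for a finished frame instead of refreshing
# on a fixed 16.7 ms grid, but it can never refresh faster than its minimum interval.
min_refresh_interval = 1000.0 / 60     # fastest the panel can refresh: ~16.7 ms
render_time = 20.0                     # 50 fps

t = 0.0
refresh_times = []
for frame in range(1, 6):
    frame_ready = frame * render_time
    # The refresh fires as soon as the new frame is ready, but no sooner than
    # one minimum interval after the previous refresh.
    t = max(frame_ready, t + min_refresh_interval)
    refresh_times.append(round(t, 1))

print(refresh_times)   # [20.0, 40.0, 60.0, 80.0, 100.0] -> a steady 20 ms cadence

# If render_time dropped below ~16.7 ms, frames would finish faster than the panel
# can refresh; without vsync on top, swaps land mid-scanout and tearing returns.
```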




FAST SYNC
It is basically a glorified, API-hidden form of triple buffering.
It fixes the FPS drop of vsync with double buffering by providing that third buffer to render into,
but unlike traditional triple buffering, it keeps rendering back and forth between the back buffers, continuously dropping the older one.

So while the front buffer is being shown on screen, back buffer A might hold the next picture while back buffer B is getting the third picture ready.
Once the third picture is ready, back buffer A is dropped and the fourth picture will be rendered there.
Once a buffer swap can be done (at the start of a refresh cycle), the front buffer is swapped with the newest finished back buffer image.

Even though you will never see more fps than Hz in this mode (due to not being able to swap the front buffer faster than the refresh cycle),
you still get a much newer frame shown, and thereby improved render-to-screen latency.
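A toy model of that "newest finished frame wins" behaviour (again hypothetical names, not Nvidia's actual implementation):

```python
# Toy Fast Sync model: the GPU keeps rendering and overwrites the older completed
# back buffer; each refresh swaps in whatever finished most recently.
front = "frame 1"            # currently on screen
completed = None             # newest finished, not-yet-shown frame

def frame_finished(frame):
    global completed
    completed = frame        # any older completed-but-unshown frame is simply dropped

def refresh():
    global front, completed
    if completed is not None:
        front = completed    # buffer swap at the start of the refresh cycle
        completed = None
    return front

# GPU runs at, say, 180 fps on a 60 Hz panel: three frames finish per refresh,
# but only the newest one ever reaches the screen.
for f in ["frame 2", "frame 3", "frame 4"]:
    frame_finished(f)
print(refresh())             # -> "frame 4", much fresher than plain double-buffered vsync
```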




So in short:

G-Sync/FreeSync = for when fps is lower than your monitor's refresh rate
Fast Sync = for when fps is above the refresh rate

No sync = for low fps when you don't have sync and don't mind tearing, or for when fps is above Hz and you don't mind tearing


So which is the best option is not set in stone and relies a lot on preferences and on how your FPS relates to your max refresh rate.




I do apologize for typos

But as you can see, understanding how framebuffers basically work and how buffer swaps work matters, because they are the very thing we are syncing to when we talk about sync.
 
It somewhat scares me that no one mentioned the "Maximum Pre-Rendered Frames" setting.
Neither here nor in these input lag testing videos.

I hope Nvidia packages this setting into some "anti-lag" feature so that people are aware of it and it can actually be tested by reviewers.

BTW, Nvidia's input lag on fixed refresh rate monitors sucks, given that I still use a fixed-rate monitor (Sony GDM-FW900 CRT). I hope they fix it soon.
 
how can nvidia even compete at this point? market share completely from fanboy fumes?

- The input lag is negligible to the vast majority of users.
- (traditional) Gsync displays have less input lag than freesync
- When freesync is enabled nVidia and AMD have similar input lag
- the only case in this video where input lag was a decent % higher (though still negligible) was where freesync was not enabled
- AMD doesn’t have a card even close to the 2080ti
- if you really cared about input lag you’d go with a traditional gsync display anyways.

If you’re running a fixed refresh rate monitor I don’t think input lag is super high on your priority list. Especially since the average human reaction time is 200-250 ms. The differences in the video are pretty negligible to the typical user.
 
- The input lag is negligible to the vast majority of users.
- (traditional) Gsync displays have less input lag than freesync
- When freesync is enabled nVidia and AMD have similar input lag
- the only case in this video where input lag was a decent % higher (though still negligible) was where freesync was not enabled
- AMD doesn’t have a card even close to the 2080ti
- if you really cared about input lag you’d go with a traditional gsync display anyways.

If you’re running a fixed refresh rate monitor I don’t think input lag is super high on your priority list. Especially since the average human reaction time is 200-250 ms. The differences in the video are pretty negligible to the typical user.
I could never use Vsync because of lag, yet Nvidia Adaptive Sync never had this issue and was my go-to method on fixed refresh rate monitors. AMD Enhanced Sync I am not sure about, since I use FreeSync. And yes, with neither AMD nor Nvidia do I find input lag much of a problem, except that Nvidia's lag seems to increase more than AMD's at higher resolutions, in my experience, or I notice it more. Take 4K at around 45 fps: the 1080 Ti starts to feel terrible for input lag, while I hardly notice any difference with a Vega card. In other words, at high fps Nvidia is smooth as butter, but once the frame rate drops Vega does much better there; it's just that Vega is there more often :D. Anyway, I can ignore a number of things while gaming, but I am most sensitive to input lag and colors.
 
I could never use Vsync because of lag, yet Nvidia Adaptive Sync never had this issue and was my go-to method on fixed refresh rate monitors. AMD Enhanced Sync I am not sure about, since I use FreeSync. And yes, with neither AMD nor Nvidia do I find input lag much of a problem, except that Nvidia's lag seems to increase more than AMD's at higher resolutions, in my experience, or I notice it more. Take 4K at around 45 fps: the 1080 Ti starts to feel terrible for input lag, while I hardly notice any difference with a Vega card. In other words, at high fps Nvidia is smooth as butter, but once the frame rate drops Vega does much better there; it's just that Vega is there more often :D. Anyway, I can ignore a number of things while gaming, but I am most sensitive to input lag and colors.

As I mentioned, this is where G-Sync shines. When you're running something like 4k with a modern game and sitting around the 40-50 FPS mark where the input lag starts to become very noticeable. With real hardware G-Sync it helps out a lot here compared to just running adaptive sync.
 
As I mentioned, this is where G-Sync shines. When you're running something like 4k with a modern game and sitting around the 40-50 FPS mark where the input lag starts to become very noticeable. With real hardware G-Sync it helps out a lot here compared to just running adaptive sync.
Does running a 1080 Ti using G-Sync on a non-G-Sync-certified 4K FreeSync monitor count? ;) If it does, then all I am saying is that Nvidia lag sucks at lower frame rates with adaptive sync. If AMD has less lag at higher frame rates, then at lower ones Nvidia would only get proportionally worse, so what does not seem like much can become much larger when you need it most, at lower frame rates. This would explain why, when Kyle did the Doom testing pitting an Nvidia G-Sync setup against a FreeSync setup, the experienced gamers overall preferred the AMD setup. Plus, as an Nvidia user, I and others all exclaim how much better 144 Hz monitors are and how much 60 Hz sucks - is it lag that those fans are seeing?
 
Does running a 1080 Ti using G-Sync on a non-G-Sync-certified 4K FreeSync monitor count? ;) If it does, then all I am saying is that Nvidia lag sucks at lower frame rates with adaptive sync. If AMD has less lag at higher frame rates, then at lower ones Nvidia would only get proportionally worse, so what does not seem like much can become much larger when you need it most, at lower frame rates. This would explain why, when Kyle did the Doom testing pitting an Nvidia G-Sync setup against a FreeSync setup, the experienced gamers overall preferred the AMD setup. Plus, as an Nvidia user, I and others all exclaim how much better 144 Hz monitors are and how much 60 Hz sucks - is it lag that those fans are seeing?

60 Hz monitors do suck, and it's why I haven't used them for years. Now, in some games you're still going to be under 60 FPS, but in a game like DOOM, where you can hit 120 FPS easily, it's well worth having a panel that supports the higher refresh rate.

Either way, hardware G-Sync helps in both scenarios. As seen from the test in this thread, it helps up at the top end, and again, the effect at lower frame rates is pretty noticeable as well.

Finally, I don't really give a shit if I'm using nvidia or AMD. The only reason I have a 2080ti is because it's the only card on the market that allows me to play the latest games at an acceptable framerate at 4k. I've stated numerous times I'd buy AMD over nvidia if they at least offered something with near 2080ti performance - but they don't.

Although I will say, that currently, you can only get a 144hz HDR display w/ G-Sync support, so that would have forced me to go nvidia anyways. The reality is that for the super-high end display/GPU market nvidia is the only choice. AMD isn't even there.

Now, if I were building a lower-budget setup to play games @ 1080p I would 100% go AMD.
 
Does running a 1080 Ti using G-Sync on a non-G-Sync-certified 4K FreeSync monitor count? ;)

I think you know the answer to that- but to add, it depends heavily on how well implemented Freesync is. As we all know, Freesync is a bit of a shitshow with respect to implementation quality. Better implementations should be almost indistinguishable from G-Sync, average implementations can be pretty bad.

If it does, then all I am saying is that Nvidia lag sucks at lower frame rates with adaptive sync.

Pointedly, lower framerates mean more input lag. Increasing the frametime means that the time between user input and monitor output has been lengthened. This is a vendor-agnostic problem and one of the prime advantages of higher refresh rate monitors.
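As a rough back-of-the-envelope illustration of the frametime contribution alone (a Python sketch, vendor-agnostic, ignoring the rest of the input chain):

```python
# Frametime contribution to input lag: the lower the fps, the longer each frame
# sits between user input and the pixels that show its result.
for fps in (144, 60, 45):
    frametime_ms = 1000.0 / fps
    print(f"{fps:>3} fps -> {frametime_ms:.1f} ms per frame in the input-to-photon chain")
# 144 fps -> 6.9 ms, 60 fps -> 16.7 ms, 45 fps -> 22.2 ms
```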

This would explain why, when Kyle did the Doom testing pitting an Nvidia G-Sync setup against a FreeSync setup, the experienced gamers overall preferred the AMD setup.

As much as I respect Kyle and the care to get the configurations close, he wasn't able to get them close enough- which is why he didn't consider the test authoritative. Using different panels in the monitors is a huge disparity that cannot be reasonably accounted for.

Plus, as an Nvidia user, I and others all exclaim how much better 144 Hz monitors are and how much 60 Hz sucks - is it lag that those fans are seeing?

Again, nothing to do with Nvidia (or AMD or Intel or...), but yes, this is a form of lag. It's in the signal chain between user input and the pixels on the screen changing in response. Which is, if I were to dream a bit, something that we should see more of an emphasis on testing from reviewers.
 
I think you know the answer to that- but to add, it depends heavily on how well implemented Freesync is. As we all know, Freesync is a bit of a shitshow with respect to implementation quality. Better implementations should be almost indistinguishable from G-Sync, average implementations can be pretty bad.



Pointedly, lower framerates mean more input lag. Increasing the frametime means that the time between user input and monitor output has been lengthened. This is a vendor-agnostic problem and one of the prime advantages of higher refresh rate monitors.



As much as I respect Kyle and the care to get the configurations close, he wasn't able to get them close enough- which is why he didn't consider the test authoritative. Using different panels in the monitors is a huge disparity that cannot be reasonably accounted for.



Again, nothing to do with Nvidia (or AMD or Intel or...), but yes, this is a form of lag. It's in the signal chain between user input and the pixels on the screen changing in response. Which is, if I were to dream a bit, something that we should see more of an emphasis on testing from reviewers.
Well, the 4K FreeSync monitor is probably a bad implementation, but it works well enough with AMD cards. On Nvidia there is a noticeable difference between the 1080 Ti and the Vega when going under 50 fps. Whether that is Nvidia drivers or a flaky implementation, I don't know. What I do is use G-Sync (adaptive sync) plus vertical sync, which seems to work well with Nvidia and SLI (in games that support it). Anyway, it is a rather nice experience in that respect, so no complaints here at 60 fps speeds with minor drops and forced G-Sync. In this case I don't see the normal vsync lag with G-Sync enabled in the drivers - it looks like G-Sync and vertical sync work together on Nvidia. As a note, if I don't use vertical sync, the 1080 Tis seem to spaz out when frame rates go over 60 fps, glitchy with some massive tearing (this is not an issue with the Vegas). The 1080 Tis are used for mining now, and for VR, where they are outstanding.
 
Well, the 4K FreeSync monitor is probably a bad implementation, but it works well enough with AMD cards. On Nvidia there is a noticeable difference between the 1080 Ti and the Vega when going under 50 fps. Whether that is Nvidia drivers or a flaky implementation, I don't know. What I do is use G-Sync (adaptive sync) plus vertical sync, which seems to work well with Nvidia and SLI (in games that support it). Anyway, it is a rather nice experience in that respect, so no complaints here at 60 fps speeds with minor drops and forced G-Sync. In this case I don't see the normal vsync lag with G-Sync enabled in the drivers - it looks like G-Sync and vertical sync work together on Nvidia. As a note, if I don't use vertical sync, the 1080 Tis seem to spaz out when frame rates go over 60 fps, glitchy with some massive tearing (this is not an issue with the Vegas). The 1080 Tis are used for mining now, and for VR, where they are outstanding.
Are you comparing single GPU configuration with SLI? WTF?
 
What I do is use G-Sync (adaptive sync) plus vertical sync, which seems to work well with Nvidia and SLI (in games that support it)

Let me just say this: while I do not at all discount your experience, using a multi-GPU setup with today's extremely poor game support is really the worst way you could judge input lag. I do get that the performance may be needed and I do lament the lack of development of the multi-GPU ecosystem with VR and 4k120 displays becoming accessible to enthusiasts.
 
Yeah, SLI is dead at this point. I have the money to run two 2080tis if I wanted to, but I settled on a single because I’m done with SLI.
 
So the TLDR of this thread for input lag:

- Gsync > Freesync
- nVidia freesync = AMD freesync
- AMD vsync is slightly less shitty compared to nVidia’s shitty vsync
- an old guy like me probably wouldn’t notice regardless

That sums it up?
 
So the TLDR of this thread for input lag:

- Gsync > Freesync
- nVidia freesync = AMD freesync
- AMD vsync is slightly less shitty compared to nVidia’s shitty vsync
- an old guy like me probably wouldn’t notice regardless

That sums it up?

Pretty much.

Although I still want to see a test comparing full hardware gsync w/ software freesync/gsync on a panel that is exactly the same in all ways except for the FPGA G-Sync hardware.
 
Are you comparing single GPU configuration with SLI? WTF?
Argggh - in games without SLI support, no SLI is used. I also had CFX Vega FEs, which, due to shitty driver support, I swapped for the Vega 64 LC. A SINGLE 1080 Ti on a 4K monitor seems to have more lag under 50 fps than the Vega does, and this is very noticeable. I've been gaming more with the Vega LC because it supports HDR and FreeSync 2 better and is fast enough for 1440p gaming. Basically I have multiple systems, monitors, and configurations that sometimes get changed around; sorry for any confusion there. There are a number of older games that support SLI, and even newer ones like Shadow of the Tomb Raider, which has the best mGPU support I've ever experienced - nothing else compares. When I get around to playing that game again, I will be using both 1080 Tis and the HDR FreeSync 2 monitor. Unfortunately all of my experience is subjective and limited by my memory, so some grains of salt should be added. Bottom line: in my experience, lag on Nvidia cards is more noticeable at lower frame rates.
 
If you truly care about input lag, then you already had v-sync forced off long ago and that control has been collecting dust for decades.

G-SYNC adds more latency. It can't not. It doesn't make the screen draw a new frame faster. V-sync off + max refresh rate gives your eyes the latest info the soonest; even if it's only half the screen, that's still new info, and still coming at you faster than having to wait for the rest of the frame to render.

How this isn't inherently obvious to most here baffles me, it really does. :confused:
 
Yeah, SLI is dead at this point. I have the money to run two 2080tis if I wanted to, but I settled on a single because I’m done with SLI.
every year someone says SLI is dead.

i guess my PC is a fucking zombie because it still works.

The driver support is there, the engine support is there, the hardware support is there, the need is there, and the games just aren't, today.

this
 
Argggh - in games without SLI support, no SLI is used. I also had CFX Vega FEs, which, due to shitty driver support, I swapped for the Vega 64 LC. A SINGLE 1080 Ti on a 4K monitor seems to have more lag under 50 fps than the Vega does, and this is very noticeable. I've been gaming more with the Vega LC because it supports HDR and FreeSync 2 better and is fast enough for 1440p gaming. Basically I have multiple systems, monitors, and configurations that sometimes get changed around; sorry for any confusion there. There are a number of older games that support SLI, and even newer ones like Shadow of the Tomb Raider, which has the best mGPU support I've ever experienced - nothing else compares. When I get around to playing that game again, I will be using both 1080 Tis and the HDR FreeSync 2 monitor. Unfortunately all of my experience is subjective and limited by my memory, so some grains of salt should be added. Bottom line: in my experience, lag on Nvidia cards is more noticeable at lower frame rates.
Most SLI profiles use AFR (Alternate Frame Rendering), which by the way it works has more input lag than a single card that can push the same framerates.
http://developer.download.nvidia.com/assets/events/GDC15/GEFORCE/SLI_GDC15.pdf said:
Input latency does not reduce with increased performance

My own SLI tests confirm that. Latency does not really improve with SLI AFR, if anything it becomes slightly worse (games often have worse than 100% scaling).

Solution: SFR (Single Frame Rendering)
Games with SFR profiles (or when it is forced in the driver) tend to have worse scaling than AFR, but input latency is reduced in proportion to the increased framerate, just like on a single GPU.
In my own tests I remember switching from AFR to SFR and the game ran with less input lag, even less than on a single card.
Imho SLI SFR is much better and NV should concentrate their efforts on this mode
... for obvious reasons they did not. People decide to buy a second (or third/fourth) card because they saw charts with framerates...

SLI worked great on 3dfx, when it was always SFR and worked in pretty much all applications without any profiles =)

AMD also uses AFR in CFX. Not sure if in all games though.

Multi-GPU stuff requires additional effort from game developers to make it work, and even more to make it work well. Some types of effects are not even very compatible with it (when resources are generated by the GPU on a per-frame basis and used in the next frame), limiting what can be done. And in the end there are not that many people with SLI setups anyway... and those people seem to be unaware that adding more GPUs to their system is not the same as getting a faster GPU.

There are of course even more issues with SLI/CFX, like frame pacing. All in all it is imho not worth it. If SLI/CFX worked like it did on 3dfx then hell yes, but it does not.
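To make the AFR vs SFR latency point above concrete, here is a toy calculation (illustrative numbers, not measurements):

```python
# Toy latency model for AFR vs SFR with two GPUs (illustrative numbers only).
single_gpu_frame_ms = 20.0     # one GPU needs 20 ms to render a frame (50 fps)

# AFR: frames alternate between the GPUs. Throughput roughly doubles, but each
# individual frame still takes ~20 ms from start to finish, so the input-to-display
# latency of any given frame does not shrink (queuing can even add to it).
afr_fps = 2 * (1000.0 / single_gpu_frame_ms)       # ~100 fps shown
afr_latency_ms = single_gpu_frame_ms               # still ~20 ms per frame

# SFR: both GPUs work on the *same* frame (assume ~70% scaling), so the frame
# itself finishes sooner and latency drops along with the frame time.
sfr_latency_ms = single_gpu_frame_ms / 1.7         # ~11.8 ms per frame
sfr_fps = 1000.0 / sfr_latency_ms                  # ~85 fps shown

print(f"AFR: {afr_fps:.0f} fps, ~{afr_latency_ms:.0f} ms frame latency")
print(f"SFR: {sfr_fps:.0f} fps, ~{sfr_latency_ms:.1f} ms frame latency")
```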
 
If you truly care about input lag, then you already had v-sync forced off long ago and that control has been collecting dust for decades.
G-SYNC adds more latency. It can't not. It doesn't make the screen draw a new frame faster. V-sync off + max refresh rate gives your eyes the latest info the soonest; even if it's only half the screen, that's still new info, and still coming at you faster than having to wait for the rest of the frame to render.
How this isn't inherently obvious to most here baffles me, it really does. :confused:
Having the lowest input lag is not the same as having a good game experience.
Also, to have better input lag than G-Sync/FreeSync you need a frame rate much higher than the monitor refresh rate, which is often not the case.
 
Having the lowest input lag is not the same as having a good game experience.
Also, to have better input lag than G-Sync/FreeSync you need a frame rate much higher than the monitor refresh rate, which is often not the case.

This thread is about input lag though. :cool:

Nothing beats v-sync off with high refresh rates and frame rates. I found that at the point where G-Sync feels like it's no longer getting in the way, the frame/refresh rate is high enough that tearing is virtually imperceptible, so it's moot.

It's only really good for slower paced 'medium frame rate' games where you're necessarily looking at the scenery. But as soon as you start saying 'input lag' you just need to forget about G-sync/Free-sync from my experience.
 
This thread is about input lag though. :cool:
Nothing beats v-sync off with high refresh rates and frame rates. I found that at the point where G-Sync feels like it's no longer getting in the way, the frame/refresh rate is high enough that tearing is virtually imperceptible, so it's moot.
It's only really good for slower paced 'medium frame rate' games where you're necessarily looking at the scenery. But as soon as you start saying 'input lag' you just need to forget about G-sync/Free-sync from my experience.
Have you not seen input lag measurements linked in this thread?
At least in CS:GO it is G-Sync that has the best latency on Nvidia hardware: ~19 ms with G-Sync limited at 139 fps vs ~25 ms with v-sync off and no framerate limiter.

Even ignoring this issue, AMD gets ~17 ms with uncapped frame rates and v-sync off. Is a 2 ms difference, in your opinion, worth having tearing and stuttering?
Gaming with v-sync off when you can easily get a VRR monitor is simply stupid.
 
G-SYNC 101: Input Lag & Test Methodology

https://www.blurbusters.com/gsync/gsync101-input-lag-tests-and-settings/3/

My opinion (which is meaningless): if this is an AMD vs nVidia comparison, then let's compare best technology vs best technology; after a true baseline, one can from there mix it up with like-model cards from each...

No matter, this is a pretty good write up which gives a good idea as to input lag with gsync (as well as comparisons to other methods)


The chart above (in URL) depicts anywhere from 3 to 3 1/2 frames of added delay. At 60Hz, this is significant, at up to 58.1ms of additional input lag. At 240Hz, where a single frame is worth far less (4.2ms), a 3 1/2 frame delay is comparatively insignificant, at up to 14.7ms.

In other words, a “frame” of delay is relative to the refresh rate, and dictates how much or how little of a delay is incurred per, a constant which should be kept in mind going forward.


Input Lag: Not All Frames Are Created Equal
When it is said that there is “1 frame” or “2 frames” of delay, what does that actually mean? In this context, a “frame” signifies the total time a rendered frame takes to be displayed completely on-screen. The worth of a single frame is dependent on the display’s maximum native refresh rate. At 60Hz, a frame is worth 16.6ms, at 100Hz: 10ms, 120Hz: 8.3ms, 144Hz: 6.9ms, 200Hz: 5ms, and 240Hz: 4.2ms, continuing to decrease in worth as the refresh rate increases.

With double buffer V-SYNC, there is typically a 2 frame delay when the framerate exceeds the refresh rate, but this isn’t always the case. Overwatch, even with “Reduced Buffering” enabled, can have up to 4 frames of delay with double buffer V-SYNC engaged.
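Working those "frames of delay" numbers out explicitly (a short Python sketch that approximately reproduces the figures quoted above):

```python
# "Frames of delay" scale with the refresh rate: the same 3.5-frame delay costs
# far more wall-clock time at 60 Hz than at 240 Hz.
frames_of_delay = 3.5
for hz in (60, 144, 240):
    frame_ms = 1000.0 / hz
    print(f"{hz:>3} Hz: 1 frame = {frame_ms:4.1f} ms, "
          f"{frames_of_delay} frames = {frames_of_delay * frame_ms:4.1f} ms")
# 60 Hz: ~58.3 ms, 144 Hz: ~24.3 ms, 240 Hz: ~14.6 ms of added delay
```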
 
every year someone says SLI is dead.

i guess my PC is a fucking zombie because it still works.



this

Out of the games I played last year, maybe 1/8th of them would have run that much better on two 2080tis in SLI. I cannot justify spending another 1000 so that like 2 games would have run better.

So yes, in my book SLI is dead.
 
It's only really good for slower paced 'medium frame rate' games where you're necessarily looking at the scenery. But as soon as you start saying 'input lag' you just need to forget about G-sync/Free-sync from my experience.

I want to start by saying that you're not wrong, but that your perspective may be skewed more by what you play, and that your examples, which make sense for concise explanation, do lack some of the breadth that this argument needs.

What I'm getting at are the 'in between' situations. Situations where framerates can be up into the 150FPS+ range, but can also be in the sub-70FPS range. Sometimes in the same game, but more often for a typical gamer, across a breadth of games. So, as opposed to saying 'it's [VRR] only good for slower paced 'medium frame rate' games', it might be better to frame the argument as 'VRR is less useful at very high framerates where it can increase latency by a fraction while tearing is already largely imperceptible'.

The main reason to point this out is that in games like CS:GO or Overwatch (etc.), perhaps some users would benefit from disabling VRR for those games specifically, but for nearly every other scenario, you want VRR.
 
Out of the games I played last year, maybe 1/8th of them would have run that much better on two 2080tis in SLI. I cannot justify spending another 1000 so that like 2 games would have run better.

So yes, in my book SLI is dead.

If I keep mentioning SLI / CFX, it's not because I feel differently- my last two setups were SLI, and the one before that CFX!- but because I feel that the qualifier 'right now' is needed. While part of the current dearth of SLI and CFX support can be traced both to the lack of Split-Frame Rendering (SFR) support and the challenge that DX12 and Vulkan represent, it's also important to note that GPU vendors have continued to work to get support into game engines and that we very much can determine a need for multi-GPU performance in the coming years.

So yeah, dead right now, but could come back at any time. I imagine that we're just one killer app away from SLI and CFX being back in demand, and with high resolution, high refresh rate display devices becoming more accessible alongside ray tracing becoming feasible, it seems that such an app could be right around the corner.
 
how can nvidia even compete at this point? market share completely from fanboy fumes?

Don't confuse butthurt whiny fanboys on a forum with a true representation of the total market.
If anything, NVIDIA's launch of their Turing series has shown that forums hold no value as a data point... except for how butthurt some people are... go figure ;)
 
If you truly care about input lag, then you already had v-sync forced off long ago and that control has been collecting dusts for decades.

G-SYNC adds more latency. It can't not. It doesn't make the screen draw a new frame faster. V-sync off + max refreshrate gives your eyes the latest info the soonest, even if it's only half the screen, that's still new info and still coming at you faster than having to wait for the rest of the frame to render.

How this isn't inherently obvious to most here baffles me, it really does. :confused:

That's why the far distant future solution is just to run at like 1000hz which is so fast you couldn't see tearing or motion blur.

In terms of AMD being better, AMD has SUCH an opportunity here that they're wasting.

Everyone hates Nvidia right now. They want HDMI 2.1. Nvidia isn't providing it.

Hi AMD. Where are you? TAKE ADVANTAGE OF THIS NOW.
 
That's why the far distant future solution is just to run at like 1000hz which is so fast you couldn't see tearing or motion blur.

In terms of AMD being better, AMD has SUCH an opportunity here that they're wasting.

Everyone hates Nvidia right now. They want HDMI 2.1. Nvidia isn't providing it.

Hi AMD. Where are you? TAKE ADVANTAGE OF THIS NOW.

Well, generally, the problem is they can offer HDMI 2.1 but really it doesn't mean much because you won't have the GPU horsepower from the AMD side to hit the FPS needed to run any semi-modern titles at a decent frame rate at 4k.
 
If I keep mentioning SLI / CFX, it's not because I feel differently- my last two setups were SLI, and the one before that CFX!- but because I feel that the qualifier 'right now' is needed. While part of the current dearth of SLI and CFX support can be traced both to the lack of Split-Frame Rendering (SFR) support and the challenge that DX12 and Vulkan represent, it's also important to note that GPU vendors have continued to work to get support into game engines and that we very much can determine a need for multi-GPU performance in the coming years.

So yeah, dead right now, but could come back at any time. I imagine that we're just one killer app away from SLI and CFX being back in demand, and with high resolution, high refresh rate display devices becoming more accessible alongside ray tracing becoming feasible, it seems that such an app could be right around the corner.
I sure see a need, and maybe a great selling point in the future, for two-or-more-GPU solutions - ray tracing could really use the extra processing ability. If Nvidia or AMD can triple current-generation (Nvidia) ray tracing ability and then combine that with two cards, that would be one impressive jump for gaming visuals. Maybe not totally dead, but on the back burner. Now, I would think AMD would push for multiple-card solutions for no other reason than to sell more GPUs. If you have fewer customers, then sell more to them is all.
 
I sure see a need, and maybe a great selling point in the future, for two-or-more-GPU solutions - ray tracing could really use the extra processing ability. If Nvidia or AMD can triple current-generation (Nvidia) ray tracing ability and then combine that with two cards, that would be one impressive jump for gaming visuals. Maybe not totally dead, but on the back burner. Now, I would think AMD would push for multiple-card solutions for no other reason than to sell more GPUs. If you have fewer customers, then sell more to them is all.
With ray tracing, multi-GPU rendering could be done in the same fashion as it was done on 3dfx hardware, so line by line or even pixel by pixel, and scaling would still be very good.
The main issue here is the rasterization part, which is harder to translate to the multi-GPU world.

And if games were to be fully path traced without rasterization, it would make more sense to just make RT-focused hardware. Current RTX cards are rasterization-focused with a little RT hardware slapped on, and a first generation of it at that.

Take the tessellation performance of the first Nvidia card to implement it versus later iterations. Hardware refinement alone brought a lot of performance improvements. There is no reason not to believe the same will happen with ray tracing hardware, even if they do not increase the die area taken by it by a lot.

BTW, doing multi-GPU with AFR (the only viable solution for hybrid rendering) would increase input lag. This is not a solution we want. The only place I see multi-GPU being used is VR: one GPU per eye is the most obvious and easiest-to-implement solution, and one without increased latency.
 
- The input lag is negligible to the vast majority of users.
- (traditional) Gsync displays have less input lag than freesync
- When freesync is enabled nVidia and AMD have similar input lag
- the only case in this video where input lag was a decent % higher (though still negligible) was where freesync was not enabled
- AMD doesn’t have a card even close to the 2080ti
- if you really cared about input lag you’d go with a traditional gsync display anyways.

If you’re running a fixed refresh rate monitor I don’t think input lag is super high on your priority list. Especially since the average human reaction time is 200-250 ms. The differences in the video are pretty negligible to the typical user.

So the input lag difference is negligible, unless it's the same few percent with a 9900k vs whatever else. Understood.
 
With ray tracing, multi-GPU rendering could be done in the same fashion as it was done on 3dfx hardware, so line by line or even pixel by pixel, and scaling would still be very good.
The main issue here is the rasterization part, which is harder to translate to the multi-GPU world.

And if games were to be fully path traced without rasterization, it would make more sense to just make RT-focused hardware. Current RTX cards are rasterization-focused with a little RT hardware slapped on, and a first generation of it at that.

Take the tessellation performance of the first Nvidia card to implement it versus later iterations. Hardware refinement alone brought a lot of performance improvements. There is no reason not to believe the same will happen with ray tracing hardware, even if they do not increase the die area taken by it by a lot.

BTW, doing multi-GPU with AFR (the only viable solution for hybrid rendering) would increase input lag. This is not a solution we want. The only place I see multi-GPU being used is VR: one GPU per eye is the most obvious and easiest-to-implement solution, and one without increased latency.

NVIDIA had their tessellation located in their PolyMorph Engine 1.0/2.0/3.0 (fixed-function geometry pipeline), which itself sits inside their GPC (Graphics Processing Cluster) blocks, so add a GPC, add a tessellation unit... the more GPCs, the more tessellation power you get.
(Unlike AMD, who went with a dedicated tessellator block and ran into "too much tessellation"... aka their hardware was not up to par.)
(GF100 block diagram)


NVIDIA has done the same thing with their RT cores (they are part of the GPC).


So they keep following the concept of coupling things to a GPC... and then scaling up performance by adding GPCs.

For an RTX 2080 Ti that gives you:
6 GPCs
72 SMs
4608 CUDA cores
576 Tensor cores
72 RT cores
288 texture units
36 PolyMorph engines

Compared to Fermi:
4 GPCs
16 SMs
512 CUDA cores
64 texture units
16 PolyMorph engines

This gives them a good platform for adapting performance to where games need it when designing a SKU; they have re-shuffled units as part of the evolution of their GPUs.
 
What? How did a 9900k enter the convo?
Because most people say it doesn't matter in this case (dGPU), but suddenly it does in the few cases where the CPU has the lead at 520 vs 500 fps, lol. Related subjects and hypocrisy.
 
Because most people say it doesn't matter in this case (dGPU), but suddenly it does in the few cases where the CPU has the lead at 520 vs 500 fps, lol. Related subjects and hypocrisy.

I am not one of those people, but I figured you were going in that direction. I run a 2700X since at <~100-110 Hz it’s basically parity with Intel. I am more sensitive to changing frame times than input lag though. That's one of the reasons I stayed away from first-gen Ryzen, but on second gen it is negligible, for me.

Your brain naturally compensates for input lag, and I question whether anyone could notice ~10-20 ms in a blind test. Whether I have vsync on or off, I can’t notice the difference.

Variance in frame times messes with my motion prediction so I find that more annoying.
 