Nvidia Driver settings

Still OpenGL only, just like AA gamma correction.

And I don't use nvcp to edit driver settings; I use nvidia inspector. 1000x better.
 
The main things I do are set 16x anisotropic filtering, use high-quality texture filtering, clamp the LOD bias, and turn all the filtering optimisations off. Anything less than that tends to be immediately apparent in most games. The only reason I would change those settings is if they cause problems. (which is rare)

I tend to leave antialiasing settings up to the game unless it doesn't have an option built in. Forcing AA in the NVIDIA Control Panel can often result in lower performance or worse image quality. I will often bump up the transparency antialiasing to 4x supersampling or higher depending on the game, though. (if you are using NVIDIA Inspector to set that, make sure you use SSAA and not SGSSAA) I find that multisampling or 2x isn't enough, and 8x can have quite a performance hit without much of a visual benefit over 4x.

NVIDIA Inspector has the ability to unlock other anti-aliasing options such as 8, 16 and 32x CSAA, Supersampling and combined MSAA+Supersampling options. Unfortunately, SSAA seems to be very blurry on nVidia cards right now. Even if you allow for a negative LOD bias, it doesn't actually seem to have any effect when SSAA is enabled, even if you set it to -3. (but it's very obvious if you then switch to MSAA and forget to change it back to 0 and Clamped)

Definitely leave SSAO off. Most games don't support it anyway, and those that do tend to have their own implementation built in. I've yet to find a game that actually looks better with SSAO enabled, and it can often incur a massive performance hit. Burnout Paradise, for example, goes from 25% to 85% load on my GTX 570, and it looks much worse with it on.


I've been leaving maximum pre-rendered frames at 3 so far as I believe that's required for triple-buffering, and raising it will probably increase memory usage quite a bit. If you absolutely need to minimise input lag, you will probably want to set this to 0 and force v-sync off. (make sure you do that on a per-application basis and not globally though)

As for triple-buffering, I force v-sync and triple-buffering (v-sync is required for it to work correctly) with D3DOverrider, which is installed with RivaTuner. (note: this causes problems in some games, but it can be disabled on a per-application basis) Without triple-buffering, even if the game is running at 60fps (which everything has done on my system so far), it isn't smooth; triple-buffering fixes that. Note: you can't use triple-buffering with SLI setups; it only works on a single GPU. (but if you're running SLI, you probably don't care about things running smoothly, you just want high framerate numbers)

The only time I would disable v-sync and triple-buffering would be if I just couldn't keep a game running at 60fps, and it would run at 45-50fps without them. (because v-sync will drop you down to 30fps if it can't hold 60) I would much rather turn down some settings and play at a triple-buffered 60fps than play at variable framerates with v-sync off and screen tearing, though.
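To put a rough number on that 60-to-30 drop, here's a toy calculation (a sketch that assumes plain double-buffered v-sync on a 60Hz display, not anything the driver literally runs): a finished frame has to wait for the next vblank, so the displayed frame time rounds up to a whole multiple of the refresh interval.

```cpp
// Toy illustration only: double-buffered v-sync on a 60Hz display quantises
// frame delivery to multiples of the refresh interval, so a ~50fps render
// rate ends up displayed at 30fps.
#include <cmath>
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;  // 60Hz refresh interval (~16.7ms)
    const double render_ms  = 20.0;           // hypothetical 50fps render time

    // The buffer swap waits for the next vblank, so frame time snaps upwards.
    const double displayed_ms = std::ceil(render_ms / refresh_ms) * refresh_ms;

    std::printf("rendered at %.0f fps, displayed at %.0f fps\n",
                1000.0 / render_ms, 1000.0 / displayed_ms);  // 50 fps -> 30 fps
    return 0;
}
```

With a third buffer the GPU can start on the next frame instead of idling until the vblank, which is why a triple-buffered 45-50fps tends to feel smoother than that hard 30fps lock.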
 
Personally, I force 16x AF with high-quality filtering. I set transparency AA to multisample and use the game's AA, or force it in the game profile (e.g. Hot Pursuit 2010).
 
NVIDIA Inspector has the ability to unlock other anti-aliasing options such as 8, 16 and 32x CSAA, Supersampling and combined MSAA+Supersampling options. Unfortunately, SSAA seems to be very blurry on nVidia cards right now. Even if you allow for a negative LOD bias, it doesn't actually seem to have any effect when SSAA is enabled, even if you set it to -3. (but it's very obvious if you then switch to MSAA and forget to change it back to 0 and Clamped)

Read my posts in this thread: http://hardforum.com/showthread.php?t=1591352
I discussed the real reason why that occurs.

I've been leaving maximum pre-rendered frames at 3 so far as I believe that's required for triple-buffering,

No it isn't. Triple buffering in nvcp is for openGL only. Max prerendered frames sets the maximum number of buffers in the d3d swap chain. According to nvidia, increasing this improves framerate stability in exchange for increased input lag. I have found that it does usually increase average framerate slightly, as well as framerate stability and input lag, as you raise it. However, I leave it at 0 (0 totally disables the swap chain) because I have found that in many games raising it any higher sometimes causes severe microstuttering.

and raising it will probably increase memory usage quite a bit.

No it doesn't. Swap chain buffers don't use very much video memory. At 1920 x 1080 in RGB888 format you're looking at about 6.3MB per buffer.
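For reference, the arithmetic behind that figure (a quick sketch: the 24-bit RGB888 case matches the claim above, and the 32-bit case is included since buffers are often stored as XRGB8888 in practice):

```cpp
// Back-of-the-envelope per-buffer size at 1920 x 1080.
#include <cstdio>

int main() {
    const long long pixels = 1920LL * 1080LL;
    std::printf("RGB888:   %.1f MB per buffer\n", pixels * 3 / 1e6);  // ~6.2 MB
    std::printf("XRGB8888: %.1f MB per buffer\n", pixels * 4 / 1e6);  // ~8.3 MB
    return 0;
}
```

Either way it's only a handful of megabytes per buffer, which is negligible on a card with over a gigabyte of VRAM.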
 
Read my posts in this thread: http://hardforum.com/showthread.php?t=1591352
I discussed the real reason why that occurs.
Are you saying that the compatibility bits should be changed away from the Nvidia defaults? I have yet to find a game where SSAA worked without blurring the image.

For example, Portal really blurs the image with SSAA, when it is set to the Source engine AA Compatibility mode. (0x00000018)

Can you provide examples/details, rather than saying "you're doing it wrong"?

EDIT: From searching around, it seems that you have to create new AA compatibility modes in Nvidia Inspector. There's a good list of working values here:
http://www.forum-3dcenter.org/vbulletin/showthread.php?t=490867

With a custom AA compatibility value set, Portal with 2xSSAA definitely looks as sharp as 16xCSAA, but with better antialiasing overall. There are actually some areas where CSAA seemed to look better, but overall SSAA is best. (and 3xSSAA is unplayable)
No it isn't. Triple buffering in nvcp is for openGL only. Max prerendered frames sets the maximum number of buffers in the d3d swap chain.
Yes, but I am doing this in conjunction with forcing triple-buffering through D3DOverrider. In that case, should you not be leaving the setting at 3, or should you use 0 and triple-buffering will sort things out itself?

No it doesn't. Swap chain buffers don't use very much video memory. At 1920 x 1080 in RGB888 format you're looking at about 6.3MB per buffer.
Fair enough. Still doesn't seem worth going over 3 if it increases input lag though.
 
Yes, but I am doing this in conjunction with forcing triple-buffering through D3DOverrider. In that case, should you not be leaving the setting at 3, or should you use 0 and triple-buffering will sort things out itself?

I should make it perfectly clear that my opinion on d3d swap chains is as follows: it is a totally pointless and utterly stupid idea that does way more harm than good

However I should also point out that that is just my own personal opinion based on experience playing with the option, and not based on technical understanding.

So I would leave it at 0 (which disables the swap chain completely) regardless of whether you want to force triple buffering or not.

With a custom AA compatibility value set, Portal with 2xSSAA definitely looks as sharp as 16xCSAA, but with better antialiasing overall. There are actually some areas where CSAA seemed to look better, but overall SSAA is best. (and 3xSSAA is unplayable)

Please be more specific. Nvidia inspector supports several different methods of SSAA. TRSSAA (transparency supersampling), SGSSAA (sparse grid supersampling, must be combined with MSAA to work), OGSSAA (ordered grid supersampling, the XxX modes), and HSAA (hybridsampling, the xS modes).

Also, 3xSSAA does not exist in nvidia inspector. Did you mean 3x3?

[begin off-topic statements]
After you learn to understand/play around with nvidia inspector you'll find that in many cases properly filtered older games can look better than newer games simply due to texture clarity/lack of artifacting (I count aliasing as a type of artifacting). However I warn you, once you start to notice the shader aliasing present in most modern games that SSAA can get rid of you won't be able to stand playing games without it. Quake III with 32xS HSAA, a negative lod bias, 16xAF, and forced v-sync at 1920 x 1200 looks amazing. I'm still waiting for the day I can use SSAA with crysis (2-3 more generations and a single high end card should be able to do it). It really does look like something carved by gods.
[end off-topic statements]

Fair enough. Still doesn't seem worth going over 3 if it increases input lag though.

Never really cared about input lag, to be honest. I mainly left it at 0 because I figured out that raising it was causing minor micro-stuttering in my games.
 
I should make it perfectly clear that my opinion on d3d swap chains is as follows: it is a totally pointless and utterly stupid idea that does way more harm than good

However I should also point out that that is just my own personal opinion based on experience playing with the option, and not based on technical understanding.

So I would leave it at 0 (which disables the swap chain completely) regardless of whether you want to force triple buffering or not.
From a limited amount of testing, it looks like setting this to 0 does not interfere with triple-buffering, so I'll keep it at 0. The only reason I was using 3 was because I thought it would be required for triple-buffering.

Please be more specific. Nvidia inspector supports several different methods of SSAA. TRSSAA (transparency supersampling), SGSSAA (sparse grid supersampling, must be combined with MSAA to work), OGSSAA (ordered grid supersampling, the XxX modes), and HSAA (hybridsampling, the xS modes).
I was using 2x2 SSAA in the antialiasing section with 0x004010C1 for Portal. This fixes SSAA blurring the image, but the transparency options either had no effect, or blurred the image again depending on what was selected.

With my previous antialiasing settings, transparencies were smoother, but texturing and object edges were more aliased. 2x2 SSAA looks much better overall.

Also, 3xSSAA does not exist in nvidia inspector. Did you mean 3x3?
Sorry, I meant 3x3 SSAA.

[begin off-topic statements]
After you learn to understand/play around with nvidia inspector you'll find that in many cases properly filtered older games can look better than newer games simply due to texture clarity/lack of artifacting (I count aliasing as a type of artifacting). However I warn you, once you start to notice the shader aliasing present in most modern games that SSAA can get rid of you won't be able to stand playing games without it. Quake III with 32xS HSAA, a negative lod bias, 16xAF, and forced v-sync at 1920 x 1200 looks amazing. I'm still waiting for the day I can use SSAA with crysis (2-3 more generations and a single high end card should be able to do it). It really does look like something carved by gods.
[end off-topic statements]
Believe me, I do notice it. I just figured that it was futile trying to get rid of it, as the SSAA options just seemed to blur the picture. However, that does not appear to be the case: you just need the correct compatibility bits for SSAA to work.

Unfortunately, it seems my system (i5-2500K, GTX570SC) can only handle 2x2 SSAA in games, if that. If it's running at anything less than 60fps, it isn't worth it. From the GPU load and framerates I'm seeing, I'm not sure that buying a GTX580 would actually help make anything playable. It's only about 15% quicker than a stock 570, right? (and mine is overclocked) SLI is not an option as I don't want stuttering.

Getting the AA compatibility bits to work correctly seems tricky as well. I'm trying to get Burnout Paradise working, and it either crashes when loading, or GPU load spikes to 99% and the game is unplayable, even when selecting MSAA options. (using the in-game 8xMSAA option it runs at 60fps with around 25% load on the GPU)
 
Unfortunately, it seems my system (i5-2500K, GTX570SC) can only handle 2x2 SSAA in games, if that. If it's running at anything less than 60fps, it isn't worth it. From the GPU load and framerates I'm seeing, I'm not sure that buying a GTX580 would actually help make anything playable. It's only about 15% quicker than a stock 570, right? (and mine is overclocked) SLI is not an option as I don't want stuttering.

Try using 16xS. I've always found the hybrid modes to be the best choice. It's 4x RGMSAA + 2x2 OGSSAA + an automatic negative LOD bias. Also make sure negative LOD bias is set to Allowed, not Clamped (it's a setting in nvidia inspector). The performance hit should only be moderately higher than 2x2 OGSSAA, and I think you'll find the image quality far superior. Don't kill your bank account trying to upgrade your rig to use SSAA; time and progress (as in faster hardware over time) solve these problems in a far better way.

Getting the AA compatibility bits to work correctly seems tricky as well. I'm trying to get Burnout Paradise working, and it either crashes when loading, or GPU load spikes to 99% and the game is unplayable, even when selecting MSAA options. (using the in-game 8xMSAA option it runs at 60fps with around 25% load on the GPU)

Yes. This is because, unlike the developers, we have no idea what's going on internally with the graphics engines. Really makes you wish developers bothered to add it to the game in the first place, doesn't it? I can understand why they don't, though. They figure so few people know or care about SSAA, and the performance hit is so high, that it isn't worth spending the time to write custom shaders without a good framework.

Is this guide any good? http://www.tweakguides.com/NVFORCE_6.html

That's what I follow (blindly); I just don't have the time to try every setting for every game.

Yes that's a good guide for nvcp.
 
I always force triple buffering and v-sync, and I also lower the brightness. My monitor's on the cheaper end and emits a high-pitched noise if I lower the brightness using its own settings.

Somewhat related: has anyone ever found a fix for color settings not applying at startup? It seems like it happens in every beta driver release, and is fixed in the WHQL.
 
Somewhat related: has anyone ever found a fix for color settings not applying at startup? It seems like it happens in every beta driver release, and is fixed in the WHQL.

Never had that problem before.
 
Hmm, that's odd. I get it in literally every beta driver. The settings are saved properly; they just don't apply themselves at startup. I have to go into the NVIDIA Control Panel, move the slider (even though it's already where I want it), move it back, and apply.

Edit: My color settings also revert after exiting certain games, but not others. This has always happened so I've learned to deal.
 
So I would leave it at 0 (which disables the swap chain completely)

Max prerendered frames sets the maximum number of buffers in the d3d swap chain.

There is a lot of confusion surrounding this setting, so I just joined [H]F to help clear it up (hopefully).

From what I understand, max pre-rendered frames relates more to the CPU than it does to double or triple buffering. I've done a few experiments with pre-render set to 1 (which is generally what I prefer) and since triple buffering still works, it seems to disprove your theory. After all, if you took away the front & back buffers (which is what the swap chain provides), you're in for some serious performance problems.

See the following quote from the developer of RivaTuner and D3DOverrider.

The article mixes up and confuses two completely different (and absolutely independent!!!) things: swap chain buffer flipping (which allows double or triple buffering and so on) and the Direct3D driver's internal frame-precaching technology, called render ahead (in NVIDIA driver terms) or flip queue (in AMD driver terms). Those things coexist in the rendering pipeline and do not affect each other. And there is no 'true' and 'false' triple buffering; the TB term always and everywhere means nothing but swap chain length. D3DOverrider or any other tool allowing you to force TB (e.g. DXTweaker, ATT or even an in-game TB option) never changes anything but the swap chain length during swap chain creation. You cannot change the way swap chain buffers are being processed, and changing it makes no sense. Render ahead or flip queue technologies do not work at the framebuffer level at all. Frames are internally precached by the driver at the command buffer level, so the precached queue length is independent of the swap chain length (in other words, it doesn't depend on double/triple buffering usage). The main idea behind render ahead / flip queue is to allow the driver to avoid stalling the CPU and to precache the set of rendering commands required to draw the current frame while the GPU is still busy rendering the previous frame. The main idea behind TB is minimizing tearing (with VSync disabled) or minimizing the performance drop (with VSync enabled when the framerate is lower than the refresh rate). And once again, those two things coexist and work independently of each other.
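To make the distinction in that quote concrete, here's a minimal D3D11/DXGI sketch (the function name and values are illustrative, error handling omitted): the swap chain length is just the BufferCount passed at swap-chain creation, while the render-ahead / flip-queue limit that "maximum pre-rendered frames" roughly corresponds to has its own separate knob on the application side, IDXGIDevice1::SetMaximumFrameLatency.

```cpp
// A minimal sketch showing that the two things the quote separates live in
// different places and are configured independently of each other.
#include <d3d11.h>
#include <dxgi.h>

void ConfigureBuffering(ID3D11Device* device, DXGI_SWAP_CHAIN_DESC& desc)
{
    // Swap chain length: what forcing "triple buffering" changes.
    // 1 back buffer = double buffering, 2 back buffers = triple buffering.
    desc.BufferCount = 2;

    // Render-ahead / flip queue: how many frames the CPU may queue ahead of
    // the GPU. This is what "maximum pre-rendered frames" limits, and it is
    // independent of the swap chain length above.
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice)))) {
        dxgiDevice->SetMaximumFrameLatency(1);  // lower = less input lag
        dxgiDevice->Release();
    }
}
```

Changing one has no effect on the other, which is exactly the point being made above.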

Some drivers save color, others don't.

NVIDIA is working on a fix for this. We were told that the reason the settings don't get applied at startup is that Windows itself is overriding them. If you've ever noticed a slight delay before colors get applied at startup/logon (in drivers where color settings actually work), this is probably why.
 
I've done a few experiments with pre-render set to 1 (which is generally what I prefer) and since triple buffering still works, it seems to disprove your theory.

Why would it not work? And what theory are you talking about?

After all, if you took away the front & back buffers (which is what the swap chain provides), you're in for some serious performance problems.

I thought that the swap chain was totally unrelated to double/triple buffering. I thought it was used as a precache. Perhaps I was mistaken; I remember reading documentation for some d3d11 functions that control swap chain length which seemed to suggest they did not interfere with the framebuffer(s). I appreciate the quote, by the way; thank you for providing a source. It stops me from having to waste time on Google digging through information.
 
Why would it not work? And what theory are you talking about?

With your theory that says "max prerendered frames sets the maximum number of buffers in the d3d swap chain", triple buffering wouldn't work at anything less than 3.

In Direct3D the backbuffer count is set during swap chain creation. 1 is for double buffering, 2 is for triple buffering, and so on (the front buffer is the 2nd or 3rd buffer, respectively).
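For anyone following along, here's roughly what that looks like at the API level: a minimal Direct3D 9 sketch with illustrative values (this is the kind of creation-time parameter that forcing tools such as D3DOverrider adjust, per the developer quote earlier).

```cpp
// A minimal sketch of setting the backbuffer count at swap-chain creation time.
#include <windows.h>
#include <d3d9.h>

void FillPresentParameters(D3DPRESENT_PARAMETERS& pp, HWND hwnd)
{
    ZeroMemory(&pp, sizeof(pp));
    pp.Windowed             = FALSE;
    pp.hDeviceWindow        = hwnd;
    pp.BackBufferWidth      = 1920;
    pp.BackBufferHeight     = 1080;
    pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferCount      = 2;                        // 1 = double, 2 = triple buffering
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // v-sync on
    // pp is then passed to IDirect3D9::CreateDevice, which creates the
    // implicit swap chain with that many back buffers.
}
```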
 