Is there any way at all to bring back true FSAA?

Nazo

2[H]4U
Joined
Apr 2, 2002
Messages
3,672
So one truly fun feature I've noticed in all modern games is that even if you set the drivers themselves to force FSAA, the setting gets ignored. This seems to go hand in hand with everyone trying to push us toward cheap shader-based AA like FXAA (which I'm guessing is much more console friendly, since FXAA needs very little by way of resources compared even to the far superior SMAA). Given that FSAA works at the 3D level, though, I find it hard to believe that FSAA is truly "broken" or would actually cause problems. (In fact, I saw one guide suggesting renaming a game's executable to something like "Bioshock.exe" to try to trick the drivers, but that was for nVidia. And even if that method worked back then, it doesn't now; I tried and apparently couldn't fool it.)

I've been wondering if games essentially send some sort of flag to the drivers that tells them to turn off FSAA even when it's set to forced. To that end, I wonder: might there be some way to block this? I'm getting very fed up with FXAA, and even SMAA has its limits in what I can force it into using the likes of SweetFX (not to mention that SweetFX isn't even compatible with DX10+. I found GemFX, which seems to work in DX10+ and apparently even works in Windows 8+, but SMAA is permanently grayed out no matter what I do. I still really want true anti-aliasing anyway.)

Has anyone ever managed to do this?
 
Don't both camps have DSR/VSR, which is essentially FSAA (full-scene supersampling)? It's enabled at the driver level and doesn't require hacks...
 
I've been wondering if games essentially send some sort of flag to the drivers that tells them to turn off FSAA even when it's set to forced. To that end, I wonder: might there be some way to block this?

(this will be long sorry)

The problem is deferred rendering. There is no flag sent to the driver to turn off FSAA; it's the nature of deferred rendering itself.

Basically think of it like this (back story):

When you connect the monitor to the PC, video needs to be presented to the monitor. So if you have a 1920x1080 monitor, you need a 1920x1080 front buffer allocated. The front buffer is what is displayed to the user.

When people program video games, they need a back buffer too, so you don't see the screen drawing itself onto the front buffer the way you did in games from the Commodore 64 era (watch a video of Elite on the C64). Today, everything is drawn to the back buffer, and when rendering is complete, the back buffer is copied over to the front buffer.
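
As a minimal sketch of that double-buffering idea (C++ with GLFW/OpenGL; the window size and the empty draw loop are just placeholders, not from any particular game):

// Double-buffering sketch: everything is drawn into the back buffer,
// then swapped so it becomes the front buffer the monitor scans out.
#include <GLFW/glfw3.h>

int main() {
    glfwInit();
    GLFWwindow* win = glfwCreateWindow(1920, 1080, "demo", nullptr, nullptr);
    glfwMakeContextCurrent(win);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);   // draw the frame into the back buffer
        // ... scene rendering would go here ...
        glfwSwapBuffers(win);           // back buffer is presented as the front buffer
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}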

With "real" FSAA, this is what happens:

The back buffer is enlarged, typically 2x per axis (so 3840x2160 for a 1080p monitor), FSAA occurs, and the image is shrunk back down to 1920x1080 so it can be copied over to the front buffer, which matches the monitor resolution. That's why FSAA is an expensive operation.
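
Roughly what that looks like in code (an OpenGL sketch, assuming a GL 3.3+ context and extension loader are already set up; the 2x-per-axis factor and the linear downscale filter are assumptions):

// Supersampling sketch: render the scene into an oversized offscreen target,
// then filter it back down to the 1920x1080 back buffer.
GLuint ssFbo, ssColor;
glGenTextures(1, &ssColor);
glBindTexture(GL_TEXTURE_2D, ssColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 3840, 2160, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glGenFramebuffers(1, &ssFbo);
glBindFramebuffer(GL_FRAMEBUFFER, ssFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ssColor, 0);
glViewport(0, 0, 3840, 2160);
// ... render the entire scene here at 3840x2160 ...

// Downscale: each output pixel averages nearby rendered samples -- that's the AA.
glBindFramebuffer(GL_READ_FRAMEBUFFER, ssFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // default/back buffer at 1920x1080
glBlitFramebuffer(0, 0, 3840, 2160, 0, 0, 1920, 1080,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);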

Why deferred rendering exists:

Game developers learned at one point that when the back buffer was enlarged and they were doing their rendering, especially with heavy shading, performance suffered because the pixel shaders now had to run at 3840x2160 instead of 1920x1080. So they moved to deferred rendering: they create another render buffer outside of the front buffer and back buffer and hard-code its size, be it 1920x1080 or whatever you pick in the game, and rendering happens there. Also, I believe floating-point render targets (for advanced HDR lighting) and the like require that separate buffer as well. So if you want HDR or speed, you have to create a deferred buffer.
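
As a hedged sketch of that fixed-size intermediate buffer (OpenGL again; the floating-point format and the 1920x1080 size are just example choices):

// Deferred-style offscreen target, hard-coded to the game's chosen resolution.
// The floating-point format is the kind of thing used for HDR lighting.
GLuint sceneFbo, sceneColor;
glGenTextures(1, &sceneColor);
glBindTexture(GL_TEXTURE_2D, sceneColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1920, 1080, 0, GL_RGBA, GL_FLOAT, nullptr);

glGenFramebuffers(1, &sceneFbo);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneColor, 0);

// Geometry and lighting passes run here at 1920x1080 no matter how large
// the driver makes the back buffer, which is why forced FSAA never touches
// this content.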

How this is in conflict with FSAA:

Since that other buffer is hard-coded to a resolution and then copied to the back buffer, the image gets scaled up from 1920x1080 to 3840x2160, and now we have a pixelated, enlarged copy of the deferred buffer sitting in the back buffer. Since the rendering occurred in this third buffer, it was never FSAA'd. The back buffer render is really just a simple 2D rectangle that covers the entire screen, textured with the deferred render buffer. FSAA occurs on that rectangle, and the contents inside are not anti-aliased (since they're a texture); only the edges are, so only the single-pixel border of the back buffer is being FSAA'd. This is why deferred rendering is not FSAA'd at all.
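
To make that last step concrete, a rough sketch of the final copy (the names quadVao and blitProgram are hypothetical, standing in for a fullscreen-quad VAO and a plain sample-the-texture shader; sceneColor is the fixed-size deferred result from the sketch above):

// Composite sketch: the deferred result is just a texture stretched over one
// big rectangle. Any FSAA/MSAA on the back buffer only sees the rectangle's
// outer edge, so nothing inside it gets anti-aliased.
glBindFramebuffer(GL_FRAMEBUFFER, 0);          // back buffer (monitor resolution)
glViewport(0, 0, 1920, 1080);
glUseProgram(blitProgram);                     // hypothetical pass-through shader
glBindTexture(GL_TEXTURE_2D, sceneColor);      // the fixed-size deferred result
glBindVertexArray(quadVao);                    // hypothetical fullscreen quad (2 triangles)
glDrawArrays(GL_TRIANGLES, 0, 6);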

The only thing that could be done is to allow hardware FSAA on the deferred buffer itself, and I don't believe drivers/DirectX, etc. allow or handle that. Maybe in DirectX 12; not sure. I'm also not sure why that hasn't been allowed, or whether the industry will allow it in the future, but it seems like something they should do.
 
This. The long and short of it is that forcing traditional FSAA is going to break the deferred rendering techniques used by nearly every game these days.

DSR and VSR aim to fix this by letting the game itself render at the higher resolution while the drivers downsample the image after rendering is complete, before it's presented to the front buffer. That way it doesn't interfere with the deferred rendering process.
 
The point of deferred rendering is to decouple lighting from geometry. Lighting is done almost as a sort of post-processing pass.

MSAA is possible to do, but memory and bandwidth are already a problem at that point, and that's a good way to exacerbate it further. Supersampling, in all its naivety, should be fine without hoops to jump through.
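
For a sense of why MSAA hits memory and bandwidth hard, here's a small OpenGL sketch of a multisampled render target (the 4x sample count and format are assumptions):

// A 4x multisampled target stores four samples per pixel, so a deferred
// G-buffer built from several of these multiplies memory and bandwidth use.
GLuint msColor;
glGenTextures(1, &msColor);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msColor);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA16F, 1920, 1080, GL_TRUE);
// Attach to an FBO, render the geometry pass, then resolve with
// glBlitFramebuffer into a single-sample texture before the lighting pass reads it.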
 
I find 1440p DSR on a 1080p display looks better than 4xMSAA; it's more detailed and runs at a higher framerate (using a 980 Ti).
It's even better on a projector, where the sharpening setting is very handy.
When I had a 290X it behaved similarly with VSR, but I had to use a lower resolution than 1440p because I wanted a bit more performance (no sharpen/soften with VSR though).
 
Don't bother. DSR/VSR, optionally enhanced with mild shader AA like FXAA or SMAA, is far superior and needs fewer resources. It's essentially SSAA with fewer resolution restrictions, and FXAA improves it even further because the higher rendering resolution hides the blurring downsides of those shaders. Multisampled FSAA is old and outdated tech now.
 