Forgive me for not posting this in the FXAA quality comparison thread, but this is more for performance comparisons.
On my desktop I noticed a huge performance difference between 4xMSAA and FXAA, which was corroborated by posts here and elsewhere, and by [H]ardOCP's in-depth performance comparison. However, when I enabled FXAA on my laptop for World of Warcraft (Core 2 Duo @ 2.4 GHz, GeForce GT 130M 1GB), the performance seemed off, so I ran some comparisons. Without going into too much detail: across all tests the ONLY differences were the AA settings. I'm not a professional, though, so take this with a grain of salt. Results are given as min/max - avg frame rates.
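For anyone curious how the min/max - avg figures fall out of a run, here's a rough sketch of the arithmetic using a FRAPS-style per-frame log. The frame times below are made up for illustration, not my actual captures; note that average FPS is total frames over total time, not the average of per-frame FPS values.

```python
# Illustrative only: sample frame times in milliseconds, not real capture data.
frame_times_ms = [10.2, 8.1, 9.4, 14.0, 11.3]

# Instantaneous FPS for each frame.
fps_per_frame = [1000.0 / t for t in frame_times_ms]

min_fps = min(fps_per_frame)  # slowest frame sets the minimum
max_fps = max(fps_per_frame)  # fastest frame sets the maximum

# Average FPS over the whole run: frames rendered divided by elapsed time.
# This is NOT the same as averaging the per-frame FPS values above.
avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

print(f"{min_fps:.0f}/{max_fps:.0f} - {avg_fps:.3f}")
```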
No AA (baseline)
79/133 - 110.133
Framerates seem pretty high, but I had most of the settings dialed down lower than I'd actually play at (and vsync off, of course). For this comparison I wanted to remove other bottlenecks.
4x MSAA (in-game setting)
60/87 - 71.95
The game is still more than playable despite the sizable performance hit.
FXAA (NV CP forced)
47/79 - 56.95
While still very playable, performance takes a brutal hit, not only over baseline, but in comparison to 4xMSAA as well. Due to this, I decided to try one more run for comparative purposes...
4xMSAA (in-game) + TrSSAA (NV CP forced)
61/91 - 72.933
You've got to be shitting me!? No performance hit over 4xMSAA? Well, WoW isn't a VRAM-intensive game, and SSAA leans on VRAM capacity more than memory bandwidth, so this makes sense. Bottom line: in this case you can get better visual quality AND performance by NOT using FXAA.

So why is this? FXAA is shader-based while MSAA is memory-bandwidth intensive, so memory bandwidth obviously wasn't the bottleneck here. There are a lot of cheap GPUs out there (like mine) that ship with large amounts of RAM, sometimes even GDDR5, just to make them more appealing to casual gamers. Those cards will bottleneck elsewhere long before they ever need that much memory or the associated bandwidth. So the GT 640 that just came out might suffer with either FXAA or MSAA, but if a GDDR5 version is released, we can pretty much expect MSAA to be fine and FXAA to hurt performance more.
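To make the shader-bound vs bandwidth-bound distinction concrete, here's a back-of-envelope sketch. All the numbers are my own illustrative assumptions (a 1280x800 laptop resolution, RGBA8 color, 32-bit depth/stencil, one write per sample), not measured GT 130M figures, and real MSAA traffic is higher once overdraw and resolves are counted:

```python
# Back-of-envelope framebuffer traffic estimate. Every number here is an
# illustrative assumption, not a measured figure for any particular GPU.
width, height = 1280, 800   # assumed laptop resolution
bytes_color = 4             # RGBA8 color
bytes_depth = 4             # 24-bit depth + 8-bit stencil
samples = 4                 # 4x MSAA
target_fps = 60

pixels = width * height

# 4xMSAA: color and depth are stored (and written) per sample, so the
# framebuffer traffic scales with the sample count. This is a lower bound
# (one write per sample, ignoring overdraw and the resolve pass).
msaa_bytes_per_frame = pixels * samples * (bytes_color + bytes_depth)

# FXAA: one full-screen post-process pass that reads and writes the
# resolved 1x color buffer once. Its cost is mostly the per-pixel shader
# work, not this modest amount of traffic.
fxaa_bytes_per_frame = pixels * 2 * bytes_color

print(f"4xMSAA framebuffer traffic: {msaa_bytes_per_frame * target_fps / 1e9:.2f} GB/s at {target_fps} fps")
print(f"FXAA pass traffic:          {fxaa_bytes_per_frame * target_fps / 1e9:.2f} GB/s at {target_fps} fps")
```

The point of the sketch: MSAA multiplies framebuffer traffic by the sample count, while FXAA adds a small, fixed amount of traffic plus a per-pixel shader cost. On a card with plenty of bandwidth but weak shaders, FXAA can plausibly be the more expensive of the two, which matches what I'm seeing.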
I'm going to rerun these tests on my desktop tomorrow once I get it back up and running (waiting on my monitor, which should arrive then). In the meantime, it seems FXAA isn't the performance silver bullet we all thought it was; it still consumes resources that may or may not be more limited than memory bandwidth. If you have a newer mid-range or higher GPU, FXAA will still likely consume far fewer resources than MSAA. For budget cards, though, FXAA may be a waste.