Really good read about FPS as a benchmark

mrpc9886 · Limp Gawd · Joined: Feb 1, 2008 · Messages: 404
"We're counting through all five of our 60-second Fraps sessions for each card here. As you may have inferred by reading the plots at the top of the page, the Radeons aren't plagued with a terrible problem, but they do run into a minor hiccup about once in each 60-second session—with the notable exception of the Radeon HD 6970. By contrast, the Nvidia GPUs deliver more consistent results. Not even the older GeForce GTX 260 produces a single hitch."

Full article here:
http://techreport.com/articles.x/21516/1

I for one would love to see charts comparing video cards by how often the time to render a frame exceeds a value of <insert value here> milliseconds.

See page 3.

FPS is too big a unit; we need something a bit more granular.
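As a sketch of what that metric could look like: here's a minimal Python example (with made-up sample frame times; the 16.7 ms threshold is just one 60 Hz refresh interval) counting frames that blow past the cutoff, in the spirit of the article's "time spent beyond X" idea.

```python
# Hypothetical Fraps-style frame times in milliseconds (made-up sample data).
frame_times_ms = [14.2, 15.1, 16.0, 33.5, 15.8, 14.9, 50.2, 15.5]

THRESHOLD_MS = 16.7  # one 60 Hz refresh interval

# Count frames slower than the threshold, and total time spent past it --
# a per-frame view rather than an average FPS.
slow_frames = [t for t in frame_times_ms if t > THRESHOLD_MS]
time_beyond = sum(t - THRESHOLD_MS for t in slow_frames)

print(len(slow_frames))        # frames that missed a 60 Hz refresh
print(round(time_beyond, 1))   # total milliseconds spent past the threshold
```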
 
I learned a few things from this article.

1) I need a 120hz monitor
2) I'm done with AMD

But seriously, what a plot twist! I always tried to fix microstuttering with v-sync, and sometimes it'd work, sometimes it wouldn't. I find it hard to believe that both companies are aware of the problem and yet seem to have no control over it. Maybe if microstuttering were kicked into the image quality department instead of the FPS department, it'd get more acknowledgement.

Good read otherwise, thanks.
 
Yep, I've linked this article quite a few times in regards to both frametimes for single cards, and microstuttering for multi-card solutions.... unfortunately it tends to get dismissed by AMD fans due to it showing them in a more negative light, and glossed over by nVidia fans due to it being what it is, while being ignored by the rest for being "too long an article to bother reading". TechReport's reviews are great thanks to them using this metric and really deep-diving into the tech behind the curtain rather than just saying "Well, this one FEELS smoother... in the real world."
 
That's why benchmarks are to be considered very carefully. Batman dx11 benchmarks can look good yet the game is still horribly glitchy (in my opinion, unplayable) when maxed with PhysX, even on the very best cards.

In any case, yes TechReport makes some good articles and I keep linking their benchmarks because showing min/max/average frametimes is simply a whole lot better than showing the average FPS.
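To illustrate why, here's a toy comparison (hypothetical numbers, not from any review): two cards with identical average FPS but very different frame-time spreads, which an average-only chart would never distinguish.

```python
# Made-up frame-time traces (ms) for two hypothetical cards with the
# same average FPS but very different consistency.
card_a = [16.0, 17.0, 16.0, 17.0, 16.0, 17.0]   # steady
card_b = [8.0, 25.0, 8.0, 25.0, 8.0, 25.0]      # jittery

for name, trace in [("A", card_a), ("B", card_b)]:
    avg_ms = sum(trace) / len(trace)            # 16.5 ms for both cards
    print(name, round(1000 / avg_ms), "fps avg,",
          "min", min(trace), "max", max(trace), "ms")
```

Both cards report the same average FPS, yet card B swings between 8 ms and 25 ms frames, which is exactly what min/max frame times expose.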

Probably worth mentioning that this was the final blow that made me switch back to Nvidia: at some point I started noticing that some of my games (OpenGL ones in particular) felt completely wrong on my AMD card despite good FPS. You can call that a ragequit, but I'm never going to touch AMD again (I've had other issues with them too: stability, drivers, various odd bugs, etc.).
 
That's why benchmarks are to be considered very carefully. Batman dx11 benchmarks can look good yet the game is still horribly glitchy (in my opinion, unplayable) when maxed with PhysX, even on the very best cards.

I think that's just the engine, especially with DX11 enabled. Pretty much no matter what you do, the game still hitches a little when transitioning in large open areas, or when entering an area that has PhysX effects. I've pretty much got a top of the line CPU, GTX680, and SSD and I get those things at 1920x1080. On an older HDD, the effect can be more pronounced.

Some of it relates to Nvidia's PhysX, though. I used to run a mixed setup (5870 + GTS250 for PhysX) and I'd never get that weird PhysX transition with any games. However running on an all Nvidia setup I see it with most games.
 
That really is a great article. Changed my perception of gaming metrics for sure. I would love to see HardOCP incorporate some of the same metrics that TechReport uses. They really quantify the real-world gameplay experience well.
 
I really liked the idea of showing "jitter" by using the 99th-percentile minus 50th-percentile frame times to obtain a spread between typical and near-worst frame times. It wouldn't have to be a competition between cards, but rather a way to check that the card can handle the current settings. (If jitter is < x ms, the game should play okay.) It would basically give numbers to back up claims that a certain setting felt unplayable even though the frame rate said it should be fine.
 
http://hardforum.com/showthread.php?t=1713434

so awesome i posted it three days ago, in this forum. lol.

http://techreport.com/articles.x/21516

This article ultimately explains why many people still see a difference between 120Hz and 60Hz, and the origin of micro-stuttering, especially when it comes to multi-GPU solutions.

Instead of averaging all the frames in a whole second, it argues that we should focus on how often an individual frame takes too long to render, which ruins the perception of fluidity no matter how fast every other frame in that same second renders.

This article is almost a year old, but they've recently done an update for CPUs and gaming.
This is an absolutely awesome read from a site I've never heard of before today.
 
http://hardforum.com/showthread.php?t=1713434

so awesome i posted it three days ago, in this forum. lol.

What is your point? I posted it several months ago, if you care. LOL at your post then, I guess?

March 13th 2012: http://hardforum.com/showpost.php?p=1038488875&postcount=331

Others posted it around when the article came out, too.

Ocean said:
a site I've never heard of before today.
You'd never heard of Tech Report until a couple of days ago? Yet you've known of HardOCP for almost 9 years, per your forum reg date?
 
Some of it relates to Nvidia's PhysX, though. I used to run a mixed setup (5870 + GTS250 for PhysX) and I'd never get that weird PhysX transition with any games. However running on an all Nvidia setup I see it with most games.

I'll be honest, that's really counter-intuitive to me. Then again, I've never done it, and can't really contribute to the discussion :(
 
I'll be honest, that's really counter-intuitive to me. Then again, I've never done it, and can't really contribute to the discussion :(

Hey, I'm with you - it sounds backwards. However, that's how it is... or at least was back when I ran that setup. My guess is that it has something to do with how quickly the PhysX items are loaded, or something to that effect.
For instance, in both Batman games, if you run the benchmarks, whenever you hit a new scene with PhysX effects the framerate drops out for about half a second. I'm guessing that's when it's essentially loading those effects, or something related to how the card is displaying them.
With the mixed setup, there was no pause or bottoming out at all. It's like that second card is primed and ready to roll, while with Nvidia-only hardware, it has to decide how/where to load 'em.
 
Hey, I'm with you - it sounds backwards. However, that's how it is... or at least was back when I ran that setup. My guess is that it has something to do with how quickly the PhysX items are loaded, or something to that effect.
For instance, in both Batman games, if you run the benchmarks, whenever you hit a new scene with PhysX effects the framerate drops out for about half a second. I'm guessing that's when it's essentially loading those effects, or something related to how the card is displaying them.
With the mixed setup, there was no pause or bottoming out at all. It's like that second card is primed and ready to roll, while with Nvidia-only hardware, it has to decide how/where to load 'em.

You're correct... if you run a single card with PhysX (nVidia) you end up splitting resources between rendering and physics calculations, resulting in somewhat-varying performance as it changes dynamically (and reallocates resources) to the best of my knowledge. If running in SLI, you can set one card to be a dedicated card, or just add a third lower-end card to handle PhysX (or second if you are running one main card), but I haven't seen a big enough hit yet that I thought I needed a dedicated card for PhysX.
 
For a little while I ran a GTX570 and a GTS250 and I still got that little hitch...even when I told the Nvidia drivers that I wanted to use the 250 specifically for PhysX. My guess is that there's still something that takes a millisecond to register, but isn't present on a hybrid setup.
 
For a little while I ran a GTX570 and a GTS250 and I still got that little hitch...even when I told the Nvidia drivers that I wanted to use the 250 specifically for PhysX. My guess is that there's still something that takes a millisecond to register, but isn't present on a hybrid setup.

The GTS 250 is considered below the baseline for flawless PhysX performance. I believe offhand that a GTX 280 is recommended (or a 260 with the 216 cores), but I could be wrong.
 
I look forward to Tech Report's podcast each week; these guys are light-years ahead of others out there on this whole framerate testing methodology. Scott is my fuckin hero.

FPS testing is simply not good enough anymore. The constant stutter in games today drives me apeshit! We should be at a constant 120fps@120Hz with the hardware level we're at in 2012.

Too bad the Windows platform is a giant turd, and most of the devs today are lazy console coders.
 
I think that's just the engine, especially with DX11 enabled. Pretty much no matter what you do, the game still hitches a little when transitioning in large open areas, or when entering an area that has PhysX effects. I've pretty much got a top of the line CPU, GTX680, and SSD and I get those things at 1920x1080. On an older HDD, the effect can be more pronounced.

Some of it relates to Nvidia's PhysX, though. I used to run a mixed setup (5870 + GTS250 for PhysX) and I'd never get that weird PhysX transition with any games. However running on an all Nvidia setup I see it with most games.


I've had the same issue with my setup running that resolution. I noticed the stutter a bit less with an EVGA Superclocked 680 over the 670 FTW I'm using now, but it was still there. I still think it may be my TV, being one of those switch-to-120Hz models; it's a Samsung 40" LED LCD.

This Tech Report article is great, by the way. I read it a few weeks ago when I first saw it on [H].
 
For a little while I ran a GTX570 and a GTS250 and I still got that little hitch...even when I told the Nvidia drivers that I wanted to use the 250 specifically for PhysX. My guess is that there's still something that takes a millisecond to register, but isn't present on a hybrid setup.

I never really gave any thought to running a hybrid setup. I'm not sure if there would be a point for me to do that with a 670 FTW. Have you noticed a difference going from one card to both?
 
Very interesting article, thanks for posting the link. It really will make me look at benchmarks differently now. Hopefully more people start taking notice, so that maybe AMD and nVidia will put some R&D into making multi-GPU setups smoother.
 
Nothing new to me but good to see it. Best setup is nvidia + 120hz even if you can't hit 120 fps.
 
Do you think this article is still relevant for today's ATI cards? Only reason I ask is I'm going to have to switch back to ATI if I can't get the nVidia I just got to stop RSODing on me :(.
 
Do you think this article is still relevant for today's ATI cards? Only reason I ask is I'm going to have to switch back to ATI if I can't get the nVidia I just got to stop RSODing on me :(.

What happened to your card? Did it eventually become unstable, or...?
 
There is no good, consistent FPS measurement for what you actually see when watching/gaming at the screen. Just ask Kyle.

To quote the article: "somewhat to our surprise, representatives from both AMD and Nvidia quickly and forthrightly acknowledged that multi-GPU micro-stuttering is a real problem, is what we measured in our frame-time analysis, and is difficult to address."

When single cards get fast enough - probably within a couple of generations, after the next die shrink - multi-GPU will become an even more extreme choice, because one card will work just fine.

And again: "We asked him to identify a major game engine whose internal timing works well in conjunction with GeForce frame metering, but he wasn't able to provide any specific examples just yet."

So when asked, both companies admit they have issues with multi-GPU setups and are dealing with them. The best option is a single-card solution.

I also find this conclusion funny: "Presumably, a jitter pattern alternating between five- and 15-millisecond frame times would be less of an annoyance than a 15- and 45-millisecond pattern. The worst example we saw in our testing alternated between roughly six and twenty milliseconds, but it didn't jump out at me as a problem during our original testing. Just now, I fired up Bad Company 2 on a pair of Radeon HD 6870s with the latest Catalyst 11.8 drivers. Fraps measures the same degree of jitter we saw initially, but try as I might, I can't see the problem. We may need to spend more time with (ugh) faster TN panels, rather than our prettier and slower IPS displays, in order to get a better feel for the stuttering issue."

So even then, they can't find the problem "live". People trying to quote the article to downplay either AMD or Nvidia didn't read the article.
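For the curious, that six-and-twenty-millisecond alternation is a nice illustration of why the average hides jitter; a quick back-of-the-envelope sketch:

```python
# The alternating pattern from the quoted passage: frame times swinging
# between roughly 6 ms and 20 ms (idealized here as exactly 6 and 20).
pattern = [6.0, 20.0] * 30  # roughly one second of alternating frames

avg_ms = sum(pattern) / len(pattern)   # 13.0 ms average frame time
avg_fps = 1000 / avg_ms                # what an FPS counter would report
inst_fps = [round(1000 / t) for t in pattern[:2]]

print(round(avg_fps))  # 77 -- looks perfectly smooth on paper
print(inst_fps)        # [167, 50] -- what consecutive frames actually deliver
```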
 
It's hard to diagnose the issue; there are a lot of factors involved, including displays and drivers. The one thing I did notice in the article (http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/3) is that the benchmark shows the 560 Ti providing a good baseline of performance without stutter. After seeing this, I figured my 670 was just lacking proper drivers, since the 560 Ti has been out long enough for its drivers to mature.

I'm anxious to test the new 306.02? driver that was released, because the notes stated it fixes a vsync stuttering issue. I believe I'm suffering from this, but I think it's due to my 40" LED LCD TV. I'm also suffering from input lag (noticeable in FPS games).
 
I've felt this issue first-hand in a couple of games (mostly with my old 4890s).
This just proves once more that you should always go with the most expensive single-GPU solution you can afford, and not 2 cheaper ones.
 
Yep, I've linked this article quite a few times in regards to both frametimes for single cards, and microstuttering for multi-card solutions.... unfortunately it tends to get dismissed by AMD fans due to it showing them in a more negative light, and glossed over by nVidia fans due to it being what it is, while being ignored by the rest for being "too long an article to bother reading". TechReport's reviews are great thanks to them using this metric and really deep-diving into the tech behind the curtain rather than just saying "Well, this one FEELS smoother... in the real world."

You've got to love the way fanboys dismiss articles about recent cards. Tahiti is vastly improved when it comes to frametimes. That's easy to see in any recent review, but no, that's not the way you want to see things.
 
I've felt this issue first-hand in a couple of games (mostly with my old 4890s).
This just proves once more that you should always go with the most expensive single-GPU solution you can afford, and not 2 cheaper ones.

QFT
 
Explains why my eyeballs prefer single card for gaming.
 