Do video cards go faster than 60fps?

venm11

You commonly see game benchmarks showing a rendering rate faster than the display rate, e.g. 60 Hz. I can see this for benchmarks, but surely this isn't true in the real world -- it would be a waste of energy (literally).

Do games/GPUs actually render beyond 60 Hz in normal gameplay?
 
Yes. Think about making a 3D video of a badger dancing and saving it to your hard drive. If it were locked at 60 Hz, you'd have to wait in real time for the render to complete. Some people also use displays that are capable of 120-200 Hz.
 
Also, even if you average 90FPS, your FPS may dip below 60FPS at certain points.

There's actually a feature that will "lock" your video card down to the refresh rate of your monitor. It's called vertical sync. It has some downsides though, such as input lag, and it can slow down your card too much sometimes. On the other hand it reduces a nasty effect known as screen tearing. It works well for some games in my experience, but not others.
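
For reference, here's roughly what turning it on looks like from the program's side. This is just a minimal sketch assuming GLFW/OpenGL; the exact call depends on the API, but the "swap interval" idea is the same everywhere:

```cpp
// Minimal GLFW/OpenGL loop with vsync enabled (illustrative sketch only).
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "vsync demo", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    // 1 = wait for one vertical blank per buffer swap, i.e. cap rendering at
    // the monitor's refresh rate; 0 = let the card render as fast as it can.
    glfwSwapInterval(1);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);   // draw the frame here
        glfwSwapBuffers(window);        // waits for vblank when the interval is 1
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```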
 
Do games/GPUs actually render beyond 60 Hz in normal gameplay?

Most of the time no, because you will use the gfx card's horsepower to increase the resolution or quality.

CAN games render beyond 60Hz? Easily.
Reduce the res and/or quality and you can get thousands of FPS in some games, not that you can see them though :)
 
A few things.

FPS counters tend to show average FPS, so a reading of 60 FPS could really mean something in the region of a 30 FPS minimum and a 90 FPS maximum (at the extreme), with peaks and troughs that only last fractions of a second at those values.

To maintain a 60 FPS minimum all the time you really need an average closer to 90.

Another thing you haven't considered is monitor refresh rate, which can differ from 60 Hz. Older CRT monitors can go a lot higher; my Iiyama VMP 454 19" can do something like 240 Hz at 1024x768.

Vsync is a function built into the drivers, and into some games directly, that lets you sync the FPS of the game to the refresh rate of the monitor to eliminate a rendering artifact commonly known as tearing. This also has the effect of capping the video card's rendering speed and would eliminate "wasted FPS".
 
Well, one area where it could be useful in the real world is time-based anti-aliasing, motion blur, an accumulation buffer, whatever you want to call it.

Basically the idea is this: you render multiple virtual frames for every frame you actually intend to display, and then combine them. This allows you to blend together the virtual frames to create a smoother image, which gives the illusion of smoother motion.
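
Here's a rough sketch of the "combine them" step, just averaging plain CPU-side RGBA buffers. A real implementation would do this on the GPU; the function and buffer layout are made up purely for illustration:

```cpp
#include <cstdint>
#include <vector>

// Blend several "virtual" frames (all rendered within one display interval)
// into a single output frame by averaging each byte. Frames are assumed to be
// same-sized RGBA8 buffers; this is CPU-side only to keep the idea visible.
std::vector<uint8_t> blendFrames(const std::vector<std::vector<uint8_t>>& frames) {
    const size_t size = frames.front().size();   // assumes at least one frame
    std::vector<uint8_t> out(size);
    for (size_t i = 0; i < size; ++i) {
        unsigned sum = 0;
        for (const auto& frame : frames) sum += frame[i];
        out[i] = static_cast<uint8_t>(sum / frames.size());
    }
    return out;
}
```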

This is why movies work. Movies in the theatre and DVDs from those movies are 24 frames per second (23.976 if you want to get real precise). YA RLY. How can such a low frame rate look at all smooth? Well, because there is motion blur simply as a consequence of how film works. If you freeze-frame on a high-motion shot, you'll notice that fast-moving objects are blurred. This gives a smoother look overall.

So even if you have an LCD that only outputs 60 fps, as most do, it would be a benefit to you if a graphics card could handle more frames and combine them down. Unfortunately, I don't know of any consumer cards that can do this. Accumulation buffers are something you'll find in high-end visualization systems, but only 3dfx ever tried it on consumer cards (they called it a T-buffer) and it flopped.

Just saying that it isn't worthless overall. Same sort of deal as asking why have a card that can push more pixels than are on the screen. Well, because maybe you want to render more of them and then combine them to do spatial anti-aliasing.
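
Same idea as a sketch: render at twice the width and height, then average each 2x2 block down to one screen pixel. Single-channel buffers here just to keep it short; real supersampling works per colour channel and on the GPU:

```cpp
#include <cstdint>
#include <vector>

// Downsample an image rendered at 2x width and 2x height by averaging each
// 2x2 block into one output pixel (one channel only, for brevity).
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& hi, int outW, int outH) {
    const int hiW = outW * 2;
    std::vector<uint8_t> out(static_cast<size_t>(outW) * outH);
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            unsigned sum = hi[(2 * y)     * hiW + 2 * x] + hi[(2 * y)     * hiW + 2 * x + 1]
                         + hi[(2 * y + 1) * hiW + 2 * x] + hi[(2 * y + 1) * hiW + 2 * x + 1];
            out[static_cast<size_t>(y) * outW + x] = static_cast<uint8_t>(sum / 4);
        }
    }
    return out;
}
```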
 
Rendering multiple frames into a buffer before the final image is sent to the frame buffer sounds like it would cause some latency issues between input and rendering.

You only need to turn on triple buffering to see input lag that makes FPS games almost unplayable.
 
No more than simply waiting for the frame to be rendered in the first place. Remember we are talking about rendering more frames than can actually be displayed. Doesn't matter how fast anything else is, if you are only showing 60 fps you'll never see something happen any quicker than that. When you do something, you won't see the result until the next frame is displayed.

This (if implemented properly) wouldn't add any lag at all. The problem is that you need to be able to render significantly faster than your output target. So if you want to maintain 60 fps, you'd need to be able to sustain a minimum of 120 fps (2 virtual frames per real one) for this to really be any use. That's one of the reasons you don't see it on consumer cards. Generally speaking, in any modern game you are struggling to consistently maintain a frame rate as high as the display can handle. Until you can do that, there's no gain in screwing with accumulation buffers.
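
The arithmetic is simple enough to spell out: at a 60 Hz output you get about 16.7 ms per displayed frame, and every virtual frame you want to blend has to fit inside that budget. A tiny sketch:

```cpp
#include <cstdio>

// Time budget per virtual frame when blending k virtual frames into each
// displayed frame at a fixed output rate.
int main() {
    const double displayHz = 60.0;
    for (int k = 1; k <= 3; ++k) {
        double requiredFps = displayHz * k;          // 60, 120, 180 ...
        double budgetMs    = 1000.0 / requiredFps;   // time allowed per virtual frame
        std::printf("k=%d: need %.0f fps, %.2f ms per virtual frame\n",
                    k, requiredFps, budgetMs);
    }
    return 0;
}
```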
 
Rendering multiple frames into a buffer before the final image is sent to the frame buffer sounds like it would cause some latency issues between input and rendering.

Well, let's say you want 60 fps with vsync: if you can render 2 or more complete frames in 1/60th of a second, I don't see how it's going to cause any lag to blend them into one frame.

I believe the old RTHDRIBL demo uses this technique to achieve motion blur. It works very well.
 
Because you need more than 1 frame to achieve motion blur effects.

Take an input from the user to decide where the current viewpoint is, then render that scene, then take further input from the user to decide how the viewpoint has changed, then do your processing on both images and then spit out the result to the user.

The latency is going to be the same (if you buffer 2 frames) as with triple buffering, where there's an added delay of 1 frame of rendering between your mouse input and what you see on screen.

If you want to observe this horrid effect, load up a first-person shooter like HL2 and enable triple buffering, which is supposed to smooth out performance with vsync set on. Having even that minor delay between your mouse movements and the response on screen makes aiming very difficult; it's like aiming with an analogue console controller where you just overshoot all the time (although for different reasons).

*edit*

Well, let's say you want 60 fps with vsync: if you can render 2 or more complete frames in 1/60th of a second, I don't see how it's going to cause any lag to blend them into one frame.

True enough, the effect gets smaller as your frame rate increases. You would need a minimum FPS of 120 in this case; to achieve that, maybe a 150 fps average...
 
You are confusing the way triple buffering works with an accumulation buffer. Triple buffering is a render ahead strategy. The game renders up to two frames ahead of what is actually on screen now. You've got the displayed frame, the frame that is to next be displayed, and then the frame being worked on. That is why lag is added.

This isn't the case with a technology like this. What you are doing is rendering multiple frames in the time it takes to display one frame, combining them, then displaying the result. No added latency. However, as I said, it takes a lot to make it useful.

It isn't something that is done at all now on consumer hardware; I am just giving a potential use for frame rates in excess of what a display can handle.
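
If it helps, here's a toy model of the render-ahead case (purely illustrative, not how any driver actually schedules frames). With a queue two frames deep, the input a frame sampled always reaches the screen two display intervals later; the accumulation approach has no such queue, because all the virtual frames for an interval are rendered inside that same interval.

```cpp
#include <cstdio>
#include <deque>

// Toy model of render-ahead (triple-buffering-style) latency: each entry in
// the queue records which display interval's input that frame sampled.
int main() {
    const int depth = 2;          // frames queued ahead of the one on screen
    std::deque<int> queue;
    for (int interval = 0; interval < 6; ++interval) {
        queue.push_back(interval);               // render a frame with current input
        if (static_cast<int>(queue.size()) > depth) {
            int sampled = queue.front();         // oldest queued frame hits the screen
            queue.pop_front();
            std::printf("shown in interval %d, input from interval %d (lag %d)\n",
                        interval, sampled, interval - sampled);
        }
    }
    return 0;
}
```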
 
You are confusing the way triple buffering works with an accumulation buffer. Triple buffering is a render ahead strategy. The game renders up to two frames ahead of what is actually on screen now. You've got the displayed frame, the frame that is to next be displayed, and then the frame being worked on. That is why lag is added.

This isn't the case with a technology like this. What you are doing is rendering multiple frames in the time it takes to display one frame, combining them, then displaying the result. No added latency. However, as I said, it takes a lot to make it useful.

It isn't something that is done at all now on consumer hardware; I am just giving a potential use for frame rates in excess of what a display can handle.

No, I'm not confusing the two; I'm using it as an example of something similar.

For this technology to provide anything useful, the two frames have to have user input taken into account; otherwise they'd both be identical and you'd be blurring between two images that are the same, resulting in the same image.

Whenever you take user input and then add additional processing time on top of that, you feel the input latency increase. As I said above, you can effectively lessen this latency by rendering faster; you'd need to render twice as quickly (120 fps) to actually achieve the same input delay.
 
As I said above, you can effectively lessen this latency by rendering faster; you'd need to render twice as quickly (120 fps) to actually achieve the same input delay.

This is what sycraft is saying.

The game would be taking user input and rendering frames at 120fps, every two frames would be merged into one and displayed at 60fps. There's no extra lag because it's not rendering ahead.
 
To the OP: I guess you can't display faster than the limit of your monitor; I think we all agreed to that. But I recall reading that games with higher FPS "feel" better. Like first-person shooters, where the input feels more responsive.

I can't speak with authority on this, because I rarely get anything above 60 fps, but that is what I recall reading.

But from driving-game experience (I do a lot of that), I can tell you that low frame rates have a huge negative impact on drivability.

So I can see how going higher than 60 fps would make driving/shooting/moving response much better even though it would "look" no better.
 
This is what sycraft is saying.

The game would be taking user input and rendering frames at 120fps, every two frames would be merged into one and displayed at 60fps. There's no extra lag because it's not rendering ahead.

Of course, in order to achieve a decent-looking motion blur, you would need to blend many, many frames together; otherwise you'd just be seeing double all the time (especially in any FPS game, where the entire field of view shifts around at high speed).
 
Yeah, well, needing 2x the frame rate just isn't practical at all...

And as hughJ said, you'd need a few more than 2 frames to create a decent motion blur effect; let's say 3 as a safe minimum, and that's still 180 fps. Not going to happen, not when current motion blur looks pretty reasonable and renders quickly.
 
So... from interpreting your responses, the answer to my question is that the GPU renders as much as it can as fast as it can, even if it's exceeding the actual display rate (60 Hz or whatever).

I'm familiar with vertical sync, but what I'm talking about is different. If the required work is done (rendering the next 1-2 frames) and anything additional is wasted, why do it? It just expends electricity and creates more heat. As the rendered frame rate bounces around, capping it could save a good amount of energy on average.
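
What you're describing is basically a frame limiter, and some games and tools do offer one. A rough sketch of the idea (the function name and the 60 fps target are made up for illustration):

```cpp
#include <chrono>
#include <thread>

// If the frame finished early, sleep off the rest of the frame budget instead
// of immediately starting another frame. Fewer invisible frames rendered means
// less GPU load, heat and power draw.
void limitTo(double targetFps, std::chrono::steady_clock::time_point frameStart) {
    const auto budget  = std::chrono::duration<double>(1.0 / targetFps);
    const auto elapsed = std::chrono::steady_clock::now() - frameStart;
    if (elapsed < budget)
        std::this_thread::sleep_for(budget - elapsed);
}

// Usage inside the render loop:
//   auto start = std::chrono::steady_clock::now();
//   renderFrame();                 // hypothetical draw call
//   limitTo(60.0, start);          // cap at roughly 60 fps
```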
 
FPS counters tend to show average FPS, so a reading of 60 FPS could really mean something in the region of a 30 FPS minimum and a 90 FPS maximum (at the extreme), with peaks and troughs that only last fractions of a second at those values.

That depends on the counter. Some counters count the actual frames rendered in a second and display that number. Some show the average, like you said. And some counters calculate the FPS from how long the frame takes to render (so if a frame takes 20 ms, the counter reports 50 FPS, because a frame of that length could be rendered 50 times per second).

Counters that update once per second are likely using one of the first 2 techniques, and counters that update constantly, in real-time, are likely using the 3rd technique.

Because of varying techniques, you will frequently see minor disparities between FRAPS and a given game's built-in counter.
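
As a sketch, here's roughly what the first and third styles look like side by side (the names are made up; FRAPS and the in-game counters obviously have their own implementations):

```cpp
#include <chrono>

// Two of the counting styles described above in one small helper.
struct FpsCounter {
    using clock = std::chrono::steady_clock;
    clock::time_point windowStart = clock::now();
    clock::time_point lastFrame   = clock::now();
    int    framesThisSecond = 0;
    int    countedFps       = 0;    // style 1: frames actually finished in the last second
    double instantFps       = 0.0;  // style 3: 1 / time taken by the most recent frame

    void onFrame() {
        const auto now = clock::now();
        const double dt = std::chrono::duration<double>(now - lastFrame).count();
        instantFps = (dt > 0.0) ? 1.0 / dt : 0.0;
        lastFrame = now;

        ++framesThisSecond;
        if (now - windowStart >= std::chrono::seconds(1)) {
            countedFps = framesThisSecond;   // updates once per second
            framesThisSecond = 0;
            windowStart = now;
        }
    }
};
```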
 
Even though you can't really SEE the difference between 60 and 200 FPS, in some games you can definitely feel the difference, whether it's in the input or simply in moving around a map. In games that are based off tick rate, like Counter-Strike Source, Quake Wars, and Quake4, I know when the frame rate isn't high because the game itself slows down (i.e. it takes longer to switch weapons, etc.). In other games, like BattleField2 and CoD4, you know it by how smooth the game is running.
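
Here's a toy illustration of that "the game itself slows down" effect (not how those specific games are actually coded, just the general frame-coupled-logic idea):

```cpp
// If game logic advances by a fixed amount per rendered frame, fewer frames
// per second means the simulation itself runs slower. A fixed-timestep loop
// would advance by the real elapsed time instead and avoid this.
struct FrameCoupledSim {
    double weaponSwitchProgress = 0.0;       // 1.0 == switch complete
    void onFrame() {
        weaponSwitchProgress += 1.0 / 60.0;  // tuned assuming the game runs at 60 fps
    }
};
// At 60 fps the switch takes 1 second of real time; at 30 fps the same 60
// ticks take 2 seconds -- exactly the "takes longer to switch weapons" effect.
```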
 