Is a high refresh rate monitor worth it if I cannot get more than 45fps in a favorite game?

Straypuft

Favorite game: Microsoft Flight Simulator
Monitor: LG 29UM67-P, 60mhz-75mhz

As the title states, I cannot achieve 60fps. I'm thinking of ending my time with ultrawide since I've had this monitor for 7 years. I do rescale the desktop resolution for my game from 2560x1080 to 1920x1080 in hopes of better performance (fewer pixels to draw) in MSFS. This puts black bars on the sides of my screen, which I do not mind. I average around 45fps with my build, which is listed in my signature.

I may be interpreting this wrong but I have always been under the impression that 60mhz would lock my max fps to 60fps.
 
Monitors last from build to build, so it depends on what your plans are for the future. The same is true for other things like cases, PSUs and watercooling: buy good now and they last a long time. That said, I've always had top-of-the-line components and used all the eye candy in games, and I never really hit over 144 fps except on old games, never on the latest titles, where 60 fps was common. If you turn down the eye candy and compete competitively, then high FPS could be an advantage.
 
My current monitor has 43,510 hours on-time. Buy the best you can afford for what is most important to you.
 
I may be interpreting this wrong but I have always been under the impression that 60mhz would lock my max fps to 60fps.

Yes, that's a mistake. Your monitor's refresh rate (in Hz, not MHz) is independent of the actual framerate being displayed.

Your monitor's refresh rate is a maximum value. What is actually rendering frames is your graphics card.

So to answer your question: no, a monitor with a higher refresh rate will not change the 45fps your graphics card is producing.

You want a graphics card that can produce fps values close to your monitor's refresh rate.
 
So to answer your question: no, a monitor with a higher refresh rate will not change the 45fps your graphics card is producing.
However, there's a beneficial side effect. Variable refresh rate (VRR) like FreeSync and G-SYNC can make 45fps stutter less.

45fps looks like a perfect 45Hz (framerate = Hz) with zero erratic stutters, when a good game engine is working well on a VRR display with a good "Low Framerate Compensation" (LFC) algorithm.

For the few 60Hz newbies unfamiliar with variable refresh rate (VRR), brand names include FreeSync and G-SYNC. Many monitors advertise a Hz range like "FreeSync 48Hz to 144Hz". Those are very important numbers if you want to delete the erratic stutter of 45fps! Stick to monitors that advertise a wide VRR range (at least 2.5x between minimum Hz and maximum Hz), so LFC can successfully de-stutter ultra-low frame rates.
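
For anyone who wants that 2.5x rule of thumb as something concrete, here's a minimal Python sketch (my own illustration, not any vendor's spec; actual LFC thresholds vary by GPU driver and monitor firmware):

```python
# Rough sketch: check whether an advertised VRR range is wide enough for LFC.
# The 2.5x ratio is the rule of thumb from this thread; actual requirements
# vary by GPU driver and monitor firmware.

def lfc_capable(vrr_min_hz: float, vrr_max_hz: float, ratio: float = 2.5) -> bool:
    """Return True if the VRR range is wide enough for frame-repeating (LFC)."""
    return vrr_max_hz / vrr_min_hz >= ratio

print(lfc_capable(48, 75))    # False: cheap 48-75Hz VRR, erratic stutter at 45fps
print(lfc_capable(48, 144))   # True:  3.0x range, LFC can de-stutter 45fps
print(lfc_capable(48, 165))   # True:  ~3.4x range
```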


Favorite game: Microsoft Flight Simulator
Monitor: LG 29UM67-P, 60mhz-75mhz

As the title states, I cannot achieve 60fps. I'm thinking of ending my time with ultrawide since I've had this monitor for 7 years. I do rescale the desktop resolution for my game from 2560x1080 to 1920x1080 in hopes of better performance (fewer pixels to draw) in MSFS. This puts black bars on the sides of my screen, which I do not mind. I average around 45fps with my build, which is listed in my signature.

I may be interpreting this wrong but I have always been under the impression that 60mhz would lock my max fps to 60fps.
FS 2020 works amazingly well (fewer stutters) when you enable G-SYNC or FreeSync variable refresh rate.

VRR simply means that at 45fps the monitor automatically runs at 45Hz -- the refresh rate changes every single frame -- to keep things smoother (like perpetual, perfect framerate = Hz VSYNC ON even through fluctuating frame rates). You can see an example of smooth-looking fluctuating framerates in the demo animation at testufo.com/vrr (fiddle with the settings at the top).

A monitor uses a "Low Frame Rate Compensation" (LFC) algorithm to smooth out frame rates that go below the minimum VRR Hz. De-stuttering your 45fps = beautiful! But it only works with wide VRR ranges. LFC requires a wider VRR range, and 45fps sits below the typical FreeSync minimum of 48Hz, so 45fps performs better on a monitor with a VRR range of at least 2.5x between minimum Hz and maximum Hz. That means a 165Hz 2560x1440 IPS monitor would play much smoother at 45fps VRR than a native 60Hz monitor.
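
To make the LFC idea concrete, here's an illustrative sketch of the frame-repeating math (not any driver's actual algorithm): when frames arrive slower than the minimum VRR Hz, the refresh can run at an integer multiple of the frame rate so the physical Hz stays inside the supported range.

```python
# Illustrative LFC-style math (not a real driver implementation): find an
# integer multiple so that fps * multiple lands inside the monitor's VRR range.
# 45fps on a 48-165Hz panel -> refresh at 90Hz, each frame shown twice,
# still perfectly framerate-synchronized (no erratic stutter).

def lfc_refresh(fps: float, vrr_min: float, vrr_max: float):
    for multiple in range(1, 10):
        hz = fps * multiple
        if vrr_min <= hz <= vrr_max:
            return hz, multiple
    return None  # nothing fits; expect stutter at the range boundaries

print(lfc_refresh(45, 48, 165))  # (90, 2): 45fps displayed as a clean 90Hz
print(lfc_refresh(45, 48, 75))   # None: 45 < 48 and 90 > 75, LFC can't engage
```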

Fluctuating low framerates (such as 45fps) always look smoother on a display with a wider VRR range, since better LFC logic means you don't get the stutters from the framerate repeatedly entering/exiting the VRR range. Cheap VRR only does a 48Hz-75Hz range, so you get a lot of stutters every time your framerate exits that range -- avoid that junk.

Frame rate dips below 48fps are less stuttery on 165Hz+ VRR monitors!

Also, at 45fps, each frame is transmitted over the video cable in just 1/165sec on a 165Hz monitor, so you get less latency (faster clicks) even at lower frame rates.

The newer post-2020 high-Hz IPS panels look great nowadays (the wide-gamut 165Hz 1440p panels look great in FS 2020 without costing an arm and a leg), so you're no longer compromising color quality like the early SDR TN 120Hz-144Hz panels of a decade ago that only had 72% NTSC color gamut coverage.

TL;DR: Yes, it's worth getting a higher-Hz monitor for 45fps content, due to better-behaving, smoother, lower-lag VRR / FreeSync / G-SYNC.
 
Also, at 45fps, each frame is transmitted over the video cable in just 1/165sec on a 165Hz monitor, so you get less latency (faster clicks) even at lower frame rates.

I don't think this is right; varying the bitrate would make the signalling more complex and slow down the transmission rate of each frame. The amount of time it takes a frame to be transmitted should depend on the bitrate of the cable and the resolution of the monitor. Regardless of the frame rate it will be size_of_frame / effective_bandwidth. If you're running at a resolution where the video mode supports 240 FPS, all frames will go down the cable in 1/240 of a second. If you're running at 240Hz the cable will be maxed out; at 60Hz it'll only be utilizing 25% of its bandwidth, but it will be able to get frames to the display 3ms faster than if running in a slower mode that maxed out at 60Hz.
 
To answer the OP's question, it really depends on a lot of factors. The only "gotcha" that I think would actually improve the situation would be VRR, like others have mentioned. Everything else will depend on image quality. How's the contrast? How's the color accuracy? There's so much more to monitors than just framerate and refresh rate.
 
I don't think this is right; varying the bitrate would make the signalling more complex and slow down the transmission rate of each frame. The amount of time it takes a frame to be transmitted should depend on the bitrate of the cable and the resolution of the monitor.
Incorrect.

Short Answer:

This is not how it currently works. The GPU intentionally pauses before it transmits the next pixel row. For compatibility reasons, the GPU honors the Horizontal Scan Rate of the EDID/E-EDID/DisplayID (CEA-861 extension block), spewing out 1 scanline* at a time and then idling.

So if the Horizontal Scan Rate in your EDID is 135KHz, that's 135,000 pixel rows per second, output from the GPU 1/135000sec apart, regardless of cable bandwidth. That's why QFT exists at all (an HDMI Forum invention, but it apparently works on DisplayPort, DVI and VGA connections too).

*Or however much fits in a DisplayPort/HDMI micropacket at a time.

Long Answer:
Over the last 100 years, a signal source (an analog TV broadcast, a GPU, or a video player) sprays out one scanline -- one pixel row -- at a time. For a GPU, that pacing is the "Horizontal Scan Rate" listed in a Custom Resolution Utility. So if the horizontal scan rate is 67,500, there's a 1/67500sec delay before the GPU puts out the next pixel row. The GPU output is throttled to the currently configured horizontal scan rate.
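
As a back-of-envelope illustration of that pacing (my numbers, assuming typical timings rather than any specific monitor's EDID), scanout time is simply visible lines divided by horizontal scan rate:

```python
# Back-of-envelope scanout math under the "one pixel row per 1/scanrate" pacing
# described above. Scan rates are illustrative, not from a specific EDID.

def scanout_ms(active_lines: int, hscan_hz: float) -> float:
    """Time to transmit the visible scanlines of one refresh cycle, in ms."""
    return active_lines / hscan_hz * 1000

# 1080p 60Hz at the standard ~67.5KHz horizontal scan rate:
print(round(scanout_ms(1080, 67_500), 1))   # ~16.0 ms, nearly the full 1/60sec

# 1080p 240Hz at ~270KHz scan rate: the same 1080 visible rows arrive ~4x faster
print(round(scanout_ms(1080, 270_000), 1))  # ~4.0 ms
```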

This is even beamraceable -- I wrote Tearline Jedi, which did the equivalent of raster interrupts on GeForces and Radeons.

VSYNC OFF tearlines are just rasters, and they can be controlled precisely with certain APIs like D3DKMTGetScanLine(), which software such as RTSS Scanline Sync and Special K Latent Sync uses. At, say, a 135KHz scan rate, a 1/135000sec delay to Present() during VSYNC OFF moves a tearline downwards by 1 pixel.
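
That relationship is simple arithmetic. Here's a tiny sketch of the tearline-steering math (a hypothetical helper for illustration, not code from RTSS or Special K):

```python
# Sketch of the tearline-steering arithmetic described above (hypothetical
# helper, not RTSS/Special K code). During VSYNC OFF, delaying Present() by
# N horizontal-scan periods moves the tearline downwards by N pixel rows.

def present_delay_seconds(rows_to_move: int, hscan_hz: float) -> float:
    """Extra delay before Present() that shifts a tearline down by N pixel rows."""
    return rows_to_move / hscan_hz

# At a 135KHz scan rate, moving a tearline down by 100 pixels takes ~0.74ms:
print(f"{present_delay_seconds(100, 135_000) * 1e6:.1f} microseconds")  # ~740.7
```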

We’ve stuck to the same raster signal delivery methodology for a century — in this signal layout, from analog to digital:

[Attached image: VideoSignalStructure.png -- video signal structure diagram]


Normally these numbers are automatically computed (e.g. by the VESA CVT formula), but you can override them with manual numbers to create your own QFT mode, with latency decreases confirmed by a photodiode latency tester.

In the past, porches were overscan for CRTs, and sync intervals were guard delays for moving the CRT electron beam back to the left edge (horizontal sync) or top edge (vertical sync). This was converted 1:1 perfectly during the analog-to-digital transition, which is why, to this day, nearly unbuffered (only line-buffered) VGA-to-DP, DP-to-VGA, HDMI-to-VGA and VGA-to-HDMI adaptors exist: they follow the signal temporally, as it's a bidirectional 1:1 conversion at the pixel clock.

There is packetization going on too, but it's only at the scanline level, and even DSC still follows the raster methodology of the early Baird/Farnsworth TVs of the 1920s. Even a 360Hz VRR signal still follows the same pixel-delivery sequence in a synchronous, time-metered-out way.

If you enable VRR, the GPU locks the dotclock at maximum and transmits refresh cycles at max-Hz velocity, varying only the VBI size to change the refresh rate in real time. VRR is fixed horizontal scan rate, variable vertical refresh rate.
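
In other words, the only per-refresh knob VRR turns is the number of blank lines in the VBI. Here's a rough sketch of that relationship (hypothetical 1440p timing numbers; real totals come from the monitor's EDID/DisplayID):

```python
# Rough sketch of fixed-scanrate VRR: the refresh rate is changed per cycle
# only by stretching the vertical blanking interval. Timings are hypothetical.

ACTIVE_LINES = 1440
VTOTAL_AT_MAX_HZ = 1500                  # 1440 visible + 60 blanking at max Hz
MAX_HZ = 165
HSCAN_HZ = VTOTAL_AT_MAX_HZ * MAX_HZ     # fixed scan rate: 247,500 lines/sec

def vbi_lines_for(target_hz: float) -> int:
    """Blank lines needed to stretch one refresh cycle down to target_hz."""
    total_lines = HSCAN_HZ / target_hz
    return round(total_lines - ACTIVE_LINES)

print(vbi_lines_for(165))  # 60 blank lines: normal max-Hz refresh cycle
print(vbi_lines_for(45))   # 4060 blank lines: the monitor idles between 45fps frames
```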

In fact, VRR can even be adaptored to the analog domain. That's why some multisync CRTs (without fussy refresh-rate-blackout electronics) work fine with FreeSync when you use an unbuffered HDMI-to-VGA adaptor on an AMD card and apply a ToastyX FreeSync range in an EDID override for the CRT tube. The FreeSync is preserved in the analog domain! It's amazing how minor a modification to the old-fashioned raster methodology was needed to enable VRR.

In fact, QFT and VRR are very similar. VRR is a superset of QFT because it always transmits refresh cycles at max scanout. But the problem is that QFT is almost never used with fixed-Hz modes. So a 60Hz signal, even on a 240Hz display, is transmitted over 1/60sec from the first to the last pixel of a refresh cycle, even over DisplayPort or HDMI. A high-bandwidth cable essentially idles between pixel rows! This is already proven in latency tests and in raster manipulation (delaying by one unit of scan rate to move a tearline down by one pixel), the exact algorithm used in RTSS Scanline Sync, which I helped Guru3D add.

QFT is signal-identical to a perfectly framerate-capped VRR mode of exactly the same pixel clock. Basically, QFT is a fixed-Hz version of VRR's faster-frame-transmission benefit. But low-Hz fixed-Hz modes using the standard VESA CVT / CVT-R / etc. EDID formulas spew out of a GPU output very slowly (confirmed), even on an HDMI 2+ or DisplayPort 2+ cable. That's the problem, and that's what DIY Custom Resolution QFT modes solve.
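
To show what a DIY QFT mode does to the numbers, here's a sketch in the spirit of what you'd enter in a Custom Resolution Utility (hypothetical 1080p timings; always check the result against your panel's supported pixel clock): keep the fast mode's pixel clock and horizontal timing, then pad the vertical total until the refresh rate drops to 60Hz.

```python
# Sketch of a DIY QFT calculation (hypothetical 1080p timings): reuse the pixel
# clock / horizontal timing of a 240Hz mode, then enlarge the vertical total
# until the refresh rate drops to 60Hz. The visible rows still scan out fast.

HTOTAL = 2200                                  # horizontal total (active + porches + sync)
VTOTAL_240 = 1125                              # vertical total of the 240Hz mode
PIXEL_CLOCK = HTOTAL * VTOTAL_240 * 240        # 594,000,000 pixels/sec
HSCAN_HZ = PIXEL_CLOCK / HTOTAL                # 270,000 lines/sec, kept constant

vtotal_qft_60hz = PIXEL_CLOCK / (HTOTAL * 60)  # huge VBI, same scan rate
scanout_ms = 1080 / HSCAN_HZ * 1000            # time to deliver the visible rows

print(int(vtotal_qft_60hz))   # 4500: the 60Hz QFT mode's vertical total
print(round(scanout_ms, 1))   # ~4.0 ms, vs ~16 ms for an ordinary CVT 60Hz mode
```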

Latency measurements confirm the slower scanout of lower Hz and faster scanout of higher Hz.

If you open NVIDIA Custom Resolution, you see something called a “Pixel Clock”. The pixels are timed based on that, and aren’t accelerated unless you intentionally do so (higher pixel clock via a VRR mode or a QFT mode).

So, multiple pieces of evidence that I'm correct and you're incorrect:
- Analog to digital transition preserved temporals / timings independently of digital bitrate
- Unbuffered adaptors that convert HDMI/DP to VGA. These would not work if HDMI/DP ran at max bitrate with small VBIs for low-Hz modes
- RTSS Scanline Sync exists; I helped them (credited)
- Special K Latent Sync exists; I helped them (credited)
- WinUAE Amiga emulator beam raced sync (synchronization of emulator raster to real world raster); I helped them (credited)
- Existence of display overclockers (transferring porch headroom to Hz)
- Tearline Jedi experiments
- RDTSC/QueryPerformanceCounter tests on D3DKMTGetScanLine() concurrent with an Arduino photodiode (using my in-house tester). Lag is highest right above a tearline and lowest right below a tearline; frameslices are individual latency gradients along their vertical dimensions!

I also put my photodiode tester on a modified version of Tearline Jedi and confirmed 100% of the above findings. VSYNC OFF tearlines are just rasters, and if you lag-test one location on the screen, you can predict the input lag of a different part of the screen simply by knowing the horizontal scan rate, the sync technology, and where the tearlines are, as long as it's a horizontal-scanrate multisync panel. I tested low-Hz and high-Hz modes, and the findings were absolutely confirmed.

After I did all that work in Tearline Jedi, I informed Guru3D (RTSS Scanline Sync) and Kaldaien (Special K Latent Sync), and they added the scanline sync features because of what I told them. So I helped them introduce a new sync technology that combines the VSYNC ON look and feel with the low lag of VSYNC OFF, by raster-steering (beamracing) the tearlines offscreen, ideally just above the top edge of the screen (near the end of the VBI = lowest lag = a big difference if you use a QFT ultra-large VBI to create an ultra-low-lag fixed-Hz VSYNC ON mode with QFT + RTSS S-Sync).

Scroll back through the release notes of those apps (e.g. RTSS, about a couple of years back) and you'll see the introduction of the respective scanline sync feature credited to my findings.

So my research is already in several production apps that depend on me being right about the Present()-to-photons black box, which Blur Busters is an expert in.

Yes, it is amazing that the signal-level raster/scanning behaviors are nearly 100 years old. They started as Nipkow wheels in the 1920s, used by early TV inventors like Baird and Farnsworth, until the first electronic glass CRT tubes of the 1930s. You heard me right: the legacy signal delivery topology of the 1920s-1930s still exists, temporally unmodified, in a 390Hz VRR signal today.

You can shrink the porches and overscan quite a lot (down to 1 pixel) on many modern displays -- they're essentially digital comma separators between scanlines and between refresh cycles -- but for the most part, 1080p 60Hz is still a derivative of the 1980s Japanese MUSE HD standard, which specifies 1125 scanlines (1080 visible + 45 blanking). That was preserved through ATSC 1.0 and 3.0, and was simply doubled for 4K (2250 scanlines), so we are wasting lots of bandwidth on current HDMI and DisplayPort. That's why display overclockers on some LCDs reduce the porches to get overclocking headroom to increase the Hz -- like the BenQ 144Hz XL2720s overclocked to >200Hz (there's a hack in a Blur Busters Forums thread successfully used by hundreds) -- transferring the bandwidth wasted in big porches (overscan) into extra Hz without increasing the pixel clock.

But at the end of the day, when watching Netflix on a 4K display, you're wasting more than 10% of the bandwidth (a 4400-pixel-wide by 2250-pixel-tall signal where only 3840x2160 is visible), and heavily under-utilizing your 8K-60Hz-capable HDMI cable. Even if the HDMI transceiver built into the latest Fire TV or Apple TV could burst out the scanlines faster, it still intentionally delays between scanlines to stick to the industry-standard 4K horizontal scan rate for maximum compatibility's sake.
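
That overhead figure is easy to sanity-check from the standard 4K60 timing quoted above (4400x2250 total on the wire, 3840x2160 visible):

```python
# Blanking overhead of the standard 4K 60Hz timing quoted above:
# 4400 x 2250 total pixels are clocked out per refresh, 3840 x 2160 are visible.

total_px = 4400 * 2250       # 9,900,000 pixels per refresh cycle on the wire
visible_px = 3840 * 2160     # 8,294,400 pixels you actually see

overhead = 1 - visible_px / total_px
print(f"{overhead:.1%}")     # ~16.2% of the signal is porches/sync (blanking)
```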

A custom EDID hack (e.g. a custom Linux modeline on a hacked Android device) can nearly zero out the inefficiencies, but fundamentally, signal delivery stays temporally accurate to the industry-standard signal timings, regardless of analog or digital (to the error margin of a DP/HDMI micropacket).

Needless to say, I understand this stuff better than maybe 90% of newly hired computer monitor engineers. Some vendors just source a panel from the panel manufacturer (e.g. AUO), outsource the scaler to a developer (e.g. MStar), and roll it into a branded monitor using only the standard timing formulas, with engineers who only know the basic standard VESA timing formulas. (QFT is a non-VESA timing.)

NVIDIA and AMD built automatic QFT into their VRR tech (that's why 60fps at 240Hz feels lower-lag than fixed-Hz 60Hz), but nobody gave display manufacturers instructions on how to do QFT for fixed-Hz modes, so this is where this small company jumps in to explain that QFT can be done DIY. QFT is the only way to create a fixed-Hz 60Hz mode that feels as low-lag as 60fps at 240Hz. No consortium has developed a port-independent QFT standard for fixed-Hz modes, as all the money is on VRR. Even Microsoft's Present() timing is end-of-VBI aligned, and needs to be adjusted to beginning-of-VBI aligned to maximize QFT lag reductions (and that's where RTSS Scanline Sync comes in to help unlock massive QFT lag reductions).

This is why all of this is not so well documented; but it's relatively simple once someone understands what those numbers mean in a Custom Resolution Utility, and that they're still temporally accurate in the digital era (i.e. the GPU video output idles between pixel rows to throttle the signal to the horizontal scan rate defined in the EDID/E-EDID/DisplayID. Yes, annoying).
 