More DLSS...

Eventually, sometime in the future, fake frames will become perceptually lossless relative to real frames.

Once that happens, it won't really matter whether a frame is fake or real.

Having 4K 240fps+ UE5-quality graphics is unfortunately not going to be possible through existing traditional workflows.

Netflix/Blu-Ray/Digital Cinema is only ~1 to 2 full frames per second; the rest of the magic of video compression relies on interpolation mathematics. (See the various video compression papers and specs.)


Figure 1: Example frame sequence of a typical video compression stream.
I = fully compressed original frame
P = unidirectionally predicted (interpolated) frame.
B = bidirectionally predicted (interpolated) frame.

Ever since MPEG-1 was invented, video compression has used interpolation mathematics: MPEG-2, MPEG-4, H.262, H.263, H.264, H.265, H.266, you name it. Originally, you could see pulsing artifacts in early video compression, but recent codecs (even at light compression ratios) are now perceptually lossless -- you can't tell the quality of I frames, B frames and P frames apart anymore!
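To make the I/P/B idea concrete, here is a tiny toy sketch (my own illustration, not any real codec): a P frame is reconstructed by copying blocks from the previous decoded frame at offsets given by motion vectors, plus a small residual that the encoder transmits.

```python
# Toy motion-compensated prediction (illustrative only -- not MPEG/H.26x).
# A "P frame" is rebuilt from the previous frame + motion vectors + residual.
import numpy as np

def predict_p_frame(reference, motion_vectors, block=8):
    """Copy each block from `reference`, displaced by its motion vector."""
    h, w = reference.shape
    predicted = np.zeros_like(reference)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block][bx // block]
            sy = int(np.clip(by + dy, 0, h - block))
            sx = int(np.clip(bx + dx, 0, w - block))
            predicted[by:by + block, bx:bx + block] = reference[sy:sy + block, sx:sx + block]
    return predicted

# Example: every block "moved" 1 pixel to the right since the reference frame.
ref = np.arange(64 * 64, dtype=float).reshape(64, 64)
mvs = [[(0, 1)] * 8 for _ in range(8)]
p = predict_p_frame(ref, mvs)
# The encoder only has to send the motion vectors plus the (small) residual:
#   residual = actual_frame - p    and the decoder rebuilds  actual ~= p + residual
```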

Likewise, GPUs are starting to do something of this sort in three dimensions.

DLSS 3.0 has lots of problems, but it is a harbinger of the FSR/XeSS/DLSS future of frame rate amplification. And a cool reprojection demo ("Frame Rate Independent Mouse Look") was just released that can produce 10:1 frame rate amplification ratios.

Strobeless motion blur reduction requires ultra-high frame rates, so any of these algorithms is fair game for avoiding eye-searing PWM-based strobe backlights. The Holy Grail is brute framerate-based motion blur reduction.

People still have the choice to go with uncompressed video, or with native rendering. But people also need the choice of various kinds of frame rate amplification via algorithms like supersampling / interpolation / extrapolation / reprojection / etc.

Just like video compression has access to the ground truth (the original video frames) to make its interpolation better quality than a black-box man-in-the-middle interpolation chip, a GPU reprojector has the opportunity to know the full input rate (e.g. a 1000Hz mouse) to build a low-lag frame rate amplification system such as a reprojection algorithm. DLSS 3.0 does not use this yet, but I would bet that DLSS 4.0 will factor in more data to make it less black-box, lower-lag, and less artifact-prone, until it perceptually disappears.
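As a rough sketch of what "knowing the full input rate" could look like (hypothetical function names, pure yaw only, no parallax handling -- not how DLSS actually works), a reprojector can re-sample the freshest mouse delta for every output frame and shift the last rendered frame accordingly:

```python
# Hypothetical sketch: generate extra frames from the newest 1000Hz mouse samples
# by shifting the most recent rendered frame to the current look direction.
# Pure horizontal yaw shift only; real reprojection also handles pitch, roll,
# translation and parallax reveals.
import numpy as np

PIXELS_PER_DEGREE = 1920 / 90.0   # assumed: 90-degree horizontal FOV across 1920 px

def reproject_yaw(last_frame, rendered_yaw_deg, current_yaw_deg):
    """Shift the last rendered frame to match the newest mouse-look yaw."""
    delta_px = int(round((current_yaw_deg - rendered_yaw_deg) * PIXELS_PER_DEGREE))
    return np.roll(last_frame, -delta_px, axis=1)   # exposed edges would need infill

# Amplification loop (pseudocode): ~100 renders/sec amplified to ~1000 outputs/sec.
# for each 1 ms display tick:
#     yaw = read_latest_mouse_yaw()                  # hypothetical input call
#     present(reproject_yaw(last_rendered_frame, last_rendered_yaw, yaw))
```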

The same will happen as XeSS / FSR / DLSS leapfrog over each other over the years, using various reprojection / extrapolation / supersampling / interpolation / etc algorithms.

It will take years before three dimensions (GPU) is as good as the algorithms for two dimensions (glitch-free source-end video compressors, instead of glitchy man-in-the-middle interpolation), but the architecture is slowly headed in that direction.

This stuff increasingly is far beyond grandpa's Sony Motionflow "Soap Opera Effect" Interpolation setting on the TV...

This is why "fake frame" terminology will slowly become outdated over the years and decades; as GPU rendering goes multi-tiered in the 4K-8K 1000fps 1000Hz future, and algorithms become more lagless & artifactless over the long-term.

Many people love motion blur. But not all use cases can allow it.

Not everyone wants or needs the extra frame rate, but others do.

For example, simulating real life (Holodeck) requires it. VR badly needs it, because real life doesn't strobe, so we need flickerless methods of motion blur reduction. Accurate simulation of analog real-life motion on a perfect "Holodeck" display requires brute framerates at brute refresh rates, to avoid extra Hz-throttled/frametime-throttled sample-and-hold motion blur above and beyond real life. All VR headsets are forced to flicker to lower the persistence of each frame (which not everyone can stand), because we don't yet have the technology for sufficiently brute ultra-high framerate=Hz motion blur reduction. Instead of flashing one frame for 1ms to reduce blur, you show 1000 different 1ms frames, getting the same blur without strobing. You can see a demonstration of the motion blur that interferes with VR and Holodecks as an animation at: www.testufo.com/eyetracking

So some displays really need brute framerate-based display motion blur reduction -- and all algorithms are fair game if you want photorealistic graphics at photorealistic flicker-free brute framerates (low-persistence sample-and-hold with zero strobe-based motion blur reduction).

Yes, like the Wright Brothers, DLSS 3.0 has some high-lag issues at some settings, and some artifacts. Nonetheless, it needs to be acknowledged that the move to 4:1 frame rate amplification ratios was predicted by my article more than 3 years ago, and adding reprojection is the missing piece of the puzzle that can reduce artifacts while increasing frame rate amplification ratios to 10:1 -- potentially making 4K 1000fps UE5-quality possible on existing GPUs such as the RTX 4090, once displays are available and reprojection algorithms are added to frame rate amplification algorithms (whether XeSS / FSR / DLSS / etc).

The retina refresh rate for sample-and-hold isn't reached until the quintuple digits, though the diminishing curve of returns means geometric steps (60Hz -> 240Hz -> 1000Hz -> 4000Hz, with GtG=0 and framerate=Hz) are needed to retain human-visible benefits. If you've seen how refresh rates behave, you know you need to double the Hz and frame rate to halve display motion blur.
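Back-of-napkin arithmetic for that doubling rule (my own example numbers, assuming GtG=0 sample-and-hold with framerate=Hz and smooth eye tracking): blur is proportional to how long each frame stays on screen.

```python
# Sample-and-hold motion blur ~= eye-tracked speed * frame visibility time.
# Assumes GtG=0, framerate = Hz, and smooth eye tracking of the moving object.
speed_px_per_sec = 1000          # example panning speed: 1000 pixels/second

for hz in (60, 120, 240, 1000, 4000):
    frametime_ms = 1000.0 / hz
    blur_px = speed_px_per_sec * frametime_ms / 1000.0
    print(f"{hz:>5} Hz: {frametime_ms:6.2f} ms persistence -> ~{blur_px:5.2f} px of blur")

# Each doubling of Hz (with framerate=Hz) halves the blur:
# 60 Hz ~16.7 px, 120 Hz ~8.3 px, 240 Hz ~4.2 px, 1000 Hz ~1.0 px, 4000 Hz ~0.25 px.
```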

Just like whichever color pill you take in The Matrix, the original frame and the interpolated frame have become virtually indistinguishable in all video compression codecs -- if you ever use streaming, it's always, permanently interpolating. Fake frames, indeed!

P.S. I am cited in over 25 research papers, so I am not trolling on this topic. 😉

I am well aware you're not trying to troll me. Nothing you said is incorrect, but frame rate is critical for you not to notice the glitches caused by interpolation, and short of a specific use case I think almost everyone that buys this card will leave it off or prefer DLSS 2. VR is its own animal, no doubt. Video compression for Netflix and others is well known and easy to do, and you can still see the artifacts it causes, usually in excessively dark scenes and sudden camera movement; live sports often shows you just how much compression is being used. I am sure all companies will continue to work on this tech, and we'll see if the public thinks it's a great idea or not, but fake frame it is for now ;)
 
What exactly is a "fake frame" to you?
Any frame that is generated by interpolation... fairly simple, really. Getting more complex than that is kinda just distracting from what it is.

Not that it is a bad thing per se. It can look great, but that doesn't change that it is a fake frame.
 
Yeah, if you want to keep using the terminology, be fair. 3D rendering is just generating a fake version of real life, anyway. And AI supersampling is just creating fake pixels. And raytracing denoising is just creating fake detail.

Long term, "fake frames" becomes metaphorically more non-sequitur once the GPU rendering architecture spreads beyond the triangle-render workflow.

Unfortunately, that's going the way of the dinosaur to a more hybrid approach.

What is important is the low SNR (signal to noise ratio) between the different methods of frame creation (triangle-draw or adjustment to an existing frame that was triangle-drawn).

The "fake frames" are getting more truth added to it over time (e.g. controller data) which makes the frame even less fake. For example, ASW 2.0 reprojection in Rift VR made things a lot more realistic than bearing with 45fps stutter.

This is a huge semantical rabbit hole, which exists only because grandpa's interpolation was super-bad and laggy.

Real world is not drawn in triangles anyway. What's important is a signal-to-noise ratio being low relative to real life (e.g. passing the Holodeck Turing Test, VR=real life), in a scenery diff.

It's all getting murky. And the terminology favoritism game will evolve over time anyway. But I like to nip the "fake frame" terminology in the bud, since it impedes progress and consumer acceptance of new technologies.

Until recently, it was thought to be unobtainium to go sufficiently dramatically up the diminishing curve of returns (in framerate and in Hz), but here we are. There is already an engineering path to 8K 1000Hz display, after all. Getting concurrent near-retina-resolution with near-retina-framerate is a very tall order without help from frame rate amplification technologies.

We are hurtling towards an eventuality where the framerate-increase benefit (e.g. 10:1) outweighs ever-fainter artifacts (e.g. it becomes perceptually preferable over other undesirable options like lowering detail levels).

Some of the frame rate amplification methods actually reduce latency (the Frame Rate Independent Mouselook demo, for example).

One asks oneself, philosophically, is the fakeness by lower detail level outweighing the fakeness of various kinds of alternative frame generation methods? That's a genuine napkin self-exercise.

Sure, right now, DLSS 3.0 artifacts/lag are objectionable to some, but they won't always be, and we are already on a technological path to 10:1 frame rate amplification ratios. This is a strobeless path to creating 90% motion blur reductions -- 10x less display motion blur, even for 2D displays, not just VR. Some people hate blur -- it makes them motion sick; not everyone can play PlayStation or Xbox due to motion blur headaches. Some are bothered by stutter, some are bothered by tearing, etc.

Remember, strobeless framerate=Hz blur reduction is a replacement for both VRR and for strobing, since by definition it is concurrently stutter-free and blur-free, so it hits multiple birds with one throw. In that sense, the frame rate increase benefit starts to massively outweigh the diminished artifacts, for some.

Reprojection techniques need to become more popular, as they have lower lag and (when done properly) fewer artifacts than interpolation techniques.

More ability to generate frames by any method (whether polygonal, raytraced, or accurately modified (from controller data) previous frames, etc.) is better when it's not discouraged by "fake frame" terminology.

As we get more bottlenecked by the Moore's Law slowdown (in the transistor-size department), optimization becomes necessary, and this forks into a multilayered GPU rendering workflow that produces much better and preferable visuals to today's, where the ever-larger framerate-increase benefit begins to outweigh the ever-diminishing disadvantages (e.g. decreased lag, decreased artifacts, decreased compromises like not needing to reduce detail level, etc).

The crossover point. Perhaps you may wait until then to cease calling it a "fake frame", but it is already hard work convincing people of the merits of 1000fps 1000Hz. At GtG=0 (e.g. OLED/MicroLED), going 120fps-vs-1000fps with framerate=Hz is actually much more noticeable than 60fps-vs-120fps, due to the ultra-dramatic jump up the diminishing curve of returns.

Some display technologies can do it with only a minor cost increment over 60Hz, so the less progress-retarding from terms like "fake frames", the better. That's why my Frame Rate Amplification article (which I wrote years ago) addresses the cautionary tale of "fake frame" terminology. If you haven't read it yet, click the Research tab at the top of Blur Busters and check it out.

We're talking benefits for 2D planar displays, not just VR either -- though I illustrated that extreme example to point out something that more people unanimously agree on.

Any scene rendered by a GPU is always synthetic -- it's still a fake version of real life. From that point of view, some percentage of the population would just call it all a fake version of real life. The phrase "fake frame" is then just an invented layer of terminology on something that's already fake.

But there are now situations where what people call "fake frames" here (at a given performance envelope) are more realistic than the original rendered frames (when those are adjusted to meet the same performance envelope -- e.g. by turning down detail and resolution massively).

At the end of the day, the method of how you generate it matters less than the difference in pixels between the 3D scene and a photograph of real life: Better realism with fewer artifacts.

We're not there yet universally for all situations for DLSS 3.0, but we're rapidly hurtling towards that crossover point for future frame rate amplification technologies.

That's the important part, and some frame rate amplification technologies, over this decade, are actually about to achieve a lower difference than the alternative of being forced to lower detail levels (disabling raytracing, lowering resolution/textures, etc. -- and usually still not reaching the same frame rates, which is itself an artifact from this discussion's POV). This includes all spatial and temporal artifacts (aka motion artifacts), both of which are capable of going down.

From a diff between rendered 3D scene and a photo, the terminology "fake frame" starts to create a self-rethink, when you're seeing the technology-progress trajectories hurtle the way they are.

Be reminded, we're talking about games like CP2077 or UE5 or high-detail engines that otherwise run at low frame rates, not low-detail esports games like CS:GO.

That's just my POV, but YMMV.

</philosophical>
 
Yeah, if you want to keep using the terminology, be fair. 3D rendering is just generating a fake version of real life, anyway. And AI supersampling is just creating fake pixels. And raytracing denoising is just creating fake detail.

Long term, "fake frames" becomes metaphorically more non-sequitur once the GPU rendering architecture spreads beyond the triangle-render workflow.

Unfortunately, that's going the way of the dinosaur to a more hybrid approach.

What is important is the low SNR (signal to noise ratio) between the different methods of frame creation (triangle-draw or adjustment to an existing frame that was triangle-drawn).

The "fake frames" are getting more truth added to it over time (e.g. controller data) which makes the frame even less fake. For example, ASW 2.0 reprojection in Rift VR made things a lot more realistic than bearing with 45fps stutter.

This is a huge semantical rabbit hole, which exists only because grandpa's interpolation was super-bad and laggy.

Real world is not drawn in triangles anyway. What's important is a signal-to-noise ratio being low relative to real life (e.g. passing the Holodeck Turing Test, VR=real life), in a scenery diff.
That's just getting into fallacies and semantics. No need for that.

Interpolated data is not the same *shrug*.

*late edit given your own more detailed edit*
 
That's just getting into fallacies and semantics. No need for that.

Interpolated data is not the same *shrug*.
True, it is not the same thing. Nor are the various methods of frame rendering / raytracing / reprojecting / etc. [agreement]

However, it is actually a self-fallacious argument to claim my post is built on fallacies. [disagreement]

There are situations where one can add more true data to the various kinds of frame generation (not necessarily "interpolation", but hybrids of rendering / supersampling / reprojection / etc.), such as controller positional data, so that frame rate amplification modifies the previous high-resolution frame more faithfully than trying to re-render low-resolution/low-detail frames from scratch at the same frame rate could ever manage.

You might want to re-read since I am talking about various techniques other than "interpolation" (reprojection doesn't require a future frame, thanks to extra knowledge about controller positional data). I made an edit to it that clarified parts of my previous post (before I saw your reply).

Even when rendering frames in current non-DLSS workflows, GPUs are already beehives of hacks/shortcuts that try to skip rendering unnecessary stuff in order to increase performance. That being said, by rendering fewer frames per second via the triangle-render workflow, you can focus on putting more detail into the original frames, and then focus on large-ratio frame rate amplification. By doing this with 10x more detailed frames (thanks to 10:1 frame rate amplification techniques doing the rest), you actually preserve more realism than you lose.

There is a crossover point where you can have truer intermediate generated frames (regardless of how they're generated -- even if not by the "interpolation" strawman, and also not by re-rendering) that are closer to real life and have fewer spatial artifacts and fewer motion artifacts.

Right now, we're not there yet. The frame-rate increase ratios are not high enough yet, nor the artifact issues low enough yet, to reach the crossover point. But the crossover point exists.

Often the next frame is a minor positional shift, and there are now ways to do it in superior ways than interpolation.

For example, reprojection (not interpolation) is able to do it laglessly and perceptually losslessly (pixels the same as if you'd re-rendered!) except at the parallax-reveal boundaries, and that's where extra processing is needed to avoid the "fakeness" you claim.
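A minimal sketch of why parallax reveals are the hard part (a toy illustration of my own, horizontal camera translation only, no z-test): forward-reprojecting pixels using the depth buffer leaves "holes" exactly where previously occluded scenery becomes visible.

```python
# Toy forward reprojection using a depth buffer (horizontal camera shift only).
# Pixels are scattered to their new screen positions; anything never written to
# is a parallax-reveal "hole" that needs infill (the expensive, artifact-prone part).
# Assumes depth > 0 everywhere; a real implementation would also z-test overlaps.
import numpy as np

def reproject_with_depth(color, depth, camera_shift, focal_px=1000.0):
    h, w = depth.shape
    out = np.zeros_like(color)
    hole = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Nearer pixels (small depth) move farther on screen than distant ones.
            disparity = int(round(focal_px * camera_shift / depth[y, x]))
            nx = x + disparity
            if 0 <= nx < w:
                out[y, nx] = color[y, x]
                hole[y, nx] = False
    return out, hole   # hole == True marks disoccluded pixels still to be filled
```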

There are hybrid methods being invented where some developers are now experimenting with partial reprojection / partial re-render, especially to fix parallax reveals more perfectly. A lot of experimentation is occurring, both in laboratories and by indies.

Regardless, the new paragraph in my previous (edited) post mentioning the crossover point is key.
This is why, over the longer term, the terminology "fake frames" is ultimately more arbitrary than many think, with future workflows (post-DLSS-3.0, obviously; DLSS 3.0 is only the Wright Brothers stage).
 
You're talking near future. Which is awesome, but it's not where things are at yet. As it stands with DLSS 3's frame generation, it's assisted frame interpolation. So calling it fake frames is accurate.

That said, I massively enjoy your posts.
 
That's an interesting and extreme way to test DLSS 3:



Big 4K monitor; native / DLSS Quality / DLSS frame generation side by side, and a blind test.

The fact that someone who reviews displays for a living has a hard time telling them apart (even for DLSS 3, outside the GUI elements), despite getting extremely close to a large, expensive monitor, could be a testimony to how good DLSS has gotten, or to how overrated 4K resolution is for games (or a mix of both). DLSS 2 seems to beat native by quite a bit in subjective visual fidelity.

If the difference is not obvious in that extreme a case (side by side), I imagine it will be near impossible for the same person to see a difference during an actual gaming session, without the direct reference to look at and without concentrating on looking for it.

The 4K 60fps DLSS 3 over 4K native 30fps seems like a clean win as well, which could be interesting for the console world if RDNA frame generation works on RDNA 2 hardware, and maybe for the 4060-4070. With only the 4090 out, the only scenario worth talking about was going from 120 to 240 or 75 to 120; but at the lower end, with mid-range gamers in the real world where 30fps on a high-latency television with a wireless controller was extremely popular 3 years ago (and still is), maybe it will bring a lot of value.
 
This is why I don't think it's worth being overly worried about it. Fake frames or not, it can look good. If you don't like it, you can also turn it off.

*late edit*
Personally I think it looks glitchy, so I wouldn't use it. But having the option for those that want it... *shrug*, might as well. But IMHO it's nowhere near as killer a feature as the Super Resolution function.
 
This is why I don't think it's worth being overly worried about it. Fake frames or not, it can look good. If you don't like it, you can also turn it off.
I agree.

I find the concern over "fake frames" interesting, since the "real" frames in the first place were made of fake approximations to begin with. Seems like splitting hairs to me. This is extending approximations to the temporal domain for even more speed.

And yeah - if it isn't your bag, turn it off.

I think I personally wasn't expecting just how good DLSS et al were going to be. I can be a fussbudget, but I also do not pause / capture screenshots and meticulously examine them. Just based upon playing them in real time - amazing stuff.
 
I agree.

I find the concern over "fake frames" interesting, since the "real" frames in the first place were made of fake approximations to begin with. Seems like splitting hairs to me. This is extending approximations to the temporal domain for even more speed.

And yeah - if it isn't your bag, turn it off.

I think I personally wasn't expecting just how good DLSS et al were going to be. I can be a fussbudget, but I also do not pause / capture screenshots and meticulously examine them. Just based upon playing them in real time - amazing stuff.
The fake frames drama will die once FSR catches up and gets interpolation as well. Then it will be an even playing field, with the usual people claiming one is better than the other when in reality both will have their pros and cons.
 
I think for the "drama" to die will require 4 things
) Time
) Intel-Apple-Amd solutions
) Case were there is no debate about the benefit like VR, mobile switch-steam deck, high hz laptop, very high HZ monitors
) And obviously, the quirks being fixed over the next 1-2 year
- 2D sprite-GUI artifact
- bad frame on camera change or other giant shift between 2 frame detected and simply rejected.
- V-sync type of handling frame going above monitor max issues fixed

I feel the drama is more over how it is used in marketing and to justify pricing than over the tech itself; had it been announced and emphasized the way AMD did on their release, the drama would have been quite minimal.
 
I agree.

I find the concern over "fake frames" interesting, since the "real" frames in the first place were made of fake approximations to begin with. Seems like splitting hairs to me. This is extending approximations to the temporal domain for even more speed.

And yeah - if it isn't your bag, turn it off.

I think I personally wasn't expecting just how good DLSS et al were going to be. I can be a fussbudget, but I also do not pause / capture screenshots and meticulously examine them. Just based upon playing them in real time - amazing stuff.

No, this isn't splitting hairs and it does matter. You shouldn't conflate the two as being equal or not mattering.

But the feature is still useful.
 
One of the initial shortcomings of DLSS 3 was its lack of support for VSYNC, which is fixed by today's new GeForce Game Ready driver 526.98. NVIDIA explained:

DLSS 3 uses its NVIDIA Reflex technology to limit the output frame rate to slightly below the refresh rate of the G-SYNC monitor. This enables tear-free gaming while avoiding large back pressure and high latency caused by VSYNC. To enable this feature:

-Enable G-SYNC: NVIDIA Control Panel --> Display --> Setup G-SYNC
-Turn VSYNC On: NVIDIA Control Panel --> 3D Settings --> Manage 3D Settings
-Use the Global Settings tab to apply the options in every game or on a per-game basis in the Program Settings tab
-Turn on DLSS Frame Generation in a supported game
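The key idea (illustrated below with a hypothetical rule of thumb, not NVIDIA's actual Reflex heuristic) is simply to cap the generated output frame rate a little below the refresh rate, so frames never queue up behind VSYNC:

```python
# Hypothetical illustration of "cap slightly below refresh" for G-SYNC + VSYNC ON.
# The exact margin Reflex uses is internal to the driver; "a few fps below max Hz"
# is a common community rule of thumb, used here purely as an example.
def frame_cap_for(refresh_hz: float) -> float:
    margin = max(3.0, refresh_hz * 0.03)       # assumed margin, not an official formula
    return refresh_hz - margin

for hz in (120, 144, 240):
    print(f"{hz} Hz panel -> cap generated output near {frame_cap_for(hz):.0f} fps")
```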
 
One of the initial shortcomings of DLSS 3 was its lack of support for VSYNC, which is fixed by today's new GeForce Game Ready driver 526.98. NVIDIA explained:

DLSS 3 uses its NVIDIA Reflex technology to limit the output frame rate to slightly below the refresh rate of the G-SYNC monitor. This enables tear-free gaming while avoiding large back pressure and high latency caused by VSYNC. To enable this feature:
This is a very important fix. VSYNC ON, regardless of VRR, has many important purposes, if programmed in a way to avoid latency.

While VSYNC OFF is very popular, VSYNC is important for stutter-free and tear-free operations needed for virtual reality and other applications. VR applications are among the world's lowest-latency VSYNC ON software, and so the technologies should eventually be able to mix together.

Perfect frame-paced VSYNC is superior when combined with low-persistence VR displays -- which have major problems with VSYNC OFF, since tearing is 100x more visible on a Holodeck IMAX-FOV display. So what's good for esports fails in VR, etc. -- Right Tool For the Right Job.

That being said, there are other glitches in Optical Flow to be fixed for VR. It will be a while before things improve enough to reduce VR artifacts and latency with 4000-series frame rate amplification, but hopefully we're one step closer now to full maxed-out RTX-enabled UE5-quality at 90fps+ on both desktop displays AND also virtual reality displays too.
_____

That being said... I have an important thought.

I'd love to see the NVIDIA Optical Flow SDK start to support reprojection technology, for future 4K 1000fps 1000Hz operations -- no longer unobtainium tech. Basically add support for reprojection in addition to interpolation.

This would make brute framerate-based motion blur reduction easier (e.g. 100fps -> 1000fps to reduce motion blur by 90%) as a substitute for eye-killer flicker-based, strobe-based blur reduction. Strobing certainly has its uses and it put Blur Busters on the map, but real life doesn't flicker.

Brute framerate-based motion blur reduction is vastly superior, especially when we finally hit major ratios near 5:1 and 10:1 ratio territory. But it requires massive geometric frame rate increases (5x-10x to reduce motion blur to 1/5th or 1/10th respectively).

Without strobing, display motion blur is proportional to frametime (whether for an original or a "fake" frame, up to the display's max Hz). So gigantically more fps and more Hz is mandatory to be ergonomically PWM-free, strobe-free, flicker-free AND blur-free.
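Napkin check of the "same blur without strobing" equivalence (my own arithmetic, assuming GtG=0 and smooth eye tracking): a 1ms strobe at 100fps and plain sample-and-hold at 1000fps both hold each unique image for 1ms.

```python
# Persistence (how long each unique image stays visible) sets tracked motion blur.
# Two routes to 1 ms of persistence -- one flickers, one doesn't.
modes = {
    "1 ms strobe @ 100 fps":       1.0,   # 1 ms flash + 9 ms dark per frame (flicker)
    "sample-and-hold @ 1000 fps":  1.0,   # always lit, no flicker
    "sample-and-hold @ 100 fps":  10.0,   # baseline for comparison
}
speed_px_per_sec = 2000   # example eye-tracked pan speed

for name, visible_ms in modes.items():
    blur_px = speed_px_per_sec * visible_ms / 1000.0
    print(f"{name:>28}: ~{blur_px:4.0f} px of motion blur")
# The two 1 ms modes give the same ~2 px of blur, a 90% reduction versus the
# 10 ms baseline (~20 px) -- but only the 1000 fps route is flicker-free.
```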

That's why, when we finally hit 4:1 frame rate amplification ratios, it actually started entering "WOW" territory for blur-reduction lovers: DLSS 3.0 showed promise as a strobeless motion blur reduction technology, despite glitches. Imagine if lag goes down, artifacts go down, and frame rate amplification ratios go up -- strobing becomes obsolete for PC games!

DLSS does create minor softening, but the large framerate increases compensate. The net result is that you still get almost 4x clearer motion at 4x the frame rate. For some people, this overcomes the DLSS disadvantages -- motion-blur-haters who are also strobe-haters and flicker-haters, the kind of people who get motion blur headaches (getting motion sick from display motion blur is something that happens to some of us!).

But adding reprojection, to piggyback on this, would turn 4:1 ratios into 10:1 ratios using today's technology -- your very own GPU already installed in your system, as long as it has enough memory bandwidth. The terabyte-per-second memory of an RTX 4090 is enough to do 4K at 1000fps via rudimentary reprojection algorithms already.
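Napkin math on that memory-bandwidth claim (my own rough numbers, assuming a plain 4 bytes/pixel color buffer and ignoring compression and caches):

```python
# Rough check: can ~1 TB/s of memory bandwidth move 4K frames at 1000 fps?
width, height   = 3840, 2160
bytes_per_pixel = 4                     # assumed RGBA8 color buffer only
fps             = 1000

frame_bytes  = width * height * bytes_per_pixel            # ~33 MB per frame
# Assume a crude reprojection pass touches roughly 3 full buffers per generated
# frame (read color, read depth, write output) -- a ballpark, not a measurement.
traffic_gbps = frame_bytes * 3 * fps / 1e9                  # ~100 GB/s

print(f"Per frame: {frame_bytes / 1e6:.1f} MB; estimated traffic: {traffic_gbps:.0f} GB/s")
# An RTX 4090 has roughly 1000 GB/s of memory bandwidth, so this is ~10% of it.
```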

We need a centralized frame rate amplification SDK that is not VR-specific or interpolation-specific!

Ideally I want vendor independence, but if NVIDIA trailblazes this, then other, more open, GPU-independent frame rate amplification APIs will follow (Vulkan and DirectX).
 
One of the initial shortcomings of DLSS 3 was its lack of support for VSYNC, which is fixed by today's new GeForce Game Ready driver 526.98. NVIDIA explained:

DLSS 3 uses its NVIDIA Reflex technology to limit the output frame rate to slightly below the refresh rate of the G-SYNC monitor. This enables tear-free gaming while avoiding large back pressure and high latency caused by VSYNC. To enable this feature:

-Enable G-SYNC: NVIDIA Control Panel --> Display --> Setup G-SYNC
-Turn VSYNC On: NVIDIA Control Panel --> 3D Settings --> Manage 3D Settings
-Use the Global Settings tab to apply the options in every game or on a per-game basis in the Program Settings tab
-Turn on DLSS Frame Generation in a supported game
I mean, this is just basic g-sync usage 101.

That being said, I always thought it was kind of stupid/not average user friendly that when you enable g-sync it doesn't by default just force v-sync on anyways.
 
Don't look now, but LinusTechTips just featured the video I'm super-excited about.



GPU for 4K 1000fps 1000Hz UE5-detail is already here today.

Just add 4K 1000fps 1000Hz OLED. Beginnings of goodbye strobing (for PC gaming).

Or at least, for now 240fps 240Hz Cyberpunk 2077 on upcoming 240Hz OLEDs (regardless of resolution!)

Low persistence sample-and-hold FTW!

LinusTechTips just raised the profile of this. Now it's the game developers' turn. Epic, CD PROJEKT RED, etc.

And NVIDIA too -- add it to DLSS 4.0
 
In the middle of a round of Battlefield 2042 Escalation Conquest, I just thought of turning off DLSS to compare it with native and while running around, I noticed something.

Check out the handrail barriers:

Native at 3440 x 1440:

Screenshot_20221208_041457.png



DLSS Quality at 3440 x 1440:

Screenshot_20221208_041415.png


Seems like the DLSS AI reconstruction is the real deal. DLSS has an overall softer look but is stable while native is sharper but with pixel crawl and shimmering artifacts here and there.

Honestly, I prefer the sharper look of native but I really can't stand the shimmering aliasing so I'm going back to DLSS. Besides, I already got used to the smoother look. Also, with DLSS, the power consumption of the 4090 is almost always under 200W. At native it's around 260W on average. This is with a 116fps frame cap due to Nvidia Reflex.
 
In the middle of a round of Battlefield 2042 Exodus Conquest, I just thought of turning off DLSS to compare it with native and while running around, I noticed something.

Check out the handrail barriers:

Native at 3440 x 1440:

View attachment 532640


DLSS Quality at 3440 x 1440:

View attachment 532639

Seems like the DLSS AI reconstruction is the real deal. DLSS has an overall softer look but is stable while native is sharper but with pixel crawl and shimmering artifacts here and there.

Honestly, I prefer the sharper look of native but I really can't stand the shimmering aliasing so I'm going back to DLSS. Besides, I already got used to the smoother look. Also, with DLSS, the power consumption of the 4090 is almost always under 200W. At native it's around 260W on average. This is with a 116fps frame cap due to Nvidia Reflex.
Yeah, BF2042 Quality DLSS looks pretty nice. In games, when I can afford the FPS, I'll use DLAA, which is much sharper since there is no downsampling -- it is DLSS at native resolution.
 
In the middle of a round of Battlefield 2042 Escalation Conquest, I just thought of turning off DLSS to compare it with native and while running around, I noticed something.

Check out the handrail barriers:

Native at 3440 x 1440:

View attachment 532640


DLSS Quality at 3440 x 1440:

View attachment 532639

Seems like the DLSS AI reconstruction is the real deal. DLSS has an overall softer look but is stable while native is sharper but with pixel crawl and shimmering artifacts here and there.

Honestly, I prefer the sharper look of native but I really can't stand the shimmering aliasing so I'm going back to DLSS. Besides, I already got used to the smoother look. Also, with DLSS, the power consumption of the 4090 is almost always under 200W. At native it's around 260W on average. This is with a 116fps frame cap due to Nvidia Reflex.
This is why I recommend DLSS Quality if it's available, even if you don't need it. It does a great job of resolving details that sometimes even native resolution can't resolve. It's really cool like that.
 
Just played Portal RTX on my 4090. At 4K it is nearly unplayable without DLSS and Frame Generation. 15-20 FPS 4K native. Of course, when you turn on DLSS and Frame Generation, the experience is great.

I foresee many angry RTX 2000 and 3000 owners venting their frustration in the coming days.
 
Just played Portal RTX on my 4090. At 4K it is nearly unplayable without DLSS and Frame Generation. 15-20 FPS 4K native. Of course, when you turn on DLSS and Frame Generation, the experience is great.

I foresee many angry RTX 2000 and 3000 owners venting their frustration in the coming days.

I really doubt many will care about a very old game getting full path tracing, especially when it takes 1,600 bucks to still get subpar performance. It's more a tech demo than something anyone expects people to actually play the game with. I just highly doubt previous-gen owners are going to be upset because their card is unable to play it at 4K.
 
I really doubt many will care about a very old game getting full path tracing, especially when it takes 1,600 bucks to still get subpar performance. It's more a tech demo than something anyone expects people to actually play the game with. I just highly doubt previous-gen owners are going to be upset because their card is unable to play it at 4K.
Of course it will be used to compare RNDA 3 to Lovelace even being very Nvidia specific/optimized, plus half the frames generated vice rendered. Nvidia has big plans for Portal. My take. Still kinda neat in the end.
 
Here we go peeps. Portal w/ RTX.

My takeaway: Full Path Tracing is a b**tch even for an RTX 40 GPU w/ Frame Generation. Yikes!


3090, 4k, Ultra -> 10fps
4K w/ DLSS (it is labelled as DLSS 3.0) performance mode, high settings -> an amazing 36fps. Ultra Performance hit 60fps but looked like crap. Let's see, 99%+ of RTX users need not apply for this game. At the beginning of the game there was a 4090 FE card on the bottom of the table, lol. Yep, that is what you need to play this with :D.
 
I really doubt many will care about a very old game getting full path tracing, especially when it takes 1,600 bucks to still get subpar performance. It's more a tech demo than something anyone expects people to actually play the game with. I just highly doubt previous-gen owners are going to be upset because their card is unable to play it at 4K.
Much like Quake II RTX before it, it is a glorified demo. It's very cool and all, but no one was really going to dust off Portal to experience the whole story again just for the path tracing.

Also, RNDA; is that what an AMD shill signs before being paid to post FUD?:ROFLMAO:
 
Much like Quake II RTX before it, it is a glorified demo. It's very cool and all, but no one was really going to dust off Portal to experience the whole story again just for the path tracing.


Also, RNDA; is that what an AMD shill signs before being paid to post FUD?:ROFLMAO:
Nah, just stirring up some nVidiots :cool:. Seriously, it was my normal mistake that I normally make, and GoldenTiger correctly caught it.
What fud? 😳
 
Not really FUD, but there's definitely some cheer-leading going on, wouldn't you agree?




🤔
I am more for better IQ and performance with whatever works. As RT tech is being explored by developers, new innovative ways are being found and exploited. Path tracing kills even the mighty 4090, while software raytracing is running rather well on AMD console hardware (just smarter RT, maybe). Once RT becomes compelling -> big visual differences; you see that with Callisto Protocol and Fortnite using Unreal 5.1 with usable or good performance, and that is what's exciting. Not mindless use of RT that tanks every GPU out there and in some cases gives meaningless visual improvements. If cheer-leading is going on, it is for tech that works, which more people can actually use and enjoy.
 
I am more for better IQ and performance with whatever works. As RT tech is being explored by developers, new innovative ways are being found and exploited. Path tracing kills even the mighty 4090,
I am not sure what opposition you are trying to raise; Control with RT on runs faster at 4K native on a 4090 than this

, while software raytracing is running rather well on AMD console hardware
Why say the consoles are not using the hardware RT mode? At least in the Digital Foundry video they seem to always use the PC to show the software-mode difference, which is not that different from the usual pre-baked stuff, as it does not support much dynamic content, rigged meshes and so on;



Why would AMD console hardware with hardware RT run in the vastly inferior software mode? I mean, possibly, but source?

The 6800 XT seems to run Fortnite with hardware RT on fine, so why not the consoles? If you put the RT options on high settings, I am not even sure the game lets you run in software mode.
 
I am not sure what opposition you are trying to raise; Control with RT on runs faster at 4K native on a 4090 than this


Why say the consoles are not using the hardware RT mode? At least in the Digital Foundry video they seem to always use the PC to show the software-mode difference, which is not that different from the usual pre-baked stuff, as it does not support much dynamic content, rigged meshes and so on;



Why would AMD console hardware with hardware RT run in the vastly inferior software mode? I mean, possibly, but source?

The 6800 XT seems to run Fortnite with hardware RT on fine, so why not the consoles? If you put the RT options on high settings, I am not even sure the game lets you run in software mode.

The video was very clear: the console is using software raytraced Lumen while the PC version is using hardware raytraced Lumen. They didn't go into why they could not use hardware RT on the consoles (maybe a level of consistency and ease between PS5 and Xbox Series X for developers, with very similar results making it easy -- my guess).

It does not matter as much as the results. Lumen and Nanite are looking to be very awesome on the current-gen consoles, and many games will be using the Unreal 5.x engine. RT working without even needing dedicated hardware is another aspect I find very interesting.
 
Nvidia DLSS 2.5.1 Disables Built-In Sharpening; Nvidia Tells Developers to Use NIS Sharpening Going Forward

Nvidia DLSS 2.5.1 version debuted in the latest Portal RTX patch released last week. Following the festivities, it has now been found that this latest version of the software doesn't feature any built-in sharpening filter anymore...advanced users had been using alternative methods (like hex edits or the DLSS SDK .dll, which, however, added a watermark) to fix the over-sharpening issues in select games, like Red Dead Redemption 2 or God of War...the problems were most noticeable during motion.

The news was confirmed by Nvidia's RTX Unreal Engine Evangelist Richard Cowgill... Nvidia is apparently recommending that game developers use Nvidia Image Scaling sharpening going forward... Nvidia Image Scaling (NIS) was released in November 2021 as an upgrade to the previous image scaling technology... the new algorithm uses a 6-tap filter with 4 directional scaling and adaptive sharpening filters... scaling and sharpening also happen in a single pass, boosting performance...

https://www.reddit.com/r/nvidia/comments/zy68uh/psa_dlss_dll_version_251_completely_disables_dlss/
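For intuition about "scaling and sharpening in a single pass" (a deliberately crude toy of my own -- NOT the actual NIS 6-tap directional filter):

```python
# Toy single-pass upscale + sharpen (conceptual only; not the NIS algorithm).
import numpy as np

def upscale_and_sharpen(img, scale=1.5, sharpen=0.3):
    """Nearest-neighbour upscale and unsharp-mask applied in one pass/function."""
    h, w = img.shape
    ys = (np.arange(int(h * scale)) / scale).astype(int)
    xs = (np.arange(int(w * scale)) / scale).astype(int)
    up = img[ys][:, xs]                                    # scaling step
    blurred = (np.roll(up, 1, 0) + np.roll(up, -1, 0) +
               np.roll(up, 1, 1) + np.roll(up, -1, 1)) / 4.0
    return up + sharpen * (up - blurred)                   # sharpening step
```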
 
So my nvngx_dlss.dll file for Portal RTX is 24.5MB for Version 2.5.1 but the download on TechPowerup is only 14MB for version 2.5.1? I'm going to assume my Portal RTX one is correct, and I could drop that in say Forza Horizon 5 or MSFS2020 and see if it works?
 
So my nvngx_dlss.dll file for Portal RTX is 24.5MB for Version 2.5.1 but the download on TechPowerup is only 14MB for version 2.5.1? I'm going to assume my Portal RTX one is correct, and I could drop that in say Forza Horizon 5 or MSFS2020 and see if it works?
It's the same file. The TPU download is a compressed ZIP file, but once you extract it and compare the two, they're the same dll.
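One quick way to verify that they're indeed the same file (a generic sketch; the paths below are hypothetical placeholders, not anything DLSS-specific):

```python
# Compare two nvngx_dlss.dll copies by hash; identical digests => identical files.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

a = sha256_of(r"C:\Games\PortalRTX\nvngx_dlss.dll")          # hypothetical path
b = sha256_of(r"C:\Downloads\dlss_2.5.1\nvngx_dlss.dll")     # hypothetical path
print("identical" if a == b else "different")
```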
 
I am more for better IQ and performance with whatever works. As RT tech is being explored by developers, new innovative ways are being found and exploited. Path tracing kills even the mighty 4090, while software raytracing is running rather well on AMD console hardware (just smarter RT, maybe). Once RT becomes compelling -> big visual differences; you see that with Callisto Protocol and Fortnite using Unreal 5.1 with usable or good performance, and that is what's exciting. Not mindless use of RT that tanks every GPU out there and in some cases gives meaningless visual improvements. If cheer-leading is going on, it is for tech that works, which more people can actually use and enjoy.
Most reasonable take on this.
Before, they said it was useless; then, that not enough games support it; now, that it's not fast enough. Full acceptance of RT tech will happen soon.
RT/DLSS/FSR is still in the infancy stages, yet we can see massive potential.
 
I am not sure what opposition you are trying to raise; Control with RT on runs faster at 4K native on a 4090 than this


Why say the consoles are not using the hardware RT mode? At least in the Digital Foundry video they seem to always use the PC to show the software-mode difference, which is not that different from the usual pre-baked stuff, as it does not support much dynamic content, rigged meshes and so on;



Why would AMD console hardware with hardware RT run in the vastly inferior software mode? I mean, possibly, but source?

The 6800 XT seems to run Fortnite with hardware RT on fine, so why not the consoles? If you put the RT options on high settings, I am not even sure the game lets you run in software mode.

6800 XT is more powerful than Series X and PS5.

The GPU in the PS5 is somewhere between a 6600xt and a 6700xt.

6600 XT: 2048 shading units, 10.6 TFLOPS
PS5: 2304 shading units, 10.29 TFLOPS
6700 XT: 2560 shading units, 13.21 TFLOPS
Series X: 3328 shading units -- but the clock speed is way lower than usual for RDNA 2, so it's 12.15 TFLOPS

6800 XT: 4608 shading units, 20.74 TFLOPS
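Where those TFLOPS numbers come from (napkin math; the boost/game clocks below are approximate values I'm assuming for illustration): FP32 throughput is roughly shading units x 2 ops/clock (FMA) x clock speed.

```python
# FP32 TFLOPS ~= shading units * 2 ops/clock (FMA) * clock in GHz / 1000.
# Clock speeds are approximate boost/game clocks, assumed for this illustration.
gpus = {
    "RX 6600 XT":    (2048, 2.589),
    "PS5":           (2304, 2.233),
    "RX 6700 XT":    (2560, 2.581),
    "Xbox Series X": (3328, 1.825),
    "RX 6800 XT":    (4608, 2.250),
}
for name, (shaders, ghz) in gpus.items():
    tflops = shaders * 2 * ghz / 1000
    print(f"{name:>14}: {shaders} shaders @ {ghz:.3f} GHz -> {tflops:.2f} TFLOPS")
```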


One of the issues with the consoles is that they need to try to produce a 4K image. You could probably do 1080p/60 in Fortnite with hardware Lumen on a PS5. But... console players are playing on 4K TVs. So they use dynamic resolution (the PS5's internal resolution in Fortnite with software Lumen averages 55% of 4K; Series X is 59%) and Temporal Super Resolution upscaling to create a 4K image. Hardware Lumen would not allow nearly as much internal resolution on the consoles while maintaining a locked 60fps.

The Matrix tech demo on PS5 has hardware Lumen. But it's like 20-30fps. (It's also an older version of Lumen, and it's pushing much higher-quality rasterized visuals than Fortnite.)
 