Intel's Arc A770 Nearly 3x Faster than AMD's Radeon RX 7900 XTX in Hogwarts Legacy with Ray Tracing

doesn't Frame Generation actually introduce increased Input Latency?

when enabling frame generation, while scene FPS rose again to 112 FPS, latency increased to 63 ms. What this means is that despite the game having the visual smoothness of 112 FPS, the latency and responsiveness you're experiencing is more like playing the game at 40 FPS. You end up with smooth motion but a slow feel, which is hard to show on a video. https://www.techspot.com/article/2546-dlss-3/
This is true, and for some games it is quite the boon, but for fast-twitch games probably not so much.
That's today's status quo.
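(For a back-of-the-envelope feel for why that happens: interpolation-based frame generation has to hold the newest rendered frame back until the following frame exists, so roughly one extra native frame time gets added to the input chain. A minimal sketch only; the ~20 ms of fixed pipeline overhead below is an assumed placeholder, not a measured number.)

```python
# Toy latency model for interpolation-based frame generation.
# Assumption: the generator buffers one fully rendered frame, so input
# latency grows by roughly one native frame interval; the 20 ms of fixed
# pipeline overhead is a placeholder, not a measurement.

def native_latency_ms(native_fps, overhead_ms=20.0):
    """Approximate input latency when rendering natively."""
    return 1000.0 / native_fps + overhead_ms

def framegen_latency_ms(native_fps, overhead_ms=20.0):
    """Approximate input latency with interpolated frame generation:
    one extra native frame of buffering before anything is displayed."""
    return 2 * (1000.0 / native_fps) + overhead_ms

if __name__ == "__main__":
    base_fps = 56  # hypothetical native render rate; displayed rate doubles to ~112
    print(f"native   : ~{native_latency_ms(base_fps):.0f} ms")
    print(f"frame gen: ~{framegen_latency_ms(base_fps):.0f} ms at ~{2 * base_fps} FPS displayed")
```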

Future reprojection-based frame generation will reduce latency.

Right now, DLSS 3.0 uses interpolation-based frame generation, but reprojection-based frame generation is superior (e.g. Oculus ASW 2.0, ported to a PC context):

It will require some creativity, like API hooks feeding new 6DoF positional updates (translation XYZ, rotation XYZ) from mouse / keyboard movements into the reprojection-powered frame generator.
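(As a rough illustration of what such a reprojection step does: a minimal sketch, assuming a simple pinhole camera and a rotation-only warp like the rotational part of Oculus timewarp. The resolution, FOV and function names are made up for the example; a real ASW-style implementation would also use depth to handle translational movement.)

```python
import numpy as np

# Minimal rotation-only reprojection ("timewarp") sketch.
# A pure camera rotation R between render time and display time maps a
# rendered pixel to its new screen position via the homography H = K R K^-1.

def intrinsics(width, height, fov_y_deg):
    """Pinhole camera intrinsics for a given resolution and vertical FOV."""
    f = 0.5 * height / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([[f, 0, width / 2],
                     [0, f, height / 2],
                     [0, 0, 1.0]])

def yaw_matrix(yaw_rad):
    """Rotation about the vertical axis (mouselook yaw)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def reproject_pixel(px, py, K, R_delta):
    """Map a pixel of the rendered frame to where it should appear after
    the camera has rotated by R_delta since the frame was rendered."""
    H = K @ R_delta @ np.linalg.inv(K)
    p = H @ np.array([px, py, 1.0])
    return p[0] / p[2], p[1] / p[2]

if __name__ == "__main__":
    K = intrinsics(3840, 2160, fov_y_deg=70)
    # e.g. 0.05 degrees of yaw accumulated from mouse input since render time
    R = yaw_matrix(np.radians(0.05))
    print(reproject_pixel(1920, 1080, K, R))
```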



(Note: downloadable demo! It actually works; reprojection artifacts mostly disappear if you start from an original framerate above the flicker fusion threshold, e.g. reprojecting from 80-100fps up to a higher frame rate like 360fps, 500fps or 1000fps.)

There is enough horsepower to make 4K 1000fps 1000Hz UE5-detail possible with today's 4000-series GPUs.

Wait till roughly the 4.0 or 5.0 generation of DLSS, XeSS and FSR, I think, before latency-reducing frame generation comes out.

Retroactive reprojection can even undo the latency of 10ms frames (100fps) using 1ms mouse polls, turning 100fps 10ms rendertimes into 1000fps with 1ms latency. Research into future advanced multilayer Z-buffers with 3D reprojection (instead of just 2D reprojection) will also apply this to character movements, though initially it will only zero out latency in mouselook / strafes / pans / turns.
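(To make that timing concrete, a purely illustrative pacing sketch under those assumed numbers: real frames rendered every 10 ms, the mouse polled every 1 ms, and every displayed frame warped with the freshest pose, so mouselook latency tracks the 1 ms pose age rather than the 10 ms render interval. The warp stands in for a reprojection pass like the one sketched above.)

```python
# Illustrative pacing loop: 100 Hz real rendering, 1000 Hz reprojected output.
# Mouselook latency follows the pose age (~1 ms), not the render interval (~10 ms).

RENDER_INTERVAL_MS = 10   # real frames at 100 fps
DISPLAY_INTERVAL_MS = 1   # reprojected frames at 1000 fps

def simulate(duration_ms=20):
    last_render_time = 0
    for t in range(0, duration_ms, DISPLAY_INTERVAL_MS):
        if t % RENDER_INTERVAL_MS == 0:
            last_render_time = t              # a new real frame just finished
        pose_age = 0                          # pose sampled on this same 1 ms tick
        colour_age = t - last_render_time     # how stale the rendered colour data is
        print(f"t={t:2d} ms: warp(frame@{last_render_time:2d} ms) "
              f"colour {colour_age} ms old, pose {pose_age} ms old")

if __name__ == "__main__":
    simulate()
```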

This does not solve frame generation's unsuitability for esports, but that will change between now and the end of this decade. There's just no way to generate UE5.1-quality 4K 1000fps at 2nm-3nm transistor sizes without reprojection-based frame generation technology, but at least there definitely is a path forward to the 4K 1000fps future (2030s).

The multitiered GPU rendering pipeline harkens back to the layering of 1 full frame per second and 23 predicted (interpolated) frames per second found in video codecs such as H.264 ...
 
Is that sourced from the developers or that German website? The patch had zero impact on any of my machines.
It's listed on a few gaming sites, though I can't seem to find it listed on the publisher's site.
 
doesn't Frame Generation actually introduce increased Input Latency?

when enabling frame generation, while scene FPS rose again to 112 FPS, latency increased to 63 ms. What this means is that despite the game having the visual smoothness of 112 FPS, the latency and responsiveness you're experiencing is more like playing the game at 40 FPS. You end up with smooth motion but a slow feel, which is hard to show on a video. https://www.techspot.com/article/2546-dlss-3/
I don't know why TechSpot decided to keep that line in there. According to their own plot, that line is completely false. The latency is actually LOWER than with all DLSS turned off.

For Cyberpunk:
Native (no Reflex): 101.5ms
Native (with Reflex): 63.5ms
DLSS3 Quality: 62.6ms
DLSS3 Performance: 52.1ms
DLSS2 Quality: 47ms
DLSS2 Performance: 42.7ms

[Chart: Cyberpunk 2077 latency comparison]


Similar results with F1 2022:
[Chart: F1 2022 latency comparison]
 
Same. I also remember people proudly declaring how expensive antialiasing was and how it would never be enabled for them... Now we don't bat an eye at it. We won't bat an eye at raytracing soon enough either ;).

I love eye candy. I always have. Old school bump mapping, the old AA/AF settings, now ray tracing. I want all the eye candy. It's always been a huge hit in performance at first. After a few generations, it's just standard fare and just a part of the feature set with normal frame rates. Right now, though. Damn. It's not very good looking for any vendor using RT with this single game. The game itself needs a lot of optimization.

I'm hoping in a few years, RT is just another standard feature and benefit for games and GPUs.
 
Same. I also remember people proudly declaring how expensive antialiasing was and how it would never be enabled for them... Now we don't bat an eye at it. We won't bat an eye at raytracing soon enough either ;).
True, but there was also a lot more overhead available back then. The silicon was absolutely huge, so shrinking it to cram more transistors on was easy enough; now we're really edging up on the fundamental limits of size shrinking. Power was easily pulled straight off the motherboard through the slot; today we're getting closer to the power limits of electrical outlets.
 
True, but there was also a lot more overhead available back then. The silicon was absolutely huge, so shrinking it to cram more transistors on was easy enough; now we're really edging up on the fundamental limits of size shrinking. Power was easily pulled straight off the motherboard through the slot; today we're getting closer to the power limits of electrical outlets.

Yup, shrinking stuff back then was pretty easy; we had new high-end GPUs more frequently, versus today where the limits are being reached.
 
Nanite and Lumen get 90% of the RTX results at 50% of the GPU load
I am not sure about that line; there is very little to go on, but in both the Matrix demo and Fortnite, that combo seems to give like 145% of the results for 130% of the cost, or something of the sort.

Fortnite does not run faster than, say, Cyberpunk of all games:

[Benchmark charts: Fortnite DX12 RT and Metro Exodus Enhanced Edition]


Metro Exodus Enhanced Edition runs 50% faster on a 4080; it seems to me it is more a boost in image quality than a reduction in cost, and they often ran under 1000p on the latest consoles.

If you mean hardware vs. software Lumen itself, I am not sure about a 50% difference.
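(Taking the rough figures quoted in this exchange at face value, the two claims are quite far apart in results-per-cost. A quick ratio, using those quoted percentages rather than any measurement:)

```python
# Quick ratio of the two "results per GPU cost" claims quoted above
# (the percentages are the rough figures from the discussion, not measurements).

claims = {
    "90% of the results at 50% of the load": (0.90, 0.50),
    "145% of the results at 130% of the cost": (1.45, 1.30),
}

for label, (results, cost) in claims.items():
    print(f"{label}: {results / cost:.2f}x results per unit of cost")
```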
 
This game seems like a mess when it comes to testing and overall performance lmao.

It's a new release on multiple platforms, so of course it's got issues. It's also from a new developer, I think, which likely doesn't help things. Most likely it will need a patch or two for things to work like they should.
 
I am not sure about that line; there is very little to go on, but in both the Matrix demo and Fortnite, that combo seems to give like 145% of the results for 130% of the cost, or something of the sort.

Fortnite does not run faster than, say, Cyberpunk of all games:

[Benchmark charts: Fortnite DX12 RT and Metro Exodus Enhanced Edition]


Metro Exodus Enhanced Edition runs 50% faster on a 4080; it seems to me it is more a boost in image quality than a reduction in cost, and they often ran under 1000p on the latest consoles.

If you mean hardware vs. software Lumen itself, I am not sure about a 50% difference.
It's more the load specifically on the ray tracing accelerators, not the GPU as a whole. I should have been clearer on that part. But my numbers there are probably exaggerated a touch.
 
AMD Software: Adrenalin Edition 23.2.1 Release Notes
https://www.amd.com/en/support/kb/release-notes/rn-rad-win-23-2-1

Important Notes
AMD Link users running Radeon RX 7000 series GPUs will need to update to a newer version of AMD Link now available on various platforms.
AMD is working with the game developers of Hogwarts Legacy™ to resolve performance issues when enabling ray tracing.
 