Watch Dogs Legion - Xbox Series X/S Ray Tracing vs PC RTX

Rizen

[H]F Junkie
Joined: Jul 16, 2000
Messages: 9,487
Looks like the first real comparison between PC-level hardware and the RDNA2 raytracing cores in the consoles. Obviously not a true apples-to-apples comparison with the upcoming RX 6000 GPUs, as the console hardware is missing Infinity Cache.



Reddit poster summarized it:
TL;DW: consoles run raytracing below PC minimum settings; the exact settings were discovered via data mining by modders. PC can be forced to console settings, but the image quality is still slightly better for unknown reasons. In their test scene of choice, the XSX was maintaining 30 fps at 94% resolution scale (i.e. 3609x2040) while the 2060 Super did 32 fps at native 4K.

So initial performance for Xbox Series X ray tracing is roughly on par with an RTX 2060S.
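As a rough sanity check, here's the effective ray-traced pixel throughput implied by the numbers in the quote above (a quick sketch, assuming the 2060 Super figure was captured at a native 3840x2160):

# Back-of-the-envelope pixel throughput from the quoted figures.
# Assumes the RTX 2060 Super result was at native 3840x2160.
xsx_pixels = 3609 * 2040          # Series X at ~94% resolution scale
rtx2060s_pixels = 3840 * 2160     # native 4K

xsx_throughput = xsx_pixels * 30            # ~221 Mpixels/s at 30 fps
rtx2060s_throughput = rtx2060s_pixels * 32  # ~265 Mpixels/s at 32 fps

print(f"2060 Super advantage: {rtx2060s_throughput / xsx_throughput:.2f}x")  # ~1.20x

So "roughly on par" works out to the 2060 Super pushing about 20% more ray-traced pixels per second in that scene, if those numbers hold.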

I wonder how much the additional cache will help on the discrete GPU cards for PC.
 
Last edited:
I believe DF compared rasterization performance in Gears 5 between the XSX and PC, and determined that it performs at approximately 2080 levels. So...

Rasterization = ~2080
Ray-Tracing = ~2060S

Could this indicate that RDNA2's ray tracing performance is potentially worse than Turing's? If it were on par with Turing, wouldn't RT performance also be at 2080 levels?

Obviously we would need more data sets to draw any conclusions, but interesting nonetheless.
 
Reactions: Elios
I believe DF compared rasterization performance in Gears 5 between the XSX and PC, and determined that it performs at approximately 2080 levels. So...

Rasterization = ~2080
Ray-Tracing = ~2060S

Could this indicate that RDNA2's ray tracing performance is potentially worse than Turing's? If it were on par with Turing, wouldn't RT performance also be at 2080 levels?

Obviously we would need more data sets to draw any conclusions, but interesting nonetheless.
It's hard to estimate because AMD and NVIDIA have very different raytracing cores, and I have read comments to the effect that AMD's solution seems to rely heavily on cache - which is absent from the console implementations.
 
I'm curious what the GPU memory utilization on the Xbox is while running the game. On my 3080, it's bumping right up against the 10GB limit. Not sure if the game reserves as much as is available, or if the game really requires that much GPU memory. Apologies if the question is OT.
 
I'm curious what the GPU memory utilization on the Xbox is while running the game. On my 3080, it's bumping right up against the 10GB limit. Not sure if the game reserves as much as is available, or if the game really requires that much GPU memory. Apologies if the question is OT.
The Series X only has 10GB of system memory available on the full 320-bit, high speed bus. I don't know what the system does with the remaining 6GB, but I would guess that the system cannot utilize more than 10GB for the GPU because the other 6GB is too slow.
 
I may be easy to impress, but this feels like high-quality content and it is interesting (it feels like Sony's Miles Morales game could have made better RT choices with the power it had), and this is maybe not taking full advantage of RDNA2/the new consoles yet.

Because if it becomes standard that games look better and run faster on a 2060 Super with DLSS on than on the new consoles, just because RT has become a must-have buzzword for sales, that would be a massive letdown.
 
Isn't it something like 1 RA per CU, so the big PC cards should have at least double the raw RT performance of the PS5 and at least 25% above the Xbox?
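For reference, here's a raw unit count, assuming one Ray Accelerator per active CU on RDNA2 parts and ignoring clock-speed and Infinity Cache differences:

# Ray Accelerator counts = active CU counts on RDNA2 (one RA per CU).
# Clocks differ (the PS5 boosts higher than the Series X), so this is only a unit count.
ray_accelerators = {
    "PS5": 36,
    "Xbox Series X": 52,
    "RX 6800": 60,
    "RX 6800 XT": 72,
    "RX 6900 XT": 80,
}
for name, ra in ray_accelerators.items():
    print(f"{name}: {ra} RAs, "
          f"{ra / ray_accelerators['PS5']:.2f}x PS5, "
          f"{ra / ray_accelerators['Xbox Series X']:.2f}x Series X")

By unit count alone the 6800 XT has exactly 2x the PS5's Ray Accelerators and roughly 1.4x the Series X's, but clocks and cache make any direct scaling claim shaky.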
 
This actually makes me appreciate my RTX 3080 even more. I love how everyone was saying that new consoles were going to revolutionize the gaming world, and while I think their performance on the XSX is impressive, I can run this game at 4K w/ DLSS and Ray Tracing at a near locked 60 FPS easily on my 3080, and it looks damn amazing.

It will be interesting to see how the 6800 and 6900 cards run Watch Dogs: Legion.
 
This actually makes me appreciate my RTX 3080 even more. I love how everyone was saying that new consoles were going to revolutionize the gaming world, and while I think their performance on the XSX is impressive, I can run this game at 4K w/ DLSS and Ray Tracing at a near locked 60 FPS easily on my 3080, and it looks damn amazing.

It will be interesting to see how the 6800 and 6900 cards run Watch Dogs: Legion.
Just my opinion, but I think that rasterization is >>>> RT in importance, and I think the Radeon cards are far better positioned for the future. They will have a DLSS alternative, and ray tracing will for the most part be designed around DXR with console capability in mind. DLSS is not a magical thing; you could get similar performance and fidelity through plenty of techniques that will be available for the AMD designs. More memory, less power, and a direct relationship with the consoles all point to AMD being the choice this time for me. DLSS has been very effectively marketed by NVIDIA, but it and RT are not important enough to me to give up all the other AMD advantages.
 
We've been anticipating this split since March of this year (when AMD's first demo was even worse than a 2060).

I figured they could improve efficiency, but we knew they were going to hit somewhere around the 2060-2070 wall.

It is possible that Big Navi will surprise us, but I'm expecting the RT performance of the 3060 to be faster than Big Navi's.
 
Last edited:
everyone was saying that new consoles were going to revolutionize the gaming world
Well, it is doing just that. Consoles will clearly be the minimum RT level, but at least it'll be there, and PC can just build on that. Most games are made multi-platform, so it won't hurt anything to have the "safety" of the console userbase as a basic level, so RT becomes a mainstream thing and PC versions can crank up the detail.
I think that Rasterization is >>>>RT in importance
I'd say both will be symbiotic. Raster gets much of the job done, but RT brings the realism up several notches. Not everywhere, but when it can shine, it's really noticeable. No need to emphasize one over the other, they're both best when working together.
 
This actually makes me appreciate my RTX 3080 even more. I love how everyone was saying that new consoles were going to revolutionize the gaming world, and while I think their performance on the XSX is impressive, I can run this game at 4K w/ DLSS and Ray Tracing at a near locked 60 FPS easily on my 3080, and it looks damn amazing.

It will be interesting to see how the 6800 and 6900 cards run Watch Dogs: Legion.
The game does not run at a locked 60 FPS at 4K with RT Ultra and DLSS Quality. It even dips to 45 FPS at 1440p while driving.

So unless you are reducing your quality settings for the environment, RT, or DLSS, you are not providing correct performance numbers for a 3080.
 
Reactions: noko
I expect a PC-to-PC comparison for this title, which should indicate roughly the difference between AMD's and Nvidia's capability. The 6800 XT, with higher clock speeds and 20 more CUs than the console, will be an interesting comparison against a 3080.

Xbox Series X Memory: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory. This leaves 2.5GB of GDDR6 memory from the slower pool for the operating system and the front-end shell. . . .

So it's not limited to 10GB. Plus, if you are limited to 10GB like on the 3080, how would you have extra VRAM for a higher level of detail (draw distances), higher-resolution textures, higher quality and more geometry, etc.? In short, you wouldn't, and you would be limited to console-level quality if VRAM limited; that's not to say you won't have better IQ due to the greater processing power. If a game uses and needs the full 13.5GB of VRAM on the Xbox Series X (probably never), the 3080 may have some restrictions.
 
This actually makes me appreciate my RTX 3080 even more. I love how everyone was saying that new consoles were going to revolutionize the gaming world, and while I think their performance on the XSX is impressive, I can run this game at 4K w/ DLSS and Ray Tracing at a near locked 60 FPS easily on my 3080, and it looks damn amazing.

It will be interesting to see how the 6800 and 6900 cards run Watch Dogs: Legion.
And I have to say again: of course you get better performance on a GPU which, alone, costs $300 more than a PS5.
 
I believe DF compared rasterization performance in Gears 5 between the XSX and PC, and determined that it performs at approximately 2080 levels. So...

Rasterization = ~2080
Ray-Tracing = ~2060S

Could this indicate that RDNA2's ray tracing performance is potentially worse than Turing's? If it were on par with Turing, wouldn't RT performance also be at 2080 levels?

Obviously we would need more data sets to draw any conclusions, but interesting nonetheless.
They have a Ray Accelerator in each CU.
The rumor is that the top Big Navi cards are more or less as good in RT performance as a 2080 Ti.

But the "lower" models have a lot fewer CUs, so their RT performance would then theoretically nosedive and may not scale relative to Nvidia's otherwise comparable "lower" models.

And yes, the Infinity Cache is a bit of a wildcard, which the consoles do not seem to have.
 
Reactions: Rizen
Isn't this pretty good for drawing less than 200W total?
Yes, I think so. The entire SoC is ~15 billion transistors, which is far smaller than GA102 and Big Navi. They are delivering a good experience for the money, given the limitations imposed by the form factor.

I did expect that raytracing would be a novelty on consoles, basically a checkbox so they could advertise the feature, and looks like that will be the case unless optimizations over the next couple of years deliver significant gains. But, it’s still good to continue pushing the technology forward. If consoles didn’t have it, adoption would be a lot slower.
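For scale, the approximate published transistor counts (keeping in mind the console figure is a whole SoC with the Zen 2 CPU and I/O on the same die, while GA102 and Navi 21 are GPU-only dies):

# Approximate published transistor counts, in billions.
transistors_bn = {
    "Xbox Series X SoC": 15.3,
    "Navi 21 (Big Navi)": 26.8,
    "GA102 (RTX 3080/3090)": 28.3,
}
for die, count in transistors_bn.items():
    ratio = count / transistors_bn["Xbox Series X SoC"]
    print(f"{die}: ~{count}B transistors ({ratio:.2f}x the console SoC)")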
 
Yes, I think so. The entire SoC is ~15 billion transistors, which is far smaller than GA102 and Big Navi. They are delivering a good experience for the money, given the limitations imposed by the form factor.

I did expect that raytracing would be a novelty on consoles, basically a checkbox so they could advertise the feature, and looks like that will be the case unless optimizations over the next couple of years deliver significant gains. But, it’s still good to continue pushing the technology forward. If consoles didn’t have it, adoption would be a lot slower.
We always see adaptations/optimizations as a console progresses through its life. Nvidia's RTX 2000 and 3000 series cards have a lot of Ray Tracing horsepower, so some developers were just going "RAY TRACE EVERYTHING!!!" with very minimal thought being given to optimization for performance.

Now that the XSX and PS5 are out with far more limited RT prowess, this will force developers to come up with creative hybrid solutions that use both RT and SSR along with other techniques to give the feel of full RT without actually doing it. I bet we come back in 6-7 years and are amazed at the fidelity of games once devs learn what these systems can actually do.

The positive side of this is that these performance optimizations will translate to performance gains on RX 6000 and RTX 2000/3000 cards where Ray Tracing is used. I think we will also see a big push at newer and better upscaling techniques that allow overall better performance with minimal image degradation. These consoles have a lot of CPU power, especially compared to previous generation. It will be a big win for PC enthusiasts.
 
I expect a PC-to-PC comparison for this title, which should indicate roughly the difference between AMD's and Nvidia's capability. The 6800 XT, with higher clock speeds and 20 more CUs than the console, will be an interesting comparison against a 3080.

Xbox Series X Memory: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs


So it's not limited to 10GB. Plus, if you are limited to 10GB like on the 3080, how would you have extra VRAM for a higher level of detail (draw distances), higher-resolution textures, higher quality and more geometry, etc.? In short, you wouldn't, and you would be limited to console-level quality if VRAM limited; that's not to say you won't have better IQ due to the greater processing power. If a game uses and needs the full 13.5GB of VRAM on the Xbox Series X (probably never), the 3080 may have some restrictions.
I am not sure there is a clear distinction between VRAM and non-VRAM on those consoles, so maybe they could use more than the 10GB of fast RAM as VRAM (like a PC would do), but if they dedicated 100% of the memory allotted to the game as VRAM, that would leave nothing for the game's regular RAM usage.

If a game needs, say, 5-6GB of regular RAM, it will not be able to use more than 7.5-8.5GB as VRAM. The fact that games have so little total memory available to them (13.5GB for both VRAM and RAM) is maybe why new games made with consoles in mind seem locked to about 8GB of VRAM usage, and I doubt a 3070 will have issues running console-level settings. Just think of a PC playing AAA games in 2020 with only 6GB of RAM, beyond what is used at boot, to play games with; that is already not a lot.
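That split is just subtraction against the 13.5GB game-accessible figure from the Eurogamer breakdown quoted earlier; a trivial sketch:

# Hypothetical budget split of the Series X's 13.5 GB of game-accessible memory.
GAME_ACCESSIBLE_GB = 13.5

def gpu_side_budget(cpu_side_gb):
    """Memory left for GPU-side data once CPU-side game data is accounted for."""
    return GAME_ACCESSIBLE_GB - cpu_side_gb

for cpu_gb in (5.0, 6.0):
    print(f"{cpu_gb} GB of CPU-side data leaves {gpu_side_budget(cpu_gb)} GB for the GPU")
# 5.0 GB -> 8.5 GB, 6.0 GB -> 7.5 GB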
 
Pretty amazing that these consoles are able to put up this kind of performance.
 
It's one title, but it does seem to indicate that ray tracing may be a touch weaker than we had hoped. Either way, this is a massive bump in raster performance, and I think we're going to get some really excellent looking PC games in a couple years because of this generational leap.
 
I am not sure there is a clear distinction between VRAM and non-VRAM on those consoles, so maybe they could use more than the 10GB of fast RAM as VRAM (like a PC would do), but if they dedicated 100% of the memory allotted to the game as VRAM, that would leave nothing for the game's regular RAM usage.

If a game needs, say, 5-6GB of regular RAM, it will not be able to use more than 7.5-8.5GB as VRAM. The fact that games have so little total memory available to them (13.5GB for both VRAM and RAM) is maybe why new games made with consoles in mind seem locked to about 8GB of VRAM usage, and I doubt a 3070 will have issues running console-level settings. Just think of a PC playing AAA games in 2020 with only 6GB of RAM, beyond what is used at boot, to play games with; that is already not a lot.
The Series X indeed has limits on how the RAM is split.

Memory Config:
10240 MB for GPU memory / 320 bit @ 560GB/s
+3584 MB for system memory / 192 bit @ 336GB/s
+2560 MB reserved by the OS / 192 bit @ 336GB/s

I am sure if they need more system RAM, they can dip into the 10GB pool.

The PS5 probably reserves some for the OS/GUI, but otherwise all of the remaining RAM should be available to split however the devs want. And it's all GDDR6 on a 256-bit bus for a total of 448 GB/s of bandwidth.
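Those bandwidth figures fall straight out of bus width times pin speed for the 14 Gbps GDDR6 both consoles are reported to use; a quick check:

# GDDR6 bandwidth (GB/s) = bus width in bits * data rate in Gbps per pin / 8 bits per byte.
GDDR6_GBPS = 14.0  # reported module speed for both consoles

def bandwidth_gbs(bus_width_bits, gbps_per_pin=GDDR6_GBPS):
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gbs(320))  # 560.0 -> Series X fast 10 GB pool
print(bandwidth_gbs(192))  # 336.0 -> Series X slower 6 GB pool
print(bandwidth_gbs(256))  # 448.0 -> PS5 unified 16 GB pool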
 
Last edited:
Pretty amazing that these consoles are able to put up these kinds of performance
I feel like, if this is really close to the norm (one could doubt it will be), being slower than a Core i5-8400 with an RTX 2060 Super even when running at lower detail is quite a bit under the pre-release/actual-review hype.

A lot of people were expecting 2080 Ti/3070 performance, with a CPU a bit under a Ryzen 3700X when not above it.

That said, maybe some game will still show that; this is only one example, maybe this was a bad way to do RT for RDNA2, and maybe when the Ampere/RDNA2 laptop GPUs are released the consoles will look amazing in comparison.
 
We've been anticipating this split since March of this year (when AMD's first demo was even worse than a 2060).

I figured they could improve efficiency, but we knew they were going to hit somewhere around the 2060-2070 wall.

It is possible that Big Navi will surprise us, but I'm expecting the RT performance of the 3060 to be faster than Big Navi's.
What? All the leaks so far have put Big Navi at ~3070 level in RT. Obviously we'll know more soon, but I think the 6800 will be ~3070 in RT and outperform it in raster. The 6800 XT would obviously be better than the 6800, but I don't think it scales as well, so the 6800 XT/6900 XT will not be that much better (i.e., in between the 3070 and 3080). Who knows though; at least we should be finding out shortly for sure.

For a ~245W console, this is not too shabby considering a 2060 Super draws close to 200W by itself (not counting the motherboard, CPU, RAM, etc.).
 