Intel Arc - Xᵉ Super Sampling

That looks great. I'm truly interested in seeing how far they go with the performance & what codec acceleration is included.
 
That looks great. I'm truly interested in seeing how far they go with the performance & what codec acceleration is included.
It looks okay. One thing that sticks out is how some of the particle effects disappeared in the upscaling process. The upscaler is too focused on the models and textures, completely ignoring the post-process effects.
 
It looks okay. One thing that sticks out is how some of the particle effects disappeared in the upscaling process. The upscaler is too focused on the models and textures, completely ignoring the post-process effects.
Ah, I wasn't paying that close attention (max of 480p here at work...). They might need to turn it down a bit then.
 
Ah, I wasn't paying that close attention (max of 480p here at work...). They might need to turn it down a bit then.
You can see the most obvious parts at 480p in the lower-left corner of the base where the lightning effect is emanating and how the surrounding lights are affected.
 
You can see the most obvious parts at 480p in the lower-left corner of the base where the lightning effect is emanating and how the surrounding lights are affected.
Wow... definitely no glow. That makes everything sharp & boring. :eek:
 
It's kind of understandable since not many gamers will run games on Arc laptops.

What are you talking about here, fool? XeSS is another open-source competitor to FSR2.

You can run it on whatever the fuck you want!

Now all we need is for NVIDIA to release the non-Tensor-Core-required DLSS 2 implementation from Control, if they want to make this a real competition!
 
You can when the cross-platform DP4a functionality of XeSS is ready. Which it isn't, which is why Dolmen isn't releasing with XeSS (there aren't exactly a lot of Arc GPUs out there yet to utilize the XMX-accelerated part of XeSS).
 
What are you talking about here, fool? XeSS is another open-source competitor to FSR2.

You can run it on whatever the fuck you want!

Now all we need is for NVIDIA to release the non-Tensor-Core-required DLSS 2 implementation from Control, if they want to make this a real competition!
It's open standard, not open source.
 
Intel details XeSS tech: Arc A770 GPU benched, 20+ games support XeSS

https://www.tweaktown.com/news/8814...ex.html?utm_source=dlvr.it&utm_medium=twitter

A list of games that will have Intel XeSS support, either at launch or over the coming months:

  • Call of Duty: Modern Warfare II
  • Arcadegeddon
  • Ghostwire Tokyo
  • Vampire Bloodhunt
  • Ghostbusters Spirits Unleashed
  • Naraka Bladepoint
  • Super People
  • Gotham Knights
  • DioField Chronicles
  • Dolmen
  • Chivalry II
  • Redout II
  • The Settlers
  • Death Stranding: Director's Cut
  • The Rift Breaker
  • Hitman III
  • CHORVS
  • Shadow of The Tomb Raider
  • Anvil Vault Breakers
 
XeSS will work on all 3rd party GPUs that support HLSL Shader Model 6.4

WDDM 2.6

Windows 10 May 2019 Update (Version 1903) includes WDDM 2.6. Updates to display driver development in Windows 10 version 1903 include the following feature:

  • Shader Model 6.4, adding support for low-precision packed dot-product intrinsics and for library sub-objects to simplify ray tracing
Supported GPUs: GCN 1+, Kepler+, Skylake+
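For context, DP4a (the "low-precision packed dot-product" intrinsic Shader Model 6.4 exposes) multiplies four signed 8-bit lanes packed into a 32-bit word against another packed word and accumulates the result into a 32-bit integer in a single instruction, which is what makes an INT8 upscaling path viable on non-Arc hardware. A rough Python emulation of one DP4a operation (purely illustrative; the real thing is a GPU instruction):

```python
import struct

def dp4a(a: int, b: int, c: int) -> int:
    """Emulate DP4a: dot product of four signed 8-bit lanes packed
    into two 32-bit words, accumulated into c."""
    # Unpack each 32-bit word into four signed int8 lanes.
    a_lanes = struct.unpack("4b", a.to_bytes(4, "little"))
    b_lanes = struct.unpack("4b", b.to_bytes(4, "little"))
    return c + sum(x * y for x, y in zip(a_lanes, b_lanes))

# Pack the int8 vectors (1, 2, 3, 4) and (5, 6, 7, 8) into 32-bit words.
a = int.from_bytes(struct.pack("4b", 1, 2, 3, 4), "little")
b = int.from_bytes(struct.pack("4b", 5, 6, 7, 8), "little")
print(dp4a(a, b, 10))  # 1*5 + 2*6 + 3*7 + 4*8 + 10 = 80
```

GPUs without the instruction have to perform those four multiply-adds as separate scalar operations, which is why fallback paths on older hardware run so much slower.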

 
Intel XeSS ML Upscaling - The Digital Foundry Tech Review - XeSS vs DLSS vs Native!

 
Intel XeSS ML Upscaling - The Digital Foundry Tech Review - XeSS vs DLSS vs Native!


Looks like Intel did a good job on this, though it still needs to be improved upon. Then again, developers may also need a little more time to get it down as well. So developers now have three major, decent upscalers to choose from: DLSS, XeSS, and FSR, with FSR being the most widely supported across GPUs. With the uber GPUs coming up, I wish there were a DLSS/XeSS/FSR mode at native resolution for that better overall quality over TAA. Nice seeing an Arc A770 being used as well.
 
I wonder if DLDSR is somewhat close to that stuff (not that I don't understand what you mean, but I'm not sure I fully understand all the NVIDIA acronyms).

https://techguided.com/what-is-nvidias-dldsr-how-to-enable/
https://www.techspot.com/guides/2428-dldsr-vs-dlss/
Basically, in Doom Eternal, playing on my 1440p monitor, I would DSR the game to 4K and run DLSS in Quality mode, which would render the game internally at my native resolution of 1440p. The image quality using DLSS and DSR together was much better than native 1440p with only a small performance drop. So for DLSS it is possible, just not an easy mode to select. AMD FSR should be similar, except AMD has fewer resolution choices to scale from.
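The resolution math in the combo above is easy to sanity-check. DLSS Quality renders each axis at roughly 2/3 of the output resolution (assumed from NVIDIA's published per-mode scaling factors; exact ratios can vary by game), so DSR-ing a 1440p monitor to 4K and then enabling DLSS Quality lands the internal render right back at 2560x1440:

```python
# Per-axis render scale for the common DLSS modes
# (assumed from NVIDIA's published factors; games may vary).
DLSS_MODES = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Resolution the game actually renders at for a given DLSS mode."""
    scale = DLSS_MODES[mode]
    return round(out_w * scale), round(out_h * scale)

# DSR a 1440p monitor up to a 4K output, then let DLSS Quality render internally:
print(internal_resolution(3840, 2160, "Quality"))  # (2560, 1440) -- native again
```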

LOL, did not know about DLDSR, have to check more into it. Thanks
 
LOL, did not know about DLDSR, have to check more into it. Thanks
I think it does something really close to what you did in Doom: it uses DLSS-like technology to run a game with DSR at a lower performance cost.
 
I think it does something really close to what you did in Doom: it uses DLSS-like technology to run a game with DSR at a lower performance cost.
Looks like a cool option to play around with. Both use tensor cores for AI work: DLSS renders at a lower internal resolution using temporal data as a custom AA, while DLDSR looks to just render internally at a higher resolution and then downscale using AI. Combine both DLDSR and DLSS? :). Just have to play around with this; it has been out for over 8 months, jeez.
 
I wonder if DLDSR is somewhat close to that stuff (not that I don't understand what you mean, but I'm not sure I fully understand all the NVIDIA acronyms).

https://techguided.com/what-is-nvidias-dldsr-how-to-enable/
https://www.techspot.com/guides/2428-dldsr-vs-dlss/
Looks like Intel did a good job on this, though it still needs to be improved upon. Then again, developers may also need a little more time to get it down as well. So developers now have three major, decent upscalers to choose from: DLSS, XeSS, and FSR, with FSR being the most widely supported across GPUs. With the uber GPUs coming up, I wish there were a DLSS/XeSS/FSR mode at native resolution for that better overall quality over TAA. Nice seeing an Arc A770 being used as well.
There's DLAA which NVIDIA should force devs to also support. Makes no sense to not add that as well.
 
There's DLAA which NVIDIA should force devs to also support. Makes no sense to not add that as well.
Actually, that looks like what I want for using AI to enhance native-resolution image quality and AA. Why even name it differently from DLSS? Does it not use temporal data for the AA? Basically a DLSS (Native) mode option.
 
XeSS is vendor agnostic so it can be supported on all brands.
Would be nice if they'd use the tensor cores on nVidia cards to accelerate it. I guess they want to push their own cards somehow. According to the DF video, it uses INT8 math on their cards, which is something the tensor cores are very good at.
Actually, that looks like what I want for using AI to enhance native-resolution image quality and AA. Why even name it differently from DLSS? Does it not use temporal data for the AA? Basically a DLSS (Native) mode option.
Marketing. But yes, it is more or less just DLSS in native resolution.
 
Actually, that looks like what I want for using AI to enhance native-resolution image quality and AA. Why even name it differently from DLSS? Does it not use temporal data for the AA? Basically a DLSS (Native) mode option.
I assume that it does but runs at native resolution instead.
 
This old article says DP4a support is mandatory, which restricts it to RDNA2 & above

https://www.pocket-lint.com/games/news/pc-gaming/161025-what-is-xe-super-sampling

XeSS is very likely not a usable reconstruction on any GPU from the following series. NVIDIA: Maxwell and before; AMD: (Navi10), Vega 16nm, ALL Vega iGPUs, Polaris and before; Intel: DG1, ICL and before.

https://twitter.com/JirayD/status/1575071274543435776?s=20&t=9ZUmlluCtzkI9u1gezw8hQ

For GPUs without DP4 support, XeSS uses an Int24 fallback path. While this is the fastest widely supported Int format on semi-modern GPUs, it is running far too slow for practical usage. On a 5700XT (probably the most powerful GPU without DP4) I am getting memeworthy framerates.
 
XeSS is very likely not a usable reconstruction on any GPU from the following series. NVIDIA: Maxwell and before; AMD: (Navi10), Vega 16nm, ALL Vega iGPUs, Polaris and before; Intel: DG1, ICL and before.

https://twitter.com/JirayD/status/1575071274543435776?s=20&t=9ZUmlluCtzkI9u1gezw8hQ

You'll be most likely to benefit if you have an Nvidia Pascal GPU or later, an AMD RX 6000 card (6600 or above), or an Intel Arc GPU. You'll also probably want 8GB or more VRAM, or you may be limited to single-digit percentage gains.

Nvidia RTX cards are still better off with DLSS, but GTX owners might appreciate the feature.

https://www.tomshardware.com/news/shadow-of-the-tomb-raider-xess-tested
 
You'll be most likely to benefit if you have an Nvidia Pascal GPU or later, an AMD RX 6000 card (6600 or above), or an Intel Arc GPU. You'll also probably want 8GB or more VRAM, or you may be limited to single-digit percentage gains.

Nvidia RTX cards are still better off with DLSS, but GTX owners might appreciate the feature.

https://www.tomshardware.com/news/shadow-of-the-tomb-raider-xess-tested
It worked for me on my RX 6700, getting ready to try the RX 6600 out.
 