More DLSS...

In some things latency really doesn't matter (research, BOINC, super-computers), as they are all about compute power.
Latency really doesn't matter for super-computers? Dude... Latency is one of the greatest obstacles in "super computers". What are you trying to say?

I have seen thousands of servers in datacenters, never seen an AMD card...plenty of Tesla cards though
No doubt, Nvidia has had a large multi-year lead here, as AMD was only finally getting their stuff in order with Ryzen/Epyc and moving from GCN to the RDNA architecture in their GPUs.

The current top 2 "super computers" in the world (Summit and Sierra) use IBM Power9 CPUs paired with Nvidia Volta GV100 GPUs.

The next two coming online in the next <18 months, the world's first exascale systems, will be powered by both AMD CPUs and GPUs. Nice win for AMD there.
 
So, I loaded up Control to stability-test my overclock last night. And I played it for like 10 minutes before noticing... it was running at 540p. DLSS 2.0 is magic.

Basically, I had been testing DLSS a few months ago and left the setting at the lowest (540p render resolution on a 1080p native monitor). When I booted it up last night, I didn't even realize anything was wrong until like 10 minutes in, when I started wondering why the text was a little blurry.

I mean, once I set it back to 720p it definitely looked nicer, but the fact that I didn't immediately notice something was wrong speaks to the quality of DLSS 2.0. Just need more games to support it.
 
I'd like to see mainstream games adopt DLSS. Warzone with DLSS 2.0 would be amazing, since you could easily push 240+ fps at all times using it. However, since DLSS requires AI training, I bet it will be restricted to SP games that don't change much; a BR game like Warzone constantly gets updated, which would probably require retraining DLSS. Otherwise I can't think of a good reason why Nvidia hasn't pushed DLSS 2.0 to popular BR games.
 
I'd like to see mainstream games adopt DLSS. Warzone with DLSS 2.0 would be amazing, since you could easily push 240+ fps at all times using it. However, since DLSS requires AI training, I bet it will be restricted to SP games that don't change much; a BR game like Warzone constantly gets updated, which would probably require retraining DLSS. Otherwise I can't think of a good reason why Nvidia hasn't pushed DLSS 2.0 to popular BR games.
Maybe the push for BR games and DLSS hasn't happened yet because most BR players are on 1080p potatoes not capable of DLSS...?
 
Maybe the push for BR games and DLSS hasn't happened yet because most BR players are on 1080p potatoes not capable of DLSS...?

Most BR players have decent systems, since high FPS is king. I play at 1080p as well, with 240 Hz. Nobody gives a shit about 4K in multiplayer games; frame rate is all that matters. In fact, Nvidia heavily advertises this now: https://www.nvidia.com/en-us/geforce/campaigns/frames-win-games/



So I suspect it's what I said above: a DLSS limitation that requires retraining when the game has large patches, so it's not feasible in evolving games like BRs. If that's true, then DLSS will have limited usefulness.
 
I'd like to see mainstream games adopt DLSS. Warzone with DLSS 2.0 would be amazing, since you could easily push 240+ fps at all times using it. However, since DLSS requires AI training, I bet it will be restricted to SP games that don't change much; a BR game like Warzone constantly gets updated, which would probably require retraining DLSS. Otherwise I can't think of a good reason why Nvidia hasn't pushed DLSS 2.0 to popular BR games.
I was under the impression that DLSS is not designed for high frame rates. Can't remember where I read it, but I think it topped out at around 140 fps or so.
 
I was under the impression that DLSS is not designed for high frame rates. Can't remember where I read it, but I think it topped out at around 140 fps or so.

I've never seen that mentioned anywhere, but it could be a limitation of the tech, I guess.
 
I'd like to see mainstream games adopt DLSS. Warzone with DLSS 2.0 would be amazing, since you could easily push 240+ fps at all times using it. However, since DLSS requires AI training, I bet it will be restricted to SP games that don't change much; a BR game like Warzone constantly gets updated, which would probably require retraining DLSS. Otherwise I can't think of a good reason why Nvidia hasn't pushed DLSS 2.0 to popular BR games.
Early DLSS required per-game training, but DLSS 2.0 uses a generic algorithm.
No doubt it will get updates.
The downside is developers need to add support for version 2.0.
 
Early DLSS required per-game training, but DLSS 2.0 uses a generic algorithm.
No doubt it will get updates.
The downside is developers need to add support for version 2.0.

Developers needed to add support for version 1 too, as they had to send their game code to Nvidia.

DLSS 2.0 is a lot simpler for developers. If their game or game engine already supports TAA, implementing DLSS 2.0 is even easier. There is talk that DLSS 3.0 won't require any further work from developers in any game that uses TAA.

The algorithm will get better as Nvidia continues to train it on their supercomputer.
 
Most BR players have decent systems, since high FPS is king. I play at 1080p as well, with 240 Hz. Nobody gives a shit about 4K in multiplayer games; frame rate is all that matters. In fact, Nvidia heavily advertises this now: https://www.nvidia.com/en-us/geforce/campaigns/frames-win-games/



So I suspect it's what I said above: a DLSS limitation that requires retraining when the game has large patches, so it's not feasible in evolving games like BRs. If that's true, then DLSS will have limited usefulness.


Would be interesting to see how many BR players besides enthusiasts have RTX GPUs that can do DLSS. How many players are still on earlier gens like Pascal, etc.?
 
While I'm sure the majority are on non-Turing GPUs, over time I expect more of them will upgrade their 10xx and older cards, especially as the whole price spectrum is covered and more generations are released.
 
While I'm sure the majority are on non-Turing GPUs, over time I expect more of them will upgrade their 10xx and older cards, especially as the whole price spectrum is covered and more generations are released.
I agree.
Wonder if part of the no-DLSS-in-BRs deal has to do with the possible "outrage" over a game giving people with certain tech an advantage? Besides the usual fastest-card-wins stuff.
Would a game publisher steer away from that?
 
DLSS 2.0 is a lot simpler for developers. If their game or game engine already supports TAA, implementing DLSS 2.0 is even easier. There is talk that DLSS 3.0 won't require any further work from developers in any game that uses TAA.

That DLSS 3.0 rumor was just made-up clickbait.

DLSS is always going to require developer work; it works a lot like checkerboard rendering, so you need deep hooks into the game engine, but it isn't hard work.

Though you will NEVER be able to just force it on for games that don't support it.
 
Though you will NEVER be able to just force it on for games that don't support it.
Never is a pretty strong word here. I mean, if a year ago someone said Nvidia would make 720p look like 1080p, but with double the performance, well, I would have said that could never happen, yet here we are.
 
Maybe the push for BR games and DLSS hasn't happened yet because most BR players are on 1080p potatoes not capable of DLSS...?
So, one of the reasons I moved up to 1600p back when that became available was for Battlefield games; not terribly different from BR combat, outside of the armored and flying vehicles. I went that direction because I wanted to see further, and that's an advantage I maintain today.

What concerns me with DLSS is that those details at a distance that I rely upon will not be properly enhanced; what might bother others is if the needed detail is over-enhanced, providing an ESP-like advantage.
 
So, one of the reasons I moved up to 1600p back when that became available was for Battlefield games; not terribly different from BR combat, outside of the armored and flying vehicles. I went that direction because I wanted to see further, and that's an advantage I maintain today.

What concerns me with DLSS is that those details at a distance that I rely upon will not be properly enhanced; what might bother others is if the needed detail is over-enhanced, providing an ESP-like advantage.
There's a good video from DF showing how DLSS 2.0 works in Control, comparing it to other methods and resolutions, including zooming in on small text and other details - it is not at all like an anisotropic filter with sharpening or some other "fake detail" method; it's closer to a mid resolution (e.g. 1800p instead of 2160p) in many scenes.
 
There's a good video from DF showing how DLSS 2.0 works in Control, comparing it to other methods and resolutions, including zooming in on small text and other details - it is not at all like an anisotropic filter with sharpening or some other "fake detail" method; it's closer to a mid resolution (e.g. 1800p instead of 2160p) in many scenes.
Sure; but that needs to be proven in a game like Battlefield where the 'detail' is literally a single native pixel, and you're cueing (and aiming and firing) on the difference in color vs. the background. Because shit really is that far away, and the weapons do reach!
 
Never is a pretty strong word here. I mean, if a year ago someone said Nvidia would make 720p look like 1080p, but with double the performance, well, I would have said that could never happen, yet here we are.

DLSS 2.0 needs access to motion vector data (the same data used for TAA). So unless DirectX defines a standard for generating and exposing motion vectors, DLSS will require game support.

The alternative is for Nvidia to add custom code in the driver for each game, based on that game's specific motion vector approach, and even then it's not guaranteed to work properly.
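
Just to illustrate what that motion vector data actually is (a toy example of my own, not anything from the actual DLSS SDK): for static geometry it's basically "where was this pixel last frame", which the engine gets by projecting the same world point with the previous and current camera matrices.

```python
# Toy example (mine, not NVIDIA's API): a screen-space motion vector for a
# static world-space point, computed from last frame's and this frame's
# view-projection matrices -- the same data a TAA pass already needs.
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Standard right-handed perspective projection (camera looks down -Z)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0,  (far + near) / (near - far),  2 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

def view_from_camera_pos(cam_pos):
    """View matrix for a camera at cam_pos looking down -Z (translation only)."""
    v = np.eye(4)
    v[:3, 3] = -np.asarray(cam_pos)
    return v

def project_to_uv(view_proj, world_point):
    """Project a world-space point to [0, 1] screen UV coordinates."""
    clip = view_proj @ np.append(world_point, 1.0)
    ndc = clip[:3] / clip[3]          # perspective divide -> [-1, 1]
    return ndc[:2] * 0.5 + 0.5        # -> [0, 1] UV space

proj = perspective(60.0, 16 / 9, 0.1, 1000.0)
prev_vp = proj @ view_from_camera_pos([0.00, 0.0, 0.0])   # camera last frame
curr_vp = proj @ view_from_camera_pos([0.05, 0.0, 0.0])   # camera moved this frame

point = np.array([1.0, 0.5, -10.0])   # a static point in the world
uv_prev = project_to_uv(prev_vp, point)
uv_curr = project_to_uv(curr_vp, point)

# Motion vector: where to look in the previous frame for this pixel's history.
# (Sign and unit conventions differ between engines; this is just the idea.)
motion_vector = uv_prev - uv_curr
print("prev UV:", uv_prev, "curr UV:", uv_curr, "motion:", motion_vector)
```

TAA already needs exactly this per-pixel history lookup, which is why engines that already support TAA are the easy targets for DLSS 2.0.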
 
DLSS 2.0 needs access to motion vector data (the same data used for TAA). So unless DirectX defines a standard for generating and exposing motion vectors, DLSS will require game support.

The alternative is for Nvidia to add custom code in the driver for each game, based on that game's specific motion vector approach, and even then it's not guaranteed to work properly.

Only thing I'm worried about with regard to DLSS is whether devs will implement it, since both consoles are all AMD.
 
Only thing I'm worried about with regard to DLSS is whether devs will implement it, since both consoles are all AMD.
AMD would no doubt benefit from an overall tech slowdown in the GPU field. Get rid of DLSS, wait on RT for another few years, etc.
 
Never is a pretty strong word here. I mean, if a year ago someone said Nvidia would make 720p look like 1080p, but with double the performance, well, I would have said that could never happen, yet here we are.

There was an amazing video on DLSS 2.0 reconstruction, linked earlier in this thread. Unfortunately you now need to sign up for the developer program to see it.

The reason DLSS 2.0 is so good is that you have to feed it a bunch of very specific inputs. You have to render at a lower resolution in a specified sample pattern (much like checkerboard rendering). Example: render the frame at half width; one frame you render the left pixel and the next the right, then use some smarts to assemble them. But more than that, in the video they indicated you have to boost the texture resolution (they said something like "because we can't create detail that isn't there"), change some other filtering levels, do a denoising step, and finally apply the 2D HUD at full resolution, after upscaling.

Almost none of these steps can be done after the fact. This is not just taking an output and upscaling it with AI. It's advanced checkerboard rendering with deep hooks into the game engine, plus AI.

So when I say this can never just be forced on, I mean this particular technique, which requires all these game engine changes and extra data for reconstruction. Now, in theory they could come up with something based on simply upscaling the final output with AI, but IMO we won't see that, because it will always be inferior to this kind of advanced AI checkerboard reconstruction.
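
To make a couple of those inputs concrete (again, my own rough sketch based on how TAA-style upscalers are commonly described, not code from the video or the SDK): the "specified sample pattern" is typically a sub-pixel jitter sequence like Halton(2,3) applied to the projection each frame, and the texture detail boost is usually a negative mip bias derived from the render-to-output resolution ratio.

```python
# Rough sketch (my own, under the assumptions above): per-frame sub-pixel
# jitter from a Halton(2,3) sequence, plus the usual negative texture mip
# bias so materials keep detail at the lower internal render resolution.
import math

def halton(index, base):
    """Halton low-discrepancy sequence value in [0, 1) for a 1-based index."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame_index, render_width, render_height, phase_count=16):
    """Sub-pixel camera jitter, returned as a clip-space offset."""
    i = (frame_index % phase_count) + 1
    jitter_x = halton(i, 2) - 0.5          # in pixels, roughly [-0.5, 0.5)
    jitter_y = halton(i, 3) - 0.5
    # A one-pixel shift is 2/width (or 2/height) in clip space, so this is the
    # offset you'd fold into the projection matrix each frame.
    return 2.0 * jitter_x / render_width, 2.0 * jitter_y / render_height

def mip_bias(render_width, display_width):
    """Commonly cited rule of thumb: bias = log2(render / display), i.e. negative."""
    return math.log2(render_width / display_width)

render_w, render_h = 960, 540   # 540p internal render
display_w = 1920                # 1080p output

print("mip bias:", round(mip_bias(render_w, display_w), 3))   # -1.0
for frame in range(4):
    print("frame", frame, "jitter:", jitter_offset(frame, render_w, render_h))
```

Everything else in that list (motion vectors, the denoise pass, compositing the HUD after upscaling) likewise has to happen inside the engine's frame, which is why it can't just be bolted on from the driver side.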

IMO, with a smart enough algorithm you could do a credible job of this without AI (see Control's DLSS 1.5 and Horizon Zero Dawn). AMD should be working on it to counter DLSS. It might not turn out as good as DLSS 2.0, but it could get close enough to be more competitive.
 
Only thing I'm worried about with regard to DLSS is whether devs will implement it, since both consoles are all AMD.

Microsoft has their own version of cloud machine learning (without tensor cores). They could add it to a future version of DirectX.
 
Microsoft has their own version of cloud machine learning (without tensor cores). They could add it to a future version of DirectX.
Microsoft is implementing a system that is going to upscale all original Xbox and Xbox 360 games to 4K on the Series X, in addition to adding HDR and increasing framerates for those games where the framerate isn't locked. There is an article on the Xbox website going into detail; I just need to find it.
 
DLSS 2.0 needs access to motion vector data (the same data used for TAA). So unless DirectX defines a standard for generating and exposing motion vectors, DLSS will require game support.

The alternative is for Nvidia to add custom code in the driver for each game, based on that game's specific motion vector approach, and even then it's not guaranteed to work properly.
UE4 has a branch that supports DLSS; once the game engines have support, I don't see why many games would leave that option absent, even if they're not using RT. The tech is really outstanding.

[Attached image: 540p with DLSS 2.0 vs. 1080p TAA comparison]


Here 540p with DLSS 2.0 looks way sharper, better than 1080p TAA.
 
Microsoft has their own version of cloud machine learning (without tensor cores). They could add it to a future version of DirectX.
Are you talking about DirectML?
AFAIK, it's already in DX12; I don't know why nobody uses it. I recall MS demoed Forza using it on a GTX 1080 Ti (I think) a couple of years ago or so, even before DLSS came out.
 
Are you talking about DirectML?
AFAIK, it's already in DX12; I don't know why nobody uses it. I recall MS demoed Forza using it on a GTX 1080 Ti (I think) a couple of years ago or so, even before DLSS came out.

Yes, it has to be DirectML. It will be used in the Series X. So I was under the assumption that it is a future feature.

– Xbox Series X supports Machine Learning for games with DirectML, a component of DirectX. DirectML leverages unprecedented hardware performance in a console, benefiting from over 24 TFLOPS of 16-bit float performance and over 97 TOPS (trillion operations per second) of 4-bit integer performance on Xbox Series X. Machine Learning can improve a wide range of areas, such as making NPCs much smarter, providing vastly more lifelike animation, and greatly improving visual quality.


https://news.xbox.com/en-us/2020/03/16/xbox-series-x-glossary/
 
Are you talking about DirectML?
AFAIK, it's already in DX12; I don't know why nobody uses it. I recall MS demoed Forza using it on a GTX 1080 Ti (I think) a couple of years ago or so, even before DLSS came out.

According to an interview between Microsoft Game Stack General Manager James Gwertzman and VentureBeat, the company is using machine learning (ML) to improve low-resolution textures in real time. You can read what he said below.


You were talking about machine learning and content generation. I think that's going to be interesting. One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It's working scarily well. To the point where we're looking at shipping really low-res textures and having ML models uprez the textures in real time. You can't tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it... Like literally not having to ship massive 2K by 2K textures. You can ship tiny textures... The download is way smaller, but there's no appreciable difference in game quality. Think of it more like a magical compression technology. That's really magical. It takes a huge R&D budget. I look at things like that and say — either this is the next hard thing to compete on, hiring data scientists for a game studio, or it's a product opportunity. We could be providing technologies like this to everyone to level the playing field again.

https://www.windowscentral.com/microsoft-wants-use-machine-learning-improve-poor-game-textures
 
Are you talking about DirectML?
AFAIK, it's already in DX12; I don't know why nobody uses it. I recall MS demoed Forza using it on a GTX 1080 Ti (I think) a couple of years ago or so, even before DLSS came out.

This article consolidates posts from 3 different reports

https://lordsofgaming.net/2020/06/xbox-series-x-directml-a-next-generation-game-changer/

First let’s look at the work of Playfab, a company that Microsoft acquired back in early 2018. They have been working on making tools for the back-end of games supported in the cloud. Playfab is using the power of Microsoft’s Azure servers & AI to upscale low-resolution textures in real-time. Playfab’s GM, James Gwertzman talked with VentureBeat on some of the things he and his team are working on.

“One of the studios inside Microsoft has been experimenting with using ML models for asset generation. It’s working scarily well. To the point where we’re looking at shipping really low-res textures and having ML models uprez the textures in real-time. You can’t tell the difference between the hand-authored high-res texture and the machine-scaled-up low-res texture, to the point that you may as well ship the low-res texture and let the machine do it”

Back in March Eurogamer’s Digital Foundry was presented with the Xbox Series X using machine learning to convert SDR to HDR in a few games including Halo 5 in real-time.

“We got to see the Xbox One X enhanced version of Halo 5 operating with a very convincing HDR implementation, even though 343 Industries never shipped the game with HDR support. Microsoft ATG principal software engineer Claude Marais showed us how a machine learning algorithm using Gears 5’s state-of-the-art HDR implementation is able to infer a full HDR implementation from SDR content on any back-compat title.”
 
Are you talking about DirectML?
AFAIK, it's already in DX12; I don't know why nobody uses it. I recall MS demoed Forza using it on a GTX 1080 Ti (I think) a couple of years ago or so, even before DLSS came out.

One more link...

image reconstruction techniques are improving in quality to the point where Nvidia's DLSS AI upscaling is now capable of producing hugely impressive results from just one quarter native resolution - and we actually have an example of some Microsoft research where the firm is using its own machine learning-based API, DirectML, to produce some remarkably good AI upscaling on Forza Horizon 3. So far, we've not had any kind of hints on hardware-accelerated deep learning features baked into either the next-gen consoles or indeed RDNA 2, but DirectML has been architected in parallel with the DXR ray tracing API and I find it hard to believe that Microsoft would develop this technology when only Nvidia has the hardware to fully leverage it.

https://www.eurogamer.net/articles/...theory-does-a-4tf-next-gen-console-make-sense
 
One more link...

image reconstruction techniques are improving in quality to the point where Nvidia's DLSS AI upscaling is now capable of producing hugely impressive results from just one quarter native resolution - and we actually have an example of some Microsoft research where the firm is using its own machine learning-based API, DirectML, to produce some remarkably good AI upscaling on Forza Horizon 3. So far, we've not had any kind of hints on hardware-accelerated deep learning features baked into either the next-gen consoles or indeed RDNA 2, but DirectML has been architected in parallel with the DXR ray tracing API and I find it hard to believe that Microsoft would develop this technology when only Nvidia has the hardware to fully leverage it.

https://www.eurogamer.net/articles/...theory-does-a-4tf-next-gen-console-make-sense

Microsoft did do DXR... which kinda negates your point about Microsoft developing APIs where only one vendor has hardware support ;)
 
Microsoft did do DXR... which kinda negates your point about Microsoft developing APIs where only one vendor has hardware support ;)

In that context, I think Eurogamer was saying that Microsoft was doing DLSS-style upscaling in Forza, and it is very unlikely that Forza would support only Nvidia's DLSS. There has to be a way for it to support AMD via DirectML too.
 
That DLSS 3.0 rumor was just made-up clickbait.

DLSS is always going to require developer work; it works a lot like checkerboard rendering, so you need deep hooks into the game engine, but it isn't hard work.

Well, good to know it was only clickbait.

But, as you say, it's not hard work to implement DLSS 2.0.
 
"Death Stranding" gets DLSS 2.0:
https://www.nvidia.com/en-us/geforce/news/death-stranding-pc-partnership-dlss-2-0-trailer/
"DEATH STRANDING already has the reputation of being a visual stunner, but NVIDIA DLSS 2.0 helps make the PC debut something special. The extra performance NVIDIA DLSS 2.0 delivers in DEATH STRANDING allows gamers to unlock the vast graphical potential of the PC platform by increasing the graphics settings and resolution, delivering on the vision we have for the game." - Akio Sakamoto, CTO, KOJIMA PRODUCTIONS
 