AMD Could Do DLSS Alternative with Radeon VII through DirectML API

DLSS isn’t exclusive to NVIDIA, at least in theory: AMD is reportedly testing an alternative implementation made possible by Microsoft’s DirectML API, a component of DirectX 12 that allows “trained neural networks” to run on any DX12 card. “The Deep Learning optimization, or algorithm, if you will, becomes a shader that can run over the traditional shader engine, without a need for tensor cores.”
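
To make the "algorithm becomes a shader" part concrete, here is a toy sketch of the general idea in plain NumPy (not actual DirectML code, and the weights are made up): a frame rendered at low resolution gets upscaled, then refined by a small pretrained filter, which is nothing more than arithmetic that any compute unit can execute.

```python
# Toy illustration only -- not DirectML. A "trained network" is ultimately just
# arithmetic over pixel data, so it can run on ordinary shader/compute units.
import numpy as np

def nearest_upscale(frame, factor=2):
    """Blow up a low-res frame by repeating pixels (the cheap part)."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def refine(frame, weights, bias):
    """Apply one pretrained 3x3 convolution + ReLU as a single 'shader pass'."""
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    out = np.empty_like(frame)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * weights) + bias
    return np.maximum(out, 0.0)  # ReLU

# Hypothetical weights; in reality these come from offline training on hi-res frames.
weights = np.full((3, 3), 1.0 / 9.0)
bias = 0.0

low_res = np.random.rand(96, 160)                # frame rendered at half resolution
output = refine(nearest_upscale(low_res), weights, bias)
print(output.shape)                              # (192, 320)
```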

Will game developers actually implement, experiment with, and integrate support for DirectML in their games? It also has to be said that it works in reverse: DirectML could make use of Nvidia's Tensor cores as well, which would certainly give them an advantage, as a Radeon card would see a performance hit whereas the RTX cards can simply offload the algorithm to their Tensor cores. Time will tell, but this certainly is an interesting development, as long as your graphics card is powerful enough, of course.
 
As most PC games are console ports, I doubt it. Same goes for ray tracing and DLSS: when consoles are capable of it, then maybe we'll see them in PC games as standard features rather than something a GPU vendor paid devs to implement.
 
As most PC games are console ports, I doubt it.

Actually, this could make a lot of sense. If you think about it, DLSS allows you to render everything at lower resolution, then fill in the gaps with a neural network (similar to how our eyes behave). If a certain number of cores, be it shaders or AMD's equivalent of Nvidia's Tensor cores, could be dedicated to the same task via DX12, all games could benefit from it, and consoles would get more performance from the same hardware (because of the lower-resolution rendering).
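
The appeal is easy to see with back-of-the-envelope pixel counts (rough arithmetic only; real gains depend on the whole pipeline):

```python
# Rough arithmetic only -- the real speedup depends on the whole pipeline.
native = 3840 * 2160        # 4K output target
internal = 2560 * 1440      # a typical lower internal render resolution
print(f"native pixels:   {native:,}")                         # 8,294,400
print(f"internal pixels: {internal:,}")                       # 3,686,400
print(f"shading work:    {internal / native:.0%} of native")  # ~44%
```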
 
I certainly hope these fancy over-hyped AI upscaling schemes are not the future. I want real rendered pixels, not some upscaling.
 
Who cares? DLSS is shit. Why the fuck would I downscale then upscale? I've seen the videos by Nexus, and DLSS looks like shit.
 
If AMD did this in software, I wonder whether it would have to be the same card as the one doing the regular rendering?
 
As most PC games are console ports, I doubt it. Same goes for ray tracing and DLSS: when consoles are capable of it, then maybe we'll see them in PC games as standard features rather than something a GPU vendor paid devs to implement.


Uh, actually, that is exactly why it can be done: AMD provides the main APUs of the two consoles, and both the PS4 Pro and the Xbox One X have their own versions of checkerboard rendering, which gives you image quality between the base resolution and the native target at around half the performance cost.


Wishing for native resolution is nice, but the performance budget isn't infinite; that is why we use rasterized graphics instead of path-traced ones (and even with RTX you guys have seen the hacks they have had to pull off for two types of two-bounce ray-traced effects, simply because the performance budget is never enough).
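
Roughly, the checkerboard trick mentioned above boils down to something like this (a toy NumPy sketch; the real console implementations also reuse the previous frame and motion vectors, which this ignores):

```python
# Toy sketch of the checkerboard idea: shade only half the pixels each frame and
# fill the holes from shaded neighbours. Real console implementations also reuse
# the previous frame and motion vectors, which this ignores.
import numpy as np

def checkerboard_mask(h, w, phase=0):
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy + xx + phase) % 2 == 0        # alternate the pattern per frame via `phase`

def reconstruct(shaded, mask):
    """Fill unshaded pixels with the average of their left/right shaded neighbours."""
    left = np.roll(shaded, 1, axis=1)
    right = np.roll(shaded, -1, axis=1)
    out = shaded.copy()
    out[~mask] = 0.5 * (left + right)[~mask]
    return out

full_frame = np.random.rand(8, 8)            # stand-in for a fully shaded frame
mask = checkerboard_mask(8, 8)
shaded = np.where(mask, full_frame, 0.0)     # we only paid to shade ~half the pixels
approx = reconstruct(shaded, mask)
print(f"shaded {mask.sum()} of {mask.size} pixels")
```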
 
Uh, actually, that is exactly why it can be done: AMD provides the main APUs of the two consoles, and both the PS4 Pro and the Xbox One X have their own versions of checkerboard rendering, which gives you image quality between the base resolution and the native target at around half the performance cost.


Wishing for native resolution is nice, but the performance budget isn't infinite; that is why we use rasterized graphics instead of path-traced ones (and even with RTX you guys have seen the hacks they have had to pull off for two types of two-bounce ray-traced effects, simply because the performance budget is never enough).

I already know that. In reference to ray tracing, that's one thing that will have to wait quite a bit before consoles are able to make use of it. But again, even with this tech, getting devs to use it is the main issue. Some devs need to have money thrown at them to even bother.
 
Sadly that part is entirely true; a system-wide option that one could check in the drivers is the holy grail for all of us, exactly because of that.

I won't say that developers are lazy; I will say that publishers rush them and push them until only the most basic stuff can be implemented, and that's a shame.
 
This only supports my hypothesis that AMD is going to do ray tracing with the CPU+GPU using general compute performance. If AI-like functionality can be done on the GPU with DirectML, then the ray casting can be done on the CPU. You know, AMD's CPUs that have 8+ cores and 16+ threads, which AMD almost always shows off using Cinebench, a ray-tracing benchmark. CPU cores that are nearly useless in games because games mostly care about IPC. It just makes the most sense to me.
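
Just to illustrate what "ray casting on spare CPU cores" would even mean (a purely hypothetical sketch, nothing to do with anything AMD has announced): a ray/primitive intersection test is scalar, branchy math that general-purpose cores handle fine, and it spreads trivially across threads.

```python
# Hypothetical illustration: a ray/sphere intersection test fanned out across a
# CPU thread pool. Not an actual AMD technique, just the shape of the workload.
import numpy as np
from multiprocessing.dummy import Pool    # thread pool standing in for CPU cores

def hit_sphere(origin, direction, center, radius):
    """Return the nearest hit distance along the ray, or None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c                # direction assumed normalised (a = 1)
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
center = np.array([0.0, 0.0, -5.0])
rays = [np.array([x, 0.0, -1.0]) / np.linalg.norm([x, 0.0, -1.0])
        for x in np.linspace(-1, 1, 8)]

with Pool(4) as pool:                     # fan the rays out across worker threads
    hits = pool.map(lambda d: hit_sphere(origin, d, center, 1.0), rays)
print(hits)
```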
 
I like the idea of a standardised model for a DLSS-like process over a vendor-specific one any day of the week. If Microsoft can build it into DX12, then the guys at Khronos can do the same for Vulkan, and then everybody wins.
 
Pointless. Tech that relies on dev adoption won't go very far; AMD and Nvidia gave them the bad habit of sponsoring games, so most greedy studios won't implement something even if it's better for the game until one of them sponsors them.
So unless AMD pushes this on the upcoming consoles, ray tracing and DLSS won't take off.
 
I like the position AMD has put themselves in. They're basically using Nvidia as a beta tester. Nvidia puts something out, AMD improves upon it.
 
Pointless. Tech that relies on dev adoption won't go very far; AMD and Nvidia gave them the bad habit of sponsoring games, so most greedy studios won't implement something even if it's better for the game until one of them sponsors them.
So unless AMD pushes this on the upcoming consoles, ray tracing and DLSS won't take off.

That's actually not how it's supposed to work. The game devs send Nvidia the game, and Nvidia runs it through their supercomputer. Then Nvidia releases the DLSS profile in a driver update. That's how it is supposed to work, so there's nothing for the game devs to do. DLSS certainly has a future if both Nvidia and AMD support it. Time will tell.
 
AMD can do a lot of things. The problem with AMD's Radeon division is that they don't actually do enough stuff.
 
That's actually not how it's supposed to work. The game devs send Nvidia the game, and Nvidia runs it through their supercomputer. Then Nvidia releases the DLSS profile in a driver update. That's how it is supposed to work, so there's nothing for the game devs to do. DLSS certainly has a future if both Nvidia and AMD support it. Time will tell.
That is how it is supposed to work. So what is wrong?
Devs not sending their games? Nvidia's supercomputer broken?
At this point the best-case scenario is AAA games at some point. I can't consider it a universal tech when it most likely won't be available for the thousands of games that aren't the latest and greatest. Fringe VR games that need it? Not a chance, by the looks of it!
 
That is how it is supposed to work. So what is wrong?
Devs not sending their games? Nvidia's supercomputer broken?
At this point the best-case scenario is AAA games at some point. I can't consider it a universal tech when it most likely won't be available for the thousands of games that aren't the latest and greatest. Fringe VR games that need it? Not a chance, by the looks of it!

Who knows what the issues are, but a little patience goes a long way.
 
Who knows what the issues are, but a little patience goes a long way.
What is your definition of a little patience? Turding wasn't released last week. This is looking bad. No, an outright CON.
Give me dates please?
 
What is your definition of a little patience? Turding wasn't released last week. This is looking bad. No, an outright CON.
Give me dates please?

I am in my early 50's and have been a gamer since my teens. When it comes to the PC industry, change always takes time. Take DX12, for instance: how long has that been out, and yet still the majority of games are DX11. Another one is PC monitors: still the majority of gaming screens are 60 Hz. How bizarre is that? All gaming screens should be at least 100 Hz. What about quad-core CPUs? If it wasn't for AMD's Ryzen, I'm sure all of Intel's latest mainstream CPUs would still be only 4 cores. If you don't have patience, then PCs are not for you. Change is always slow in the PC industry.
 
Change is always slow in the PC industry.
Strange, I remember building boxes every six months for a while back in the day. It wasn't always this slow.

AMD is making the smart play, waiting for the standards to shake out here.
 
I certainly hope these fancy over-hyped AI upscaling schemes are not the future. I want real rendered pixels, not some upscaling.

To be honest, I think this kind of AI-based supersampling may be the future. Real-time rendering of 4K is already frigging hard; 8K is just insane. Just look at what an amazing job the neural-network texture upscalers are doing. I mean, I know that kind of upscaling is impossible in real time (for now?), but it is a good example that upscaling can work really well.
 
For God's sake, DirectML was first demoed on Nvidia hardware. I guess Nvidia will support both.

DirectML runs as a shader; DLSS runs on Tensor cores. I'll assume DLSS runs faster, or maybe Nvidia just doesn't want AMD to take advantage of its neural networks...
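
For context, the operation a Volta/Turing Tensor core is built around is a small fused matrix multiply-accumulate (FP16 inputs, FP32 accumulate on 4x4 tiles). The same math can be expressed with ordinary shader ALU instructions via DirectML; it just doesn't get the dedicated hardware, so it eats shader time instead. A trivial NumPy rendering of that one operation:

```python
# The primitive a Tensor core accelerates: D = A @ B + C on small tiles,
# with FP16 inputs and an FP32 accumulator. Shaders can do the same math,
# just without the dedicated throughput.
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # FP16 inputs
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)   # FP32 accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```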
 
What is your definition of a little patience? Turding wasn't released last week. This is looking bad. No, an outright CON.
Give me dates please?

Development lifecycles are pretty long. Turing was only just announced in September. It takes AT LEAST 2 years to develop a triple A game, usually more. Sometimes developers are close to Nvidia and receive support and guidance on how to implement future features. They might have these features already, if Nvidia caught them early enough in the development cycle.

For the other developers who don't have this kind of access, we can expect support for features like this to appear in titles that started development on or after September 2018. So, available in titles somewhere between 2020 and 2022.

Patience is key.

That said, I don't want upscaling, AI or not.
 
DirectML runs as a shader; DLSS runs on Tensor cores. I'll assume DLSS runs faster, or maybe Nvidia just doesn't want AMD to take advantage of its neural networks...

Nvidia always ignores open standards trying to create their own proprietary nonsense whenever they can in an attempt to lock out the competition. It's kind of their grand evil modus operandi. Lock-outs and lock-ins inconveniencing the user in order to pump up the profits.

VRR was first out of the gate as part of the VESA standard in 2014, but could Nvidia use it? No. Instead they took the idea, created their own G-Sync standard, and locked it to Nvidia GPUs, knowing that people keep monitors for a long time and would thus be locked into buying their products for years, or else not get the full features of their screen.

Then we had GameWorks and HairWorks and things like that, where Nvidia provides a toolkit and middleware up front to developers, reducing the work they have to do to implement fancy visual features. At first these locked out AMD users altogether, and when Nvidia finally let them in, it did so in a highly unoptimized fashion for no technical reason, forcing users to pick between having all the graphical features on Nvidia, or not having them (or taking a big fps penalty) on AMD hardware.

Same thing is going on here with DLSS and Ray Tracing I fear.

AMD on the other hand is the complete opposite. They embrace and create open standards, but then fail to take advantage of them, essentially giving things like HBM and FreeSync away.

Nvidia has a long history of coercion, lock-outs, lock-ins and unethical market manipulation. I hate that I give them money, lots of money, for their products, but as it stands right now AMD just doesn't have a sufficient alternative for me.
 
Development lifecycles are pretty long. Turing was only just announced in September. It takes AT LEAST 2 years to develop a triple A game, usually more. Sometimes developers are close to Nvidia and receive support and guidance on how to implement future features. They might have these features already, if Nvidia caught them early enough in the development cycle.

For the other developers who don't have this kind of access, we can expect support for features like this to appear in titles that started development on or after September 2018. So, available in titles somewhere between 2020 and 2022.

Patience is key.

That said, I don't want upscaling, AI or not.

Turing cards were released in September, not announced. And it takes 3 to 5 years to develop a GPU; do you think they just suddenly decided to use DLSS in September last year? They have had Tensor cores on cards since December 2017, and Tensor cores have actually been known about since May 2017, as that's when Nvidia announced the V100 and explained the architecture.

Not that it matters much, because game development cycles don't really matter for DLSS: with DLSS the work is all done by Nvidia's supercomputers. And, according to Nvidia, it's easy to implement in older games. If you want to use DLSS in your game, you send it to Nvidia and they use their supercomputers. It doesn't require any time or effort on the part of the developers apart from sending data to Nvidia to train the AI.

Yet here we are, five months after the Turing cards were announced, with no DLSS games to play. That's really strange to me, considering that Nvidia is supposed to be great at working with game developers. What's the delay?
 
Nvidia has a long history of coercion, lock-outs, lock-ins and unethical market manipulation. I hate that I give them money, lots of money, for their products, but as it stands right now AMD just doesn't have a sufficient alternative for me.

So much truth here. That said, I had been spec'ing out laptops to replace my aging Aorus, and I've made the decision to switch out my entire setup. I'm moving very consciously away from Nvidia this cycle because I am tired of criticizing them but caving and buying green anyway. I do enjoy the irony in owning (and having NO interest in replacing) my Shield TV, and having pretty decent usage of the GFN beta under my belt (Mac user, sue me).

I just picked up a 2018 15" MBP. Most of my workflow lives in the OSX environment, and the dodgy, not-always-perfect science of hackintoshing is no longer a viable alternative. I'm tired of fighting with kexts and web drivers and swapping out wifi cards. The Vega 20 bump doesn't make it a better buy necessarily, but it is at least on par performance-wise with my other options (X1 Extreme, XPS 15, Precision 5520). I am offloading my G-Sync monitor to a neighbor's kid who thinks it's the most amazing thing he's ever seen, and I'll be going with a FreeSync 2 based ultrawide. An eGPU setup based on either a Vega 64 or the upcoming Vega 7 will be coming shortly, and I'll be using the Shadow service for a few months until I make that plunge. I know I am sacrificing top-end performance and features, but I (and 80 percent of the posters here) operate in the PC gaming 1%. The performance drop-off isn't so bad as to be intolerable. I don't see RTX being something I can't live without for at least another year, and by then 60 fps with RTX on won't be a big deal anymore.

As the people who buy their most profitable equipment and justify living on the expensive bleeding edge of this industry, we dictate what they charge, because we are the ones who agree to pay what they ask. Stop buying, and they will react accordingly. This has already been borne out with G-Sync. That isn't to say it isn't a good tech development; there are benefits to its existence. But now that there is a cheaper, mostly equal alternative for MOST people, we shouldn't be rewarding such anti-competitive stances.
 
How about AMD first makes a video card that outruns a GeForce?

Then we can talk about adding anything you want.
 
Trained neural network... deep learning...
Such big words to sell us stuff.
 
Nvidia always ignores open standards trying to create their own proprietary nonsense whenever they can in an attempt to lock out the competition. It's kind of their grand evil modus operandi. Lock-outs and lock-ins inconveniencing the user in order to pump up the profits.

VRR was first out of the gate as part of the VESA standard in 2014, but could Nvidia use it? No. Instead they took the idea, created their own G-Sync standard, and locked it to Nvidia GPUs, knowing that people keep monitors for a long time and would thus be locked into buying their products for years, or else not get the full features of their screen.

Then we had GameWorks and HairWorks and things like that, where Nvidia provides a toolkit and middleware up front to developers, reducing the work they have to do to implement fancy visual features. At first these locked out AMD users altogether, and when Nvidia finally let them in, it did so in a highly unoptimized fashion for no technical reason, forcing users to pick between having all the graphical features on Nvidia, or not having them (or taking a big fps penalty) on AMD hardware.

Same thing is going on here with DLSS and Ray Tracing I fear.

AMD on the other hand is the complete opposite. They embrace and create open standards, but then fail to take advantage of them, essentially giving things like HBM and FreeSync away.

Nvidia has a long history of coercion, lock-outs, lock-ins and unethical market manipulation. I hate that I give them money, lots of money, for their products, but as it stands right now AMD just doesn't have a sufficient alternative for me.
AMD was in a partnership with multiple companies to make HBM; they didn't 'give it away'. A bit like the PS3's Cell CPU and IBM/Sony, etc.
FreeSync is an open standard based on Adaptive-Sync, as you know, and that is great because now people are not locked into one provider of screens, even Nvidia users, for once in the last few years, lol.

How about AMD first makes a video card that outruns a GeForce?

Then we can talk about adding anything you want.
You keep posting this same bullshit and you clearly don't know shit about AMD's product lineup. The VII is expected to compete with the 2080, and they have competing solutions the whole way down the product stack, and they actually beat Nvidia comfortably in the mid to low range with existing products.
So short of the RMA 2080 Ti and the Titan RMA, which is less than one percent of the market, they compete with 99% of the product stack this year already. Stop lying; I bet you own ngreedia shares, hence the FUD spreading. This isn't plebbit and you will get called out for bullshit.
 
People are completely forgetting that, just like Turing, the Radeon VII is going to have a bunch of hardware that sits unused in traditional gaming workloads and that it'll be able to dedicate to anti-aliasing, but with a major difference in flexibility. For Turing that's obviously the Tensor cores; in Vega 20 / Radeon VII's case it's the 200-300 GB/s of memory bandwidth that goes unused while gaming (memory bandwidth scaling for gaming flatlines at around 700-750 GB/s, leaving an entire HBM2 stack sitting idle).

Nvidia's Tensor cores ONLY work with matrix math, and are thus useless in gaming workloads outside of running pre-trained machine-learning AA algorithms (DLSS); whereas the Radeon VII's huge excess of memory bandwidth can be used either to accelerate similar algorithms via DirectML, or to run various forms of bog-standard AI requiring NO PRE-TRAINING, with minimal performance hit.
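
Rough headroom arithmetic using the figures above (the Radeon VII's quoted ~1 TB/s total and the ~750 GB/s flatline point; nothing here is measured):

```python
# Rough arithmetic with the numbers cited above -- not measurements.
total_bw = 1000            # GB/s, Radeon VII's quoted HBM2 bandwidth
gaming_flatline = 750      # GB/s, where gaming gains level off (per the post)
headroom = total_bw - gaming_flatline
print(f"~{headroom} GB/s left over for extra work such as a DirectML pass")
```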
 
Deep learning is obviously better than super sampling. Is there anything tensor cores can't do?
 