AMD Could Do DLSS Alternative with Radeon VII through DirectML API

Discussion in '[H]ard|OCP Front Page News' started by Megalith, Jan 20, 2019.

  1. Megalith

    Megalith 24-bit/48kHz Staff Member

    Messages:
    12,864
    Joined:
    Aug 20, 2006
    DLSS isn’t exclusive to NVIDIA, at least in theory: AMD is reportedly testing an alternative implementation made possible by Microsoft’s DirectML API, a component of DirectX 12 that allows “trained neural networks” to run on any DX12 card. “The Deep Learning optimization, or algorithm, if you will, becomes a shader that can run over the traditional shader engine, without a need for tensor cores.”

    Will game developers actually implement, experiment with, and integrate support for DirectML in games? It also has to be said that it works in reverse: DirectML could also make use of Nvidia's Tensor cores, certainly giving them an advantage, as a Radeon card would see a performance hit whereas the RTX cards can simply offload the algorithm to their Tensor cores. Time will tell, but this certainly is an interesting development, as long as your graphics card is powerful enough, of course.
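    To make the "runs as a shader on any DX12 card" point concrete, here is a minimal, hypothetical C++ sketch (not from the article): DirectML sits on top of an ordinary Direct3D 12 device, so the inference work lands on the regular shader engines rather than requiring tensor cores. Error handling is omitted.

    Code:
    // Sketch only: any DX12-capable GPU works here. Link with d3d12.lib and
    // DirectML.lib (Windows 10 1903+ SDK).
    #include <d3d12.h>
    #include <DirectML.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> d3d12Device;
        // nullptr = default hardware adapter; any DX12 adapter will do.
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                          IID_PPV_ARGS(&d3d12Device));

        ComPtr<IDMLDevice> dmlDevice;
        // The DirectML device simply wraps the D3D12 device; no
        // vendor-specific hardware (tensor cores) is required.
        DMLCreateDevice(d3d12Device.Get(), DML_CREATE_DEVICE_FLAG_NONE,
                        IID_PPV_ARGS(&dmlDevice));

        // From here a trained network would be expressed as DirectML
        // operators (convolutions, GEMMs, activations), recorded into a
        // normal D3D12 command list and executed alongside the game's
        // other GPU work.
        return 0;
    }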
     
    Maddness and Revdarian like this.
  2. Chris_B

    Chris_B [H]ardness Supreme

    Messages:
    5,031
    Joined:
    May 29, 2001
    As most PC games are console ports I doubt it. Same goes for ray tracing and DLSS: when consoles are capable of it, then maybe we'll see them in PC games as standard features rather than something a GPU vendor paid devs to implement.
     
    Maddness likes this.
  3. euskalzabe

    euskalzabe Gawd

    Messages:
    860
    Joined:
    May 9, 2009
    Actually, this could make a lot of sense. If you think about it, DLSS lets you render everything at a lower resolution, then fill in the gaps with a neural network (similar to how our eyes behave). If a certain number of cores, be it shaders or AMD's equivalent to Nvidia's tensor cores, could be dedicated to the same task via DX12, all games could benefit from it, and consoles would get more performance from the same hardware (because of the lower-resolution rendering).
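    A toy, CPU-only illustration of the "render low, reconstruct high" idea (entirely hypothetical, and not how DLSS is actually implemented): here a plain bilinear filter stands in for the trained network, which in DLSS or a DirectML equivalent would instead fill in detail it learned from high-resolution training images.

    Code:
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Upscale a grayscale image by 2x with bilinear filtering. A neural
    // upscaler would replace this step with network inference.
    std::vector<float> Upscale2x(const std::vector<float>& src, int w, int h)
    {
        const int W = w * 2, H = h * 2;
        std::vector<float> dst(W * H);
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                const float sx = x * 0.5f, sy = y * 0.5f;
                const int x0 = (int)sx, y0 = (int)sy;
                const int x1 = std::min(x0 + 1, w - 1);
                const int y1 = std::min(y0 + 1, h - 1);
                const float fx = sx - x0, fy = sy - y0;
                const float top = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
                const float bot = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
                dst[y * W + x] = top * (1 - fy) + bot * fy;
            }
        }
        return dst;
    }

    int main()
    {
        // "Render" a tiny 2x2 frame at low resolution, then reconstruct 4x4.
        const std::vector<float> lowRes = { 0.0f, 1.0f, 1.0f, 0.0f };
        const std::vector<float> highRes = Upscale2x(lowRes, 2, 2);
        for (int y = 0; y < 4; ++y) {
            for (int x = 0; x < 4; ++x) printf("%.2f ", highRes[y * 4 + x]);
            printf("\n");
        }
        return 0;
    }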
     
  4. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    26,970
    Joined:
    Oct 29, 2000
    I certainly hope these fancy over-hyped AI upscaling schemes are not the future. I want real rendered pixels, not some upscaling.
     
    Skyblue likes this.
  5. captaindiptoad

    captaindiptoad Limp Gawd

    Messages:
    361
    Joined:
    Dec 22, 2014
    Who cares? DLSS is shit. Why the fuck would I downscale then upscale? I've seen the videos by Nexus, and DLSS looks like shit.
     
  6. Brian_B

    Brian_B 2[H]4U

    Messages:
    2,081
    Joined:
    Mar 23, 2012
    Now if only nVidia could do DLSS
     
    N4CR, Marees, El Derpo and 3 others like this.
  7. Tak Ne

    Tak Ne [H]ard|Gawd

    Messages:
    1,217
    Joined:
    Jan 28, 2008
    If AMD did this in software, I wonder if it would have to be the same card as the one doing the regular rendering?
     
  8. Revdarian

    Revdarian 2[H]4U

    Messages:
    2,327
    Joined:
    Aug 16, 2010

    Uh, actually that is exactly why it can be done. AMD are the providers of the main APUs of the two consoles, and both the PS4 Pro and the Xbox One X have their own versions of checkerboard rendering, which gives you image quality between the base resolution and the native desired one at around half the performance cost (a toy sketch follows at the end of this post).


    Wishing for native resolution is nice, but the performance budget isn't infinite. That is why we use rasterized graphics instead of path-traced ones (and even with RTX you guys have seen the hacks they have had to pull off for two types of two-bounce ray-traced effects, simply because the performance budget is never enough).
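    A toy sketch of the checkerboard idea (illustrative only; the real PS4 Pro / Xbox One X implementations also reproject the previous frame): each frame shades only the pixels whose checker parity matches the frame index, so roughly half the pixels are rendered per frame and the rest are reconstructed.

    Code:
    #include <cstdio>

    int main()
    {
        const int w = 8, h = 4;
        for (int frame = 0; frame < 2; ++frame) {
            printf("frame %d (X = shaded, . = reconstructed):\n", frame);
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x)
                    printf("%c ", ((x + y) % 2 == frame % 2) ? 'X' : '.');
                printf("\n");
            }
        }
        return 0;
    }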
     
  9. Chris_B

    Chris_B [H]ardness Supreme

    Messages:
    5,031
    Joined:
    May 29, 2001
    I already know that. In reference to ray tracing, that's one thing that will have to wait quite a bit before consoles are able to make use of it. But again, even with this tech it's getting devs to use it that's the main issue. Some devs need to have money thrown at them to even bother.
     
    Revdarian likes this.
  10. Revdarian

    Revdarian 2[H]4U

    Messages:
    2,327
    Joined:
    Aug 16, 2010
    Sadly, that part is entirely true; a system-wide option that one could check in the drivers is the holy grail for all of us, exactly because of that.

    I won't say that developers are lazy; I will say that publishers rush them and push them until only the most basic stuff can be implemented, and that's a shame.
     
    Maddness likes this.
  11. DukenukemX

    DukenukemX [H]ardness Supreme

    Messages:
    4,224
    Joined:
    Jan 30, 2005
    This only supports my hypothesis that AMD is going to do ray tracing with the CPU+GPU using general compute performance. If AI-like functionality can be done on the GPU with DirectML, then the ray casting can be done on the CPU. You know, AMD's CPUs that have 8+ cores with 16+ threads, which AMD shows off almost always using Cinebench, a ray-tracing benchmark. CPU cores that are nearly useless in games because games only care about IPC. It just makes the most sense to me.
     
    N4CR likes this.
  12. Lakados

    Lakados [H]ard|Gawd

    Messages:
    1,145
    Joined:
    Feb 3, 2014
    I like the idea of a standardised model for a DLSS-like process over a vendor-specific one any day of the week. If Microsoft can build it into DX12, then the guys at Khronos can do the same for Vulkan, and everybody wins.
     
  13. alamox

    alamox Gawd

    Messages:
    596
    Joined:
    Jun 6, 2014
    Pointless. Tech that relies on dev adoption won't go very far; AMD and Nvidia gave them the bad habit of sponsoring games, so most greedy studios won't implement something, even if it's better for the game, until one of them sponsors them.
    So unless AMD pushes this on the upcoming consoles, ray tracing and DLSS won't take off.
     
  14. Krenum

    Krenum [H]ardForum Junkie

    Messages:
    14,865
    Joined:
    Apr 29, 2005
    I like the position AMD has put themselves in. They're basically using Nvidia as a beta tester. Nvidia puts something out, AMD improves upon it.
     
  15. Maddness

    Maddness [H]ard|Gawd

    Messages:
    1,119
    Joined:
    Oct 24, 2014
    That's actually not how it's supposed to work. The game devs send Nvidia the game and Nvidia runs it through their supercomputer. Then Nvidia releases the DLSS profile in a driver update. That's how it is supposed to work, so there's nothing for the game devs to do. DLSS certainly has a future if both Nvidia and AMD support it. Time will tell.
     
  16. umeng2002

    umeng2002 Gawd

    Messages:
    894
    Joined:
    May 23, 2008
    AMD can do a lot of things. The problem with AMD's Radeon division is that they don't actually do enough stuff.
     
  17. funkydmunky

    funkydmunky 2[H]4U

    Messages:
    2,161
    Joined:
    Aug 28, 2008
    That is how it is supposed to work. So what is wrong?
    Devs not sending it? Nvidia's supercomputer broken?
    At this point the best-case scenario is some AAA games eventually. I can't consider it a universal tech when it most likely won't be available for the thousands of games that aren't the latest and greatest. Fringe VR games that need it? Not a chance, by the looks of it!
     
  18. Maddness

    Maddness [H]ard|Gawd

    Messages:
    1,119
    Joined:
    Oct 24, 2014
    Who knows what the issues are, but a little patience goes a long way.
     
  19. funkydmunky

    funkydmunky 2[H]4U

    Messages:
    2,161
    Joined:
    Aug 28, 2008
    What is your definition of a little patience? Turding wasn't released last week. This is looking bad. No, an outright CON.
    Give me dates please?
     
  20. Maddness

    Maddness [H]ard|Gawd

    Messages:
    1,119
    Joined:
    Oct 24, 2014
    I am in my early 50s and have been a gamer since my teens. When it comes to the PC industry, change always takes time. Take DX12, for instance: how long has that been out, and yet the majority of games are still DX11. Another one is PC monitors: the majority of gaming screens are still 60 Hz. How bizarre is that, when all gaming screens should be at least 100 Hz. What about quad-core CPUs? If it wasn't for AMD's Ryzen, I'm sure all of Intel's latest mainstream CPUs would still be only 4 cores. If you don't have patience, then PCs are not for you. Change is always slow in the PC industry.
     
    TheSmJ, steakman1971 and Shadowed like this.
  21. Mega6

    Mega6 [H]ard|Gawd

    Messages:
    1,438
    Joined:
    Aug 13, 2017
    Strange, I remember building boxes every six months for a while back in the day. It wasn't always this slow.

    AMD is making the smart play, waiting for the standards to shake out here.
     
    notarat likes this.
  22. Chris_B

    Chris_B [H]ardness Supreme

    Messages:
    5,031
    Joined:
    May 29, 2001
    Or it could be, as said above, that devs for the most part don't go out of their way to add features unless they get "sponsorship" for a game.
     
    Maddness and SeymourGore like this.
  23. MaZa

    MaZa 2[H]4U

    Messages:
    2,651
    Joined:
    Sep 21, 2008
    To be honest, I think this kind of AI-learned supersampling may be the future. Real-time rendering at 4K is already frigging hard; 8K is just insane. Just look at what an amazing job the neural-network texture upscalers are doing. I mean, I know that kind of upscaling is impossible in real time (for now?), but it is a good example that upscaling can work really well.
     
    GSDragoon likes this.
  24. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,111
    Joined:
    Jul 26, 2005
    For god's sake, DirectML was first demoed on Nvidia hardware. I guess Nvidia will support both.

    DirectML runs as a shader; DLSS runs on tensor cores. I'll assume DLSS runs faster, or maybe Nvidia just doesn't want AMD to take advantage of its neural networks...
     
  25. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    26,970
    Joined:
    Oct 29, 2000
    Development lifecycles are pretty long. Turing was only just announced in September. It takes AT LEAST two years to develop a triple-A game, usually more. Sometimes developers are close to Nvidia and receive support and guidance on how to implement future features. They might have these features already, if Nvidia caught them early enough in the development cycle.

    For the other developers who don't have this kind of access, we can expect support for features like this to appear in titles that started development on or after September 2018. So, available in titles somewhere between 2020 and 2022.

    Patience is key.

    That said, I don't want upscaling, AI or not.
     
    TheSmJ and Maddness like this.
  26. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    26,970
    Joined:
    Oct 29, 2000
    Nvidia always ignores open standards, trying to create their own proprietary nonsense whenever they can in an attempt to lock out the competition. It's kind of their grand evil modus operandi: lock-outs and lock-ins inconveniencing the user in order to pump up the profits.

    VRR was first out of the gate as part of the VESA standard in 2014, but could Nvidia use it? No. Instead they took the idea, created their own G-Sync standard, and locked it to Nvidia GPUs, knowing that people keep monitors for a long time and thus would be locked into buying their products for years, or not get the full features of their screen.

    Then we had GameWorks and HairWorks and things like that, where Nvidia provides a toolkit and middleware up front to developers, reducing the work they have to do to implement fancy visual features. At first this locked out AMD users altogether, and when Nvidia finally let them in, it did so in a highly unoptimized fashion for no technical reason, forcing users to pick between having all the graphical features on Nvidia, or not having them (or taking a great fps penalty) on AMD hardware.

    The same thing is going on here with DLSS and ray tracing, I fear.

    AMD, on the other hand, is the complete opposite. They embrace and create open standards, but then fail to take advantage of them, essentially giving things like HBM and FreeSync away.

    Nvidia has a long history of coercion, lock-outs, lock-ins and unethical market manipulation. I hate that I give them money, lots of money, for their products, but as it stands right now AMD just doesn't have a sufficient alternative for me.
     
    Last edited: Jan 21, 2019
  27. reaper12

    reaper12 2[H]4U

    Messages:
    2,170
    Joined:
    Oct 21, 2006
    Turing cards were released in September, not announced. And it takes 3 to 5 years to develop a GPU; do you think they just suddenly decided to use DLSS in September last year? They have had Tensor cores on cards since December 2017. Tensor cores were actually known about since May 2017, as that's when Nvidia announced the V100 and explained the architecture.

    Not that it matters much, because game development cycles don't really matter for DLSS; the work is all done by Nvidia's supercomputers. And according to Nvidia, it's easy to implement in older games. If you want to use DLSS in your game, you send it to Nvidia and they use their supercomputers. It doesn't require any time or effort on the part of the developers apart from sending data to Nvidia to train the AI.

    Yet here we are, five months after Turing cards were announced, with no DLSS games to play. That's really strange to me, considering that Nvidia is supposed to be great at working with game developers. What's the delay?
     
    Revdarian and funkydmunky like this.
  28. Mega6

    Mega6 [H]ard|Gawd

    Messages:
    1,438
    Joined:
    Aug 13, 2017
    Maybe... it's not that easy and doesn't work well? Or Nv's supercomputer has a virus. :D
     
  29. cp_kirkley

    cp_kirkley Limp Gawd

    Messages:
    178
    Joined:
    Oct 4, 2012
    So much truth here. That said, I had been spec'ing out laptops to replace my aging Aorus, and have made the decision to switch out my entire setup. I'm moving very consciously away from Nvidia this cycle because I am tired of criticizing them but then caving and buying green anyway. I do enjoy the irony in owning (and having NO interest in replacing) my Shield TV - and having pretty decent usage of the GFN beta under my belt (Mac user - sue me).

    I just picked up a 2018 15" MBP. Most of my workflow lives in the OSX environment, and the dodgy, not-always-perfect science of hackintoshing is no longer a viable alternative. I'm tired of fighting with kexts and web drivers and swapping out wifi cards. The Vega 20 bump doesn't necessarily make it a better buy, but it is at least on par performance-wise with my other options (X1 Extreme, XPS 15, Precision 5520). I am offloading my G-Sync monitor to a neighbor's kid who thinks it's the most amazing thing he's ever seen, and I'll be going with a FreeSync 2 based ultrawide. An eGPU setup based on either a Vega 64 or the upcoming Vega 7 will be coming shortly, and I'll be using the Shadow service for a few months until I make that plunge. I know I am sacrificing top-end performance and features, but I (and 80 percent of the posters here) operate in the PC gaming 1%. The performance drop-off isn't so bad as to be intolerable. I don't see RTX being something I can't live without for at least another year, and by then 60 fps with RTX on won't be a big deal anymore.

    As people who buy their most profitable equipment and justify living on the expensive bleeding edge of this industry, we dictate what they charge, because we are the ones who agree to pay what they ask. Stop buying, and they will react accordingly. This has already been borne out with G-Sync. That isn't to say it isn't a good tech development; there are benefits to its existence. But now that there is a cheaper, mostly equal alternative for MOST people, we shouldn't be rewarding such anti-competitive stances.
     
    Maddness likes this.
  30. Hisshadow

    Hisshadow n00bie

    Messages:
    52
    Joined:
    Mar 11, 2017
    How about AMD first makes a video card that outruns a GeForce?

    Then we can talk about adding anything you want.
     
    Marees likes this.
  31. Marees

    Marees n00bie

    Messages:
    22
    Joined:
    Sep 28, 2018
    DLSS requires preliminary work on NVIDIA's cloud servers.

    Does DirectML require any cloud hardware resources too?
     
  32. Nebell

    Nebell [H]ard|Gawd

    Messages:
    1,518
    Joined:
    Jul 20, 2015
    Trained neural network... deep learning...
    Such big words to sell us stuff.
     
  33. N4CR

    N4CR 2[H]4U

    Messages:
    3,362
    Joined:
    Oct 17, 2011
    AMD was in a partnership with multiple companies to make HBM; they didn't 'give it away'... a bit like the PS3/Cell CPU and IBM/Sony, etc.
    FreeSync is an open standard based on Adaptive-Sync, as you know, and that is great because now people are not locked into one provider of screens, even Nvidia users, for once in the last few years lol.

    You keep posting this same bullshit and you clearly don't know shit about AMD's product lineup. The VII is expected to compete with the 2080, and they have competing solutions the whole way down the product stack, and actually beat Nvidia comfortably in the mid-to-low range with existing products.
    So short of the RMA 2080 Ti and the Titan RMA, which is less than one percent of the market, they compete with 99% of the product stack this year already. Stop lying; I bet you own ngreedia shares, hence the FUD spreading. This isn't plebbit and you will get called out for bullshit.
     
    GSDragoon likes this.
  34. reaper12

    reaper12 2[H]4U

    Messages:
    2,170
    Joined:
    Oct 21, 2006
    The AI supercomputer is busy making Terminators. :)
     
    Maddness, Marees and Mega6 like this.
  35. Cooe

    Cooe n00bie

    Messages:
    7
    Joined:
    Sep 26, 2017
    People are completely forgetting that, just like Turing, the Radeon VII is going to have a bunch of hardware that sits unused in traditional gaming workloads and that it'll be able to dedicate towards anti-aliasing, but with a major difference in flexibility. For Turing that's obviously the Tensor cores; in Vega 20/the Radeon VII's case it's the 200-300 GB/s of memory bandwidth left unused while gaming (memory bandwidth scaling for gaming is going to flatline at around 700-750 GB/s, leaving an entire HBM2 stack's worth sitting idle).

    Nvidia's Tensor cores ONLY work with matrix math, and are thus useless in gaming workloads outside of running pre-trained machine-learning AA algorithms (DLSS); whereas the Radeon VII's huge excess of memory bandwidth can be used either to accelerate similar algorithms via DirectML, or to run various forms of bog-standard AI requiring NO PRE-TRAINING with a minimal perf hit.
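    For reference, the kind of operation a tensor core accelerates is a small matrix multiply-accumulate, D = A*B + C. Here is a hypothetical CPU sketch of that math (on Turing it runs on dedicated units; on hardware without them, e.g. Vega 20, the same math runs on the regular shader ALUs, which is what a DirectML path would use):

    Code:
    #include <cstdio>

    // D = A*B + C for 4x4 matrices, the basic building block tensor cores
    // execute in hardware. Larger neural-network layers are tiled into many
    // of these small multiply-accumulates.
    void Mma4x4(const float A[4][4], const float B[4][4],
                const float C[4][4], float D[4][4])
    {
        for (int i = 0; i < 4; ++i) {
            for (int j = 0; j < 4; ++j) {
                float acc = C[i][j];
                for (int k = 0; k < 4; ++k)
                    acc += A[i][k] * B[k][j];
                D[i][j] = acc;
            }
        }
    }

    int main()
    {
        float A[4][4] = {}, B[4][4] = {}, C[4][4] = {}, D[4][4];
        for (int i = 0; i < 4; ++i) { A[i][i] = 2.0f; B[i][i] = 3.0f; C[i][i] = 1.0f; }
        Mma4x4(A, B, C, D);           // diagonal should come out as 2*3 + 1 = 7
        printf("D[0][0] = %.1f\n", D[0][0]);
        return 0;
    }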
     
  36. dgz

    dgz [H]ardness Supreme

    Messages:
    4,949
    Joined:
    Feb 15, 2010
    Deep learning is obviously better than super sampling. Is there anything tensor cores can't do?
     
    Maddness likes this.
  37. kirbyrj

    kirbyrj Stay [H]ard

    Messages:
    23,607
    Joined:
    Feb 1, 2005
    Cost less
     
  38. MaZa

    MaZa 2[H]4U

    Messages:
    2,651
    Joined:
    Sep 21, 2008
    Do my taxes.