AMD RX 5700 XT beats the GTX 1080 at ray tracing in new Crytek demo

Discussion in 'Video Cards' started by kac77, Nov 13, 2019.

  1. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    "As you might expect, the Nvidia RTX 2080 Ti was at the top, followed by the other top RTX cards; the 2080 Super, 2070 Super, 2060 Super, and 2060. But after that, AMD’s cards start nudging their way into the rankings, beating out capable last-generation GTX 10-series GPUs."

    "The Neon Noir demo will be publicly available by the close of November 13, giving anyone they want a chance to try it out. It’ll be downloadable from Crytek’s Marketplace."

    https://www.digitaltrends.com/computing/amd-rx-5700-xt-beast-gtx-1080-in-crytek-ray-tracing/
     
    Mega6 likes this.
  2. Auer

    Auer Gawd

    Messages:
    1,003
    Joined:
    Nov 2, 2018
    And the 2060 S, 2070 S, 2080 S, etc. are all better than the 5700 XT.

    Probably a more up-to-date comparison: the 2060 non-Super and the 5700 XT are tied, it seems.

    My non-Super (I like saying that, sounds like an apology lol) managed this at 1440p Ultra:

    [attached screenshot: Neon Noir ray tracing benchmark result, 2019.11.13]
     
    IdiotInCharge, Armenius and AceGoober like this.
  3. Ready4Dis

    Ready4Dis Gawd

    Messages:
    589
    Joined:
    Nov 4, 2015
    While true, I think his point was the 5700 XT without hardware R/T (presumably using shaders) is just as fast as (or faster than) the 1080 without hardware R/T... it's impressive that, just using shaders, it's able to keep up with the 2060, which has hardware R/T.
     
    kac77 and SPARTAN VI like this.
  4. Auer

    Auer Gawd

    Messages:
    1,003
    Joined:
    Nov 2, 2018
    Crytek Bench does not take advantage of the RT cores.
     
  5. lightsout

    lightsout [H]ard|Gawd

    Messages:
    1,049
    Joined:
    Mar 15, 2014
    Wait, what? Why is this even news, then? The 5700 XT should always beat a 1080. I assumed the same as the guy above, that we were comparing cards without RT cores, but if so, the XT should be more in line with a 1080 Ti.

    Seems to me this is a win for Nvidia, not AMD.
     
    auntjemima likes this.
  6. MangoSeed

    MangoSeed Gawd

    Messages:
    579
    Joined:
    Oct 15, 2014
    Correct.

    It's a purely shader-based implementation, using a mix of mesh ray tracing where high precision is needed and voxel cone tracing for everything else.

    Very impressive and very smart use of limited shader resources.
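
    For anyone wanting to see the shape of that, here's a minimal C++ sketch of voxel cone tracing (my own illustration, not Crytek's code; the one-sphere sampleVoxels is a toy stand-in for the real prefiltered voxel grid):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    struct Sample { Vec3 color; float opacity; };

    // Toy stand-in for the voxel scene: one fuzzy red sphere of radius 1.
    // In a real engine this is a mipmapped 3D texture, with the mip level
    // picked from the cone's footprint at the sample point.
    static Sample sampleVoxels(Vec3 p, float footprint) {
        float d = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        float occ = std::clamp(1.0f - (d - 1.0f) / (footprint + 0.25f), 0.0f, 1.0f);
        return {{0.8f * occ, 0.2f * occ, 0.2f * occ}, occ};
    }

    // March one cone instead of many exact rays: step along the axis,
    // widen the footprint with distance, blend samples front to back.
    // The opacity doubles as an occlusion term; the color is bounce light.
    static Sample traceCone(Vec3 origin, Vec3 dir, float halfAngle, float maxDist) {
        Sample acc = {{0.0f, 0.0f, 0.0f}, 0.0f};
        float t = 0.05f;                                 // offset avoids self-hits
        while (t < maxDist && acc.opacity < 0.99f) {
            float footprint = t * std::tan(halfAngle);   // cone widens as it travels
            Sample s = sampleVoxels(add(origin, mul(dir, t)), footprint);
            float w = (1.0f - acc.opacity) * s.opacity;  // front-to-back compositing
            acc.color = add(acc.color, mul(s.color, w));
            acc.opacity += w;
            t += std::max(footprint, 0.01f);             // step grows with the cone
        }
        return acc;
    }

    int main() {
        Sample s = traceCone({0, 0, -3}, {0, 0, 1}, 0.3f, 10.0f);  // one wide probe cone
        std::printf("occlusion %.2f, color %.2f %.2f %.2f\n",
                    s.opacity, s.color.x, s.color.y, s.color.z);
    }

    The mesh ray tracing half only kicks in where this gets too blurry (sharp reflections), which is why it's such an economical use of shader time.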
     
  7. Ready4Dis

    Ready4Dis Gawd

    Messages:
    589
    Joined:
    Nov 4, 2015
    I must have missed that point, my apologies. This skews my view some, then :). When they start using hardware R/T, the 2000 series should get a pretty hefty increase... I wonder why they released without support?
     
    Auer likes this.
  8. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,415
    Joined:
    Jul 26, 2005
    I think that's the point of it:

    doing RT without dedicated hardware.

    RTX acceleration is coming at a later time, apparently.
     
    Armenius and Auer like this.
  9. Auer

    Auer Gawd

    Messages:
    1,003
    Joined:
    Nov 2, 2018
    I'm guessing because they are selling it as "Hardware Agnostic" and don't want to make future hardware advantages a big deal atm.
     
  10. HockeyJon

    HockeyJon [H]ard|Gawd

    Messages:
    1,105
    Joined:
    Dec 14, 2014
    It’s news because the narrative from Nvidia is that you should pay their insane prices for the RTX series because AMD can’t do ray tracing. Clearly they can.
     
  11. HockeyJon

    HockeyJon [H]ard|Gawd

    Messages:
    1,105
    Joined:
    Dec 14, 2014
    For the same reason games use Havok instead of PhysX. They already have ray tracing programmed in that ANYONE can use, so why go through the effort of programming something that’s proprietary to Nvidia on top of that?
     
    deruberhanyok and kac77 like this.
  12. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    Not necessarily. It's hardware agnostic because any video card that supports the DirectX 12 spec will be able to do ray tracing. RTX cores are separate. The likelihood of someone going back and accounting for the RTX cores plus the shaders is going to be quite low.

    Will Nvidia probably add development money to make it possible? Sure. But it's only when they do, not an across-the-board thing. So every time a game uses the hardware-agnostic path and doesn't account for the RTX cores, all other cards have a possibility of beating out the Nvidia ones.

    This is why Nvidia is increasing shader performance in the next generation. The RTX cores were always a gamble, because the spec that included ray tracing came out long before the RTX models did.
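
    To put a point on the "hardware agnostic" part: whether the dedicated hardware path even exists is one standard D3D12 feature query. The query below is real API; the engine wiring in the final comment is made up:

    #include <windows.h>
    #include <d3d12.h>

    // Ask the driver whether hardware DXR is exposed. RTX cards report
    // tier 1.0+; a GTX 10-series or current AMD card reports
    // D3D12_RAYTRACING_TIER_NOT_SUPPORTED, and the engine has to fall
    // back to its own shader-based path (what the Crytek demo uses on
    // every card).
    bool hardwareRaytracingAvailable(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &opts, sizeof(opts))))
            return false;
        return opts.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }

    // Hypothetical wiring: if (hardwareRaytracingAvailable(dev)) take the
    // DXR path, else keep the shader-based (hardware-agnostic) one.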
     
    Last edited: Nov 14, 2019
  13. Auer

    Auer Gawd

    Messages:
    1,003
    Joined:
    Nov 2, 2018
    Nvidia is increasing shader performance because that's what they always have done. AMD too.

    RTX or HW-agnostic doesn't matter; if Nv maintains its performance lead as usual, it will still be on top.

    The cool thing here is, of course, that with more software-based, hardware-agnostic RT solutions you don't necessarily have to have a top performer to get the benefits.
    A win for all.
     
    IdiotInCharge and Armenius like this.
  14. Armenius

    Armenius I Drive Myself to the [H]ospital

    Messages:
    19,406
    Joined:
    Jan 28, 2014
    Hate to break it to you, but PhysX is more widely used today than Havok, including in console games. I can understand people not knowing that if they still equate PhysX with PPU- or GPU-accelerated physics simulation, which is primarily used in scientific applications these days.
     
  15. Ready4Dis

    Ready4Dis Gawd

    Messages:
    589
    Joined:
    Nov 4, 2015
    I read a bit deeper: they were already doing cone tracing for ambient occlusion, and they simply added color data in order to do reflections and not just lighting, so it was just extending what was already being done. To support hardware R/T it would have had to be completely rewritten. This is why it was put off to a future project, since it would take a lot more effort than just slightly modifying their existing data structures/rendering path.
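
    Sketching that in terms of the cone tracer up-thread (reusing that hypothetical traceCone; p, the directions, and the angles are made-up inputs):

    // AO pass (what they already had): only the accumulated opacity is kept.
    float ao = 1.0f - traceCone(p, hemisphereDir, wideHalfAngle, maxDist).opacity;

    // Reflections (the addition): same march, but the voxels now carry color,
    // and one tighter cone along the mirror direction keeps the blended color.
    Vec3 reflection = traceCone(p, reflectDir, glossHalfAngle, maxDist).color;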
     
    deruberhanyok likes this.
  16. Ready4Dis

    Ready4Dis Gawd

    Messages:
    589
    Joined:
    Nov 4, 2015
    Yes, but to be fair, to most people PhysX started out as an Nvidia marketing term after they bought out another company for it... after much failure to gain traction due to self-imposed limitations (on both their hardware and competitors'), it has since been turned into a software solution (I'm not sure if it uses shaders at this point?) and continues on in this form since it was open-sourced in 2018. So when people think PhysX, they think of the hardware version that Nvidia tried to push and that ultimately failed. If they had opened it up to other hardware vendors, we'd probably all have hardware physics instead of software and/or shaders, but that isn't how Nvidia rolls; in the end, the only way to save it from complete oblivion was to open source it and stop wasting money on hardware.
     
    blackmomba and kac77 like this.
  17. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    As long as I can remember, Nvidia has always specialized in polygon throughput, not shader performance.

    AMD has always had higher shader performance but way lower polygon performance.

    It's been that way for a very long time.

    The reason shader performance is now the focus for Nvidia is because it's shader performance that counts for the RT API.

    You can't fake that; either you have it or you don't. The fact that everything from the 2080 Ti down to the 2070 Super is twice the die size of a 5700 XT speaks volumes. Even on 7nm, the 2080 on down are still going to be a lot larger.

    The RTX cores were a bad idea because full-scene RT, which is where everyone wants to get to, will require absolutely massive die sizes that can't even be made today. The RT API and DirectX 12 recognize this and provide a method to implement the technology at a lesser performance hit.

    It won't look as good. But in reality it's the only way forward until RTX-type architectures are actually manufacturable for full-scene performance.

    This is why many people called the RTX architecture a gimmick. Real-time ray tracing individual objects, such as a shadow or a house or one object in a full scene, is one thing. Real-time ray tracing a full scene frame after frame after frame (correction: a per-pixel process) is quite another, and the RTX architecture does not do the latter. Not even close. Nor will Nvidia ever be able to do real-time ray tracing utilizing specific cores with the manufacturing processes we have now.

    This is a factual statement, not just my own opinion. Real-time ray tracing has been on the radar for decades, and there is no technology currently available that does it fully in all aspects without serious framerate hits. Pixar and all the other studios that produce 3D-generated media utilize massive render farms to do their ray tracing. You are not going to get that out of one card in one PC.
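
    To put rough numbers on that (my back-of-the-envelope, nothing more):

    3840 x 2160 ≈ 8.3M pixels; at 60 fps that is ≈ 0.5 billion primary rays per second, with only one sample per pixel and zero bounces. Film-style path tracing uses hundreds to thousands of samples per pixel with multiple bounces, i.e. trillions of rays per second, while Nvidia's own marketing figure for a 2080 Ti is on the order of 10 billion rays per second.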

    It really is an injustice to the tech community if journalists skip over this fact.
     
    Last edited: Nov 14, 2019
    dvsman and dave343 like this.
  18. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    What. References.

    No.

    Than a 7nm AMD GPU without RT? Probably. Also, 1 + 1 is still two.

    Well yeah, everyone does want to get there (and more), and no, it can't be done today- which is why they're not doing it today outside of tech demos like Quake II RTX. That doesn't make hardware RT a bad choice, rather much the complete opposite. For reference, that's what AMD is doing too. You know, when they get around to it, two years after Nvidia, which is more or less right on schedule for them.

    Nope. This method is simulating the effect of RT in what is the most half-assed way possible. That's not a bad thing in and of itself, but it's also not even close to what can be done in hardware.

    False dichotomy. The in-between, hybrid rasterization plus ray tracing, is the best balance of the hardware resources available, but you're claiming it's not an option. It's the only option.

    Most did it because of the performance hit. They're wrong, as the feature does work very well, but they're not wrong about the performance hit.

    Well, actually with Quake II RTX, it does. More performance is needed for more detailed games and that will come in time.

    Never ever ever!

    Care to lend your time machine so we can see what processes look like in ten years to verify your claim?

    So if we can't do cinematic ray tracing on consumer hardware, we should just stick to shitty software rendering until we can?
     
    Tup3x, amenx, Armenius and 2 others like this.
  19. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    " Cinematic Ray tracing" is not a technical term in any way stretch of the imagination. WTH... Are you serious?!!

    What makes computer generated Cinema look as good as it does is because Ray tracing is performed frame after frame. The process is quite long. To differentiate it from what gamers is expect is crazy pants.

    With regards to the lower fidelity that is an assumption on my part. But I'm pretty sure there will be a difference. Maybe not massive but a difference will be noted.

    What makes real-time Ray tracing hardware that is dedicated to it has to do with manufacturing process which I already said. There is no one that does real-time Ray tracing that does not understand this.

    You are saying that it is wrong to believe that dedicated hardware is a mistake. Okay let's go with that. What hardware exists today that can do full scene real-time Ray tracing at 60 frames per second at 4K?
     
  20. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    No, it isn't a technical term, and no, I didn't claim that it was. It is a generalization of your Pixar example.

    ...no.

    Also no. Gamers don't expect Pixar-level graphics from their consumer-level device.
     
    Tup3x, amenx, Armenius and 2 others like this.
  21. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    Great. Then tell me what they expect.

    Just to let you know, what gamers require is more than what Pixar does.

    And after reviewing what I said earlier, I'll restate: not frame after frame; you're talking about a per-pixel process.
     
    Last edited: Nov 14, 2019
  22. whateverer

    whateverer Gawd

    Messages:
    994
    Joined:
    Nov 2, 2016

    Right, this is the whole reason Crytek made headlines with their demo in the first place. It uses even lower-accuracy tricks than RTX does, and it's barely playable with a basic set of lighting effects.

    But this is what Crytek requires to stay in the press while working on engine changes to catch up with the rest of the RTX community.

    The reality is that RTX is the future, but this buys them time to catch up with the rest of the class. But the worry is: will there be any students left to be taught the Crytek Way after they've already taken this long with no sign of RTX code in the engine?
     
    Last edited: Nov 14, 2019
  23. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    You are aware this is counter to what he said earlier, right? You might have missed it, but read carefully.
     
  24. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    Better.

    Like?

    Do pixels not make up frames?

    whateverer appears to have a solid understanding.
     
    amenx, GoldenTiger and whateverer like this.
  25. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    "Better" is not an answer, but you knew that. Right now you're being disingenuous.

    If all you're here to do is offer a rebuttal, that's fine, but I can't respond if you are going to be disingenuous about the topic.

    With regards to the frame-after-frame mistake, I already apologized. I said it that way in haste, for easier understanding, which was dumb, but as soon as I realized it I did concede and correct it.

    But if you want a conversation on why an API-agnostic approach is better, I can speak to that.
     
  26. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    Not at all disingenuous, it's the only answer that fits such a broad question.

    What is API-agnostic in this case? RT hardware is already supported in DirectX, Vulkan, and OpenGL (at least). RTX is just Nvidia's branding of such, which AMD and Intel will presumably match with their own when they get around to shipping parts with RT hardware.
     
    amenx, Armenius, GoldenTiger and 2 others like this.
  27. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    Incorrect. When I asked for specifics, you answered broadly.
    RT is supported. Nvidia hardware-specific functions are not. You know this. It's closed source. Hell, that's in the article itself.

    You can be disingenuous all you like, but answering questions with "better", while comical (and yes, I laughed), is not appropriate for this discussion.
     
    blackmomba likes this.
  28. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    Better than the previous generation.

    So don't use them? DXR and Vulkan ray tracing are available without using 'hardware-specific functions'.


    Your claim is that 'gamers want more than what Pixar is doing'. You haven't specified what that 'more' is.
     
    Armenius and GoldenTiger like this.
  29. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    Um, previous generation? The whole point is discussing a feature we didn't have before.


    That's literally what the hardware-agnostic approach is doing... hello.


    That's no reason to act like an idiot. The difference between what Pixar is doing and what gamers require is that the animations aren't pre-rendered. They are done in real time, in accordance with the viewer and all objects within the 3D scene. This is way different from a movie.
     
    l337*g0at likes this.
  30. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    Then say that.
     
  31. kac77

    kac77 2[H]4U

    Messages:
    2,291
    Joined:
    Dec 13, 2008
    Didn't really think I needed to with you.
     
  32. HockeyJon

    HockeyJon [H]ard|Gawd

    Messages:
    1,105
    Joined:
    Dec 14, 2014
    I’m not saying it’s not in use, I’m saying that Havok provided a non-proprietary alternative similar to what Crytek is doing for ray tracing.
     
  33. GoldenTiger

    GoldenTiger [H]ard as it Gets

    Messages:
    18,954
    Joined:
    Dec 2, 2004
    RTX isn't proprietary. It is just DXR with a fancy marketing name. DXR requires RT hardware to run properly.
     
    Armenius and IdiotInCharge like this.
  34. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    7,377
    Joined:
    Feb 22, 2012
    RTX is an umbrella of nVidia garbage, I mean technology and programming toolkits.

    While RTX RT runs on DXR most of the time, it would still take a few tweaks to run on AMD if the RTX toolkits are used.
     
  35. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,408
    Joined:
    Jun 13, 2003
    This is almost certainly true, and it's one of the reasons that AMD should continue to be criticized for coming in two years late as usual.

    One of Nvidia's fairly consistent first-mover advantages is in software support. Like ATi with the 9700 Pro, with DX10, DX11, DX12, and now ray tracing, Nvidia has led not just with hardware but also with drivers to support the new APIs as well as toolkits to speed developer uptake.

    In the case of RT, where existing engines need significant rework to implement a second hybrid render path with rasterization and ray tracing together, there can be little doubt that the support Nvidia has provided has significantly eased the transition for many developers. That's even more important when one considers that the transition to RT also includes the transition to Vulkan or DX12 from DX11. Many development houses (I'm looking at you, DICE!) still struggle with DX12.


    As has been the case for AMD graphics since they bought ATi, AMD has screwed themselves. Hopefully they've taken cues from how Nvidia has worked with developers and how developers have implemented RT, so that their eventual RT-capable hardware release will be less of a shitshow than, say, the Ryzen release was, or worse, their transition to DX10.
     
    sabrewolf732, Armenius and Dayaks like this.
  36. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,692
    Joined:
    Apr 22, 2006

    DF analysis. Demo is DX11 and wouldn't have access to RT HW even if that was desired.

    Across the board, Nvidia architectures are better suited for the Crytek demo without using RT HW. Using cards of comparable eras, of course.

    This is just Nvidia shader cores vs. AMD shader cores.

    The 2070 Super handily outdoes the 5700 XT. This widens the typical delta between these cards.
    The GTX 1080 likewise outdoes the Vega 64. Typically these cards are about tied, but the GTX 1080 pulls ahead here.

     
  37. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,415
    Joined:
    Jul 26, 2005

    I'm somewhat surprised by this, as AMD supposedly has more compute/shader power than Nvidia.
     
  38. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,692
    Joined:
    Apr 22, 2006
    Yes, especially with Pascal vs. Vega. Things equalized a bit more for Turing vs. Navi.
     
  39. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,415
    Joined:
    Jul 26, 2005
    I can only assume the tables could turn once DX12/Vulkan is supported, but then turn once again once RTX is supported.
     
  40. MangoSeed

    MangoSeed Gawd

    Messages:
    579
    Joined:
    Oct 15, 2014
    In past generations AMD had more raw shader power, but it wasn't always easy to use its full potential.

    With Navi they've followed Nvidia and cut back the raw power in exchange for more flexibility and efficiency.

    The 5700 XT and 2070 Super are perfectly matched in terms of specs, but it seems in this particular demo Turing is pulling ahead. Could be due to the separate integer ALUs.
     
    Armenius likes this.