AMD RX 5700 XT card is better than a GTX 1080 at ray tracing in new Crytek demo

kac77

"As you might expect, the Nvidia RTX 2080 Ti was at the top, followed by the other top RTX cards; the 2080 Super, 2070 Super, 2060 Super, and 2060. But after that, AMD’s cards start nudging their way into the rankings, beating out capable last-generation GTX 10-series GPUs."

"The Neon Noir demo will be publicly available by the close of November 13, giving anyone they want a chance to try it out. It’ll be downloadable from Crytek’s Marketplace."

https://www.digitaltrends.com/computing/amd-rx-5700-xt-beast-gtx-1080-in-crytek-ray-tracing/
 
And the 2060 S, 2070 S, 2080 S, etc. are all better than the 5700 XT.

Probably a more up-to-date comparison: the 2060 non-Super and the 5700 XT are tied, it seems.

My non-Super (I like saying that; it sounds like an apology, lol) managed this at 1440p Ultra:

[Screenshot: Neon Noir ray tracing benchmark result, 13 Nov 2019]
 
While true, I think his point was that the 5700 XT without hardware R/T (presumably using shaders) is just as fast as (or faster than) the 1080 without hardware R/T... it's impressive that, just using shaders, it's able to keep up with the 2060, which has hardware R/T.
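Purely as an illustration of why that is even possible (my own sketch, nothing from Crytek or the demo): the core of tracing a ray is ordinary arithmetic that any shader or compute unit can execute; dedicated RT cores mainly accelerate BVH traversal and triangle intersection. A minimal CPU-side ray/sphere intersection test, for example:

```cpp
// Illustrative only: ray tracing at its core is just math; RT cores are an
// accelerator for the traversal/intersection part, not a requirement.
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the distance along the ray to the nearest hit, if any.
std::optional<float> raySphereHit(const Vec3& origin, const Vec3& dir,
                                  const Vec3& center, float radius) {
    // Solve |origin + t*dir - center|^2 = radius^2 for t (a quadratic in t).
    Vec3 oc{origin.x - center.x, origin.y - center.y, origin.z - center.z};
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;           // ray misses the sphere
    float t = (-b - std::sqrt(disc)) / (2.0f * a);  // nearest root
    if (t < 0.0f) return std::nullopt;              // hit is behind the origin
    return t;
}

int main() {
    auto hit = raySphereHit({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f);
    std::printf("hit distance: %f\n", hit ? *hit : -1.0f);  // prints 4.0
}
```

A GPU runs millions of tests like this per frame on its shader cores, which is roughly the kind of work the Crytek demo appears to be leaning on.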
 
Crytek Bench does not take advantage of the RT cores.
Wait, what? Why is this even news then? The 5700 XT should always beat a 1080. I assumed, the same as the guy above, that we were comparing cards without RT cores, but if we are, then the XT should be more in line with a 1080 Ti.

Seems to me that this is a win for Nvidia, not AMD.
 
Crytek Bench does not take advantage of the RT cores.
I must have missed that point, my apologies. This skews my view some then :). When they start using hardware R/T the 2000 series should get a pretty hefty increase.. wonder why they released without support?
 
I must have missed that point, my apologies. This skews my view some then :). When they start using hardware R/T the 2000 series should get a pretty hefty increase.. wonder why they released without support?
I think that's the point of it:

doing RT without dedicated hardware.

RTX acceleration is coming at a later time, apparently.
 
I must have missed that point, my apologies. This skews my view some then :). When they start using hardware R/T the 2000 series should get a pretty hefty increase.. wonder why they released without support?

I'm guessing because they are selling it as "Hardware Agnostic" and don't want to make future hardware advantages a big deal atm.
 
Wait, what? Why is this even news then? The 5700 XT should always beat a 1080. I assumed, the same as the guy above, that we were comparing cards without RT cores, but if we are, then the XT should be more in line with a 1080 Ti.

Seems to me that this is a win for Nvidia, not AMD.

It’s news because the narrative from Nvidia is that you should pay their insane prices for the RTX series because AMD can’t do ray tracing. Clearly they can.
 
I must have missed that point, my apologies. This skews my view some then :). When they start using hardware R/T the 2000 series should get a pretty hefty increase.. wonder why they released without support?

For the same reason games use Havok instead of PhysX. They already have ray tracing programmed in that ANYONE can use, so why go through the effort of programming something that’s proprietary to Nvidia on top of that?
 
I must have missed that point, my apologies. This skews my view some then :). When they start using hardware R/T the 2000 series should get a pretty hefty increase.. wonder why they released without support?
Not necessarily. It's hardware agnostic because any video card that supports the DirectX 12 spec will be able to do ray tracing. The RT cores are separate. The likelihood of someone going back and accounting for the RT cores plus the shaders is going to be quite low.

Will Nvidia probably add development money to make it possible? Sure. But it's only when they do, not an across-the-board thing. So every time a game uses the hardware-agnostic path and doesn't account for the RT cores, all other cards have a possibility of beating out the Nvidia ones.

This is why Nvidia is increasing shader performance in the next generation. The RT cores were always a gamble because the spec that included ray tracing came out long before the RTX models did.
 
Nvidia is increasing shader performance because that's what they have always done. AMD too.

RTX or HW-agnostic doesn't matter; if Nv maintains its performance lead as usual, it will still be on top.

The cool thing here is, of course, that with more software-based, hardware-agnostic RT solutions you don't necessarily have to have a top performer to get the benefits.
A win for all.
 
For the same reason games use Havok instead of PhysX. They already have ray tracing programmed in that ANYONE can use, so why go through the effort of programming something that’s proprietary to Nvidia on top of that?
Hate to break it to you, but PhysX is more widely used today than Havok, including in console games. I can understand people not knowing that if they still equate PhysX only with PPU- or GPU-accelerated physics simulation, which is primarily used in scientific applications these days.
 
I'm guessing because they are selling it as "Hardware Agnostic" and don't want to make future hardware advantages a big deal atm.
I read a bit deeper: they were already doing cone tracing for ambient occlusion, and they simply added color data in order to do reflections and not just lighting, so it was just extending what was already being done. To support hardware R/T it would have had to be completely rewritten. This is why it was put off to a future project, since it would take a lot more effort than slightly modifying their existing data structures/rendering path.
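For what it's worth, here's a very rough sketch of the general voxel cone tracing idea (my own illustration, not Crytek's code; the sampleVolume callback stands in for however the engine filters its voxel data): march along a cone through a prefiltered voxel volume, widening the sample footprint with distance, and accumulate opacity (occlusion) and color (bounce light / reflections) front to back.

```cpp
// Rough sketch of cone tracing through a prefiltered voxel volume.
// 'sampleVolume(position, radius)' is a placeholder: it should return a voxel
// sample whose filter width matches the cone radius (e.g. by picking a mip).
#include <algorithm>
#include <functional>

struct Vec3 { float x, y, z; };
struct Sample { Vec3 rgb; float alpha; };  // prefiltered color + opacity

Sample traceCone(Vec3 origin, Vec3 dir, float halfAngleTan, float maxDist,
                 const std::function<Sample(Vec3, float)>& sampleVolume) {
    Vec3 rgb{0.0f, 0.0f, 0.0f};
    float alpha = 0.0f;
    float t = 0.05f;                               // offset to avoid self-hits
    while (t < maxDist && alpha < 0.99f) {
        float radius = std::max(0.01f, halfAngleTan * t);
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        Sample s = sampleVolume(p, radius);
        float w = (1.0f - alpha) * s.alpha;        // front-to-back compositing
        rgb.x += w * s.rgb.x;
        rgb.y += w * s.rgb.y;
        rgb.z += w * s.rgb.z;
        alpha += w;
        t += radius;                               // step grows with the cone
    }
    return {rgb, alpha};  // alpha ~ occlusion (AO), rgb ~ gathered light
}
```

Ignore the rgb accumulation and you have an AO term; keep it and aim narrower cones along the reflection direction and you get rough reflections, which matches the "extend what was already there" description above.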
 
Hate to break it to you, but PhysX is more widely used today than Havok, including in console games. I can understand people not knowing that if they still equate PhysX only with PPU- or GPU-accelerated physics simulation, which is primarily used in scientific applications these days.
Yes, but to be fair, to most people PhysX started out as an nVidia marketing term after they bought out another company for it. After much failure to gain traction due to self-imposed limitations (on both their hardware and competitors'), it has since been turned into a software solution (I'm not sure if it uses shaders at this point?) and continues on in this form since it was open-sourced in 2018. So when people think PhysX, they think of the hardware version that nVidia tried to push and ultimately failed with. If they had opened it up to other hardware vendors, we'd probably all have hardware physics instead of software and/or shaders, but that isn't how nVidia rolls until it's too late, and their only way to save it from complete oblivion was to open-source it and stop wasting money on hardware.
 
Nvidia is increasing shader performance because that's what they have always done. AMD too.

RTX or HW-agnostic doesn't matter; if Nv maintains its performance lead as usual, it will still be on top.

The cool thing here is, of course, that with more software-based, hardware-agnostic RT solutions you don't necessarily have to have a top performer to get the benefits.
A win for all.
As long as I can remember, nVidia has always specialized in polygon throughput, not shader performance.

AMD has always had higher shader performance but way lower polygon performance.

It's been that way for a very long time.

The reason shaders are now the focus for nVidia is that it's shader performance that counts for the RT API.

You can't fake that; either you have it or you don't. The fact that everything from the 2080 Ti down to the 2070 Super is roughly twice the size of a 5700 XT speaks volumes. Even on 7nm, the 2080 on down are still going to be a lot larger.

The RT cores were a bad idea because full-scene RT, which is where everyone wants to get to, will require absolutely massive die sizes that can't even be made today. The RT API and DirectX 12 recognize this and come up with a method to implement the technology with a lesser performance hit.

It won't look as good. But in reality it's the only way forward until RTX-type architectures are actually manufacturable for full-scene performance.

This is why many people called the RTX architecture a gimmick. Real-time ray tracing individual objects such as a shadow, a house, or one object in a full scene is one thing. Real-time ray tracing a full scene frame after frame after frame (correction: a per-pixel process) is quite another, and the RTX architecture does not do the latter. Not even close. Nor will Nvidia ever be able to do real-time ray tracing utilizing specific cores with the manufacturing processes we have now.

This is a factual statement, not just my own opinion. Real-time ray tracing has been on the radar for decades and there is no technology currently available that does it fully in all aspects without serious framerate hits. Pixar and all the other studios that produce 3D-generated media utilize massive render farms to do their ray tracing. You are not going to get that out of one card in one PC.
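Some back-of-the-envelope arithmetic (my own numbers, assuming a single primary ray per pixel) shows the scale of the problem for full-scene ray tracing at 4K/60:

```latex
3840 \times 2160 \ \text{pixels} \times 60 \ \text{fps} \approx 4.98\times10^{8}\ \text{primary rays per second}
\text{with } N \text{ extra rays per pixel (shadows, reflections, bounces): } \approx 4.98\times10^{8}\,(1+N)\ \text{rays per second}
```

Offline film renderers throw orders of magnitude more rays than that at each frame and still take minutes to hours per frame on a render farm; that is the gap being pointed at here.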

It really is an injustice to the tech community if journalists skip over this fact.
 
As long as I can remember, nVidia has always specialized in polygon throughput, not shader performance.

AMD has always had higher shader performance but way lower polygon performance.

It's been that way for a very long time.

What. References.

The reason shaders are now the focus for nVidia is that it's shader performance that counts for the RT API.

No.

You can't fake that; either you have it or you don't. The fact that everything from the 2080 Ti down to the 2070 Super is roughly twice the size of a 5700 XT speaks volumes. Even on 7nm, the 2080 on down are still going to be a lot larger.

Than a 7nm AMD GPU without RT? Probably. Also, 1 + 1 is still two.

The RT cores were a bad idea because full-scene RT, which is where everyone wants to get to, will require absolutely massive die sizes that can't even be made today.

Well yeah, everyone does want to get there (and more), and no, it can't be done today, which is why they're not doing it today outside of tech demos like Quake II RTX. That doesn't make hardware RT a bad choice; quite the opposite. For reference, that's what AMD is doing too. You know, when they get around to it, two years after Nvidia, which is more or less right on schedule for them.

The RT API and DirectX 12 recognize this and come up with a method to implement the technology with a lesser performance hit.

Nope. This method is simulating the effect of RT in what is the most half-assed way possible. That's not a bad thing in and of itself, but it's also not even close to what can be done in hardware.

It won't look as good. But in reality it's the only way forward until RTX-type architectures are actually manufacturable for full-scene performance.

False dichotomy. The 'in between' of hybrid rasterization and ray tracing is the best balance of hardware resources available, but you're claiming that it's not an option. It's the only option.

This is why many people called the RTX architecture a gimmick.

Most did it because of the performance hit. They're wrong, as the feature does work very well, but they're not wrong about the performance hit.

Real-time ray tracing individual objects such as a shadow, a house, or one object in a full scene is one thing. Real-time ray tracing a full scene frame after frame after frame is quite another, and the RTX architecture does not do the latter. Not even close.

Well, actually with Quake II RTX, it does. More performance is needed for more detailed games and that will come in time.

Nor will Nvidia ever be able to do real-time ray tracing utilizing specific cores with the manufacturing processes we have now.

Never ever ever!

Care to lend your time machine so we can see what processes look like in ten years to verify your claim?

This is a factual statement, not just my own opinion. Real-time ray tracing has been on the radar for decades and there is no technology currently available that does it fully in all aspects without serious framerate hits. Pixar and all the other studios that produce 3D-generated media utilize massive render farms to do their ray tracing. You are not going to get that out of one card in one PC.

So if we can't do cinematic ray tracing on consumer hardware, we should just stick to shitty software rendering until we can?
 
" Cinematic Ray tracing" is not a technical term in any way stretch of the imagination. WTH... Are you serious?!!

What makes computer-generated cinema look as good as it does is that ray tracing is performed frame after frame. The process is quite long. To differentiate it from what gamers expect is crazy pants.

With regard to the lower fidelity, that is an assumption on my part. But I'm pretty sure there will be a difference. Maybe not massive, but a difference will be noted.

What it takes to make hardware dedicated to real-time ray tracing comes down to the manufacturing process, which I already said. There is no one who does real-time ray tracing who does not understand this.

You are saying that it is wrong to believe that dedicated hardware is a mistake. Okay, let's go with that. What hardware exists today that can do full-scene real-time ray tracing at 60 frames per second at 4K?
 
" Cinematic Ray tracing" is not a technical term in any way stretch of the imagination. WTH... Are you serious?!!

No, it isn't a technical term, and no, I didn't claim that it was. It is a generalization of your Pixar example.

What makes computer-generated cinema look as good as it does is that ray tracing is performed frame after frame.

...no.

To differentiate it from what gamers expect is crazy pants.

Also no. Gamers don't expect Pixar-level graphics from their consumer-level device.
 
Great. Then tell me what they expect.

Just to let you know, what gamers require is more than what Pixar does.

And after reviewing what I said earlier, I'll restate: not frame after frame, you're talking about a per-pixel process.
 
Also no. Gamers don't expect Pixar-level graphics from their consumer-level device.


Right, this is the whole reason Crytek made headlines with their demo in the first place. It uses even lower accuracy tricks than RTX does, and it's barely playable with a basic set of lighting effects.

But this is what Crytek requires to stay in the press, while working on engine changes to catch up with the rest of the RTX community.

The reality is that RTX is the future, but this buys them time to catch up with the rest of the class. The worry, though, is: will there be any students left to be taught the Crytek Way after they've already taken this long with no sign of RTX code in the engine?
 
You are aware this is counter to what he said earlier right? You might have missed it but read carefully.
 
Great. Then tell me what they expect.

Better.

Just to let you know, what gamers require is more than what Pixar does.

Like?

And after reviewing what I said earlier, I'll restate: not frame after frame, you're talking about a per-pixel process.

Do pixels not make up frames?

You are aware this is counter to what he said earlier right? You might have missed it but read carefully.

whateverer appears to have a solid understanding.
 
Better is not an answer but you knew that. Right now you're being disingenuous.

If all you're here to do is offer a rebuttal, that's fine, but I can't respond if you are going to be disingenuous about the topic.

As for the frame-after-frame mistake, I already apologized for that. I said it that way in haste to make it easier to understand, which was dumb, but as soon as I realized it I conceded and corrected it.

But if you want a conversation on why an API agnostic approach is better I can speak to that.
 
Better is not an answer but you knew that. Right now you're being disingenuous.

Not at all disingenuous, it's the only answer that fits such a broad question.

if you want a conversation on why an API agnostic approach is better I can speak to that.

What is API agnostic in this case? RT hardware is already supported in DirectX, Vulkan, and OpenGL (at least). RTX is just Nvidia's branding of such, which AMD and Intel might respectively develop when they get around to shipping parts with RT hardware.
 
Not at all disingenuous, it's the only answer that fits such a broad question.
Incorrect. When I asked for specifics, you answered broadly.
What is API agnostic in this case? RT hardware is already supported in DirectX, Vulkan, and OpenGL (at least). RTX is just Nvidia's branding of such, which AMD and Intel might respectively develop when they get around to shipping parts with RT hardware.
RT is supported. Nvidia hardware-specific functions are not. You know this. It's closed source. Hell, that's in the article itself.

You can be disingenuous all you like, but answering questions with "better", while comical (and yes, I laughed), is not appropriate for this discussion.
 
Incorrect. When I asked for specifics, you answered broadly.

Better than the previous generation.

RT is supported. Nvidia hardware-specific functions are not. You know this. It's closed source. Hell, that's in the article itself.

So don't use them? DXR and Vulkan raytracing are available without using 'hardware specific functions'.
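To make the vendor-neutral path concrete, here's a minimal sketch (my own example, Windows-only, assuming the D3D12 headers and d3d12.lib) of how an application asks the driver whether DXR is exposed at all, falling back to a compute/shader path otherwise:

```cpp
// Query DXR support through the public D3D12 API; nothing here is
// vendor-specific. Requires a Windows 10 SDK recent enough for OPTIONS5.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("DXR supported: use the DXR render path.\n");
    } else {
        std::printf("No DXR: fall back to a compute/shader ray tracing path.\n");
    }
    device->Release();
    return 0;
}
```

Any vendor whose driver reports D3D12_RAYTRACING_TIER_1_0 or better takes the same code path.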


You can be disingenuous all you like, but answering questions with "better", while comical (and yes, I laughed), is not appropriate for this discussion.

Your claim is that 'gamers want more than what Pixar is doing'. You haven't specified more.
 
Better than the previous generation.
Um previous generation? The whole point is discussing a feature we didn't have before.


So don't use them? DXR and Vulkan raytracing are available without using 'hardware specific functions'.
That's literally what the hardware-agnostic approach is doing... hello.


Your claim is that 'gamers want more than what Pixar is doing'. You haven't specified more.

That's no reason to act like an idiot. The difference between what Pixar is doing and what gamers require is that the animations aren't pre-rendered. They are done in real time relative to the viewer and all objects within a 3D scene. This is way different from a movie.
 
That's no reason to act like an idiot. The difference between what Pixar is doing and what gamers require is that the animations aren't pre-rendered. They are done in real time relative to the viewer and all objects within a 3D scene.

Then say that.
 
Hate to break it to you, but PhysX is more widely used today than Havok, including in console games. I can understand people not knowing that if they still equate PhysX only with PPU- or GPU-accelerated physics simulation, which is primarily used in scientific applications these days.

I’m not saying it’s not in use, I’m saying that Havok provided a non-proprietary alternative similar to what Crytek is doing for ray tracing.
 
RTX is an umbrella of nVidia garbage, I mean technology and programming toolkits.

Wikipedia said:
In addition to ray tracing, RTX includes artificial intelligence integration, common asset formats, rasterization (CUDA) support, and simulation APIs. The components of RTX are:[7]

  • AI-accelerated features (NGX)
  • Asset formats (USD and MDL)
  • Rasterization including advanced shaders
  • Raytracing via OptiX, Microsoft DXR and Vulkan
  • Simulation tools:

While RTX RT runs on DXR most of the time, it would still take a few tweaks to run on AMD if the RTX toolkits are used.
 
While RTX RT runs on DXR most of the time, it would still take a few tweaks to run on AMD if the RTX toolkits are used.

This is almost certainly true, and it's one of the reasons that AMD should continue to be criticized for coming in two years late as usual.

One of Nvidia's fairly consistent first-mover advantages is software support. As ATi did with the 9700 Pro, Nvidia has led with DX10, DX11, DX12, and now ray tracing: not just with hardware but also with drivers to support the new APIs, as well as toolkits to speed developer uptake.

In the case of RT where existing engines need significant rework in order to implement a second hybrid render path with rasterization and ray tracing together, there can be little doubt that the support Nvidia has provided has significantly eased the transition for many developers, and that's even more important when one considers that the transition to RT also includes the transition to Vulkan or DX12 from DX11. Many development houses (I'm looking at you DICE!) still struggle with DX12.


As has been the case for AMD graphics since they bought ATi, AMD has screwed themselves. Hopefully they've taken cues from how Nvidia has worked with developers and how developers have implemented RT, so that their eventual hardware release supporting RT will be less of a shitshow than, say, the Ryzen release was, or worse, when they made the transition to DX10.
 
"As you might expect, the Nvidia RTX 2080 Ti was at the top, followed by the other top RTX cards; the 2080 Super, 2070 Super, 2060 Super, and 2060. But after that, AMD’s cards start nudging their way into the rankings, beating out capable last-generation GTX 10-series GPUs."

"The Neon Noir demo will be publicly available by the close of November 13, giving anyone they want a chance to try it out. It’ll be downloadable from Crytek’s Marketplace."

https://www.digitaltrends.com/computing/amd-rx-5700-xt-beast-gtx-1080-in-crytek-ray-tracing/


DF analysis: the demo is DX11, and wouldn't have access to RT hardware even if that was desired.

Across the board NVidia architectures are better suited for the Crytek demo, without using RT HW. Using cards of comparable eras of course.

This is just NVidia Shader cores vs AMD Shader cores.

2070 Super handily outdoes 5700 XT. This widens the typical delta between these cards.
The GTX 1080 likewise outdoes the Vega 64. Typically these cards are about tied, but the GTX 1080 pulls ahead here.

 
I'm somewhat surprised by this, as AMD supposedly has more compute/shader power than Nvidia.
 
I can only assume the tables could turn once DX12/Vulkan is supported, but turn once again once RTX is supported.
 
I'm somewhat surprised by this, as AMD supposedly has more compute/shader power than Nvidia.

In past generations AMD had more raw shader power but it wasn’t always easy to use its full potential.

With Navi they’ve followed Nvidia and cut back the raw power in exchange for more flexibility and efficiency.

The 5700 XT and 2070 Super are perfectly matched in terms of specs, but it seems in this particular demo Turing is pulling ahead. Could be due to Turing's separate integer ALUs.
 