AMD RX 5700 XT card is better than a GTX 1080 at ray tracing in new Crytek demo

I can only assume that the tables could turn once DX12/Vulkan is supported, and then turn once again once RTX is supported.

Why would you assume that? I’m not aware of any specific advantage Navi has in DX12/Vulkan. If anything, it’s Nvidia that has a driver advantage in DX11, so moving to DX12 should level the playing field.
 
With Navi they’ve followed Nvidia and cut back the raw power in exchange for more flexibility and efficiency.

To add, if we look at Polaris vs. Vega, AMD has followed Nvidia in building compute-light consumer dies and compute-heavy dies for FirePros and so on.

The challenge is that AMD's consumer dies tend to top out at the performance level of Nvidia's *60 parts, while their compute dies tend to top out around Nvidia's enthusiast *80 parts in terms of gaming. And with respect to the GTX 1000 generation, Nvidia built the 1080 Ti as a full-size compute-light die too.


I maintain that if AMD had been willing to scale up Polaris, and now Navi, they'd be far more competitive in top-end gaming than they have been with their compute-heavy dies like Vega.
 
What's the lifespan of DX11? Will we actually one day see a DX11 RT AAA title?

Or is RT really just a Vulkan/DX12 thing going into the future...
 
To add, if we look at Polaris vs. Vega, AMD has followed Nvidia in building compute-light consumer dies and compute-heavy dies for FirePros and so on.

The challenge is that AMD's consumer dies tend to top out at the performance level of Nvidia's *60 parts, while their compute dies tend to top out around Nvidia's enthusiast *80 parts in terms of gaming. And with respect to the GTX 1000 generation, Nvidia built the 1080 Ti as a full-size compute-light die too.


I maintain that if AMD had been willing to scale up Polaris, and now Navi, they'd be far more competitive in top-end gaming than they have been with their compute-heavy dies like Vega.

At the cost of a major power increase. I’m beginning to think that is why we haven’t seen a high end first gen Navi card. And with the latest rumours saying Navi 2 may launch in mid 2020, we may not see one at all until then.
 
What's the lifespan of DX11? Will we actually one day see a DX11 RT AAA title?

Or is RT really just a Vulkan/DX12 thing going into the future...

I suppose there could be a DX11 software RT title using Crytek engine.

But I seriously doubt Microsoft will backport DXR to DX11. Why would they bother? If you want HW RT support, it will be DX12 or Vulkan.
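
For what it's worth, DXR support is literally exposed as a DX12 feature tier. A minimal C++ sketch, assuming a valid D3D12 device already exists (my own illustration, not from any post here):

```cpp
// Minimal sketch: hardware ray tracing (DXR) capability is queried straight
// from the D3D12 device; there is no DX11 path for this.
#include <windows.h>
#include <d3d12.h>

bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    // Tier 1.0 or higher means DXR is available on this adapter.
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```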
 
" Cinematic Ray tracing" is not a technical term in any way stretch of the imagination. WTH... Are you serious?!!

What makes computer-generated cinema look as good as it does is that ray tracing is performed frame after frame, and the process is quite long. To differentiate it from what gamers expect is crazy pants.

With regard to the lower fidelity, that is an assumption on my part. But I'm pretty sure there will be a difference; maybe not massive, but a difference will be noted.

What makes real-time ray tracing require hardware dedicated to it has to do with the manufacturing process, which I already said. There is no one doing real-time ray tracing who does not understand this.

You are saying that it is wrong to believe that dedicated hardware is a mistake. Okay let's go with that. What hardware exists today that can do full scene real-time Ray tracing at 60 frames per second at 4K?

Up until recently, hardware had trouble with rasterized games at 60 fps 4K; let's get rid of traditional raster.
 
I'm somewhat surprised by this, as AMD supposedly has more compute/shader power than Nvidia.
The problem with AMD hardware is that their shader cores sit idle a surprising amount of the time, especially in DX11, and it shows in the Crytek demo. This is what asynchronous compute was for in DX12, and it is why AMD usually shows a bigger performance delta than NVIDIA when moving from DX11 to DX12.
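
To make the async compute point concrete, here's a minimal D3D12 sketch (my own illustration, assuming a valid device; not anything from the demo) of creating a dedicated compute queue alongside the usual graphics queue, which is what lets compute work overlap and keep those shader cores fed:

```cpp
// Minimal sketch: a separate COMPUTE command queue in D3D12. Work submitted
// here can execute concurrently with the DIRECT (graphics) queue, filling
// shader cores that would otherwise sit idle.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute-only queue
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;
    desc.Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    return computeQueue;
}
```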
 
let's get rid of traditional raster

Rasterization is still significantly faster at a range of rendering stages, so it's not likely to be going anywhere soon. What we're quite likely to see instead is a shift toward relying on ray tracing hardware for lighting wherever possible; however, this hybrid approach is going to take significant development.
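
Roughly, that hybrid approach looks like this (a structural sketch only; every function name here is hypothetical, not a real engine API):

```cpp
// Sketch of a hybrid frame: rasterize the geometry as usual, use ray tracing
// only for the lighting effects it is best at, then composite the results.
struct Frame { /* G-buffer, acceleration structure, render targets, ... */ };

void RasterizeGBuffer(Frame&)    { /* depth, normals, materials via raster */ }
void TraceReflectionRays(Frame&) { /* RT pass: mirror/glossy reflections */ }
void TraceShadowRays(Frame&)     { /* RT pass: shadow/occlusion rays */ }
void CompositeAndShade(Frame&)   { /* combine raster G-buffer with RT terms */ }

void RenderHybridFrame(Frame& frame)
{
    RasterizeGBuffer(frame);     // raster is still the fast path for geometry
    TraceReflectionRays(frame);  // ray tracing where it adds the most
    TraceShadowRays(frame);
    CompositeAndShade(frame);
}
```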

On the plus side, now that AMD is putting ray tracing hardware into the console APUs, game developers will be incentivized to put it to good use.
 
Rasterization is still significantly faster at a range of rendering stages, so it's not likely to be going anywhere soon. What we're quite likely to see instead is a shift toward relying on ray tracing hardware for lighting wherever possible; however, this hybrid approach is going to take significant development.

On the plus side, now that AMD is putting ray tracing hardware into the console APUs, game developers will be incentivized to put it to good use.

I'm being facetious.

He's stating RT isn't useful as it is unable to be run at 4K 60 fps; we can make the same argument about rasterization.
 
Here are my results for the Radeon 5700 XT AE on a Ryzen 9 3900X (in sig). Video does not really compare to the actual visuals, as most would know; plus it is buttery smooth when running, unlike the video.

[Attached image: 5700XTNeonNoirBench.png]


 
This is almost certainly true, and it's one of the reasons that AMD should continue to be criticized for coming in two years late as usual.

One of Nvidia's fairly consistent first-mover advantages is in software support. Like ATi with the 9700 Pro, with DX10, DX11, DX12, and now ray tracing, Nvidia has led not just with hardware but also with drivers to support the new APIs as well as toolkits to speed developer uptake.

In the case of RT where existing engines need significant rework in order to implement a second hybrid render path with rasterization and ray tracing together, there can be little doubt that the support Nvidia has provided has significantly eased the transition for many developers, and that's even more important when one considers that the transition to RT also includes the transition to Vulkan or DX12 from DX11. Many development houses (I'm looking at you DICE!) still struggle with DX12.


As has been the case for AMD graphics since they bought ATi, AMD has screwed themselves. Hopefully they've taken cues from how Nvidia has worked with developers and how developers have implemented RT, so that their eventual hardware release supporting RT will be less of a shitshow than, say, the Ryzen release was, or worse, their transition to DX10.

Mostly true, except the part about Nvidia and DX12, as it was AMD that pushed Vulkan (Mantle) and, with Microsoft, DX12. Nvidia was a bit later to that game and struggled with performance (games ran better under DX11 on Nvidia in most cases).
Other than that, sure, Nvidia did some things first/better in between and after. The problem is, AMD doesn't have the resources to push features or developers like Nvidia does. Hopefully their recent successes will change this and allow them to be more involved, which is where Nvidia really gains its advantages.
Still not sure I'd call Ryzen a shit show; I had more issues when I bought Skylake (6600K) the day it came out than I've had with my Ryzen 1600. It took two BIOS revisions to get the random crashes fixed on Skylake for me (and that was at stock speeds with stock memory clocks). This is no different than the AMD pains that some had to deal with. I don't consider Skylake a shit show either, just a new platform with some early adoption pains.
Maybe I'm just lucky/unlucky, but I've had similar experiences with all of these companies, from Nvidia to AMD to Intel. Driver issues, buggy BIOSes, corrosion on the PCB (looking at you, Nvidia). It happens to all of them, but I do agree AMD seems a little slower at fixing some of these issues, which can be frustrating.
 
Not much up on RT performance, but doesn't it take a render farm to do true ray tracing, and isn't that very time-consuming?
 
Not much up on RT performance, but doesn't it take a render farm to do true ray tracing, and isn't that very time-consuming?
Yep. Full scene, which is what everyone wants, takes far more than a single card to do.
 
I'm being facetious.

He's stating RT isn't useful as it is unable to be run at 4K 60 fps; we can make the same argument about rasterization.
Never stated that. Quite the opposite, actually. What I said was that pretending ray tracing a single object in a scene, among millions that are not, is somehow a killer feature given the hardware impact is ridiculous.

Is it nice to have? Sure. But in no way is it exclusive to Nvidia to begin with anyway. I think Intel did a demo in 2008, and AMD Pro cards have been able to do it since 2016 (if not earlier, but from what I know, 2016 for sure). ALL DX12 cards are capable of doing it.

In addition to this, rasterization can render a full scene at acceptable rates. No hardware you can get today will do this with ray tracing at anything above 7-10 fps, and that's at 720p.

My biggest beef is that it's disingenuous as hell to sell a card as the only one able to do something that other cards can also do, and then not bother to make sure the public knew the difference between full scene, which no card can do, and one object, which all cards can do.

Then when you look at TDP I would hate to see what that would even look like.
 
Never stated that. Quite the opposite, actually. What I said was that pretending ray tracing a single object in a scene, among millions that are not, is somehow a killer feature given the hardware impact is ridiculous.

Is it nice to have? Sure. But in no way is it exclusive to Nvidia to begin with anyway. I think Intel did a demo in 2008, and AMD Pro cards have been able to do it since 2016 (if not earlier, but from what I know, 2016 for sure). ALL DX12 cards are capable of doing it.

In addition to this, rasterization can render a full scene at acceptable rates. No hardware you can get today will do this with ray tracing at anything above 7-10 fps, and that's at 720p.

My biggest beef is that it's disingenuous as hell to sell a card as the only one able to do something that other cards can also do, and then not bother to make sure the public knew the difference between full scene, which no card can do, and one object, which all cards can do.

Then when you look at TDP I would hate to see what that would even look like.

Control does way more than “one object”.

https://www.nvidia.com/en-us/geforce/news/control-rtx-ray-tracing-dlss-out-now/

Also, it doesn’t matter to the consumer if all DX12 cards can do it when I can only turn it on, or it is only viable, on RTX cards.

It’s in its infancy, but it is very impressive when done right.
 
Never stated that. Quite the opposite, actually. What I said was that pretending ray tracing a single object in a scene, among millions that are not, is somehow a killer feature given the hardware impact is ridiculous.



Oh, I thought you said this.

??? said:
You are saying that it is wrong to believe that dedicated hardware is a mistake. Okay let's go with that. What hardware exists today that can do full scene real-time Ray tracing at 60 frames per second at 4K?
 
Using rays in a geometric scene for reflections maps out which rasterized object/texture/lighting to put onto the reflective surface. It is just a tool, with no real lighting from ray tracing, but still a much more accurate model to use with rasterization. True ray tracing would use photon mapping, with energy loss as light scatters through the scene, lighting the scene according to the materials present, with reflection, refraction, shadows, caustics, and so on as light bounces around and gets absorbed. One you can do in real time; the other is very hard and probably a long way off, since real ray tracing, or should I say full ray tracing for high-quality imaging, can take hundreds of processors minutes to hours per frame.
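
To illustrate the "rays just pick which rasterized content shows up in the reflection" point, the core math is nothing more than the standard reflection formula (my own sketch, nothing from the Crytek demo):

```cpp
// Compute the mirror reflection direction at a surface point; that ray then
// determines which already-rasterized content gets sampled for the surface.
struct Vec3 { float x, y, z; };

Vec3  operator-(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Standard formula: r = d - 2 (d . n) n, with n a unit surface normal.
Vec3 reflect(Vec3 incident, Vec3 normal)
{
    return incident - 2.0f * dot(incident, normal) * normal;
}
```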

I too find how RTX was promoted disingenuous, somewhat deceitful: it can be done only on very limited scenes, with a number of tricks using rasterization, and the quality looks very poor, with a lot of noise.

The Crytek demo is very interesting, but it is not a real game with a much larger game world to reflect. I don't think it really tests the usefulness of the hardware given the restrictions of the demo. In the benchmark or demo you know exactly where the camera is facing and what objects and geometry can be culled from being rendered. A free-moving camera, or camera paths you can randomize, would make this more meaningful. Still, in closed-type environments reflections should be very possible without having to have dedicated hardware, so it looks like some very good stuff Crytek is doing here.

The reflections in the demo give a bigger feel to the environment, a more 3D feel; while I find the reflections exaggerated, they are still rather neat. I can see future games where you have to use those reflections to decide how to move, shoot, hide, etc., a new jump in gameplay.

Since Microsoft and Sony are working with AMD on using this new technique, loosely "ray tracing", on next-gen consoles, they will probably define what it will be used for more so than Nvidia, along with what hardware is needed and how it is used. Nvidia in the long run may not play a big part in this at all.
 
Since Microsoft and Sony are working with AMD on using this new technique, loosely "ray tracing", on next-gen consoles, they will probably define what it will be used for more so than Nvidia, along with what hardware is needed and how it is used. Nvidia in the long run may not play a big part in this at all.

Like with the current generation of consoles...?

Not likely.

Especially since AMD continues to refuse to compete in the desktop high-end space.
 
Like with the current generation of consoles...?

Not likely.

Especially since AMD continues to refuse to compete in the desktop high-end space.

"Refuse"? Please, short of the overpriced ti and titan, which really is ultra high end, please show me where they "refuse" to do so?
 
"Refuse"? Please, short of the overpriced ti and titan, which really is ultra high end, please show me where they "refuse" to do so?

Every "high end" die released by AMD has been compute heavy and not optimized for gaming. Every single one.

AMD refuses to release large gaming-focused GPUs, which is why they're consistently two years behind in terms of performance, features, and efficiency for gaming.

I know you're okay with that given your religious preferences, but some of us wish for the old ATi back that actually brought not just competitive but market-leading products out.
 
Every "high end" die released by AMD has been compute heavy and not optimized for gaming. Every single one.

AMD refuses to release large gaming-focused GPUs, which is why they're consistently two years behind in terms of performance, features, and efficiency for gaming.

I know you're okay with that given your religious preferences, but some of us wish for the old ATi back that actually brought not just competitive but market-leading products out.

The Vega architecture was too far behind Pascal to enable a successful business case for a 1080 Ti competitor. Vega 64 was already a bigger die with more transistors than the 1080 Ti, and it was a 1080 competitor.

Not to forget that AMD was in a more precarious financial position then, making any big chip that might fail doubly risky.

Now Navi looks to be much more on par with Turing in transistor count for a given level of performance. And on top of that, AMD has much better financials.

So the situation is significantly different. AMD certainly could release a big Navi to compete with 2080Ti. Rumors are that it will.

Though it remains to be seen how competitive it will be, and whether it even arrives before Ampere.
 
My biggest beef is that it's disingenuous as hell to sell a card as the only one able to do something that other cards can also do, and then not bother to make sure the public knew the difference between full scene, which no card can do, and one object, which all cards can do.

This literally never happened.

Nvidia has posted multiple research papers and game specific guides clearly describing how raytracing improves IQ when combined with rasterization. At no point did they claim their hardware can simulate every bounce of light in real-time.

They also never claimed that no other hardware can do raytracing. The raytracing algorithm is very old, relatively simple and can be coded in a few lines of C++. Raytracing has been used for decades. Nobody thinks Nvidia invented it or owns it.
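
For reference, "a few lines of C++" really is about this much: a bare ray-sphere intersection test, the textbook core of a toy ray tracer (my own sketch, not Nvidia's or anyone else's code):

```cpp
// Unoptimized ray-sphere intersection: solve the quadratic for the ray
// parameter t and return the nearest hit in front of the ray origin.
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

std::optional<double> hitSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius)
{
    Vec3   oc   = sub(origin, center);
    double a    = dot(dir, dir);
    double b    = 2.0 * dot(oc, dir);
    double c    = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;          // quadratic discriminant
    if (disc < 0.0) return std::nullopt;        // ray misses the sphere
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    if (t < 0.0) return std::nullopt;           // hit is behind the ray origin
    return t;
}
```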

What Nvidia claimed is that their hardware implementation can cast rays faster. It is purely a performance claim. And that claim is obviously true.
 
Every "high end" die released by AMD has been compute heavy and not optimized for gaming. Every single one.

AMD refuses to release large gaming-focused GPUs, which is why they're consistently two years behind in terms of performance, features, and efficiency for gaming.

I know you're okay with that given your religious preferences, but some of us wish for the old ATi back that actually brought not just competitive but market-leading products out.

I am ok with that given my religious preferences? What the hell is that supposed to mean? Also, you have yet to show proof that they refused to do so. :D
 
Also, you have yet to show proof that they refused to do so.

They haven't, ever, despite continuous pleas from their community to do so.

The Vega architecture was too far behind Pascal to enable a successful business case for a 1080 Ti competitor. Vega 64 was already a bigger die with more transistors than the 1080 Ti, and it was a 1080 competitor.

These dies were more competitive with the dies used in the Quadros and Teslas that they were matched against, again, because that is what they were designed for. They weren't designed for gaming first like Polaris and Navi (so far) were.

Now, we can easily argue that AMD's targeting of HPC with their larger GPU dies has been prudent from a business standpoint, but from a gaming standpoint, they've yet to deliver in the same way that ATi did before.
 
They haven't, ever, despite continuous pleas from their community to do so.



These dies were more competitive with the dies used in the Quadros and Teslas that they were matched against, again, because that is what they were designed for. They weren't designed for gaming first like Polaris and Navi (so far) were.

Now, we can easily argue that AMD's targeting of HPC with their larger GPU dies has been prudent from a business standpoint, but from a gaming standpoint, they've yet to deliver in the same way that ATi did before.

Which still does not show that they refused.
 
Here are my results for the Radeon 5700 XT AE on a Ryzen 9 3900X (in sig). Video does not really compare to the actual visuals, as most would know; plus it is buttery smooth when running, unlike the video.




I just ran this benchmark at 3440x1440 and got like 3455 (not sure now) and was concerned when I saw how high your scores were. Then I got 7492@1080p Ultra windowed.
Why am I scoring higher than you? Ryzen 5 2600, Sapphire Pulse 5700XT. Nothing fancy. 19.11.3 Beta drivers.

Looks pretty cool. For some reason it reminds me of the first time I ran the 3D Mark 2011 (was it?) demo with the bank lobby shootout. Just pretty.
 
I just ran this benchmark at 3440x1440 and got like 3455 (not sure now) and was concerned when I saw how high your scores were. Then I got 7492@1080p Ultra windowed.
Why am I scoring higher than you? Ryzen 5 2600, Sapphire Pulse 5700XT. Nothing fancy. 19.11.3 Beta drivers.

Looks pretty cool. For some reason it reminds me of the first time I ran the 3D Mark 2011 (was it?) demo with the bank lobby shootout. Just pretty.
Hmmm, not sure. I did all my tests full screen, so it was probably also using GPU scaling, if that makes any difference. Still, both performing about the same is what one would expect in the end.
 
Like with the current generation of consoles...?

Not likely.

Especially since AMD continues to refuse to compete in the desktop high-end space.

I would not call it refusing; AMD just was not capable at the time of competing with Nvidia's gaming performance. It sounds like AMD is not refusing with the upcoming generation, but I will wait for actual proof.
 
I would not call it refusing; AMD just was not capable at the time of competing with Nvidia's gaming performance. It sounds like AMD is not refusing with the upcoming generation, but I will wait for actual proof.

Take any of their mid-size GPUs that cut back on compute. Double it.

This is what Nvidia has done, and what AMD has chosen not to.
 
Take any of their mid-size GPUs that cut back on compute. Double it.

This is what Nvidia has done, and what AMD has chosen not to.
Currently it is most likely 7 nm fab availability limiting how much AMD can release, and less AMD's choice. Too bad AMD did not also work with Samsung on GPU production.
 
Take any of their mid-size GPUs that cut back on compute. Double it.

This is what Nvidia has done, and what AMD has chosen not to.

It's not that simple. Where do you think compute performance comes from? It's from the shader cores. More shader cores = higher compute.

NVidia had better efficiency tricks that let it make the most of their cores, so they could do higher game performance for the same compute performance. AMD Vega also had an occupancy issue driven by the need to deliver instructions in blocks of 4, which has been fixed in Navi.

Now with Navi, AMD's occupancy and efficiency are similar to NVidia's, and they get similar gaming performance for similar compute performance.
 
IIRC the venerable 8800 GTX was the first card to do ray tracing via Nvidia OptiX. Nowhere near real time, mind you, but baby steps... look at where we are now.
 
Currently it is most likely 7 nm fab availability limiting how much AMD can release, and less AMD's choice. Too bad AMD did not also work with Samsung on GPU production.

They really could have been doing it for years and haven't been.

It's not that simple. Where do you think compute performance comes from? It's from the shader cores. More shader cores = higher compute.

Well, shader cores multiplied by clockspeed with the memory bandwidth to support it all.
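
As a back-of-the-envelope example (approximate spec-sheet numbers, not benchmarks), peak FP32 compute is roughly shader cores x 2 ops per clock (fused multiply-add) x clock speed, which is why "more shaders = more compute" even though game performance depends on how well those shaders are kept busy:

```cpp
// Rough peak-FP32 arithmetic for a couple of cards (approximate boost clocks).
#include <cstdio>

double peakTflops(int shaderCores, double boostClockGhz)
{
    return shaderCores * 2.0 * boostClockGhz / 1000.0; // GFLOPS -> TFLOPS
}

int main()
{
    std::printf("RX 5700 XT (2560 shaders @ ~1.9 GHz):  ~%.1f TFLOPS\n",
                peakTflops(2560, 1.9));
    std::printf("GTX 1080   (2560 cores   @ ~1.73 GHz): ~%.1f TFLOPS\n",
                peakTflops(2560, 1.73));
    return 0;
}
```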

NVidia had better efficiency tricks that let it make the most of their cores, so they could do higher game performance for the same compute performance.

This is similar to IPC on CPUs; consider that Zen has more 'compute performance' and yet still comes in around Skylake levels, depending on the application of course. Same for the GPUs.

AMD Vega also had an occupancy issue driven by the need to deliver instructions in blocks of 4, which has been fixed in Navi.

I really don't think it's that simple.

Now with Navi, AMD's occupancy and efficiency are similar to NVidia's, and they get similar gaming performance for similar compute performance.

The performance is there -- not really much point in grabbing Nvidia below the 2070 unless for G-Sync or other specific application support -- but the efficiency just isn't.

Not yet.

When we start seeing mobile Navi releases, and we compare them to Nvidia's next mobile GPUs, then we can talk ;)
 
Now with Navi, AMD's occupancy and efficiency are similar to NVidia's, and they get similar gaming performance for similar compute performance.

Technically Nvidia has more raw compute now with the separate integer ALUs.
 
I am more interested in how Ampere will compare to Turing for shader performance as well as RT. Is it more than just a die shrink with a few more transistors packed in, or a much bigger design change? The same goes for AMD moving to TSMC 7nm+ and introducing hardware RT (chiplet or monolithic for RT?).

Currently it looks like AMD will own the under-$200 spot for perf/$, owns $300 to $400, and has nothing over $500 to compare with for gaming; the professional side is different depending upon the application. Ampere and RDNA2 can't come fast enough. Looks like June of next year for one or both to hit.
 
Recommend this video dealing with the viability and future of RTX; it does not look good, but who knows. It also covers Intel's version of the GPP program that was used. It shows this demo as well and how the current generation stacks up; I did not know that it was DX11 and did not use Vulkan RT or Microsoft DXR:

 