Well, you and anyone else with half a brain who actually read the details on the technology. 20 years ago we had raytracing, when neither RT cores nor GPUs existed. Of course the RT cores are only meant to accelerate raytracing calculations (currently done at 1 ray per pixel or less, then just denoising the mess they output). You could do the same on a CPU, just slower.

Seems like Crytek have designed a way to use shaders to raytrace more efficiently. Again, this is nothing new, as Microsoft has always said that DXR would use shaders in the absence of raytracing hardware (which is all that RTX does, accelerate calculations to then translate them to DXR so they work via DX12).


The interesting thing will be if Crytek's approach enables at least 1080p30 rendering with a similar degree of quality as the RTX 20 series.

This is my impression also. I am unsure where the idea that ray-tracing can only be done on RTX cards has come from. RTX is just NVIDIA’s platform name, part of which includes ray-tracing. In their white papers, and even on their blog, NVIDIA has been clear that RT cores simply accelerate parts of the ray-tracing pipeline (which they also state can currently be done using traditional shaders), and to my knowledge they aren't even directly exposed. As long as an application incorporates ray-tracing through APIs like DirectX or Vulkan, NVIDIA's driver (i.e. RTX) will handle the rest.

In the case of this demo, Crytek already states that they will be optimizing their ray-tracing implementation to use DX12 (and I would assume DXR) and Vulkan, meaning that any rays that CryEngine does trace in real-time will be accelerated by the RT cores. Depending on what this number is (which I would guess in this demo video is quite small since it's running in real-time on a Vega 56), the speed-up from using an RTX vs. a non-RTX GPU may not be dramatic. However, unless Crytek is doing some magic and somehow actually tracing rays for every pixel, using an RTX card would allow them to either trace more rays at the same performance target, or potentially even path trace (one can dream!).
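
For anyone curious what "the driver will handle the rest" looks like from the application side, here is a minimal sketch (my own, not Crytek's or Nvidia's code, and it assumes a Windows 10 SDK recent enough to define the DXR enums) that just asks the D3D12 runtime what ray-tracing tier the installed driver exposes; the application never touches RT cores directly:

```cpp
// Sketch: query the D3D12 runtime for the ray-tracing tier the driver exposes.
// Assumes a Windows 10 SDK (17763+) that defines D3D12_FEATURE_D3D12_OPTIONS5.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter; 11_0 is the D3D12 minimum feature level.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // The app only asks "is DXR there, and at what tier?" -- how the rays actually
    // get traced (RT cores, shaders, whatever) is entirely up to the driver.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        std::printf("Driver exposes DXR (tier enum value %d).\n",
                    static_cast<int>(opts5.RaytracingTier));
    } else {
        std::printf("Driver exposes no DXR tier on this adapter.\n");
    }
    return 0;
}
```

If the vendor ships no DXR driver path, the check simply comes back empty, which is why the how-are-they-even-running-this question keeps coming up further down the thread.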
 
I've known from the beginning that ray tracing doesn't need any special cores to be achieved and that the RT cores in RTX cards are there to accelerate it. I feel people are getting overly excited by this canned demo. It's impressive and all but we need to see how it looks and performs in a game before declaring RTX cards a complete scam when it comes to ray tracing.
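
Just to underline that point, the math at the heart of ray tracing runs on anything; here's a toy example of my own (not CryEngine or RTX code), a single ray/sphere intersection done on the CPU:

```cpp
// Minimal sketch: the core of ray tracing is just arithmetic any processor can run --
// a plain CPU ray/sphere intersection, no RT cores involved. (Toy example only.)
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns the distance along the ray to the nearest hit, or -1 if the ray misses.
float intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);                   // assumes dir is normalized
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return (t > 0.0f) ? t : -1.0f;
}

int main()
{
    Vec3 origin = { 0, 0, 0 }, dir = { 0, 0, 1 };   // one "primary ray" per pixel
    Vec3 center = { 0, 0, 5 };
    float t = intersectSphere(origin, dir, center, 1.0f);
    std::printf("hit distance: %.2f\n", t);         // prints 4.00
    return 0;
}
```

The hard part was never whether this math can run on a CPU or a shader; it's doing billions of these tests per second, which is exactly what the dedicated cores (or clever shader tricks) are for.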
I've been waiting for someone to do what Crytek did. What Nvidia has done isn't new. It just isn't; it was done back in 2010 and it never took off. The simple reason is that whatever the ASIC can do, a Xeon can do nearly as well while also running other software. That's the problem with Nvidia's ray-tracing ASIC: it does nothing but ray tracing.

Fast forward a few years and people have learned to do real-time ray tracing on a GPU through hybrid ray tracing, which is a huge achievement since GPUs don't have good branch prediction like CPUs do. There is a method (whose name I forget) that works around this problem, and that's what those Japanese programmers used.

My thinking is that someone could actually combine CPU+GPU to do what Nvidia does, or better. I don't think that's what Crytek is doing here, as I assume they're using the GPU alone. But you have to remember that AMD GPUs have really good compute performance, which is why they were heavily favored over Nvidia for crypto mining. Maybe this method works really well on AMD but not so much on Nvidia cards, which may explain why Nvidia put in ASICs.

You know what we need? We need that Quake 2 Ray Tracing mod to use this method to see how it compares to Nvidia's RTX.

 
Assuming there is no trickery involved this is quite impressive. I'm assuming the video is a sales pitch trying to get Devs to license their engine for the next gen of consoles which are both probably using Navi.
 
Putting dedicated ray-tracing hardware into a GPU for real-time rendering isn't new?

:ROFLMAO:

There still isn't anyone who has done it, either. What Nvidia has done is find uses for its tensor hardware. That is hardly the same thing.

Don't get me wrong: if they're putting tensor cores in their GPUs these days for their AI business, it's wise to find game-related uses for them so you're not shipping parts with 20% of the die doing nothing. Still, as many people have talked about... using tensor hardware for ray casting is smart, but it's probably not the best way to achieve a ray-traced hybrid render anyway. There is a reason Nvidia has to apply a ton of denoising to make their tensor-powered tech work.

As others have said... bring on the Quake mod spin that uses this method and let's do some IQ and performance comparisons. (My money is on the GPU-agnostic shader method on both scores... it should be easier to work with and faster.)
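
On the denoising point above: here's a rough, purely illustrative sketch of my own (nothing to do with Nvidia's actual AI denoiser) of why ~1 ray per pixel needs heavy filtering; a one-sample estimate is very noisy, and even a dumb box blur trades that noise for blur:

```cpp
// Rough sketch: a 1-sample Monte Carlo estimate of a smooth signal is noisy; a
// spatial filter (here the simplest possible box blur) reduces the error at the
// cost of detail. Purely illustrative, not any vendor's real denoiser.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main()
{
    const int n = 256;
    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.0f, 0.3f);

    // "Ground truth" lighting: a smooth ramp. The 1-spp estimate adds noise to it.
    std::vector<float> truth(n), noisy(n), filtered(n);
    for (int i = 0; i < n; ++i) {
        truth[i] = float(i) / float(n - 1);
        noisy[i] = truth[i] + noise(rng);
    }

    // 9-tap box filter as a stand-in for a real spatial/temporal denoiser.
    const int radius = 4;
    for (int i = 0; i < n; ++i) {
        float sum = 0.0f; int count = 0;
        for (int j = i - radius; j <= i + radius; ++j)
            if (j >= 0 && j < n) { sum += noisy[j]; ++count; }
        filtered[i] = sum / count;
    }

    auto rmse = [&](const std::vector<float>& v) {
        float e = 0.0f;
        for (int i = 0; i < n; ++i) e += (v[i] - truth[i]) * (v[i] - truth[i]);
        return std::sqrt(e / n);
    };
    std::printf("RMSE raw 1-spp estimate: %.3f\n", rmse(noisy));
    std::printf("RMSE after box filter:   %.3f\n", rmse(filtered));
    return 0;
}
```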
 
Assuming there is no trickery involved this is quite impressive. I'm assuming the video is a sales pitch trying to get Devs to license their engine for the next gen of consoles which are both probably using Navi.

AMD has more than hinted that their next-gen console parts will be capable of hybrid tracing. It's very possible this is the very tech they have been hinting at; MS has always said DX ray tracing could use tensor hardware or shaders to do the work.

Should be an interesting year in GPUs as more Navi stuff leaks out.
 
Technically speaking, physics are physics, light is light, and math is math. Whatever Crytek is doing to calculate their raytracing should look exactly the same as what DXR or RTX calculate.
There are no concrete standards for how rendering engines handle physics models, so even the top non-realtime ray tracers all look different, and you have to work to make them look similar...

In other words, we're still in the early days of figuring out "technically" accurate rendering, so what "should" be the case is not a realistic thing to expect.
 
It would be interesting if a game engine was ever explicitly built around MIDI sequenced pixel rendering. I mean think of black midi and all the note changes that is capable of sequencing so not individual compute shading pixels?
 
Dayyyuuuum. The Crytek guys are hella competent. This is crazy good. Looks fluid.
 
It would be interesting if a game engine was ever explicitly built around MIDI sequenced pixel rendering. I mean think of black midi and all the note changes that is capable of sequencing so not individual compute shading pixels?
dude, send your resume to crytek asap, they need your help :D
 
Ermmmhh......DXR is hardware agnostic?
https://en.wikipedia.org/wiki/DirectX_Raytracing

This is not very useful without comparing the performance of different SKUs.

Fun fact:
Both the 1080 Ti (Pascal) and the Nvidia Titan V (Volta) are able to do DXR.

The 1080 Ti has just about 10% of the DXR performance of the 2080 Ti...so again:

Being able to do DXR is not the main issue...the performance (and performance gap) is.
 
Nvidia should take notes, that's how you demonstrate raytracing.
Less bullshit, more delivery. They allowed FreeSync after years to try to take market share; now their only other advantage is gone, and it was done using a 2-year-old GPU... good job Crytek.
 
Nvidia should take notes, that's how you demonstrate raytracing.

Ray tracing has been well demonstrated for decades...

They allowed FreeSync after years to try to take market share; now their only other advantage is gone, and it was done using a 2-year-old GPU

G-Sync was first and is still a superior implementation, and aside from that they still have faster and more efficient GPUs...

good job Crytek.

Well yeah, good job on the tech demo video!

Now let's see it in games, and let's see performance by independent reviewers.
 
Every instance of RTX ray tracing feels a bit off to me. It's like every demo or bit of gameplay I have seen is either over-implemented or not noticeable. This didn't feel that way. Very impressive.

Also, any implementation is cheating unless it computes the photon path of every ray of light as the basis for rendering.
 
Links to performance differences using DXR between Radeon VII and 2080 ti:
[charts comparing Radeon VII and RTX 2080 Ti DXR performance]
 
So it will work for any card. That is great news. I guess as it stands it will be a while (again) before enough titles have it to make it worthwhile.

Nice to see that on a Vega too.

I don't know enough about RT and how it works. But it seems it will not take anything away from RTX at the moment.
Not that there is much to take away anyhow.

This time next year should be interesting.
 
Well, you and anyone else with half a brain who actually read the details on the technology. 20 years ago we had raytracing, when neither RT cores nor GPUs existed. Of course the RT cores are only meant to accelerate raytracing calculations (currently done at 1 ray per pixel or less, then just denoising the mess they output). You could do the same on a CPU, just slower.

Seems like Crytek have designed a way to use shaders to raytrace more efficiently. Again, this is nothing new, as Microsoft has always said that DXR would use shaders in the absence of raytracing hardware (which is all that RTX does, accelerate calculations to then translate them to DXR so they work via DX12).


The interesting thing will be if Crytek's approach enables at least 1080p30 rendering with a similar degree of quality as the RTX 20 series.


DXR is HW agnostic, that is the point of a system API.

BUT, it is dependent on the HW vendor providing a DXR driver implementation.

Which AMD doesn't do. So this demo is definitely not using DXR, but some kind of custom RT implementation using common DX12 features already supported.

Longer term, you can expect most games with RT to use the DXR API.
 
Well, you and anyone else with half a brain who actually read the details on the technology. 20 years ago we had raytracing, when neither RT cores nor GPUs existed. Of course the RT cores are only meant to accelerate raytracing calculations (currently done at 1 ray per pixel or less, then just denoising the mess they output). You could do the same on a CPU, just slower.

Seems like Crytek have designed a way to use shaders to raytrace more efficiently. Again, this is nothing new, as Microsoft has always said that DXR would use shaders in the absence of raytracing hardware (which is all that RTX does, accelerate calculations to then translate them to DXR so they work via DX12).


The interesting thing will be if Crytek's approach enables at least 1080p30 rendering with a similar degree of quality as the RTX 20 series.

You got that one backwards.

DXR is DirectX Raytracing...what RTX does is provide a dedicated hardware path (ASIC) so DXR doesn't have to run via the shader cores.
The shader cores can then be used for rasterization and not compete with RT for the SAME resources.

And be careful about saying this is RTX 20x0 level of performance, because you are lacking a vital component:
This demo's performance on RTX 20x0 hardware.

Until you have that...no comparisons can be made.
 
So much for the $1,200-plus vid cards lol. I always thought that developers could do WAY more in-engine than the bullshit RTX crap.
 
The level of technical insight (or the lack of it) in this thread is sad.
That DXR works on non-RTX hardware was never a secret?

The thing that RTX (RT cores + Tensor cores) did... was to give a HUGE performance benefit...making it PLAYABLE...unlike the single-digit performance of DXR when running on the shaders.

Again, until we have performance numbers from different SKUs....NOTHING is news about this.
 
More on the primary factor (performance) being ignored here:


The level of technical insight (or the lack of it) in this thread is sad.
That DXR works on non-RTX hardware was never a secret?

The thing that RTX (RT cores + Tensor cores) did... was to give a HUGE performance benefit...making it PLAYABLE...unlike the single-digit performance of DXR when running on the shaders.

Again, until we have performance numbers from different SKUs....NOTHING is news about this.


As for talking about ignoring things... non-RTX cards' shaders are performing double duty, whereas the RTX cards use tensor cores to offload the ray-tracing work onto, meaning their shaders are not performing double duty. If you added to a non-RTX card the equivalent number of shaders as there are tensor cores on an RTX card, and dedicated those shaders only to ray tracing, I suspect the performance gap would be eliminated, if those shaders didn't outperform the tensor cores outright.
 
As for talking about ignoring things... non-RTX cards' shaders are performing double duty, whereas the RTX cards use tensor cores to offload the ray-tracing work onto, meaning their shaders are not performing double duty. If you added to a non-RTX card the equivalent number of shaders as there are tensor cores on an RTX card, and dedicated those shaders only to ray tracing, I suspect the performance gap would be eliminated, if those shaders didn't outperform the tensor cores outright.

Compare the performance gap between a 1080 Ti, an RTX 2080, and a Titan V and your “theory” falls flat on its face, sorry.
 
There are no concrete standards for how rendering engines handle physics models, so even the top non-realtime ray tracers all look different, and you have to work to make them look similar...

In other words, we're still in the early days of figuring out "technically" accurate rendering, so what "should" be the case is not a realistic thing to expect.

Wait, raytracing follows physically based light rendering. That's what I meant when I said physics, light, and math are just that. There's no faking it. You can raytrace with more or fewer rays per pixel, but they should all look the same if calculating the same number of rpp. Is that not accurate?

You got that one backwards.

DXR is DirectX Raytracing...what RTX does is provide a dedicated hardware path (ASIC) so DXR doesn't have to run via the shader cores.

That... is the exact same thing I (and others) have said:

[screenshot of the earlier post]

If it's the "translate back to DXR" part that confused you, Nvidia is after all a middleman between DX and the gamer, and their driver does precisely that: translate DX code into Nvidia proprietary code to run faster and closer to the metal, calculate results, and translate back into DX-interpretable code. I mean, that's what all GPU drivers do.
 
Crytek does some amazing things with software, but it was my understanding that few are using CryEngine. I thought there was a migration to UE basically across the board due to the ease of development and lower cost...

That's a solid demo and performance on a Vega 56. The Nvidia haters are still holding onto the "ohhh shiny" bit when it comes to RT, but this is agnostic so we can stop that. The reflections and lighting really help with immersion, and what they put together looked pretty dang good.
 
So conclusion: Taytracing should be on add-on cards. AMD should team up with the Taytracing company, whatchamacallit, and release one, as well as a chiplet to be included in future Ryzen releases.
....PowerVR is the one... Wonder if they are good for RT.
 
So conclusion: Taytracing should be on add-on cards. AMD should team up with the Taytracing company, whatchamacallit, and release one, as well as a chiplet to be included in future Ryzen releases.
....PowerVR is the one... Wonder if they are good for RT.

Is that some kind of Taylor Swift tracking app? :p

On topic, happy to see more DXR, it can only mean good things.
 
it was my understanding that few are using CryEngine. I thought there was a migration to UE basically across the board due to the ease of development and lower cost...

Many have made the jump, yes. That's probably why Crytek is trying to show off, to capture clients. We'll probably see a similar update for UE in the next few months. Epic wants you to use Epic tools, not Nvidia's. Any engine maker will support other companies' tech when they absolutely have to, otherwise, they'll develop their own integrated code.
 
Many have made the jump, yes. That's probably why Crytek is trying to show off, to capture clients. We'll probably see a similar update for UE in the next few months. Epic wants you to use Epic tools, not Nvidia's. Any engine maker will support other companies' tech when they absolutely have to, otherwise, they'll develop their own integrated code.
At worst it's a pretty good demonstration of their tech, all things considered, but I wonder if they have the funds, like Epic has, to push it. Devs and engineers at Crytek are doing amazing work on the budget they have, but Epic seems to be trying to bury them and, you're right, will likely attempt to copy it.
 
Nvidia should take notes, that's how you demonstrate raytracing.
Less bullshit, more delivery. They
A canned demo instead of actual games? No. That's not delivery of anything.

If all Nvidia did was put out a demo, then the price-butthurt-brigade would really be losing their minds.
 
How are they even testing in DXR when, AFAIK, you need vendor drivers that support DXR and AMD doesn't provide any?

I assume there's a reference or fallback layer.

When the DX12 SDK first launched, I'm pretty sure I remember using the WARP device temporarily. It was a pre-release version or something; the API wasn't even finalized yet, and I could still develop without supported hardware/drivers.
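
For reference, this is roughly what that WARP fallback looks like; a minimal sketch of my own (assuming the standard Windows 10 SDK headers) that creates a D3D12 device on the software WARP adapter instead of a real GPU:

```cpp
// Sketch: create a D3D12 device on the software WARP adapter, so you can develop
// against the API even without capable hardware or vendor drivers.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Ask DXGI for the software rasterizer (WARP) instead of a real GPU.
    ComPtr<IDXGIAdapter> warpAdapter;
    if (FAILED(factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter))))
        return 1;

    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    std::printf("D3D12 device created on the WARP (software) adapter.\n");
    return 0;
}
```

Whether the demo in this thread actually uses anything like that, or a fully custom compute-shader path as suggested above, is exactly the open question.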
 
they should all look the same if calculating the same number of rpp. Is that not accurate?
I think no. This is just based on my use of raytracers for arch viz over 15 years. They may use the same fundamental theory, but the light path models, materials, atmospherics, cameras, tonemapping... all have variations which result in the final pixels looking different. Most of the time the difference is due to the engineers trying to find ways to boost render times. I've even talked to some of the developers, and they say "close enough in realism, but it saves 10x the time, so it's worth it".
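
A tiny illustration of that last point (my own toy code, not taken from any renderer mentioned here): feed the exact same per-pixel radiance through two different tone-mapping choices and the final 8-bit pixels already disagree, before materials or light transport even enter the picture:

```cpp
// Toy example: identical HDR radiance values, two different tone mappers,
// different final pixel values. One reason "same rays" != "same image".
#include <algorithm>
#include <cmath>
#include <cstdio>

// Simple gamma-corrected clamp, as a hypothetical "renderer A" might do.
static unsigned char clampTonemap(float radiance)
{
    float c = std::min(radiance, 1.0f);
    return static_cast<unsigned char>(std::pow(c, 1.0f / 2.2f) * 255.0f + 0.5f);
}

// Reinhard operator, as a hypothetical "renderer B" might do.
static unsigned char reinhardTonemap(float radiance)
{
    float c = radiance / (1.0f + radiance);
    return static_cast<unsigned char>(std::pow(c, 1.0f / 2.2f) * 255.0f + 0.5f);
}

int main()
{
    const float samples[] = { 0.05f, 0.5f, 1.0f, 4.0f, 16.0f }; // identical HDR inputs
    for (float r : samples)
        std::printf("radiance %6.2f -> A: %3d  B: %3d\n",
                    r, clampTonemap(r), reinhardTonemap(r));
    return 0;
}
```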
 
Compare the performance gap between a 1080 Ti, an RTX 2080, and a Titan V and your “theory” falls flat on its face, sorry.
?? How so? I am unaware that the 1080 Ti/2080/Titan V has dedicated shaders just for ray tracing; they are all still performing double duty if the tensor cores are not used (2080/Titan V). You would also need drivers to support such a function, which don't exist. I am also unaware that there are any 2080s or Titan Vs without tensor cores. Not to mention the FP16/FP32 differences between the 1080 Ti and the 2080/Titan V.
 