Well, you and anyone else with half a brain who actually read the details on the technology. We had raytracing 20 years ago, when neither RT cores nor GPUs existed. Of course the RT cores are only meant to accelerate raytracing calculations (currently done at 1 ray per pixel or less, then just denoising the mess they output). You could do the same on a CPU, just slower.
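To make that concrete, here's a minimal sketch of the core calculation, a ray-sphere intersection test, in plain C++. It isn't anyone's engine code, just the textbook quadratic solve that both RT cores and CPUs ultimately perform; the scene and numbers are made up:

```cpp
// Minimal sketch: raytracing's core is just intersection math,
// which any CPU can run. RT cores merely accelerate this.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Returns distance along the ray to the sphere, or -1 if it misses.
double hitSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b*b - 4*a*c;
    return disc < 0 ? -1.0 : (-b - std::sqrt(disc)) / (2.0 * a);
}

int main() {
    // One "primary ray" fired from the camera straight down -z.
    Vec3 cam{0, 0, 0}, sphere{0, 0, -5};
    double t = hitSphere(cam, Vec3{0, 0, -1}, sphere, 1.0);
    std::printf("Ray hits sphere at t = %.2f\n", t); // expect 4.00
    return 0;
}
```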
Seems like Crytek have designed a way to use shaders to raytrace more efficiently. Again, this is nothing new: Microsoft has always said that DXR would fall back to shaders in the absence of raytracing hardware (which is all that RTX does: accelerate the calculations and feed the results back to DXR so they work via DX12).
The interesting thing will be whether Crytek's approach enables at least 1080p30 rendering with a similar degree of quality to the RTX 20 series.
I've been waiting for someone to do what Crytek did. What Nvidia has done isn't new. It just isn't; it was done way back in 2010 and it never took off. The simple reason is that whatever the ASIC can do, a Xeon does nearly as well and can also run other software. That's the problem with Nvidia's ray-tracing ASIC: it does nothing else but ray tracing.

I've known from the beginning that ray tracing doesn't need any special cores to be achieved, and that the RT cores in RTX cards are there to accelerate it. I feel people are getting overly excited by this canned demo. It's impressive and all, but we need to see how it looks and performs in a game before declaring RTX cards a complete scam when it comes to ray tracing.
What Nvidia has done isn't new.
Putting dedicated ray-tracing hardware into a GPU for real-time rendering isn't new?
Assuming there is no trickery involved, this is quite impressive. I'm assuming the video is a sales pitch trying to get devs to license their engine for the next gen of consoles, which are both probably using Navi.
there are no concrete standards for how rendering engines handle physics models, so even with the top non-realtime ray tracers they all look different and you gotta work to make them look similar...

Technically speaking, physics are physics, light is light, and math is math. Whatever Crytek is doing to calculate their raytracing should look exactly the same as what DXR or RTX calculates.
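As a toy illustration of that claim: the Lambertian shading term max(0, N·L) evaluates identically no matter which engine runs it. Everything below is a made-up example, not engine code:

```cpp
// Sketch of the "math is math" point: the same shading formula on the
// same inputs gives the same value, whoever implements it.
#include <algorithm>
#include <cstdio>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Any two implementations of this formula agree on the same
// normal N and light direction L.
double lambert(Vec3 n, Vec3 l) { return std::max(0.0, dot(n, l)); }

int main() {
    Vec3 n{0, 1, 0};      // surface facing straight up
    Vec3 l{0, 0.8, 0.6};  // unit-length light direction
    std::printf("N.L = %.3f\n", lambert(n, l)); // 0.800 in any engine
    return 0;
}
```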
Dude, send your resume to Crytek ASAP, they need your help.

It would be interesting if a game engine were ever explicitly built around MIDI-sequenced pixel rendering. I mean, think of black MIDI and all the note changes it's capable of sequencing, rather than compute-shading individual pixels.
Nvidia should take notes; that's how you demonstrate raytracing.
They allowed FreeSync after years to try to take market share; now their only other advantage is gone, and this demo did it using a two-year-old GPU.

Good job, Crytek.
More on the primary factor (performance) being ignored here:
The level of technical insight (or rather, the lack of it) in this thread is sad.
That DXR works on non-RTX hardware was never a secret?
The thing that RTX (RT cores + Tensor cores) did was give a HUGE performance benefit... making it PLAYABLE... unlike the single-digit performance of DXR when running on the shaders.
Again, until we have performance numbers from different SKUs... NOTHING about this is news.
As for talking about ignoring things: non-RTX cards' shaders are performing double duty, whereas the RTX cards offload the ray-tracing load onto the Tensor cores, meaning their shaders are not performing double duty. If you added to a non-RTX card the equivalent number of shaders as there are Tensor cores on an RTX card, and dedicated those shaders to only doing ray tracing, I suspect that the performance gap would be eliminated, if those shaders didn't outperform the Tensor cores outright.
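A back-of-envelope model of that double-duty argument, with invented per-frame costs purely for illustration (not measurements from any card):

```cpp
// Toy model with made-up numbers: if shading and ray traversal share
// the same shader cores, their costs add up; dedicated RT hardware
// can in the best case overlap them completely.
#include <algorithm>
#include <cstdio>

int main() {
    double shade_ms = 10.0; // hypothetical per-frame shading cost
    double rays_ms  = 8.0;  // hypothetical per-frame ray-traversal cost

    double shared    = shade_ms + rays_ms;          // shaders do double duty
    double dedicated = std::max(shade_ms, rays_ms); // ideal full overlap

    std::printf("shared shaders:  %.1f ms (%.0f fps)\n", shared, 1000 / shared);
    std::printf("dedicated RT HW: %.1f ms (%.0f fps)\n", dedicated, 1000 / dedicated);
    return 0;
}
```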
there are no concrete standards for how rendering engines handle physics models, so even with the top non-realtime ray tracers they all look different and you gotta work to make them look similar...

In other words, we're still in the early days of figuring out "technically" accurate rendering, so what "should" be is not a realistic thing to expect.
You got that one backwards.
DXR is DirectX Raytracing... what RTX does is provide a dedicated hardware path (an ASIC) so DXR doesn't have to run on the shader cores.
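That split is visible right in the D3D12 API: an app queries the driver's raytracing tier, and if no tier is reported, DXR work would have to go through something like Microsoft's shader-based fallback layer instead. A minimal sketch (Windows-only, error handling trimmed):

```cpp
// Query whether the driver exposes a DXR path. A reported tier may be
// backed by dedicated hardware (RTX) or by the driver running it on
// compute; no tier means shader-based fallback territory.
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device5* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &opts5, sizeof(opts5));
    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::puts("Driver exposes a DXR path.");
    else
        std::puts("No DXR tier reported; raytracing needs a shader fallback.");
    device->Release();
    return 0;
}
```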
So, conclusion: raytracing should be add-on cards. AMD should team up with the raytracing company, whatchamacallit, and release one, as well as a chiplet to be included in future Ryzen releases.
...PowerVR is the one. Wonder if they are any good for RT.
It was my understanding that few are using CryEngine. I thought there was a migration to UE basically across the board, due to the ease of development and lower cost...
At worst, it's a pretty good demonstration of their tech, all things considered, but I wonder if they have the funds, like Epic, to push it. Devs and engineers at Crytek are doing amazing work on the budget they have, but Epic seems to be trying to bury them, and you are right, they will likely attempt to copy it.

Many have made the jump, yes. That's probably why Crytek is trying to show off: to capture clients. We'll probably see a similar update for UE in the next few months. Epic wants you to use Epic tools, not Nvidia's. Any engine maker will support other companies' tech when they absolutely have to; otherwise, they'll develop their own integrated code.
Links to performance differences using DXR between Radeon VII and 2080 Ti: [two attached performance charts]
Nvidia should take notes; that's how you demonstrate raytracing. Less bullshit, more delivery.

A canned demo instead of actual games? No. That's not delivery of anything.
As if this were the only way to do it...
How are they even testing in DXR when, AFAIK, you need vendor drivers that support DXR and AMD doesn't provide any?
they should all look the same if calculating the same number of rpp. Is that not accurate?

I think no. This is just based on my use of raytracers for arch viz over 15 years. They may use the same basic fundamental theories, but the light-path models, materials, atmospherics, cameras, tonemapping... all have variations which result in the final pixels looking different. Most of the time the difference is due to the engineers trying to find ways to boost render times. I've even talked to some of the developers, and they say "close enough in realism, but saves 10x the time, so it's worth it."
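To illustrate just the tonemapping point: here are two common operators, Reinhard and a simple clamp-plus-gamma, applied to the same radiance values. Same input math, different final pixels; the sample values are arbitrary:

```cpp
// Two standard tonemapping operators mapping identical HDR radiance
// to display values: the "same" render diverges at the very last step.
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Reinhard operator: L / (1 + L)
double reinhard(double L) { return L / (1.0 + L); }

// Simple clamp to [0,1] followed by gamma 2.2
double clampGamma(double L) {
    double c = L > 1.0 ? 1.0 : L;
    return std::pow(c, 1.0 / 2.2);
}

int main() {
    // Identical radiance from identical ray math...
    for (double L : {0.18, 1.0, 4.0}) {
        std::printf("L=%.2f  reinhard=%.3f  clamp+gamma=%.3f\n",
                    L, reinhard(L), clampGamma(L));
    }
    // ...but the displayed pixels differ, before we even vary
    // light-path models, materials, or atmospherics.
    return 0;
}
```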
Compare the performance gap between a 1080 Ti and an RTX 2080 and a Titan V and your "theory" falls flat on its face, sorry.

?? How so? I am unaware that the 1080 Ti / 2080 / Titan V have dedicated shaders just for raytracing; they are all still performing double duty if the Tensor cores are not used (2080 / Titan V). You would also need drivers to support such a function, and none exist. I am also unaware of any 2080s or Titan Vs without Tensor cores. Not to mention the FP16/FP32 differences between the 1080 Ti and the 2080 / Titan V.