Discussion in 'HardForum Tech News' started by cageymaru, Mar 15, 2019.
Yeah, but let’s see you try to run Space Invaders on that!
This is my impression also. I am unsure where the idea that ray-tracing can only be done on RTX cards has come from. RTX is just NVIDIA’s platform name, part of which includes ray-tracing. In their white papers, and even on their blog, NVIDIA has been clear that RT cores simply accelerate parts of the ray-tracing pipeline (which they also state can currently be done using traditional shaders), and to my knowledge they aren't even directly exposed. As long as an application incorporates ray-tracing through APIs like DirectX or Vulkan, NVIDIA's driver (i.e. RTX) will handle the rest.
In the case of this demo, Crytek already states that they will be optimizing their ray-tracing implementation for DX12 (and I would assume DXR) and Vulkan, meaning that any rays Cryengine does trace in real time will be accelerated by the RT cores. Depending on what that number is (and I would guess it's quite small in this demo video, since it runs in real time on a Vega 56), the speed-up from an RTX vs. a non-RTX GPU may not be dramatic. However, unless Crytek is doing some magic and somehow actually tracing rays for every pixel, an RTX card would let them either trace more rays at the same performance target, or potentially even path trace (one can dream!).
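To put "how many rays" in perspective, here's a quick back-of-envelope sketch. The 1 ray per pixel budget is an assumption for illustration (not what Crytek necessarily does), and the ~10 Gigarays/s figure is NVIDIA's own marketing number for the 2080 Ti:

```python
# Back-of-envelope: rays needed for 1 primary ray per pixel at 1080p / 60 fps.
width, height, fps = 1920, 1080, 60
rays_per_pixel = 1  # assumed budget; hybrid renderers typically trace far fewer
rays_per_second = width * height * fps * rays_per_pixel
print(f"{rays_per_second / 1e6:.0f} Mrays/s")  # ~124 Mrays/s

# Compare against NVIDIA's quoted ~10 Gigarays/s for the 2080 Ti:
headroom = 10e9 / rays_per_second
print(f"~{headroom:.0f}x headroom")  # ~80x
```

Even if the marketing number is optimistic, the gap suggests why a sparse, partial-resolution ray budget can be viable on shader cores alone.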
I've been waiting for someone to do what Crytek did. What Nvidia has done isn't new. It just isn't; it was done way back in 2010 and it never took off. The simple reason is that whatever the ASIC can do, a Xeon does nearly as well and can also run other software. That's the problem with Nvidia's ray-tracing ASIC: it does nothing but ray tracing.
Fast forward a few years and people have learned to do real-time ray tracing on a GPU through hybrid ray tracing. Which is a huge achievement, since GPUs don't have good branch prediction like CPUs do. There is a method, whose name I forget, that works around this, and that's what those Japanese programmers did.
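Whatever the scheduling trick was, the per-ray work itself is plain intersection math. A minimal ray-sphere test in Python (a toy sketch, not Crytek's or anyone's actual kernel) shows the inner loop a shader-based tracer evaluates for every ray/primitive pair:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None if the ray misses.
    `direction` must be normalized. A BVH only cuts down how many of
    these tests run per ray; the math itself is this simple."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1 for a unit direction)
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # reject hits behind the ray origin

# Ray from the origin along +z toward a unit sphere centered at (0, 0, 5):
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

There's no branchy control flow here; the divergence problems show up when rays scatter into different parts of the scene and the acceleration-structure traversal takes different paths per ray.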
My thinking is that someone could actually combine CPU+GPU to do what Nvidia does, or better. I don't think that's what Crytek is doing here, as I assume they're using the GPU. But you've got to remember that AMD GPUs have really good compute performance, which is why they were heavily favored over Nvidia for crypto mining. Maybe this method works really well on AMD but not so much on Nvidia cards, which may explain why Nvidia put in ASICs.
You know what we need? We need that Quake 2 Ray Tracing mod to use this method to see how it compares to Nvidia's RTX.
Putting dedicated ray-tracing hardware into a GPU for real-time rendering isn't new?
Assuming there is no trickery involved this is quite impressive. I'm assuming the video is a sales pitch trying to get Devs to license their engine for the next gen of consoles which are both probably using Navi.
Still, nobody else has done it either. What Nvidia has done is find uses for its tensor hardware. That is hardly the same thing.
Don't get me wrong: if they're putting tensor cores in their GPUs for the AI business anyway, it's wise to find game-related uses for them so you're not shipping parts with 20% of the die doing nothing. Still, as many people have noted, using tensor hardware for ray casting is smart, but it's probably not the best way to build a hybrid ray-traced renderer anyway. There is a reason Nvidia has to apply a ton of denoising to make their tensor-powered tech work.
As others have said... bring on the Quake mod spin that uses this method and let's do some IQ and performance comparisons. (My money is on the GPU-agnostic shader method on both scores... it should be easier to work with and faster.)
AMD has more than hinted that their next-gen console parts will be capable of hybrid tracing. It's very possible this is the very tech they have been hinting at; MS has always said DX ray tracing could use tensor hardware or shaders to do the work.
Should be an interesting year in GPUs as more Navi stuff leaks out.
Looked mind-blowing in 4k - I liked how everything looks so real.
there are no concrete standards for how rendering engines handle physics models, so even the top non-realtime ray tracers all look different, and you have to work to make them look similar...
in other words, we're still in the early days of figuring out "technically" accurate rendering, so what "should" be isn't a realistic thing to expect
It would be interesting if a game engine were ever explicitly built around MIDI-sequenced pixel rendering. I mean, think of black MIDI and all the note changes it's capable of sequencing, rather than individually compute-shading pixels?
Dayyyuuuum. The Crytek guys are hella competent. This is crazy good. Looks fluid.
dude, send your resume to crytek asap, they need your help
Ermmmhh......DXR is hardware agnostic?
This is not very useful without comparing the performance of different SKUs.
Both the 1080 Ti (Pascal) and the Titan V (Volta) are able to run DXR.
The 1080 Ti has just about 10% of the DXR performance of the 2080 Ti... so again:
Being able to run DXR is not the main issue... the performance (and the performance gap) is.
Nvidia should take notes; that's how you demonstrate ray tracing.
Less bullshit, more delivery. They allowed FreeSync after years to try to take market share; now their only other advantage is gone, and it was done on a two-year-old GPU... good job, Crytek.
Ray tracing has been well demonstrated for decades...
G-Sync was first and is still a superior implementation, and aside from that they still have faster and more efficient GPUs...
Well yeah, good job on the tech demo video!
Now let's see it in games, and let's see performance by independent reviewers.
Every instance of RTX ray tracing feels a bit off to me. It's like every demo or bit of gameplay I have seen is either over-implemented or not noticeable. This didn't feel that way. Very impressive.
Also, any implementation is cheating unless it computes the photon path of every ray of light as the basis for rendering.
Links to performance differences using DXR between Radeon VII and 2080 ti:
So it will work on any card. That is great news. I guess, as it stands, it will be a while (again) before enough titles have it to make it worthwhile.
Nice to see that on a Vega too.
I don't know enough about RT and how it works, but it seems it won't take anything away from RTX at the moment.
Not that there is much to take away anyhow.
This time next year should be interesting.
More on the primary factor (performance) being ignored here:
DXR is HW-agnostic; that is the point of a system API.
BUT it depends on the HW vendor providing a DXR driver implementation.
Which AMD doesn't do. So this demo is definitely not using DXR, but some kind of custom RT implementation built on common DX12 features already supported.
Longer term, you can expect most games with RT to use the DXR API.
You got that one backwards.
DXR is DirectX Raytracing... what RTX does is provide a dedicated hardware path (ASIC) so DXR doesn't have to run on the shader cores.
The shader cores can then be used for rasterization and don't compete with RT for the SAME resources.
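The contention argument is easy to see with a toy timing model. The millisecond numbers below are made up purely for illustration; real GPUs overlap work far less cleanly than a `max()`:

```python
# Toy frame-time model (milliseconds, invented numbers):
raster_ms = 10.0  # rasterization work occupying the shader cores
rt_ms = 6.0       # ray-tracing work for the same frame

shared = raster_ms + rt_ms      # RT runs on the same shader cores: work serializes
dedicated = max(raster_ms, rt_ms)  # RT runs on its own hardware: work can overlap

print(shared, dedicated)  # 16.0 vs 10.0 ms, i.e. ~62 vs 100 fps
```

The point is only qualitative: with shared resources, ray tracing's cost adds to the frame; with a dedicated unit, it can hide behind the raster work.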
And be careful about claiming this is RTX 20x0-level performance, because you are lacking a vital component:
this demo's performance on RTX 20x0 hardware.
Until you have that... no comparisons can be made.
So much for the $1,200-plus video cards, lol. I always thought that developers could do WAY more in-engine than the bullshit RTX crap.
Nvidia throwing out the bamboozle.
The level of technical insight (or lack thereof) in this thread is sad.
That DXR works on non-RTX hardware was never a secret?
The thing that RTX (RT cores + Tensor cores) did... was give a HUGE performance benefit... making it PLAYABLE... unlike the single-digit performance of DXR when running on the shaders.
Again, until we have performance numbers from different SKUs... NOTHING about this is news.
As for ignoring things: non-RTX cards' shaders are performing double duty, whereas RTX cards offload the ray-tracing load onto their tensor cores, meaning their shaders are not performing double duty. If you added to a non-RTX card the equivalent number of shaders as there are tensor cores on an RTX card, and dedicated those shaders to ray tracing only, I suspect the performance gap would be eliminated, if those shaders didn't outperform the tensor cores outright.
Compare the performance gap between a 1080 Ti, an RTX 2080, and a Titan V, and your "theory" falls flat on its face, sorry.
Wait, ray tracing follows physically based light rendering. That's what I meant when I said physics; light and math are just that. There's no faking it. You can ray trace with more or fewer rays per pixel, but the results should all look the same if they calculate the same number of rpp. Is that not accurate?
That... is the exact same thing I (and others) have said:
If it's the "translate back to DXR" part that confused you: Nvidia is, after all, a middleman between DX and the gamer, and their driver does precisely that. It translates DX code into Nvidia proprietary code to run faster and closer to the metal, calculates the results, then translates them back into DX-interpretable code. I mean, that's what all GPU drivers do.
Crytek does some amazing things with software, but it was my understanding that few are using Cryengine. I thought there was a migration to UE basically across the board, due to the ease of development and lower cost...
That's a solid demo and performance on a Vega 56. The nvidia haters are still holding onto the ohhh shiny bit when it comes to RT, but this is agnostic so we can stop that. The reflections and lighting really help with immersion and what they put together looked pretty dang good.
So, conclusion: Taytracing should be add-on cards. AMD should team up with the Taytracing company, whatchamacallit, and release one, as well as a chiplet to be included in future Ryzen releases.
....PowerVR is the one. Wonder if they are any good for RT.
Is that some kind of Taylor Swift tracking app?
On topic, happy to see more DXR, it can only mean good things.
Many have made the jump, yes. That's probably why Crytek is trying to show off, to capture clients. We'll probably see a similar update for UE in the next few months. Epic wants you to use Epic tools, not Nvidia's. Any engine maker will support other companies' tech when they absolutely have to, otherwise, they'll develop their own integrated code.
At worst it's a pretty good demonstration of their tech, all things considered, but I wonder if they have the funds, like Epic, to push it. The devs and engineers at Crytek are doing amazing work on the budget they have, but Epic seems to be trying to bury them and, you're right, will likely attempt to copy it.
As if this were the only way to do it...
A canned demo instead of actual games? No. That's not delivery of anything.
If all Nvidia had done was put out a demo, the price-butthurt brigade would really be losing their minds.
It's the most efficient (performance-wise) way of doing it... feel free to offer an alternative that isn't wishful thinking?
How are they even testing in DXR when, AFAIK, you need vendor drivers that support DXR and AMD doesn't provide any?
Don't know about the RTX stuff, but that demo is sweet.
I assume there's a reference or fallback layer.
When the DX12 SDK first launched, I'm pretty sure I remember using the WARP device temporarily. It was a pre-release version or something; the API wasn't even finalized yet, and I could still develop without supported hardware/drivers.
I think not. This is just based on my use of ray tracers for arch viz over 15 years. They may use the same basic fundamental theories, but the light-path models, materials, atmospherics, cameras, tonemapping... all have variations which result in the final pixels looking different. Most of the time the difference is due to the engineers trying to find ways to cut render times. I've even talked to some of the developers, and they say "close enough in realism, but saves 10x the time, so it's worth it."
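The rays-per-pixel point can also be seen numerically: a Monte Carlo pixel estimate converges as 1/sqrt(N), which is why low-rpp real-time tracers lean so hard on denoisers. A toy sketch (the "radiance" function here is a made-up stand-in, nothing engine-specific):

```python
import random
import statistics

def pixel_estimate(samples, seed):
    """Average `samples` random evaluations of a toy radiance function.
    x**2 over [0, 1) stands in for 'light arriving along a random ray'."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(samples)) / samples

# Measure pixel-to-pixel noise (stdev across 200 'pixels') at several budgets:
for spp in (1, 16, 256):
    estimates = [pixel_estimate(spp, seed) for seed in range(200)]
    print(spp, round(statistics.stdev(estimates), 3))
# Noise drops roughly 4x for each 16x increase in samples (1/sqrt(N) scaling).
```

So two tracers using the same rpp agree on the *expected* image, but everything layered on top (material models, tonemapping, the denoiser itself) still makes the final pixels differ.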
?? How so? I am unaware that the 1080 Ti/2080/Titan V have dedicated shaders just for ray tracing; they are all still performing double duty if the tensor cores are not used (2080/Titan V). You would also need drivers to support such a function, and none exist. I am also unaware that there are any 2080s or Titan Vs without tensor cores. Not to mention the FP16/FP32 differences between the 1080 Ti and the 2080/Titan V.