Crytek Ray Tracing Demo Neon Noir out, hardware agnostic!

It's a hexagon, and no, that's not it. It's clear across multiple casings that a hexagonal shape is being reflected.

You are right, it's a hexagon, but my point stands. It is an illusion. You are ignoring the effect water has on a reflection, especially disturbed water. It is not a hard surface such as a piece of glass, but a moving, fluctuating surface. Not to mention the camera movement, as that hexagon reflection only lasts a moment. We have all seen things appear misshapen in real life purely because of the angle we are viewing an object from, shadows, and lighting. You can't expect that not to happen when rendering models of real life.
 
Here are some details from the devs about this:
https://www.cryengine.com/news/how-we-made-neon-noir-ray-traced-reflections-in-cryengine-and-more

Tidbits:
Yes. Our current implementation is both API and hardware agnostic. It runs in 1080p with 30 fps on a Vega 56. Reducing the resolution of reflections allows much better performance without too much quality loss. For example in half-resolution mode it runs 1440p / 40+ fps.

However, RTX will allow the effects to run at a higher resolution. At the moment on GTX 1080, we usually compute reflections and refractions at half-screen resolution. RTX will probably allow full-screen 4k resolution. It will also help us to have more dynamic elements in the scene, whereas currently, we have some limitations.

To ensure our ray-traced reflections performed as well as possible we didn’t use the original render geometry for reflections, but rather a less detailed version that is easier to process. This optimization is similar to traditional LODs (level of detail objects) that replace the original render geometry as you move away from it, and the object gets smaller on the screen.

In fact, all the objects in the Neon Noir Demo use low-poly versions of themselves for reflections. As a few people have commented, it is noticeable on the bullets, but it’s a lot harder to spot in most cases. That said, we can easily fix the reflection on the bullets by using more detailed LODs or just not using LODs at all.
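For anyone curious what those low-poly reflection proxies might look like in practice, here is a minimal C++ sketch. None of these types or functions come from CryEngine; they are hypothetical stand-ins meant only to show the gist: reflection rays use a coarser LOD than the primary view would, and reflections are traced at a fraction of screen resolution.

[CODE]
#include <cstdio>
#include <vector>

// Hypothetical mesh representation: each LOD has progressively fewer triangles.
struct Mesh {
    const char* name;
    int triangleCount;
};

struct RenderObject {
    std::vector<Mesh> lods;   // lods[0] = full detail, higher index = coarser
    float screenCoverage;     // 0..1, fraction of the screen the object covers
};

// Primary rendering picks an LOD from screen coverage, as usual.
const Mesh& pickPrimaryLod(const RenderObject& obj) {
    if (obj.screenCoverage > 0.25f) return obj.lods[0];
    if (obj.screenCoverage > 0.05f) return obj.lods[1];
    return obj.lods.back();
}

// Reflection rays drop to a coarser LOD than the primary view,
// since small errors are hard to notice in a reflection (bullets excepted).
const Mesh& pickReflectionLod(const RenderObject& obj) {
    const Mesh& primary = pickPrimaryLod(obj);
    for (size_t i = 0; i + 1 < obj.lods.size(); ++i)
        if (&obj.lods[i] == &primary) return obj.lods[i + 1];
    return obj.lods.back();
}

int main() {
    RenderObject barrel{{{"barrel_lod0", 20000}, {"barrel_lod1", 5000}, {"barrel_lod2", 800}}, 0.3f};

    // Tracing reflections at half resolution means only a quarter of the rays.
    const int fullResRays = 1920 * 1080;
    const int halfResRays = (1920 / 2) * (1080 / 2);

    std::printf("primary mesh:    %s (%d tris)\n", pickPrimaryLod(barrel).name,
                pickPrimaryLod(barrel).triangleCount);
    std::printf("reflection mesh: %s (%d tris)\n", pickReflectionLod(barrel).name,
                pickReflectionLod(barrel).triangleCount);
    std::printf("rays per bounce: %d at full res vs %d at half res\n", fullResRays, halfResRays);
    return 0;
}
[/CODE]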
 
The video description says it was done in real time, and the youtube video itself is 4K @ 30 FPS.


The video is upscaled and finished at 4K/30. I think the demo itself was 1080p/30. YouTube gives you more bitrate if you upscale your videos. More still if you capture at 60fps or frame-double so that YouTube thinks it's 60fps.
 
Told everyone it was horseshit - no way it was running at native 4k, which is what all the hype was trumpeting. It's just 1080p at 30fps.

And guess what - there are RTX full-resolution demos that run at above 20fps on the GTX 1080, so a little optimization is all it took here.

Looks more like a cheap demo to me, optimized for the single card. Show me the real performance improvement vs RTX (re-create one of their demos on your engine) if you want to convince me you have a Golden Goose here.
 
Told everyone it was horseshit - no way it was running at native 4k, which is what all the hype was trumpeting. It's just 1080p at 30fps.

And guess what - there are RTX full-resolution demos that run at above 20fps on the GTX 1080, so a little optimization is all it took here.

Looks more like a cheap demo to me, optimized for the single card. Show me the real performance improvement vs RTX (re-create one of their demos on your engine) if you want to convince me you have a Golden Goose here.

That’s a strangely negative response. It’s great they made a demo that is hardware agnostic. Of course it’ll run better with hardware designed for RT.

But like they said, lower the resolution it’ll run even better. Given current reflections in games are run at much lower resolutions I see no problem with this.

At the end of the day I just want hardware agnostic RT that the vast majority of machines can run. We all know what happens to proprietary tech....
 
That’s a strangely negative response. It’s great they made a demo that is hardware agnostic. Of course it’ll run better with hardware designed for RT.

But like they said, lower the resolution it’ll run even better. Given current reflections in games are run at much lower resolutions I see no problem with this.

At the end of the day I just want hardware agnostic RT that the vast majority of machines can run. We all know what happens to proprietary tech....

Google DXR...you just failed...bigtime.
 
I know what DXR is. Where is it actually implemented, GPU agnostic, and running on Vegas?

DXR is vendor agnostic...that AMD's driver lacks support (AMD's side, not Microsoft's) doesn't alter this fact.

But I get it...NV is evil, DXR is evil...until AMD enables driver/hardware support...and RTX kills kittens...right?
 
DXR is vendor agnostic...that AMD's driver lacks support (AMD's side, not Microsoft's) doesn't alter this fact.

But I get it...NV is evil, DXR is evil...until AMD enables driver/hardware support...and RTX kills kittens...right?

DXR might technically be vendor agnostic, but the current implementation is anything but that. Game developers would have to program completely differently for anything not nVidia.

I never said nVidia is evil, although, since you brought it up, it's no secret they push proprietary tech, which is not good for adoption, competition, or, in the end, gamers. More importantly, it increases the risk of failure.

I didn't even mention AMD vs nVidia. You're the one with the hard-on for nVidia. I have a 2080ti and couldn't give a damn about RT in its current form.
 
DXR might technically be vendor agnostic, but the current implementation is anything but that. Game developers would have to program completely differently for anything not nVidia.

I never said nVidia is evil, although, since you brought it up, it's no secret they push proprietary tech, which is not good for adoption, competition, or, in the end, gamers. More importantly, it increases the risk of failure.

I didn't even mention AMD vs nVidia. You're the one with the hard-on for nVidia. I have a 2080ti and couldn't give a damn about RT in its current form.
I don't think you're understanding this right. DXR is hardware agnostic, just like the rest of the DirectX suite is. It accepts commands to do ray tracing in DirectX format and hands them off to the graphics card driver, which must then decide how to do whatever the task is. On an nVidia card, the driver has the option of recruiting the ray-tracing and matrix accelerators (the "RT cores" and tensor cores) if the GPU is so equipped, but it can apparently also do the same operation using only the CUDA cores, at the obvious expense of it running very slowly.

I don't see any reason any other vendor, such as AMD or Intel, couldn't use the same or similar math to what nVidia uses to do that operation on their own floating point compute units.
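To make the "driver decides" part concrete, here is a minimal sketch of my own (untested, Windows-only, error handling omitted, link against d3d12.lib) of how an application asks Direct3D 12 whether DXR is available at all. The call is the same regardless of vendor; what happens underneath - RT cores, a shader fallback, or nothing - is entirely the driver's business.

[CODE]
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter, whichever vendor that is.
    ComPtr<ID3D12Device5> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No suitable D3D12 device available.");
        return 1;
    }

    // Ask the driver what level of DXR it exposes; the query is vendor agnostic.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opts5, sizeof(opts5));

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::puts("Driver reports DXR support (how it accelerates it is its own business).");
    else
        std::puts("This driver reports no DXR support.");
    return 0;
}
[/CODE]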
 
I don't think you're understanding this right. DXR is hardware agnostic, just like the rest of the DirectX suite is. It accepts commands to do ray tracing in DirectX format and hands them off to the graphics card driver, which must then decide how to do whatever the task is. On an nVidia card, the driver has the option of recruiting the ray-tracing and matrix accelerators (the "RT cores" and tensor cores) if the GPU is so equipped, but it can apparently also do the same operation using only the CUDA cores, at the obvious expense of it running very slowly.

I don't see any reason any other vendor, such as AMD or Intel, couldn't use the same or similar math to what nVidia uses to do that operation on their own floating point compute units.

Nah, I understand just fine. I followed this very closely and read/watched everything I could find.

Currently, the way it is, game developers need to program specifically for each vendor. So all their work using nVidia's toolkits is not transferable.

Ideally Microsoft would have made a universal tool kit similar to nVidia’s that would have been hardware agnostic and made it easy for developers.

So we have them developing games the traditional way, plus they have to add on RT functionality and it’s (currently) vendor specific programming.
 
Nah, I understand just fine. I followed this very closely and read/watched everything I could find.

Currently, the way it is, game developers need to program specifically for each vendor. So all their work using nVidia's toolkits is not transferable.

Ideally Microsoft would have made a universal tool kit similar to nVidia’s that would have been hardware agnostic and made it easy for developers.

So we have them developing games the traditional way, plus they have to add on RT functionality and it’s (currently) vendor specific programming.
http://cwyman.org/code/dxrTutors/dxr_tutors.md.html
I don't have time to work through it today (and I also don't have an RTX card handy to test), but there's an example of a theoretically vendor agnostic application that uses DXR for ray tracing support.

How well it works I can't say, but it doesn't appear that you have to do anything specific for nvidia cards. It does say that it only works on specific nvidia cards, but it appears that has more to do with the need for the graphics device (as reported to directx) to support DXR, and at the time they wrote it, that was just nVidia. It also talks about how a later Windows update added software emulation, but they hadn't tested on that version.

So, I suspect you're right in the sense that nvidia initially provided a way to access the ray tracing hardware prior to the release of the compatible version of windows, but it also appears that DXR is now supported via a Microsoft API that abstracts the specific hardware away.
 
Nah, I understand just fine. I followed this very closely and read/watched everything I could find.

Currently, the way it is, game developers need to program specifically for each vendor. So all their work using nVidia's toolkits is not transferable.

Ideally Microsoft would have made a universal tool kit similar to nVidia’s that would have been hardware agnostic and made it easy for developers.

So we have them developing games the traditional way, plus they have to add on RT functionality and it’s (currently) vendor specific programming.

I'm going to oversimplify here, but RTX is basically nvidia's DXR implementation that runs on RT cores. On Pascal, DXR runs on shaders. AMD could make DXR run on Vega, but it would just run as badly as Pascal.
 
RTX = NVIDIA GPU's with DXR support.
DXR = Vendor agnostic DX12 raytracing API.


Simple as that...if you claim something else, we have left the realm of facts and entered the domain of fanboy FUD.

That you don't understand something is your problem.
 
RTX = NVIDIA GPU's with DXR support.
DXR = Vendor agnostic DX12 raytracing API.

We might be able to break it down like this:

Games that currently support DXR, and by extension RTX, should support ray tracing on an AMD Vega or upcoming Intel GPU with a DXR-enabled driver solely through DXR today.

But this may or may not be the case. As Dayaks seems to be inferring from publicly available information, game developers are targeting DXR and RTX. The two technologies may not be easily divorced and may require significant effort from developers in order to support non-RTX implementations of DXR. This consideration is compounded by the understanding that a pure DXR implementation without RTX acceleration is, in most significant uses of ray tracing, just far too slow. On RTX hardware, RTX is needed to get anywhere close to playable performance. If competitors do not have some means of getting to similar levels of performance, i.e. either stupendously beefy RT hardware exposed to DXR or perhaps their own RTX-like solutions, game developers may not bother with a porting effort.

So, until other GPU vendors provision their hardware for ray tracing in their publicly released drivers, we don't really know what that's going to look like, except that without dedicated RT hardware, it's going to be really slow.
 
The intro said "API agnostic", which makes me wonder if this demo is using DXR at all (I'm going to guess no).
 
The intro said "API agnostic", which makes me wonder if this demo is using DXR at all (I'm going to guess no).

Do you know what DXR is?
Your statement makes me doubt that you do...
 
Side note:
People should stop comparing this implementation to the ones you see in games...too many shortcuts, visual artifacts, lower fidelity, etc. compared to games already out.

I bet that is why the heading is worded as it is...to muddy the waters...NVIDIA is evil and all that, right?
 
It's not comparable- but it is likely a way forward that 'smooths' the transition to using RT hardware, versus say the jarring example that BFV set.

The first big step is just getting DXR / Vulkan ray tracing into game engines and being used by released games in some initial fashion. Next would be finding ways to use it without tanking performance, but perhaps even enhancing performance, while delivering qualitative visual upgrades. One would expect these upgrades to still fall significantly short of the promise of ray tracing (and path tracing). This is what game developers are working on now. Next is to figure out how to 'dilute' the technology so that it will run without ray tracing hardware, which is what Crytek is showing off, and that gets us back to where we were before RTX: more hardware means more performance in a fairly linear fashion.

This is how ray tracing is going to be used on upcoming games for the next console refresh: limited effects with very prudent implementations. It will make a tremendous difference versus previous generations and that work in parallel with desktop ports (and likely mobile ports too, when mobile ASICs catch up) will likely be very portable.
 
Side note:
People should stop comparing this implementation to the ones you see in games...too many shortcuts, visual artifacts, lower fidelity, etc. compared to games already out.

I bet that is why the heading is worded as it is...to muddy the waters...NVIDIA is evil and all that, right?

Why does this demo have your panties in a bunch? People want a non proprietary way of running ray tracing that actually performs. RTX as it stands right now is a painful joke unless you're running a 2080ti or better and settle for 1440p resolution or lower. Also no one gives a damn about visual fidelity if the game runs like a slide show.
 
People want a non proprietary way of running ray tracing that actually performs.

DXR isn't proprietary.

RTX as it stands right now is a painful joke unless you're running a 2080ti or better and settle for 1440p resolution or lower.

It works quite well and significantly enhances visuals on multiple AAA games. That's far from a painful joke, especially since it's the only game in town.

Also no one gives a damn about visual fidelity if the game runs like a slide show.

Well, if 'runs like a slideshow' were actually true, then yeah. But since it isn't, there's real value in RTX.
 
DXR isn't proprietary.

I'm fascinated that some people are having such a hard time understanding this very simple concept. Why, I wonder?

DXR - raytracing via DirectX 12. Hardware agnostic, GPUs must support DX12.
RTX - Nvidia's way of accelerating DXR. Proprietary.

Right now, the only way to accelerate DXR via hardware on a GPU is RTX. When AMD releases DXR hardware-capable cards, we'll have another proprietary path to accelerate raytracing DX12 code, that is, DXR.

Why are some people so confused? It's literally how GPUs have always worked: AMD and NV both supported, say, DX11, but the way each architecture accelerated DX11 code has always followed proprietary paths. What is so hard about this?
 
I'm fascinated that some people are having such a hard time understanding this very simple concept. Why, I wonder?

DXR - raytracing via DirectX 12. Hardware agnostic, GPUs must support DX12.
RTX - Nvidia's way of accelerating DXR. Proprietary.

Right now, the only way to accelerate DXR via hardware on a GPU is RTX. When AMD releases DXR hardware-capable cards, we'll have another proprietary path to accelerate raytracing DX12 code, that is, DXR.

Why are some people so confused? It's literally how GPUs have always worked: AMD and NV both supported, say, DX11, but the way each architecture accelerated DX11 code has always followed proprietary paths. What is so hard about this?

Your post makes too much sense. Therefore, it is incomprehensible to many.
 
I'm fascinated that some people are having such a hard time understanding this very simple concept. Why, I wonder?

DXR - raytracing via DirectX 12. Hardware agnostic, GPUs must support DX12.
RTX - Nvidia's way of accelerating DXR. Proprietary.

Right now, the only way to accelerate DXR via hardware on a GPU is RTX. When AMD releases DXR hardware-capable cards, we'll have another proprietary path to accelerate raytracing DX12 code, that is, DXR.

Why are some people so confused? It's literally how GPUs have always worked: AMD and NV both supported, say, DX11, but the way each architecture accelerated DX11 code has always followed proprietary paths. What is so hard about this?

The confusion comes from the fact that games that support ray tracing advertise(d) themselves as supporting RTX, and not DXR. Therefore it seems like they are supporting a proprietary feature that can only be used on the RTX line of cards. There were no "NV's implementation of DX11" games; there were only DX11 games.
 
The confusion comes from the fact that games that support ray tracing advertise(d) themselves as supporting RTX, and not DXR. Therefore it seems like they are supporting a proprietary feature that can only be used on the RTX line of cards. There were no "NV's implementation of DX11" games; there were only DX11 games.

So... nobody should trust marketing, and they should do their own research on how things work. Isn't that how it has always been? It's going to be a mess if, once AMD releases their DXR-capable hardware, we refer to it in games as "with RTX and XYZ support!" instead of just "supports DXR."
 
Could RTX be considered roughly similar to Gameworks? Proprietary but open?

To some degree I'd accept that description as fair. Gameworks is proprietary, not open, but is also standards compliant- i.e., the Gameworks framework is implemented within DirectX. Perhaps there are Nvidia-specific extensions that are available, but we have not really seen evidence of these. More than likely anything specific to Nvidia in Gameworks is for optimizations.

RTX is entirely an extension on top of DXR. It's proprietary, not open, and may or may not be compliant to any standards. We don't really know what other manufacturers (Intel, AMD, mobile...) will do with RT and DXR. Perhaps they'll come up with something similar to RTX for denoising, perhaps that will become a standard, even part of DirectX / DXR, or perhaps they'll all do it their own way.
 
Sorry, I didn't mean open so much as I meant it will run on AMD. I conveyed that poorly.
 
Sorry, I didn't mean open so much as I meant it will run on AMD. I conveyed that poorly.

We have no indication that games that leverage RTX functionality on Nvidia GPUs will be able to run those code paths on AMD GPUs. The DXR stuff should work.
 
DXR isn't proprietary.



It works quite well and significantly enhances visuals on multiple AAA games. That's far from a painful joke, especially since it's the only game in town.



Well, if 'runs like a slideshow' were actually true, then yeah. But since it isn't, there's real value in RTX.

Where did I say DXR is proprietary? I pointed out RTX on purpose, which is. Your standards for "performs well" and "significantly enhances visuals" are far different from mine; dropping below 60 fps for a few reflections seems a bit much to me. Then we haven't even factored in that you have to spend over 1000 dollars just to get that. Also, DLSS does not enhance visuals; from what I have seen it blurs and degrades the quality.

[image: Metro Exodus RTX performance chart]


That's one of the better cases for it, and those numbers still suck.
 
So... nobody should trust marketing, and they should do their own research on how things work. Isn't that how it has always been? It's going to be a mess if, once AMD releases their DXR-capable hardware, we refer to it in games as "with RTX and XYZ support!" instead of just "supports DXR."
You asked why there is confusion. I explained it. Then you come back with "people should do their research." Yeah, that's a circular argument. If everyone had a PhD in proprietary and open implementations of ray tracing, there would be no confusion in the first place.
 
You asked why there is confusion. I explained it. Then you come back with "people should do their research." Yeah, that's a circular argument. If everyone had a PhD in proprietary and open implementations of ray tracing, there would be no confusion in the first place.

Oh, no no, sorry, I definitely did not mean that to sound antagonistic at all - you know how text can be, with no context as to tone. You're absolutely right that the way it's being talked about is creating the confusion, 0 disagreement from me. The frustrating part is when people talk online about things they have no understanding of - again, not referring to you at all. But then again, what I just described = the internet. And yes, myself being a PhD, I'm not part of that confused and ignorant consumer mass, so it frustrates me, but that's not only when it comes to technology, ignorant people spreading FUD and misinformation are everywhere - seems like even more of them exist these days! Or maybe we just have much easier communication at everyone's fingertips.

Bottom line - you're absolutely right on the cause for confusion. I just expected [H] members to be more informed in general, but apparently that's a chimera anywhere these days. Apologies if my previous post sounded antagonistic, not my intention!
 
Where did I say DXR is proprietary? I pointed out RTX on purpose, which is. Your standards for "performs well" and "significantly enhances visuals" are far different from mine; dropping below 60 fps for a few reflections seems a bit much to me. Then we haven't even factored in that you have to spend over 1000 dollars just to get that. Also, DLSS does not enhance visuals; from what I have seen it blurs and degrades the quality.

Well, there's only one solution that runs DXR. RTX is not a prerequisite- it's what makes the performance acceptable in the first place.

[image: Metro Exodus RTX performance chart]

That's one of the better cases for it, and those numbers still suck.

Where do they suck?

Performance is subjective. ~60FPS (with good frametimes) with better visuals in a slowly paced game with variable v-sync would be fine. No, I'm not willing to take that hit in some cases, and yes, I'm willing to take it in others. I play on a 9900k with a 1080Ti, and I play on an ultrabook with integrated graphics.
 
Hi, Hello
Your friendly lurker here;
DXR is just the front gate; the RTX implementation still exists in the developer's (game) code, which calls on "RTX" features / instruction sets for the compute, and that is the black box people dislike. You should view DXR as a thin client, if you prefer. There is also a way to run the DXR fallback layer, which is the "DXR" people mean - though it's not the "real" or native ray tracing people mean or want.


you can find some samples here
https://github.com/microsoft/DirectX-Graphics-Samples/tree/master/Samples/Desktop/D3D12Raytracing
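As a purely conceptual sketch of that "front gate vs. fallback" split (hypothetical names, not the actual D3D12 or fallback-layer API), the decision roughly looks like this: either the driver exposes native DXR support, or the application has to have opted into a compute-shader fallback path, otherwise the effects simply don't run.

[CODE]
#include <cstdio>

// Purely illustrative; none of these names exist in D3D12 or the fallback layer.
enum class RtTier { None, Native };

// Pretend query of what the installed driver reports
// (see the earlier CheckFeatureSupport sketch for the real query).
RtTier queryDriverRtTier() { return RtTier::None; }

void dispatchRaysNative()   { std::puts("tracing via the driver's native DXR path"); }
void dispatchRaysCompute()  { std::puts("tracing in a compute-shader fallback"); }
void skipRayTracedEffects() { std::puts("ray-traced effects disabled"); }

int main() {
    // The game has to ship and opt into this path; it doesn't appear for free.
    const bool appShipsComputeFallback = true;

    switch (queryDriverRtTier()) {
    case RtTier::Native:
        dispatchRaysNative();       // driver maps DXR calls onto whatever hardware it has
        break;
    case RtTier::None:
        if (appShipsComputeFallback)
            dispatchRaysCompute();  // e.g. the fallback layer from the samples repo
        else
            skipRayTracedEffects(); // roughly why current DXR titles don't just run on any GPU
        break;
    }
    return 0;
}
[/CODE]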
 
DXR can be run via hardware acceleration or software (shaders). Regardless of how you do it, it's still DX12 code and still hardware agnostic. I don't know what you mean by "real"; even if you did it on a CPU it'd still be real ray tracing. It doesn't become "fake" because you do it in software vs. hardware.

Developers will implement RTX code to accelerate DXR, just like they've always implemented Nvidia/AMD code paths to run DX12, 11, 10, 9, and I can't recall anything prior to that. When AMD releases their accelerator, that code will also be implemented by developers. RTX and AMD's version are not the ray tracing itself; DXR is the code, and AMD/Nvidia just accelerate it in their own ways that make more sense for their own hardware.
 
DXR can be run via hardware acceleration or software (shaders). Regardless of how you do it, it's still DX12 code and still hardware agnostic. I don't know what you mean by "real"; even if you did it on a CPU it'd still be real ray tracing. It doesn't become "fake" because you do it in software vs. hardware.

Developers will implement RTX code to accelerate DXR, just like they've always implemented Nvidia/AMD code paths to run DX12, 11, 10, 9, and I can't recall anything prior to that. When AMD releases their accelerator, that code will also be implemented by developers. RTX and AMD's version are not the ray tracing itself; DXR is the code, and AMD/Nvidia just accelerate it in their own ways that make more sense for their own hardware.

The "real" was meant in lieu of conversation. Thats way I wrote "real" or native raytracing.

For the rest, and the 2nd paragraph, I couldn't agree less. It's all fake, and especially the RTX solution, mainly because it's a hybrid system and was never meant to be pure RT ;)

- Not to mention that most of the effects people get boners about have been visually achievable by programming vertex shaders since early DX8.1 implementations. Wait, let me guess, the 3DMark2001 lobby? (hate me more). But you can pretty much add such a vertex implementation to any texture you use, and you can have many layers so it will look real: water, a pool of water, a car, a window, whatever reflective surfaces on objects. It's all doable with little performance hit vs. an RT implementation.

People are just being taken for a ride with such low-quality effects. Sure, you'll likely raise the argument of cube maps, but you can create a cube map variant and update it at 40-60 fps to make it look smooth (it doesn't take that many FLOPS... Oblivion, Far Cry, and many others were already doing this, and Far Cry 1 actually had its shaders exposed so you could edit them), and you can even select which objects are to be reflected. A nice trick that Crytek used in its RT implementation is having two "worlds": basically you take the high-polygon objects and render them normally, and for reflections you render the bound low-poly objects (an ancient trick).

(Now, there are certain effects that are really hard to do with typical rasterization, like indirect shadow interaction and indirect light reflection - and more that I can't think of at the moment.)
But the effects they are advertising with RTX... aren't that hard to do if you understand how to program shaders, or at the very least have access to them (since most engines do not provide you the code base for the shaders, and do not allow you to reload them on the fly to see your changes - maybe if I get bored I'll play with the Far Cry 1 engine and add those eye-candy effects... or something).

Now let's jump back to the "code paths" of DirectX; they never were that. DX APIs were, and are, just calls telling the video card to do something... They (DX) have no idea how to do it. (The fallback in DXR has a little more, and it knows how to, but current implementations only use it as a form of API front end, and they will never work on a DXR-capable AMD card unless AMD tries to reverse engineer RTX. On this point, you should see how different cards interpret the same effects differently; the last one I remember was Ashes of the Singularity, where the snow/terrain shader was interpreted differently between nVidia and AMD, but the best and easiest example would be Republic Commando.) Back to RT: DXR will only fall back to non-native accelerator compute (like RT cores / PhysX 6.0) where it knows what it's doing, and even then either the game developer needs to make their code able to use the fallback, and/or the GPU maker needs to create drivers that will interpret the same DXR calls in "fallback" mode to execute on shaders. If that weren't the case, you would be able to run Metro with DXR on any GPU or CPU from the get-go, like you could PhysX in a couple of games.
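To illustrate the amortized cube-map trick described above (a generic sketch with made-up names, not Crytek's or anyone else's actual code), a rasterized reflection probe can render only a low-poly "reflection world", skip objects flagged as not worth reflecting, and refresh at a lower rate than the main frame rate:

[CODE]
#include <cstdio>
#include <vector>

// Hypothetical scene object: flagged for whether it shows up in reflections,
// and carrying a low-poly proxy used only for the reflection "world".
struct SceneObject {
    const char* name;
    bool visibleInReflections;
    int proxyTriangles;
};

// Stand-in for rasterizing one cube-map face of the reflection probe.
void renderCubeFace(int face, const std::vector<SceneObject>& scene) {
    for (const SceneObject& o : scene)
        if (o.visibleInReflections)
            std::printf("  face %d: draw %s (%d tris)\n", face, o.name, o.proxyTriangles);
}

int main() {
    const std::vector<SceneObject> scene = {
        {"city_block", true,  12000},
        {"hero_car",   true,   3000},
        {"particles",  false,     0},   // skipped: not worth reflecting
    };

    const int frameRate   = 60;  // main render loop
    const int refreshRate = 30;  // cube map refreshed at half rate to save fill/geometry cost
    const int step        = frameRate / refreshRate;

    for (int frame = 0; frame < 6; ++frame) {
        if (frame % step == 0) {
            std::printf("frame %d: refresh reflection cube map\n", frame);
            for (int face = 0; face < 6; ++face)
                renderCubeFace(face, scene);
        }
    }
    return 0;
}
[/CODE]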
 