Discussion in 'Video Cards' started by shad0w4life, Mar 15, 2019.
Looks VERY good, better than any implementation to date I'd say
Just watched that. Honestly looks better than RTX, LOL.
*That* was done with a Vega56?!?!?!?!?! Woooooooooow! nVidia's RTX is now, undoubtedly, a broken mess and complete gimmick.
That's about all I can type to sum up my thoughts in a simplistic manner.
Yeah, I love how Crytek did it on a Vega 56. Great that they picked AMD, and not even their best card. Good show.
Very nice demo. I noticed a few stutters (panning into the street from the window), but overall it looked really good.
Can I haz demo download?
So now I can get ray tracing and DX12 on Windows 7?? Haha.
Long live Win 7...
So when this gets implemented in to games and released, will RTX cards have any additional benefits from their tech?
Or will the tensors etc just be dormant?
Crytek undoubtedly has a DXR/Vulkan hardware ray-tracing renderer working, but this is likely aimed at potential engine licensee clients that want to hit a 'wider' base. What we don't know is what settings and performance we're seeing in the video.
The video description says it was done in real time, and the youtube video itself is 4K @ 30 FPS. Things get easier if you only have to hit 30 FPS, especially if the application is deterministic and doesn't have to worry about input from the user.
But I'd imagine that a production version of this that got made into a real game would use the RTX cores if they're available. I suspect it probably works by doing the acceleration-structure traversal for each ray in a regular shader, as opposed to delegating that to the RT cores.
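To make the "regular shader vs. RT core" distinction concrete, here's a toy CPU version of the ray/box intersection test at the heart of acceleration-structure traversal. This is the operation RT cores run in fixed-function hardware, and what a software fallback has to grind through per ray in a compute shader. All names here are illustrative, not from any real engine.

```python
# Slab test: the core primitive of BVH traversal. A hardware RT core
# does this (plus triangle tests) in fixed function; a software path
# runs it per ray, per node, in a regular shader.

def ray_aabb(origin, inv_dir, box_min, box_max):
    """Return True if a ray (origin, 1/direction) hits an axis-aligned box."""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# A ray shot down +x from the origin hits a unit box centred at (5, 0, 0):
origin = (0.0, 0.0, 0.0)
inv_dir = (1.0, float("inf"), float("inf"))  # direction (1, 0, 0); 1/0 -> inf
hit = ray_aabb(origin, inv_dir, (4.5, -0.5, -0.5), (5.5, 0.5, 0.5))
```

A real traverser runs this test against tens of nodes per ray, millions of rays per frame, which is why dedicated hardware for it matters.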
Would a driver be able to delegate the RT task to the RT cores?
Depends on the implementation, but that's theoretically what DXR is supposed to accomplish. The game tells DXR "do this ray tracing work" and DXR then gets it done, either by telling the GPU to do it using whatever hardware acceleration it has available, or perhaps as a compatibility mode implemented as a shader if RTX isn't available.
It's also possible that Crytek has come up with their own shader that doesn't rely on DXR, and in that case, it would likely be the game and not the driver that decides whether or not the RTX cores get used. Without more detail about how they're doing this, it's hard to say.
They'd have to program it to use the RT cores. It's not automatic, but nVidia provides toolkits built on top of DirectX to help.
DXR is hardware agnostic...but this debate is useless as long as there are no performance numbers from various SKUs...
CPUs can do ray tracing too...but the performance...well, it sucks compared to GPUs.
That's the issue though- we have a video. Post a demo that I can run and that we can compare results with!
It really isn't. The technique Crytek is using here is SVOGI, which uses voxel-based cone tracing. That's faster than actual ray tracing, but it's also going to produce more artifacts (not really apparent in this small-scale demo). Nvidia RTX is meant for accelerating ray-tracing tasks. You could build a hardware-agnostic ray tracer but then implement acceleration using Nvidia RT cores, which is actually exactly what Crytek plans to do for these features in the future.
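For anyone unfamiliar with how cone tracing differs from shooting individual rays, here's a hypothetical 1-D sketch of the idea: the scene is voxelised into a density field with a mip chain, and a "cone" marches along a ray, taking bigger steps and reading coarser mips as it widens, compositing occlusion front to back. Real SVOGI traces a sparse 3-D structure; this only shows the principle.

```python
# Toy 1-D cone trace through a mipmapped density field. Field layout
# and step rules are invented for illustration.

def build_mips(density):
    """Halve resolution repeatedly by averaging neighbouring voxels."""
    mips = [density]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([(prev[i] + prev[i + 1]) / 2 for i in range(0, len(prev), 2)])
    return mips

def cone_trace(mips, max_dist):
    occlusion, dist, step = 0.0, 1.0, 1.0
    while dist < max_dist and occlusion < 0.99:
        level = min(int(step).bit_length() - 1, len(mips) - 1)
        idx = min(int(dist) >> level, len(mips[level]) - 1)
        alpha = min(mips[level][idx] * step, 1.0)  # thicker step absorbs more
        occlusion += (1.0 - occlusion) * alpha     # front-to-back compositing
        dist += step
        step *= 2.0                                # cone widens with distance
    return occlusion

field = [0.0] * 8 + [0.5] * 8   # empty space, then a half-opaque slab
mips = build_mips(field)
print(round(cone_trace(mips, 16.0), 3))  # -> 1.0 (the cone is fully occluded)
```

The trade-off the post describes falls out of this: one cheap march replaces many rays, but everything beyond the first few steps is read from blurry, averaged voxels, which is where the artifacts come from.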
I guess you didn't see the 'noise' in BF5 from the DXR implementation.
Would it be possible to use a second card to just run ray tracing off it?
This demo is not without issues. Notice how the round casings appear hexagonal in the reflections?
Honestly that looks great. If that has good performance and is hardware agnostic it’s way better than what we traditionally have. I guess we’ll have to see on performance.
We need technology that is hardware agnostic and good performance for adoption’s sake...
That looked amazing. And not hardware dependent? Where do I sign up?
License this tech to the Cyberpunk 2077 game engine please!
Why can't Crytek run this demo on a Vega 56/64, Radeon VII, an RTX 2080 and RTX 2080 Ti and report the results? Just use the same CPU.
Wouldn't this better show the effect of ray tracing cores vs no ray tracing cores?
Because it's not capable of using Nvidia's RT cores at the moment. It's something they will implement in a future update.
Could still compare between models / OEM and also see how many FPS they had using their method.
As others pointed out, a download would be nice.
Downloadable benchmark? Yes please.
DXR is hardware agnostic
Actually, the hexagon effect you are referring to is an optical illusion caused by the bottom of the casing being submerged in the water, as well as the moving water/ripples cutting off the reflection of the top of the casing, plus shadows and lighting helping the effect.
I would bet they are using low-poly proxy objects for the reflection.
That would save processing if they are ray tracing collisions into the geometry.
It's a hexagon, and no, that's not it. It's clear between multiple casings that a hexagonal shape is being reflected.
Looks to be non-tessellated, optimization for reflections in the demo to keep geometry down.
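The low-poly-proxy theory is easy to sanity-check with a bit of arithmetic: a cylinder approximated by a regular n-gon sags below the true circle by r·(1 − cos(π/n)) at each edge midpoint. The numbers below (illustrative, not from the demo) show why a 6-sided proxy reads as visibly hexagonal in a sharp reflection while a 24-sided one would not.

```python
# Max relative deviation of a regular n-gon from its circumscribed
# circle, measured at an edge midpoint.
import math

def ngon_error(n):
    return 1.0 - math.cos(math.pi / n)

for n in (6, 12, 24):
    print(f"{n:2d} sides: {ngon_error(n):.1%} deviation")
# 6 sides is off by ~13%, easily visible; 24 sides is under 1%.
```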
The main question is, 'what corners are they cutting?'. Can't say that Crytek's implementation is any better or worse really without more direct comparisons, to include what kind of performance they're getting here to begin with.
Well, since AMD GPUs become geometry-limited sooner than Nvidia's, reflections will include non-viewed objects that still need to be present for ray tracing to work (it looks like simplified geometry is used for ray-tracing and reflection-rendering purposes). The engine looks ready for next-gen consoles, so running well on AMD would be paramount; the PS5 is roughly a Vega 56, in other words. What we need is a released demo so we can figure out the performance, or better yet, a real game.
Looks very nice. With this and other recent news, I guess my GTX 1080 will have longer legs than I was expecting.
As said multiple times, DXR is hardware agnostic. So long as the hardware supports feature level 12_1 it can do DXR if the drivers are programmed to forward the API calls.
Well, not the way nVidia has taken it. They have devs performing extra programming that only benefits nVidia cards.
I had "good performance" in there too... hardware agnostic AND good performance. At the end of the day we need wide adoption, so RT is the default, not an add-on gimmick.
Can't wait for RT in WoW Classic!
This is an implementation of SVOGI (CryEngine, Lumberyard), not full ray tracing of the complete scene. It is essentially a voxel-based method to calculate global illumination (incl. reflections), i.e. the ray tracing is used only in the same way as RTX's ray tracing is used in Battlefield 5. The rest of the graphics is handled in the traditional manner.
Given the compute power of AMD GPUs (and the fact that voxel-based ray-tracing is a lot cheaper than the ray-based approach from Nvidia), there is no reason to doubt that this is a V56.
Nvidia's approach was to provide hardware acceleration of ray intersection and de-noising; the rest is in the hands of software developers, who can do whatever the heck they want with it.
Anything that involves shooting rays can be accelerated with RT cores.
With RTX tech one can do fully path traced games or add effect to rasterized games or even use it to just improve anti-aliasing. Even non-graphical calculations can be accelerated like using sound-tracing to provide realistic audio in games or give basic form of "sight" for AI bots or basically whatever human ingenuity will come out with.
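The "sight for AI bots" idea mentioned above boils down to visibility queries: does a ray from the bot to the target hit anything first? As a hedged sketch (the grid and names are invented; real RT hardware would trace against actual scene geometry), here's line-of-sight reduced to a single ray march through a 2-D occupancy grid:

```python
# Toy visibility query: march from cell a to cell b and report whether
# any wall cell (marked 1) blocks the line of sight.

def has_line_of_sight(grid, a, b):
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(1, steps):
        t = i / steps
        x = round(x0 + (x1 - x0) * t)
        y = round(y0 + (y1 - y0) * t)
        if grid[y][x] == 1:
            return False
    return True

level = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],   # a wall segment in the middle row
    [0, 0, 0, 0, 0],
]
print(has_line_of_sight(level, (0, 1), (4, 1)))  # wall in the way -> False
print(has_line_of_sight(level, (0, 0), (4, 0)))  # clear row -> True
```

Sound occlusion works the same way, just with rays traced from sound sources to the listener.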
Why do we think this is SVOGI? Is murkskopf likely to have inside information? Crytek has dabbled in SVOs before, but this really looks more like the scene representation being traced into is a BVH with the raw geometry (at a lower LOD), not an SVO.
This is still much cheaper than what BF V is doing because sharp reflections are much easier than rougher materials (rays are more coherent and noise is much less of an issue). It's entirely reasonable that this can do 4K30 or better (could be much better) on a V56.
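The noise argument above has a simple statistical core: a mirror-sharp reflection needs one ray per pixel (zero variance), while a rough surface averages many random lobe samples, and the error shrinks only as 1/sqrt(N). The lobe model below is invented for illustration, not taken from any renderer.

```python
# Toy per-pixel noise estimate for a glossy reflection: jitter around
# the mirror value, average N samples, and measure how spread out the
# resulting pixel estimates are.
import random
import statistics

def lobe_sample(roughness, rng):
    return 1.0 + roughness * rng.uniform(-1.0, 1.0)

def pixel_noise(roughness, n_samples, trials=2000, seed=1):
    rng = random.Random(seed)
    estimates = [
        sum(lobe_sample(roughness, rng) for _ in range(n_samples)) / n_samples
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

print(pixel_noise(0.0, 1))                   # mirror surface: exactly 0.0
for n in (1, 4, 16, 64):
    print(n, round(pixel_noise(0.8, n), 3))  # noise roughly halves per 4x rays
```

This is why BF V's rough-material reflections needed heavy denoising while sharp reflections like the ones in this demo can get away with far fewer rays.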