AMD's RX 5700 XT is better than a GTX 1080 at ray tracing in new Crytek demo

I am more interested in how Ampere will compare to Turing for shader performance as well as RT. Is it more than just a die shrink and a few more transistors, or a much deeper design change? The same goes for AMD moving to TSMC 7nm+ and introducing hardware RT (chiplet or monolithic for RT?).

Currently it looks like AMD will own the sub-$200 spot for perf/$, already owns $300 to $400, and has nothing over $500 to compete with for gaming - the professional side is different depending on the application. Ampere and RDNA2 can't come fast enough. It looks like June of next year before one or both hit.

I would imagine the 1650 Super will make some difference in the sub-$200 market, though.
https://www.evga.com/products/product.aspx?pn=04G-P4-1357-KR
 
Recommend this video dealing with the viability and future of RTX (it does not look good, but who knows) and with Intel's version of the GPP program that was used. It shows this demo as well and how the current generation stacks up; I did not know that it was DX11 and did not use Vulkan RT or Microsoft DXR:



I gave it a chance, but stopped when he claimed you can't tell the difference with RTX on or off in SOTTR.

You absolutely can. Or at least I can.

But then I don't have a Youtube presence that requires an agenda to get clicks and all that.

(Also Mr. Coreteks really needs to hire a narrator. That shit makes you dizzy)
 
 
This literally never happened.

Nvidia has posted multiple research papers and game specific guides clearly describing how raytracing improves IQ when combined with rasterization. At no point did they claim their hardware can simulate every bounce of light in real-time.

They also never claimed that no other hardware can do raytracing. The raytracing algorithm is very old, relatively simple and can be coded in a few lines of C++. Raytracing has been used for decades. Nobody thinks Nvidia invented it or owns it.
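
Since the post mentions it can be written in a few lines of C++, here is a minimal sketch of the core operation, casting one ray against one sphere (the names and structure are mine, purely illustrative); everything a production renderer adds on top, acceleration structures, shading, denoising, many bounces, is where the hardware argument comes in:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the nearest hit on a sphere, or negative on a miss.
double hitSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 oc = origin - center;
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                 // the ray misses the sphere
    return (-b - std::sqrt(disc)) / (2.0 * a);   // nearest intersection
}

int main() {
    // Cast one ray down the z-axis at a unit sphere centered 5 units away.
    double t = hitSphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0);
    std::printf("hit distance: %f\n", t);        // prints 4.000000
}
```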

What Nvidia claimed is that their hardware implementation can cast rays faster. It is purely a performance claim. And that claim is obviously true.
Feel free to find a review that makes this point clear. No one makes this distinction.
 
I gave it a chance, but stopped when he claimed you can't tell the difference with RTX on or off in SOTTR.

You absolutely can. Or at least I can.

But then I don't have a Youtube presence that requires an agenda to get clicks and all that.

(Also Mr. Coreteks really needs to hire a narrator. That shit makes you dizzy)
Yes, I know that watching a video can be a far cry from the actual game experience in front of a monitor. Still, I think he has very pertinent points: over a year in, there are only six titles with RTX, and nothing that really stands out. If you have to look for it, it probably is not that important, and a performance impact big enough that you just can't turn everything up means it won't get used. Plus, long term, RTX may not even be in developers' window to target, meaning those RT cores may just be dead space in more titles than not. To me, so far, RTX looks like it is heading toward being Nvidia's biggest failure.

Now, I did find it interesting that Nvidia actually does have MCM GPUs, at least in testing. Could Nvidia be first to use multiple GPUs, or separate chips for things like RT and graphics? Nvidia may have some very big surprises for us, which I hope happens; if Ampere is just another version of Turing with Turing pricing, I don't think they will fare well.
 
Plus, long term, RTX may not even be in developers' window to target, meaning those RT cores may just be dead space in more titles than not.

Which you support with what?

To me, so far, RTX looks like it is heading toward being Nvidia's biggest failure.

See above.

Now, I did find it interesting that Nvidia actually does have MCM GPUs, at least in testing. Could Nvidia be first to use multiple GPUs, or separate chips for things like RT and graphics?

Extremely unlikely that they'll put RT on separate dies from rasterization. Far more likely that they'll have dies with mixes of compute units. This would be like using a separate chiplet for FP units on a CPU.

if Ampere is just another version of Turing with Turing pricing, I don't think they will fare well.

With chiplets, GPU manufacturers can scale performance while keeping price under control, similar to how AMD has done with CPU chiplets. But do expect Ampere to be based on Turing. It'll be a die shrink, a few architectural tweaks all around, and perhaps the introduction of a chiplet arrangement, if history is any guide.
 
Which you support with what?



See above.



Extremely unlikely that they'll put RT on separate dies from rasterization. Far more likely that they'll have dies with mixes of compute units. This would be like using a separate chiplet for FP units on a CPU.



With chiplets, GPU manufacturers can scale performance while keeping price under control, similar to how AMD has done with CPU chiplets. But do expect Ampere to be based on Turing. It'll be a die shrink, a few architectural tweaks all around, and perhaps the introduction of a chiplet arrangement, if history is any guide.
The linked video, for one.

As for Nvidia and MCM for gaming, only Nvidia would know if and when. RT is a perfect application for a dedicated, fixed-function, separate chip: all it has to pass to the GPU for rasterization are light maps, shadow maps, specular maps, refraction, etc. It could work from un-tessellated geometry, so bandwidth would be minimal. No clue, it was only a thought. Getting yields high enough on a new, small process node at Samsung while making enough chips for demand also makes MCM look rather desirable, in my view. I do hope Nvidia has some very big surprises for us. AMD may go a totally different route by not using fixed-function hardware, which is the opposite of what I once thought. So Nvidia and/or AMD could really surprise us in the end for 2020.
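
As a rough sanity check on the "bandwidth would be minimal" idea, here is an illustrative back-of-the-envelope calculation; the buffer count, pixel format, resolution, and frame rate are my assumptions, not anything Nvidia has disclosed:

```cpp
#include <cstdio>

int main() {
    // Assumed: a hypothetical RT chiplet hands the raster die a few
    // full-resolution lighting buffers per frame (shadow / reflection / GI).
    const double width = 3840, height = 2160;   // 4K output
    const double buffers = 3;                   // lighting buffers per frame
    const double bytesPerPixel = 8;             // e.g. FP16 RGBA
    const double fps = 60;

    double bytesPerFrame = width * height * buffers * bytesPerPixel;
    double gbPerSecond   = bytesPerFrame * fps / 1e9;

    std::printf("~%.0f MB per frame, ~%.1f GB/s sustained\n",
                bytesPerFrame / 1e6, gbPerSecond);   // ~199 MB, ~11.9 GB/s
    // That is within reach of a modern on-package link (tens of GB/s);
    // the harder question is the round-trip latency, not raw bandwidth.
}
```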
 
Which you support with what?
Because right now the RTX cores are separate. The 2080 series does have upgrades over the 1080. But the RT cores from what I remember are separate.

So unless Nvidia does as you say and merge the capabilities between the two there will be a cap based on manufacturing node.

It comes down to what Nvidia does with Ampere. It's anyone's guess at this point.
 
RT is a perfect application for a dedicated, fixed-function, separate chip: all it has to pass to the GPU for rasterization are light maps, shadow maps, specular maps, refraction, etc. It could work from un-tessellated geometry, so bandwidth would be minimal. No clue, it was only a thought.

I can see how this might work, but the latency penalty involved for any deviation of RT usage kind of kills it in my mind, at least so long as a flexible 'hybrid' ray tracing implementation is needed.

Getting yields high enough on a new, small process node at Samsung while making enough chips for demand also makes MCM look rather desirable, in my view.

It's working quite well for AMD -- they've staked their entire CPU product lineup on it!

Because right now the RTX cores are separate. The 2080 series does have upgrades over the 1080. But the RT cores from what I remember are separate.

So unless Nvidia does as you say and merge the capabilities between the two there will be a cap based on manufacturing node.

They're separate in that they're not 'in line' with the raster cores, but they are distributed around the GPU die and they share extremely low-latency access to the same resources. Splitting them up would kill a lot of that.
 
Because right now the RTX cores are separate. The 2080 series does have upgrades over the 1080. But the RT cores from what I remember are separate.

So unless Nvidia does as you say and merge the capabilities between the two there will be a cap based on manufacturing node.

It comes down to what Nvidia does with Ampere. It's anyone's guess at this point.
Just thinking here: with a separate RT chip, Nvidia could pack in a huge number of RT cores (4x+ over what Turing has), standardize RT performance across the different levels of GPUs, and make RT something that really does "just work" with minimal to no performance loss, a feature you turn on whenever possible.
 
I can see how this might work, but the latency penalty involved for any deviation of RT usage kind of kills it in my mind, at least so long as a flexible 'hybrid' ray tracing implementation is needed.



It's working quite well for AMD -- they've staked their entire CPU product lineup on it!



They're separate in that they're not 'in line' with the raster cores, but they are distributed around the GPU die and they share extremely low-latency access to the same resources. Splitting them up would kill a lot of that.
The thing about light maps is that they don't change much from frame to frame once the whole scene (all applicable objects) has been calculated. Only dynamic lights or changing positions, as with shadows, need updates, though those could be many. It is still a hell of a complex problem to work out, but Nvidia already did that with Turing. I am not sure what bandwidth the RT cores have internally to the rest of the GPU, or whether anyone has tested or could test it. I can't believe Nvidia left themselves no options to dramatically increase RT performance to the level where it is just an always-on feature for every game that uses it, so I am very hopeful they pull off a new hardware revolution or are first at it. Fixed, dedicated hardware can always do a fixed task faster, and RT would definitely fit that bill better than a software or more flexible processor approach.
 
The thing about light maps is that they don't change much from frame to frame once the whole scene (all applicable objects) has been calculated. Only dynamic lights or changing positions, as with shadows, need updates, though those could be many.

That's the issue: essentially, every light needs to be 'dynamic' due to the possibilities of user interaction. You need to do the whole scene every frame, at least until a 'light source culling' algorithm can be put to use like what is used for z-culling to eliminate geometry and pixels that would otherwise be drawn over.
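
For what it's worth, the "light source culling" idea already has a close analogue in tiled/clustered shading; a toy sketch of culling lights per shading point before any rays are spent on them might look like this (all names are illustrative, not any engine's API):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 pos; float radius; };   // radius = distance of influence

float dist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Before spending shadow rays on a shading point, keep only the lights that
// can actually affect it -- analogous to z-culling skipping hidden pixels.
std::vector<Light> cullLights(Vec3 p, const std::vector<Light>& lights) {
    std::vector<Light> kept;
    for (const Light& l : lights)
        if (dist(p, l.pos) < l.radius)      // cheap bounds test, no rays yet
            kept.push_back(l);
    return kept;                            // trace rays only toward these
}

int main() {
    std::vector<Light> lights = {{{0, 5, 0}, 10.0f}, {{100, 0, 0}, 5.0f}};
    std::printf("%zu light(s) kept\n", cullLights({0, 0, 0}, lights).size());  // 1
}
```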

I can't believe Nvidia left themselves no options to dramatically increase RT performance to the level where it is just an always-on feature for every game that uses it.

They're making the largest processors ever made -- literally running up against a hard transistor budget. Adding more RT cores or adding flexibility to other cores in order to be able to perform RT calculations would eat into other transistor budgets and thus reduce performance elsewhere.

They had to pick a point to balance it, and generally speaking have done well enough.

Also remember that RT isn't just 'on' or 'off'; implementations and intensities vary, so just as Nvidia must increase RT performance alongside raster performance in Ampere, so must developers get better at using RT resources effectively.

Which I think we'll see as essentially every game coming out for the next generation of consoles is going to want to claim to use ray tracing.
 
That's the issue: essentially, every light needs to be 'dynamic' due to the possibilities of user interaction. You need to do the whole scene every frame, at least until a 'light source culling' algorithm can be put to use like what is used for z-culling to eliminate geometry and pixels that would otherwise be drawn over.



They're making the largest processors ever made -- literally running up against a hard transistor budget. Adding more RT cores or adding flexibility to other cores in order to be able to perform RT calculations would eat into other transistor budgets and thus reduce performance elsewhere.

They had to pick a point to balance it, and generally speaking have done well enough.

Also remember that RT isn't just 'on' or 'off'; implementations and intensities vary, so just as Nvidia must increase RT performance alongside raster performance in Ampere, so must developers get better at using RT resources effectively.

Which I think we'll see as essentially every game coming out for the next generation of consoles is going to want to claim to use ray tracing.
Ideally only the changes have to be updated, not whole light maps: add a flashlight and movement, and only the changes caused by that motion get updated. This is no different from the current implementation, from what I understand.

I don't see the 12nm-to-7nm EUV transition giving much room for a chip-size decrease while also adding a lot more transistors and keeping yields good enough - something has to give. Just shrinking down big Turing would give a chip of around 500mm^2 with minimal performance increase and probably terrible yields, at least initially. Unless midsize chips are released first for desktop, with performance around a 2080 Ti, which would not move the needle for RT or overall performance - maybe just a cheaper price for that performance.
 
Ideally only the changes have to be updated, not whole light maps: add a flashlight and movement, and only the changes caused by that motion get updated. This is no different from the current implementation, from what I understand.

You're making the assumption that all lights need to be static now, and that they need to stay static, outside of say a flashlight.

I can think of many scenarios where this would be true, but I can also think of many scenarios where it isn't -- and I wouldn't want to be limited to the former.
 
Just thinking here: with a separate RT chip, Nvidia could pack in a huge number of RT cores (4x+ over what Turing has), standardize RT performance across the different levels of GPUs, and make RT something that really does "just work" with minimal to no performance loss, a feature you turn on whenever possible.
With Turing it is already fixed-function hardware. The problem is that the manufacturing node dictates how big that section of cores can be.

If you did a multi-programmable / unified core you would be able to fit more of them. People forget the RTX chip is at the maximum size for the node.

When you do ray tracing on the card, TDP increases beyond what any non-ray-tracing game will reach. Something else increases too: draw calls. This means that what formerly would be fine on a six-core CPU now requires an eight-core. The 2080 Ti is at about 18 billion transistors, the 5700 XT at about 10 billion.

Edit: if by fixed you mean the same number of cores regardless of model I doubt Nvidia would do that.
 
You're making the assumption that all lights need to be static now, and that they need to stay static, outside of say a flashlight.

I can think of many scenarios where this would be true, but I can also think of many scenarios where it isn't -- and I wouldn't want to be limited to the former.
Basically you only send the changes, even for whole light maps, over whatever bus you have; basic compression, or more advanced compression techniques, can be used to limit latency. Less data to send tends to mean less latency. Not every frame would need to be updated, and some interpolation could also be done on the GPU.
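
A minimal sketch of the "send only what changed" idea, assuming the light map is split into tiles and only dirty tiles ever cross the hypothetical link (the tile size and format are arbitrary choices of mine):

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

constexpr int TILE_TEXELS = 64 * 64;                  // texels per tile (arbitrary)
using Tile = std::array<std::uint16_t, TILE_TEXELS>;  // e.g. FP16 luminance

// Compare this frame's lightmap tiles with last frame's and return the indices
// of tiles that actually changed; only those would need to cross the link,
// the receiver keeps reusing the rest.
std::vector<int> dirtyTiles(const std::vector<Tile>& prev, const std::vector<Tile>& curr) {
    std::vector<int> dirty;
    for (std::size_t i = 0; i < curr.size(); ++i)
        if (std::memcmp(prev[i].data(), curr[i].data(), sizeof(Tile)) != 0)
            dirty.push_back(static_cast<int>(i));
    return dirty;
}

int main() {
    std::vector<Tile> prev(4), curr(4);       // 4-tile lightmap, all black
    curr[2][0] = 123;                         // a moving light touches tile 2
    std::printf("%zu of 4 tiles dirty\n", dirtyTiles(prev, curr).size());  // 1 of 4
}
```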

My thoughts: Vega 10 to Vega 20, with only a minimal transistor increase (mostly the added FP64 capability), gave roughly a 1.47x density improvement; Vega 20 is about 68% of the size of Vega 10. If Ampere's performance per shader is roughly the same as Turing's, you would need to shrink TU102 from 754mm^2 down to roughly 500mm^2 for equal or better performance, and that would be a massive chip for a new node. With higher clock speeds and other improvements, maybe 400-450mm^2. If not, then you have a lower-performing chip than your previous generation with a much higher cost to fabricate, which would make no sense. Ampere's other improvements will most likely need more transistors as well. MCM seems to solve many of these issues: better yields, much more performance, standardized fixed-function features, etc. That does not mean Nvidia can or will do it with Ampere; I just see it coming. Unless Nvidia simply dumps the RT and Tensor cores and modifies the shaders to do RT, or merges them in, I only see a more radical change getting Nvidia to the next level.
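
A quick worked version of that shrink estimate, using the roughly 1.47x density factor implied by the Vega comparison above (an illustrative number, not a foundry figure):

```cpp
#include <cstdio>

int main() {
    const double tu102Area   = 754.0;  // mm^2, TU102 (big Turing) on TSMC 12 nm
    const double densityGain = 1.47;   // rough factor implied by the Vega 10 -> Vega 20 shrink

    // Same design and transistor count, just shrunk to the denser node:
    double shrunkArea = tu102Area / densityGain;
    std::printf("Shrunk TU102: ~%.0f mm^2\n", shrunkArea);  // ~513 mm^2, in line with the ~500 mm^2 estimate
}
```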
 
MCM seems to solve many of these issues: better yields, much more performance, standardized fixed-function features, etc. That does not mean Nvidia can or will do it with Ampere; I just see it coming.

At this point I'm pretty much expecting them to do it.
 
With Turing it is already fixed-function hardware. The problem is that the manufacturing node dictates how big that section of cores can be.

If you did a multi-programmable / unified core you would be able to fit more of them. People forget the RTX chip is at the maximum size for the node.

When you do ray tracing on the card, TDP increases beyond what any non-ray-tracing game will reach. Something else increases too: draw calls. This means that what formerly would be fine on a six-core CPU now requires an eight-core. The 2080 Ti is at about 18 billion transistors, the 5700 XT at about 10 billion.

Edit: if by fixed you mean the same number of cores regardless of model I doubt Nvidia would do that.
Tensor and RT cores on Turing are fixed-function blocks and are separate from the rest of the GPU, so you have to have dedicated circuitry to hook them up. Moving those and other fixed-function blocks, such as video encoding/decoding, to a separate die would make no difference to the GPU; they still need to be hooked up, just by a different means. Once you can separate the fixed-function pieces common to your GPUs - video encoding/decoding, RT, Tensor cores, and other things that could be broadly standardized across GPU versions - you can really start packing in the CUDA cores, cache, etc. on a dedicated die and still make smaller chips for better yields. I just don't know if Nvidia is able to do that - are they a more advanced and capable hardware designer for MCM than AMD? Frankly, they have a hell of a lot more money to dedicate to something like that. Maybe not with Ampere, but I see it coming.
 
Feel free to find a review that makes this point clear. No one makes this distinction.

First you claim Nvidia is misleading people and now you’re saying it’s reviewers’ fault that people are confused. What exactly do you think reviewers should be saying that they aren’t?

For anyone that’s interested there’s a lot of info out there that describes how current raytracing implementations work.
 
First you claim Nvidia is misleading people and now you’re saying it’s reviewers’ fault that people are confused. What exactly do you think reviewers should be saying that they aren’t?

For anyone that’s interested there’s a lot of info out there that describes how current raytracing implementations work.
I find Nvidia misleading with RTX. In demos they show more ray-traced lighting, but in actual games hardly any lighting is really done that way. What Nvidia, and I am sure AMD, Sony, and Microsoft, are doing is not real-time ray tracing from a technical point of view; it is enhanced rasterization, using limited ray-trace calculations to better map out lighting effects. It is very limited, and the scenes shown as "real-time ray tracing" mostly use pre-rendered lighting, likely produced by huge render farms that can take days to render the complex lighting for light maps and so on, not RTX at all.

That is why some people just don't see much difference between RTX on and off: it is very limited. To have that high-quality lighting actually done in real time you would need 100-plus samples/rays per pixel (and that is for low quality); RTX is not even close to delivering that, managing around 1 sample per pixel in a typical game. It falls so far short of actual ray tracing that I am not sure what to think. I do not see reviewers accurately describing what RTX is actually doing. It sure has potential, but it is misleading: Nvidia shows beautifully rendered scenes where RTX is doing virtually nothing, since the lighting was already pre-rendered, knowing many will just suck it up. In something like V-Ray, RTX gives about a 40% faster render rate when turned on, but not real-time render rates; that is it, not even close to real-time ray tracing, within the limits of V-Ray GPU (CPU V-Ray has more capability). If a reviewer showed RTX fully rendering a V-Ray scene, to show what real ray tracing is and how hard it really is, that might get the point across.
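
To put rough numbers on the samples-per-pixel argument, here is an illustrative calculation; the 4K resolution, 60 fps target, and the roughly 10 gigarays/s figure quoted for the 2080 Ti are used purely for scale, and secondary bounces are ignored entirely:

```cpp
#include <cstdio>

int main() {
    const double pixels   = 3840.0 * 2160.0;  // 4K frame
    const double fps      = 60.0;
    const double quotedGR = 10e9;             // ~10 gigarays/s, the figure quoted for a 2080 Ti

    const double sppValues[] = {1.0, 100.0};  // ~1 spp (hybrid effects) vs 100 spp ("low" offline quality)
    for (double spp : sppValues) {
        double raysPerSec = pixels * spp * fps;
        std::printf("%6.0f spp -> %6.2f gigarays/s (%.2fx the quoted 2080 Ti rate)\n",
                    spp, raysPerSec / 1e9, raysPerSec / quotedGR);
    }
}
```

Even with bounces ignored, 100 samples per pixel at 4K60 works out to several times the headline ray rate of the fastest Turing card, which is why shipping games stay near one ray per pixel per effect and lean heavily on denoising.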
 
Tensor and RT cores on Turing are fixed-function blocks and are separate from the rest of the GPU, so you have to have dedicated circuitry to hook them up. Moving those and other fixed-function blocks, such as video encoding/decoding, to a separate die would make no difference to the GPU; they still need to be hooked up, just by a different means.

The RT cores are embedded in the SMs, which means the shader cores have very high-bandwidth, low-latency access to RT results. It's certainly not the same as moving them to a separate chip. RT cores sit at essentially the same level as texture units; you wouldn't move those off-chip, would you?
 
I find Nvidia misleading with RTX. In demos they show more ray-traced lighting, but in actual games hardly any lighting is really done that way. What Nvidia, and I am sure AMD, Sony, and Microsoft, are doing is not real-time ray tracing from a technical point of view; it is enhanced rasterization, using limited ray-trace calculations to better map out lighting effects. It is very limited, and the scenes shown as "real-time ray tracing" mostly use pre-rendered lighting, likely produced by huge render farms that can take days to render the complex lighting for light maps and so on, not RTX at all.

That is why some people just don't see much difference between RTX on and off: it is very limited. To have that high-quality lighting actually done in real time you would need 100-plus samples/rays per pixel (and that is for low quality); RTX is not even close to delivering that, managing around 1 sample per pixel in a typical game. It falls so far short of actual ray tracing that I am not sure what to think. I do not see reviewers accurately describing what RTX is actually doing. It sure has potential, but it is misleading: Nvidia shows beautifully rendered scenes where RTX is doing virtually nothing, since the lighting was already pre-rendered, knowing many will just suck it up. In something like V-Ray, RTX gives about a 40% faster render rate when turned on, but not real-time render rates; that is it, not even close to real-time ray tracing, within the limits of V-Ray GPU (CPU V-Ray has more capability). If a reviewer showed RTX fully rendering a V-Ray scene, to show what real ray tracing is and how hard it really is, that might get the point across.

What pre-rendered demos are Nvidia passing off as raytracing? The Star Wars thing?

Raytracing is simply sending a ray into a geometry data structure and returning the hit results. What do you mean by "real" raytracing? If you mean that the entire scene has to be raytraced, then I'm sure you would agree that's a pretty silly thing to say in 2019.
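
In the spirit of that description, here is a toy version of "send a ray into a geometry data structure and return the hit result", with a flat list of boxes standing in for a real BVH (illustrative only, and it assumes IEEE-754 float division so axis-aligned rays produce infinities rather than errors):

```cpp
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 lo, hi; };   // axis-aligned bounding box
struct Ray  { Vec3 o, d; };     // origin and direction

// Classic slab test: distance along the ray to the box, or a miss.
bool hit(const Ray& r, const AABB& b, float& tHit) {
    float tmin = 0.0f, tmax = std::numeric_limits<float>::max();
    const float ro[3] = {r.o.x, r.o.y, r.o.z}, rd[3] = {r.d.x, r.d.y, r.d.z};
    const float lo[3] = {b.lo.x, b.lo.y, b.lo.z}, hi[3] = {b.hi.x, b.hi.y, b.hi.z};
    for (int i = 0; i < 3; ++i) {
        float t1 = (lo[i] - ro[i]) / rd[i];   // division by zero -> +/-infinity (IEEE-754)
        float t2 = (hi[i] - ro[i]) / rd[i];
        tmin = std::max(tmin, std::min(t1, t2));
        tmax = std::min(tmax, std::max(t1, t2));
    }
    if (tmin > tmax) return false;
    tHit = tmin;
    return true;
}

// "Send a ray into the geometry structure, get the hit result back."
int closestHit(const Ray& r, const std::vector<AABB>& scene, float& tBest) {
    int best = -1;
    tBest = std::numeric_limits<float>::max();
    for (int i = 0; i < static_cast<int>(scene.size()); ++i) {
        float t;
        if (hit(r, scene[i], t) && t < tBest) { tBest = t; best = i; }
    }
    return best;   // index of the nearest box hit, or -1 for a miss
}

int main() {
    std::vector<AABB> scene = {{{-1, -1, 3}, {1, 1, 4}}, {{-1, -1, 8}, {1, 1, 9}}};
    float t;
    int id = closestHit({{0, 0, 0}, {0, 0, 1}}, scene, t);
    std::printf("hit box %d at t=%.1f\n", id, t);   // hit box 0 at t=3.0
}
```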
 
I find Nvidia misleading with RTX. In demos they show more ray-traced lighting, but in actual games hardly any lighting is really done that way. What Nvidia, and I am sure AMD, Sony, and Microsoft, are doing is not real-time ray tracing from a technical point of view; it is enhanced rasterization, using limited ray-trace calculations to better map out lighting effects. It is very limited, and the scenes shown as "real-time ray tracing" mostly use pre-rendered lighting, likely produced by huge render farms that can take days to render the complex lighting for light maps and so on, not RTX at all.

That is why some people just don't see much difference between RTX on and off: it is very limited. To have that high-quality lighting actually done in real time you would need 100-plus samples/rays per pixel (and that is for low quality); RTX is not even close to delivering that, managing around 1 sample per pixel in a typical game. It falls so far short of actual ray tracing that I am not sure what to think. I do not see reviewers accurately describing what RTX is actually doing. It sure has potential, but it is misleading: Nvidia shows beautifully rendered scenes where RTX is doing virtually nothing, since the lighting was already pre-rendered, knowing many will just suck it up. In something like V-Ray, RTX gives about a 40% faster render rate when turned on, but not real-time render rates; that is it, not even close to real-time ray tracing, within the limits of V-Ray GPU (CPU V-Ray has more capability). If a reviewer showed RTX fully rendering a V-Ray scene, to show what real ray tracing is and how hard it really is, that might get the point across.
Yup. You've also stated something I've been worried about before; I think I mentioned it earlier. Because Nvidia isn't doing RT in games justice, it leaves wide latitude for someone else to come along and increase rasterization fidelity to give the illusion of an RT scene, when in reality neither approach is doing it justice. This is the problem with letting demos and old games with updated textures and lighting go by without stopping to dissect how much of the product is ray traced, and to what extent, versus what is currently possible via rasterization.
 
I find Nvidia misleading with RTX. In demos they show more ray-traced lighting, but in actual games hardly any lighting is really done that way. What Nvidia, and I am sure AMD, Sony, and Microsoft, are doing is not real-time ray tracing from a technical point of view; it is enhanced rasterization, using limited ray-trace calculations to better map out lighting effects. It is very limited, and the scenes shown as "real-time ray tracing" mostly use pre-rendered lighting, likely produced by huge render farms that can take days to render the complex lighting for light maps and so on, not RTX at all.

That is why some people just don't see much difference between RTX on and off: it is very limited. To have that high-quality lighting actually done in real time you would need 100-plus samples/rays per pixel (and that is for low quality); RTX is not even close to delivering that, managing around 1 sample per pixel in a typical game. It falls so far short of actual ray tracing that I am not sure what to think. I do not see reviewers accurately describing what RTX is actually doing. It sure has potential, but it is misleading: Nvidia shows beautifully rendered scenes where RTX is doing virtually nothing, since the lighting was already pre-rendered, knowing many will just suck it up. In something like V-Ray, RTX gives about a 40% faster render rate when turned on, but not real-time render rates; that is it, not even close to real-time ray tracing, within the limits of V-Ray GPU (CPU V-Ray has more capability). If a reviewer showed RTX fully rendering a V-Ray scene, to show what real ray tracing is and how hard it really is, that might get the point across.


From a technical standpoint it is hybrid rendering, and it can render RT effects in real time (shadows, reflections, lighting, etc.), just not full-scene real-time ray tracing. The scenes in the demos are rendered in real time: no prebaked lightmaps, no prerendered RT effects.

It is indeed capable of full ray tracing and is orders of magnitude faster than previous solutions, as evidenced in many professional rendering applications with RTX support, like Blender.
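
For readers wondering what that hybrid split looks like in practice, here is a per-frame schematic condensed into stub C++ (every name is a placeholder of mine, not any real engine's or DXR's API):

```cpp
#include <cstdio>

// Placeholder types and functions -- a schematic, not a real engine or DXR API.
struct GBuffer {};
struct Image  {};

GBuffer rasterizeGeometry()              { std::puts("raster: depth, normals, albedo"); return {}; }
Image   traceShadows(const GBuffer&)     { std::puts("RT: ~1 shadow ray per pixel");    return {}; }
Image   traceReflections(const GBuffer&) { std::puts("RT: rays only for shiny pixels"); return {}; }
Image   denoise(const Image& in)         { std::puts("denoise: hide the low sample count"); return in; }
Image   composite(const GBuffer&, const Image&, const Image&) {
    std::puts("composite: mostly rasterized frame, RT terms mixed in");
    return {};
}

int main() {
    // One frame of a hybrid renderer: rasterization still draws the scene,
    // ray tracing only replaces selected lighting terms.
    GBuffer g     = rasterizeGeometry();
    Image shadows = denoise(traceShadows(g));
    Image refl    = denoise(traceReflections(g));
    composite(g, shadows, refl);
}
```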
 
From a technical standpoint it is hybrid rendering, and it can render RT effects in real time (shadows, reflections, lighting, etc.), just not full-scene real-time ray tracing. The scenes in the demos are rendered in real time: no prebaked lightmaps, no prerendered RT effects.

It is indeed capable of full ray tracing and is orders of magnitude faster than previous solutions, as evidenced in many professional rendering applications with RTX support, like Blender.
So are all DX12 cards and CPUs. Nvidia's implementation is faster than what is currently available; you can say that, but it's not doing anything new. Nvidia didn't invent ray tracing.
 
So are all DX12 cards and CPUs. Nvidia's implementation is faster than what is currently available; you can say that, but it's not doing anything new. Nvidia didn't invent ray tracing.

To split the middle: they do appear to have been the first to put a viable real-time solution together with a mix of dedicated hardware and a hybrid process that uses ray tracing for lighting but relies on rasterization for the rest of the rendering.
 
So are all DX12 cards and CPUs. Nvidia's implementation is faster than what is currently available; you can say that, but it's not doing anything new. Nvidia didn't invent ray tracing.
And who the hell said they did?
 
And who the hell said they did?
Every reviewer ever. The statement goes like this: "If you want ray tracing, the only way to get it is with Nvidia." While it's technically true in a very limited scope, in any broader sense it's not.

There's a massive performance hit. Everyone knows this, but instead of saying that you're not going to get full-scene ray tracing, and that what you do get is at far lower fidelity than what ray tracing is capable of, reviewers say things like "take a look at the Quake demo," which leads everyone to believe that A) the entire demo is ray traced, when it's not, and B) the only card capable of delivering that level of graphics is the Nvidia card, which isn't the case.

This is the area where it's deceitful. I'm glad Nvidia is pushing it but everyone should know exactly what is and isn't happening.
 
To split the middle: they do appear to have been the first to put a viable real-time solution together with a mix of dedicated hardware and a hybrid process that uses ray tracing for lighting but relies on rasterization for the rest of the rendering.
I have absolutely no problem with this statement. It's accurate.

Basically all I want is for the topics to be discussed with enough honesty that people who read and listen to reviews come out smarter in the end. I really don't care which company it is. I have AMD and Intel systems and no AMD GPU cards because I run a virtual gaming box on my Ivy server.

Here's the guide I did; it has my hardware listed. It's still used today, but the hardware is the same... Intel and Nvidia.

Guide: GPU Passthrough Using KVM + LVM2 + Ubuntu Gnome - From Beginning to End
 
Every reviewer ever. The statement goes like this: "If you want ray tracing, the only way to get it is with Nvidia." While it's technically true in a very limited scope, in any broader sense it's not.

There's a massive performance hit. Everyone knows this, but instead of saying that you're not going to get full-scene ray tracing, and that what you do get is at far lower fidelity than what ray tracing is capable of, reviewers say things like "take a look at the Quake demo," which leads everyone to believe that A) the entire demo is ray traced, when it's not, and B) the only card capable of delivering that level of graphics is the Nvidia card, which isn't the case.

This is the area where it's deceitful. I'm glad Nvidia is pushing it but everyone should know exactly what is and isn't happening.

Nvidia IS the only brand capable of that level of graphics. At least till next year.
 
Everyone knows this, but instead of saying that you're not going to get full-scene ray tracing, and that what you do get is at far lower fidelity than what ray tracing is capable of, reviewers say things like "take a look at the Quake demo," which leads everyone to believe that A) the entire demo is ray traced, when it's not, and B) the only card capable of delivering that level of graphics is the Nvidia card, which isn't the case.

That would be a pretty pointless thing to say. Here are a few other pointless things reviewers don’t say.

“This desktop CPU is really fast but it’s not as fast as that supercomputer over there!”

“This is the only card that can achieve 60fps at 4K. But I want to remind you that it can’t do 60fps at 8K. Buyer beware!”

“This game has some of the highest resolution textures we’ve ever seen. Sadly, they’re not as high as they could be if we had 100x more memory.”

Seriously though you would need to be pretty clueless to think Nvidia cards can do full scene raytracing in real-time.
 
There's a massive performance hit. Everyone knows this, but instead of saying that you're not going to get full-scene ray tracing, and that what you do get is at far lower fidelity than what ray tracing is capable of, reviewers say things like "take a look at the Quake demo," which leads everyone to believe that A) the entire demo is ray traced, when it's not, and B) the only card capable of delivering that level of graphics is the Nvidia card, which isn't the case.

Of course there's a performance hit -- but it's not unplayable by a long shot.

Further, there's plenty of ignorance out there, and you're absolutely right to call it out. Nvidia's ray tracing hardware does 'just work', but making it work well with rasterization and modern game engines is going to take a lot of work, work that's just starting.

With respect to the Quake II RTX demo, it's not that the whole game is rendered through ray tracing -- it's that the lighting is. The rest is rasterized.


And yes, currently Nvidia hardware is the only hardware capable of delivering such visuals. But that's just AMD running along at their average two year follow distance -- it's not like RT hardware is hard.
 
From a technical standpoint it is hybrid rendering, and it can render RT effects in real time (shadows, reflections, lighting, etc.), just not full-scene real-time ray tracing. The scenes in the demos are rendered in real time: no prebaked lightmaps, no prerendered RT effects.

It is indeed capable of full ray tracing and is orders of magnitude faster than previous solutions, as evidenced in many professional rendering applications with RTX support, like Blender.
Yes, it is capable, but the results are rather horrendous at times, with a lot of noise, and those demos are still hybrid, I believe; typical of low photon/sample rates per pixel. As for true hardware capability for real ray tracing: in V-Ray, a 2080 Ti with RT enabled versus disabled gives about a 40% boost in speed, not 200% or 3000%, just 40%, and that is all the hardware can give for real ray tracing. My Vega FE can render decent-quality ray-traced images rather quickly in Blender or Modo, and two of those would probably beat a 2080 Ti using RT for real ray tracing. The way it is presented, knowing that many folks don't really understand it, and calling it real-time ray tracing is, to me, an utter farce.
 
As for true hardware capability for real ray tracing: in V-Ray, a 2080 Ti with RT enabled versus disabled gives about a 40% boost in speed, not 200% or 3000%, just 40%, and that is all the hardware can give for real ray tracing. My Vega FE can render decent-quality ray-traced images rather quickly in Blender or Modo, and two of those would probably beat a 2080 Ti using RT for real ray tracing.

So do an apples-to-apples comparison? I'm pretty curious myself. Maybe see if they'll let you publish it on FPSReview as a news article.

The way it is presented, knowing that many folks don't really understand it, and calling it real-time ray tracing is, to me, an utter farce.

Well, it's using ray tracing, and it's real time, but I agree that it should be made clear that the hybrid rendering being done isn't 'real-time full-scene ray tracing'.
 