Why does raytracing suck so much? Is it because of Nvidia's bad hardware implementation?

Eh, just to be clear, I'm not arguing for AMD or anything here. I'm about as neutral in that fight as one could be. I consider DX12 to pretty much be a failure as well.

I'm not arguing against them- I just remain disappointed in them. We know they can do better.

With respect to DX12, I feel that many, including myself, underestimated just how much of a change it represented (alongside Vulkan / Mantle). All three are really very similar, and are really very different from any DirectX or OpenGL (or other) API before them on the desktop. EA is on their fourth or fifth Frostbite-based game that has an unflattering DX12 implementation, for example, while Doom (2016) shows us just what the potential of a low-overhead API can be.

Like DX12 / Vulkan, it seems that we'll need to wait for the engine gurus to do near complete render rewrites to get the performance that the technology promises into shipping consumer applications.
 
I agree with your point, but we are now, what, 8-9 months post-release of the 20 series, and how many titles actually support it? Is it even 10 games?

As has been said repeatedly, and as should be obvious, this kind of change takes time. But it looks like support will soon be the norm going forward. It should be extremely clear by now that ray tracing is here to stay.

Supporting it now: less than 10.
Announced: more than 10.

https://www.rockpapershotgun.com/2019/06/11/nvidia-rtx-ray-tracing-dlss-games-confirmed-2/

Note that the list is already incomplete by at least one: Doom Eternal (Vulkan).
https://www.dsogaming.com/news/doom-eternal-will-support-ray-tracing/

“RTX makes it look, you know, amazing. There are great benefits, but it doesn’t necessarily expand our audience the way that something like Stadia does, but absolutely people can look forward to DOOM Eternal and id Tech 7 supporting ray tracing. Absolutely. I mean we love that stuff, the team loves it and I think we’ll do it better than anybody honestly.”
 
I'm not arguing against them- I just remain disappointed in them. We know they can do better.

With respect to DX12, I feel that many, including myself, underestimated just how much of a change it represented (alongside Vulkan / Mantle). All three are really very similar, and are really very different from any DirectX or OpenGL (or other) API before them on the desktop. EA is on their fourth or fifth Frostbite-based game that has an unflattering DX12 implementation, for example, while Doom (2016) shows us just what the potential of a low-overhead API can be.

Like DX12 / Vulkan, it seems that we'll need to wait for the engine gurus to do near complete render rewrites to get the performance that the technology promises into shipping consumer applications.

When do you think DX12 will really show its promise? In 2020 perhaps?
 
I wasn't really impressed by the Quake II RTX demo, and at 4K it was a stuttering mess on a 2080 Ti. Turned it down to 1080p and it ran alright.

I'm sure raytracing is the future; it's just not ready yet. The 2080 Ti is a great chip, but the price and tech just aren't where they should be.
 
I wasn't really impressed by the Quake II RTX demo, and at 4K it was a stuttering mess on a 2080 Ti.

If you were comparing it to literally anything else, you're doing it wrong.

Quake II RTX is full ray tracing; there isn't any rasterization going on, and that's why it's impressive. AAA releases will be doing hybrid ray tracing and rasterization for the foreseeable future and will see much better performance when developed with ray tracing in mind, on top of much better visuals.
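
To put rough numbers on that difference (purely illustrative assumptions on my part, not measurements from any game): a fully path-traced frame has to trace every pixel plus bounce rays, while a hybrid renderer rasterizes primary visibility and only traces rays where an effect needs them.

```python
# Back-of-the-envelope ray budgets for full path tracing vs. hybrid rendering.
# All numbers here are assumptions for illustration, not measurements.

width, height, fps = 1920, 1080, 60
pixels = width * height

# Fully path traced (Quake II RTX style): every pixel fires a primary ray
# plus a few secondary (bounce/shadow) rays -- assumed 3 here.
rays_per_pixel_full = 1 + 3
full_rays_per_sec = pixels * rays_per_pixel_full * fps

# Hybrid: primary visibility is rasterized; assume only ~20% of pixels
# spawn a single reflection or shadow ray.
traced_fraction = 0.20
hybrid_rays_per_sec = pixels * traced_fraction * 1 * fps

print(f"full path tracing: {full_rays_per_sec / 1e9:.2f} billion rays/s")
print(f"hybrid rendering : {hybrid_rays_per_sec / 1e9:.2f} billion rays/s")
print(f"ratio            : {full_rays_per_sec / hybrid_rays_per_sec:.0f}x")
```

With those toy numbers the hybrid frame needs roughly 20x fewer rays, which is the whole reason the hybrid approach is viable on today's hardware.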
 
Traced light paths involve a lot of low-precision floating-point math (FP8 and FP16). You can calculate it on a higher-precision part like a CPU or a standard GPU core/shader unit, but it's not very efficient.

The NV solution was to take the tensor hardware they were already baking in for AI and use it to calculate the ray math, which is very logical. Optical lens-making calculations going back 30+ years have used tensor matrix math. With their latest second-generation tensor hardware they allowed for lower-precision math, so it's logical to use that bit of hardware. Of course, it means using it eats up a lot of internal chip cache bandwidth; it's a ton of math taxing what isn't a ton of shared cache. I'm sure NV will tune it some more in their next chips, perhaps making some improvements to their cache system as AMD has done with RDNA.

With AMD, we are not exactly sure what their hardware solution will be... but we know from their own slides that their ultimate goal is to do full-scene ray tracing in the cloud. They are betting on streaming to bring real, true ray tracing to games. They do have an in-between plan with the consoles and Navi+ next year, and I believe we can already guess at what their solution will look like. RDNA introduces wave32 computation, doing much the same thing NV is doing with tensors running at lower floating-point precision to perform more "simple" math, such as ray calculation (but also other lower-precision shader routines). Previous AMD cards (Vega/Polaris) use wave64 only... which in cases of branching math (such as ray calculation) becomes very inefficient, potentially calculating very little per clock as the core waits on results to move forward.

RDNA runs dual compute units... that can run at wave32. So much like NV running their tensor cores at FP8 and FP16 to calculate 2-4x the ray paths vs. a full tensor, AMD could use the standard shader method... and by running them at half precision, two at a time, they can in theory do four bits of math per clock where their old cards would have been doing one. The advantage over the NV solution (at least on paper) is that all this math happens in the same engine that is calculating regular shaders. It should have a lot lower impact on cache performance.

The main issue with using shaders to calculate rays (as NV's drivers allow on, say, a 1080) is that the shaders bog down with lots of low-precision ray calculation. It's not efficient at all. AMD's RDNA may well solve that; AMD may only need to supply proper drivers to enable hardware ray tracing on RDNA. (I suspect their Navi+ with ray tracing support won't be much more than a driver update... right now, why provide something that would only highlight how bad cards like Vega would be?) I find this method much more realistic... considering Sony and MS are saying tracing is coming to their next consoles. I don't suspect they will be adding tensor hardware... it's much more logical to let the current shader cores calculate rays. (The trick is speeding them up to make it happen, and what AMD is talking about with dual compute pipes capable of running at half precision should do exactly that.)
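
To illustrate the wave64 vs. wave32 divergence argument, here's a toy model (my own assumptions: neighbouring rays have somewhat similar traversal cost, and a wave keeps issuing until its slowest lane finishes; this is not a model of real GCN/RDNA scheduling):

```python
# Toy model: idle lane-slots for wide vs. narrow waves on divergent ray work.
# Assumes somewhat-coherent ray costs and that a wave runs until its slowest
# lane finishes. Illustration of the argument only, not real hardware behavior.
import math
import random

def ray_cost(i, rng):
    # coherent base cost that varies slowly across the ray index, plus a little noise
    return 8 + int(24 * abs(math.sin(i / 200))) + rng.randint(0, 3)

def idle_fraction(wave_size, num_rays=200_000, seed=1):
    rng = random.Random(seed)
    costs = [ray_cost(i, rng) for i in range(num_rays)]
    useful = sum(costs)
    issued = sum(max(chunk) * len(chunk)
                 for chunk in (costs[i:i + wave_size]
                               for i in range(0, num_rays, wave_size)))
    return 1.0 - useful / issued

for wave in (64, 32):
    print(f"wave{wave}: ~{idle_fraction(wave) * 100:.1f}% of lane-slots sit idle")
```

With coherent rays the narrower wave wastes noticeably fewer lane-slots; with completely incoherent rays the gap shrinks, which is why all of this stays speculation until real hardware shows up.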
 
Raytracing sucks because it's raytracing. There isn't a magic bullet to make this work fast. It has only just become (barely) possible in real time by throwing more hardware at it, and it's usually not "complete" ray tracing even so.
 
The NV solution was to take the tensor hardware they were already baking in for AI and use it to calculate the ray math.

Please stop repeating this unless you can provide proof. Nvidia’s technical documentation clearly states that ray traversal and intersections are done using fixed function hardware.

The internet really sucks.
 
Please stop repeating this unless you can provide proof. Nvidia’s technical documentation clearly states that ray traversal and intersections are done using fixed function hardware.

The internet really sucks.

Go do some reading of NV white papers. RT cores are just tensors running at lower precision. (If you need some further proof, go find some benchmarks of Volta cards running with RTX on.) I am not saying using tensors is a bad thing; I don't see anyone else shipping a consumer part with tensor cores. (Also, what difference does it make if we call the bit doing the math an "RT core" or an FP8/16 tensor matrix... other than, I guess, having to admit NV doesn't design anything for games alone anymore, which shouldn't be a surprise to anyone; no one designs for games alone anymore.)

Marketing just really sucks.
 
Same here. A lot of it is to be blamed directly on DirectX. Sometimes it seems as if Microsoft does this on purpose because of its Xbox console.

It's mostly that supporting a platform as dynamic as the PC is just hard. Consoles are static and much easier to develop for, despite the lack of hardware resources.
 
It's mostly that supporting a platform as dynamic as the PC is just hard. Consoles are static and much easier to develop for, despite the lack of hardware resources.

PC game development requires a crystal ball that you don't need if you're developing for a console.

Game development takes 4-5 years, so early in development the devs have to rub their crystal balls and guess what a mid-range PC is going to look like. Aim too high and you end up with a Crysis that looks insane and wonderful but that no one buys because they can't run it. Aim too low and you end up with a Daikatana that looks 2 or 3 years out of date.

Developing for a console is much easier. It's also why we have gotten so many console ports over the last several years. Now that consoles are more PC-like... their targets are set by their specs.

One advantage of streaming (to sort of get back to ray tracing)... developers will be able to aim at some serious server performance targets. That may trickle down to high-end PC gaming rigs, giving us games that really push the limits of home hardware. Perhaps the average mass-market gamer won't be able to play those games without turning to streaming, but high-end gaming rigs might be able to pull off good performance locally. A future of Crysis-like, make-your-rig-cry games might be just a few years away. lol
 
As has been said repeatedly, and as should be obvious, this kind of change takes time. But it looks like support will soon be the norm going forward. It should be extremely clear by now that ray tracing is here to stay.

Supporting it now: less than 10.
Announced: more than 10.

https://www.rockpapershotgun.com/2019/06/11/nvidia-rtx-ray-tracing-dlss-games-confirmed-2/


Note that the list is already incomplete by at least one: Doom Eternal (Vulkan).
https://www.dsogaming.com/news/doom-eternal-will-support-ray-tracing/
Any chance you can do the list with expected dates?
That would be nice to see.
 
Go do some reading of NV white papers. RT cores are just tensors running at lower precision. (If you need some further proof, go find some benchmarks of Volta cards running with RTX on.) I am not saying using tensors is a bad thing; I don't see anyone else shipping a consumer part with tensor cores. (Also, what difference does it make if we call the bit doing the math an "RT core" or an FP8/16 tensor matrix... other than, I guess, having to admit NV doesn't design anything for games alone anymore, which shouldn't be a surprise to anyone; no one designs for games alone anymore.)

Oh great, your personal pet conspiracy theory again.

This is pure nonsense. You are arguing that NVidia is lying to everyone, pretending there are dedicated RT Cores, when in reality they don't exist and they are just re-using Tensor cores.

WTF would they do that? They wouldn't be able to keep that hidden very long, and so far no industry experts or competitors have called them on it.

NVidia describes in reasonable detail what the very specific RT cores do on page 36 of this whitepaper:
https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf

TURING RT CORES

At the heart of Turing’s hardware-based ray tracing acceleration is the new RT Core included in each SM. RT Cores accelerate Bounding Volume Hierarchy (BVH) traversal and ray/triangle intersection testing (ray casting) functions. (See Appendix D, Ray Tracing Overview, on page 68 for more details on how BVH acceleration structures work.) RT Cores perform visibility testing on behalf of threads running in the SM.

RT Cores work together with advanced denoising filtering, a highly-efficient BVH acceleration structure developed by NVIDIA Research, and RTX compatible APIs to achieve real time ray tracing on single Turing GPU. RT Cores traverse the BVH autonomously, and by accelerating traversal and ray/triangle intersection tests, they offload the SM, allowing it to handle other vertex, pixel, and compute shading work. Functions such as BVH building and refitting are handled by the driver, and ray generation and shading is managed by the application through new types of shaders.

To better understand the function of RT Cores, and what exactly they accelerate, we should first explain how ray tracing is performed on GPUs or CPUs without a dedicated hardware ray tracing engine. Essentially, the process of BVH traversal would need to be performed by shader operations and take thousands of instruction slots per ray cast to test against bounding box intersections in the BVH until finally hitting a triangle and the color at the point of intersection contributes to final pixel color (or if no triangle is hit, background color may be used to shade a pixel).

Ray tracing without hardware acceleration requires thousands of software instruction slots per ray to test successively smaller bounding boxes in the BVH structure until possibly hitting a triangle. It’s a computationally intensive process making it impossible to do on GPUs in real-time without hardware-based ray tracing acceleration (see Figure 19).

The RT Cores in Turing can process all the BVH traversal and ray-triangle intersection testing, saving the SM from spending the thousands of instruction slots per ray, which could be an enormous amount of instructions for an entire scene.


Note that they even break down the two different specialized units in each core:

The RT Core includes two specialized units. The first unit does bounding box tests, and the second unit does ray-triangle intersection tests. The SM only has to launch a ray probe, and the RT Core does the BVH traversal and ray-triangle tests, and returns a hit or no hit to the SM. The SM is largely freed up to do other graphics or compute work. See Figure 20 for an illustration of Turing ray tracing with RT Cores.
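
For anyone curious what those two units actually compute, here is a plain-software sketch of the same two tests (a slab ray/box test and a Möller-Trumbore ray/triangle test). This is only an illustration of the math being offloaded, not NVIDIA's implementation; without RT cores a shader ends up looping over thousands of these per ray.

```python
# Minimal software versions of the two tests named above: a ray/AABB "slab"
# test (used during BVH traversal) and a Moller-Trumbore ray/triangle test.
# Illustrative only; real GPU code is vectorized and far more optimized.

def ray_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned bounding box?"""
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-7):
    """Moller-Trumbore ray/triangle test: returns hit distance t, or None."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:                      # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None

# Example: one ray fired down +Z against a unit box and one of its triangles.
origin, direction = [0.1, 0.2, -5.0], [0.0, 0.0, 1.0]
inv_dir = [1.0 / d if d != 0.0 else float("inf") for d in direction]
print(ray_aabb(origin, inv_dir, [-1, -1, -1], [1, 1, 1]))                   # True
print(ray_triangle(origin, direction, [-1, -1, 0], [1, -1, 0], [0, 1, 0]))  # 5.0
```

A BVH traversal repeats the box test down the tree and only runs the triangle test on the leaves it reaches, which is exactly the loop the whitepaper says the RT core takes off the SM's hands.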

RT cores are NOT Tensor processors. These are two completely different dedicated HW units. They are clearly separate in the SM as well:

[Turing SM block diagram from NVIDIA's Editor's Day slides]



Note that a Turing SM has 4 independent Sub cores. Each sub core contains 2 Tensor cores.

Subcores do NOT contain RT Cores.

Instead it is clearly shown and documented that RT cores are outside the sub cores to let them operate independently in parallel.

In short everything you claim is completely disproven by the Whitepaper you claim you read.
 
Oh great, your personal pet conspiracy theory again.

This is pure nonsense. You are arguing that NVidia is lying to everyone, pretending there are dedicated RT Cores, when in reality they don't exist and they are just re-using Tensor cores.

WTF would they do that? They wouldn't be able to keep that hidden very long, and so far no industry experts or competitors have called them on it.

NVidia describes in reasonable detail what the very specific RT cores do on page 36 of this whitepaper:
https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf




Note that they even break down the two different specialized units in each core.


RT cores are NOT Tensor processors. These are two completely different dedicated HW units. They are clearly separate in the SM as well:

[Turing SM block diagram]


Note that a Turing SM has 4 independent Sub cores. Each sub core contains 2 Tensor cores.

Subcores do NOT contain RT Cores.

Instead it is clearly shown and documented that RT cores are outside the sub cores to let them operate independently in parallel.

In short everything you claim is completely disproven by the Whitepaper you claim you read.

While I agree with you, I am not sure why he even cares. They could use any sort of sorcery they want, and if it's the best, I'll buy it.

If AMD creates a card twice as good, I'll buy that too. I couldn't care less whether it's made from 7nm silicon, dog shit, or pixie dust; all that matters is its performance/cost.
 
While I agree with you, I am not sure why he even cares. They could use any sort of sorcery they want, and if it's the best, I'll buy it.

If AMD creates a card twice as good, I'll buy that too. I couldn't care less whether it's made from 7nm silicon, dog shit, or pixie dust; all that matters is its performance/cost.

Exactly. I am not sure why he thinks it would matter so much that NVidia would need to hide it.

If NVidia actually did get Tensor cores to do bounding box tests and intersection tests, they would just say so, and everyone would be fine with that.

But there is no reason to lie, and it is crystal clear from the whitepaper that these are completely different HW units, specialized in different functions, in different locations.

It's rather baffling how/why he arrived at his unique conclusion.
 
Not sure why you guys think it bothers me... that NV is clearly using tensor FP8 mode to calculate rays. YES, read what you posted from their whitepaper... about BVH acceleration and how it works. I don't care what marketing word NV wants to use... AMD, Intel, Apple, Samsung, and every tech company since forever has come up with really stupid marketing names.

Whatever NV is doing behind the driver is anyone's guess. It's not open source... no game developer, no OS developer, nor anyone but NV knows what is happening after the high-level (and some low-level, in Vulkan and DX12) calls hit the NV driver. Just because they slap a rectangle at the bottom of the core diagram and slap the name "RT core" on it hardly means it's not part of the tensor unit. (Which it clearly is, lol.)

Regardless, it's off topic and isn't important. I am not sure where I threw any shade on NV at all... I simply pointed out why their solution isn't perfect for real-time ray tracing... and unless someone can show me a benchmark where turning on max ray tracing in ANY shipping game doesn't tank performance by 30-50%, I'll change my mind right now and bow down to their fantastic engineering. ;)

Go do some reading of NV white papers. RT cores are just tensors running at lower precision. (If you need some further proof, go find some benchmarks of Volta cards running with RTX on.) I am not saying using tensors is a bad thing; I don't see anyone else shipping a consumer part with tensor cores. (Also, what difference does it make if we call the bit doing the math an "RT core" or an FP8/16 tensor matrix... other than, I guess, having to admit NV doesn't design anything for games alone anymore, which shouldn't be a surprise to anyone; no one designs for games alone anymore.)

Marketing just really sucks.

My ONLY point in bringing up NV's solution, using RT cores, low-precision tensors, or full tensors for denoise, or whatever mumbo-jumbo marketing speak we want to use, WAS that AMD's solution appears to potentially be different but much the same.

The issue with RT is that it requires lots of fast, low-precision math. We need to calculate tens and perhaps hundreds of thousands of rays in a scene... those do not need to be calculated at FP64. A CPU sucks at real-time ray tracing because it's designed to do more complicated math than ray tracing requires; instead of 100 really complicated calculations, tracing requires tens of thousands of very simple ones. (CPUs don't have fancy ways to split their 4 or 8 or 32 real cores into double the number of half-precision units.) Regardless of any of us agreeing on calling NV's solution RT cores or Tensor+ or whatever other name you like... their solution is clear: they are breaking the math up into small bits, throwing them onto an FP8/16 matrix they call BVH acceleration, and doing a metric ton of simple math really quickly. I HAVE NEVER claimed their solution was terrible... it's ingenious; their marketing, however, is pure BS.
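
Just to put a quick number on the precision point (a toy example of my own; it says nothing about what either vendor's silicon actually does internally), here is the same ray/plane intersection evaluated in float16 and float64:

```python
# Toy look at precision: the same ray/plane intersection in float16 vs float64.
# Illustrative only -- not a claim about any GPU's internal datapaths.
import numpy as np

def ray_plane_t(origin, direction, plane_point, plane_normal, dtype):
    o = np.asarray(origin, dtype=dtype)
    d = np.asarray(direction, dtype=dtype)
    p = np.asarray(plane_point, dtype=dtype)
    n = np.asarray(plane_normal, dtype=dtype)
    # distance along the ray to the plane: ((p - o) . n) / (d . n)
    return np.dot(p - o, n) / np.dot(d, n)

args = ([0.3, 0.7, -123.456], [0.3, 0.2, 0.9], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
t64 = ray_plane_t(*args, dtype=np.float64)
t16 = ray_plane_t(*args, dtype=np.float16)
print(f"float64 t = {t64:.6f}")
print(f"float16 t = {float(t16):.6f}  (error {abs(float(t16) - t64):.4f})")
```

How much error is tolerable depends on what the result feeds, which is exactly what's being argued about here.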

From what I have read about AMD's new RDNA stuff... it sounds to me like they are trying to solve the same issue, simply doing it with shader cores directly, by allowing those shaders to operate at lower precision. So instead of doing 1,000 calculations at wave64 they could potentially do 2,000 calculations in the same clock at wave32, which for ray tracing would still be more precision than is required. (NV does use tensors, per their own marketing and engineering folks, for denoise work.) So perhaps, calculating at wave32, AMD wouldn't need as much extra denoise work vs. the even lower-precision calculation NV is apparently doing up front.

It's all speculation, and we'll see when AMD starts talking about actual hardware ray tracing down the road. Looks to me like they will be using shaders that can operate at lower precision, which solves the problem in a different yet very similar way to NV's. I could be wrong; perhaps tensors and/or some other form of RT core is coming to Navi+ down the road... but that doesn't make a ton of sense to me, as it would be more than a simple "+" part, I would think.
 
Exactly. I am not sure why he thinks it would matter so much that NVidia would need to hide it.

Last I will post on this... but in my mind, NV's engineering guys have been adding tensor cores to their parts for two generations now for AI. They were tasked with finding ways to make them work for games... so NV didn't have to just fuse them off on consumer parts and waste a ton of money. So they threw a ton of ideas up, and ray tracing and DLSS made the most sense to pursue as features that could use tensor hardware. Great ideas, and I don't hate NV; their software/hardware engineers are clearly great... and yes, both those technologies are promising even if neither is perfect today.

I just think the NV marketing dept has heard the QQing about how NV chips are no longer designed for gamers first. I mean, they didn't even sell gaming cards based on Volta, right? So calling the tensor mode doing BVH bounding math (which is a freaking math matrix, it's what tensors are designed to accelerate) an "RT core" makes it sound like Turing was actually designed for gamers first. Which it wasn't. And it doesn't freaking matter. It's all marketing. Who cares if it was designed for servers and AI work or not? Clearly NV's marketing wasn't wrong, though... it does seem to bother some to suggest Turing isn't a gamer-first design. (And it doesn't matter.)
 
At least I didn't waste as much time reading that nonsense as you did writing it. You really don't have a clue about this.

We likely won't know for 6+ months, when AMD or Sony or MS or someone starts talking about AMD's ray tracing solution. :)
 
AMD will be doing rays whenever the market's up for it.
Even the 2080 Ti can't do current titles in full raytracing mode. It will have to get better by many factors.
That being said, you have to start somewhere, just like AA was a huge performance hit when it came out.
 
but in my mind

This I can agree with, but please don't pass it off as fact. The math behind raytracing is relatively easy to understand, but it's murder on current GPUs. If Nvidia or anyone else found a way to use tensors to trace rays fast, they would be shouting it from the heavens.
 
It is done exactly as described by ChadD. Marketing wants you to believe they spent 10 years developing it. The Titan V does ray tracing with an equal performance loss to Turing, yet it has no RT cores. …:D
That is because it's done by tensors, just like Turing.
Look at this: https://wccftech.com/titan-v-high-fps-battlefield-v-rtx/
Turing is faster simply because it's a newer, tweaked arch, most likely with a few more tensor cores.
Oh, and the Radeon VII is much sexier than the 2080 Ti or Titan RTX ;)
 
Last I will post on this... but in my mind.

This is how conspiracy theories start. People don't understand something, so they make up stories that fit their worldview rather than challenge it. I just don't get what challenged you so much about how RTX actually works that you had to retreat to the alternate story you are making up.


It is done exactly as described by ChadD. Marketing wants you to believe they spent 10 years developing it. The Titan V does ray tracing with an equal performance loss to Turing, yet it has no RT cores. …:D
That is because it's done by tensors, just like Turing.
Look at this: https://wccftech.com/titan-v-high-fps-battlefield-v-rtx/
Turing is faster simply because it's a newer, tweaked arch, most likely with a few more tensor cores.
Oh, and the Radeon VII is much sexier than the 2080 Ti or Titan RTX ;)


The blind leading the blind. :rolleyes:

Not that WCCF is any kind of source, but did you read your own link:
"The Titan V is probably this capable in ray tracing even without RT cores due to its sheer power. It has more shader units than the Titan RTX "

As explained everywhere, in the absence of RT cores a large shader program is run, and nothing has more shaders than the Titan V. Also, BFV is a hybrid title that runs a variable amount of RT code.

Now compare the lowly 2060 vs. the Titan V in the 3DMark Port Royal RT benchmark. They are comparable, and the 2060 is just as fast, if not faster, despite having a fraction of the Tensor cores the Titan V has.
https://www.dsogaming.com/news/nvid...-mark-port-royal-rtx-2060-as-fast-as-titan-v/
 
The amount of hogwash posted is hilarious... Raytracing must really have ruffled some feathers... but watch the tune change when AMD supports DXR ;)
 
AMD will be doing rays whenever the market's up for it.

Now that IS marketing.

Both consoles are coming out with RT support next year; that means AMD is working on it, but they aren't ready yet.

So they have to pretend it's the market that isn't ready, when they simply don't have the product.
 
The tune will not change. The AMD crowd does not care for ray tracing at $1,200 and 30 FPS... :D
 
I see Doom Eternal is also going to support ray tracing. That, plus support in the likes of Cyberpunk 2077, means it is starting to gain momentum.
 
Are you high? How is the market ready? 4-5 games and no one really uses this feature yet... haha
 
Are you high? How is the market ready? 4-5 games and no one really uses this feature yet... haha

If that's directed at me, then no, I don't take drugs, although sometimes it would probably help. Doom Eternal, Cyberpunk 2077, and the new Wolfenstein are some of the most anticipated upcoming games. I would say it definitely will gain traction.
 
Well, then you contradicted yourself. The market will be ready someday, and that is when AMD will support it... :D
 
That's fine, but it doesn't explain willful ignorance about how the tech actually works. The AMD crowd, as you call them, are free to stand on the sidelines and hate all day long.
I resent that characterization.
Ray tracing is fine; if you want it, go for it. Enjoy your RTX 2060 or RTX 2070 at low resolution and lower frame rates. If that makes any of you happy, cool.

What I resent even more is that calling it "ray tracing" is misleading. It should be named:
Nvidia ray tracing: the way you are meant to pay us (R).
Supposedly fast enough for ray tracing, when in reality the segmented performance is clearly holding it back. Yet with stuff like tessellation, where was the segmentation between cards within the same generation?
 
Are you high? How is the market ready? 4-5 games and no one really uses this feature yet... haha

As far as "The Market" not being ready, when does it ever make sense to sit back and let your competitor define the market? It's not a desirable position to be in. To suggest otherwise is spin.

The simple reality is that AMD doesn't have HW ready, so they have to ignore it, or spin it and they are trying to do both. As soon as they have HW ready they will bring it to market, regardless of the state of "The Market".

What matters much more than the raw count of titles are the killer apps, or in this case the killer games.

From what I see, Cyberpunk 2077 is the first killer app for ray tracing. It's probably the most anticipated game coming out in the next year. After the CP announcement I have seen several people say that they were going to get an RTX card now, and Doom Eternal will likely be another. That one will be interesting because they are using Vulkan and claim they are doing RT better than everyone else.

If NVidia is essentially giving you similar perf/$ to AMD, with the bonus of free RTX as games like CP 2077 and Doom Eternal come to market with RT support, then AMD is missing the boat. Only spin suggests otherwise.
 
I'm not dishonest. Nvidia is. It's your ignorance at play here. It was explained earlier what RTX cards are, and yet you call me ignorant and a liar.
 
Funny how, when faced with the truth, you resort to personal attacks. No wonder some of the most informed members on this forum don't even care to post anymore... Have a great day... :D
 
I'm not dishonest. Nvidia is. It's your ignorance at play here. It was explained earlier what RTX cards are, and yet you call me ignorant and a liar.

No, a lie was attempted, in conflict with the facts (whitepapers and performance data)... and you cling to that lie like a tick.

The only question that remains is whether it is ignorance or deliberate lies.
 
Well, if AMD is going to do raytracing, which they have indicated they are working on, you are looking at the hardware design that is going to use it: Navi. I think ChadD is on the right track in seeing that wave32 will allow raytracing ability right in the shaders; considering RDNA has ASIC engines, compute units can be set aside as needed for ray tracing while shading happens in parallel. This may be a much more elegant design than Nvidia's tacked-on RT cores, which when used just choke performance even for one aspect of raytracing, such as reflections. Maybe Nvidia's design suffers because it was tacked on rather than fully integrated. I don't know, but ray tracing will be coming with AMD hardware; we just have to see how it all works out.

As for Nvidia's raytracing hardware: very cool indeed! Just not enough to entice me to buy this generation, but I do look forward to what Nvidia does next, plus more titles and applications using the unique feature.
 