I'm not looking to get into the AMD vs Nvidia banter, or at least not the anti-RTX side of it. I've said more than enough: I think the cost is too high and there just isn't enough software to justify it right now. I will say this was an impressive demo. What I really want to know after watching it: what kind of performance advantage, if any, would RT-specific acceleration cores have on the result? Does RT acceleration have any negative impact on quality? Will this implementation be able to use RT acceleration? Can ray traced surfaces look realistic, or just impossibly shiny? I'm just left with more questions than before. The last one isn't new, though...

Okay one last thing, this is driving me nuts!

People keep talking about Tensor cores like they are the same thing as the RT cores. I was under the impression that Tensor cores are low-precision matrix units: 4x4 FP16 multiplies with FP32 output, designed specifically for AI processing. RT cores are something new, designed to accelerate ray tracing by handling some of the more intricate computations needed for calculating ray bounces and intersections. Are they the same unit or separate? I thought separate; maybe I read wrong???
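In other words, roughly this operation, done in a single step per tile. A quick Python/numpy sketch of what I mean (shapes and dtypes are illustrative only, this is not NVIDIA's actual API):

```python
import numpy as np

# Rough sketch of the matrix multiply-accumulate a Tensor core is built around:
# D = A @ B + C, with A and B stored in FP16 and the accumulation done in FP32.
a = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
b = np.random.rand(4, 4).astype(np.float16)   # FP16 input tile
c = np.zeros((4, 4), dtype=np.float32)        # FP32 accumulator

d = a.astype(np.float32) @ b.astype(np.float32) + c  # emulate FP32 accumulation
print(d.dtype)  # float32
```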
 
If we want realism then at some point we may need to abandon our 3D mesh + UV-mapped textures technique. That's especially true for interactive content such as games. What we're seeing here is a much-needed improvement to the lighting model, but not an improvement to everything. So yeah, all surfaces won't suddenly appear realistic. There are other things that contribute to that.

As for performance, you can go look at some "fast" ray tracers like Octane, and even something like KeyShot (which isn't unbiased, but may be more similar to what a game engine does). There is a ton of activity in non-gaming applications if anyone is interested... gaming RT is just a cute newborn puppy in comparison.
 
Nope, Nvidia wasn't the first.

PowerVR's ray tracing tech just further proves that Nvidia's "ray casting" engine was not designed from the ground up for ray calculations. RTX cards doing 5-10 Giga rays a second... at 300+ watts is crap when you consider PowerVR was doing 300 million rays a second at 2 watts 5+ years ago now. A 300 watt PowerVR part would be pulling 10x as many rays as Nvidia's... even accounting for extreme diminishing returns it would easily do 20-30 Giga rays a second.

Tensor units are no doubt better suited to ray calculation than x86 hardware. But they're also not custom built for the task the way PowerVR's ASIC was/is.
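Rough back-of-the-envelope math behind that, using the figures above (these are the quoted claims, not measurements):

```python
# Back-of-envelope rays-per-watt comparison using the figures quoted above.
powervr_rays = 300e6     # 300 Mrays/s claimed
powervr_watts = 2        # ~2 W claimed
rtx_rays = 10e9          # 10 Grays/s claimed (upper bound)
rtx_watts = 300          # 300+ W board power

print(powervr_rays / powervr_watts)   # 150 Mrays/s per watt
print(rtx_rays / rtx_watts)           # ~33 Mrays/s per watt

# Naive linear scaling of the PowerVR part to 300 W, before diminishing returns:
print(powervr_rays / powervr_watts * 300 / 1e9)   # ~45 Grays/s
```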
 
PowerVR's ray tracing tech just further proves that Nvidia's "ray casting" engine was not designed from the ground up for ray calculations. RTX cards doing 5-10 Giga rays a second... at 300+ watts is crap when you consider PowerVR was doing 300 million rays a second at 2 watts 5+ years ago now. A 300 watt PowerVR part would be pulling 10x as many rays as Nvidia's... even accounting for extreme diminishing returns it would easily do 20-30 Giga rays a second.

Tensor units are no doubt better suited to ray calculation than x86 hardware. But they're also not custom built for the task the way PowerVR's ASIC was/is.
From what I understand, while an ASIC is better for ray tracing, it has limitations like the inability to access a lot of memory and the need for specialized code. Correct me if I'm wrong on this, because no ray tracing expert has come out and explained it. But ray tracing is decades old and has been used by the industry for movies and photos, and everyone uses the CPU to do it, not an ASIC. I believe Imagination Technologies made workstation/server-like hardware with dual ASICs that slightly outperformed Xeons, but it cost more and was again specialized. Tensor cores are still specialized, because what else can they do?

If an ASIC is better at ray tracing, then why hasn't the industry migrated over?
 
Happy to hear about this, and even more so because it will help drive competitors to continue developing their own solutions. Interesting stuff from Crytek. If they're not gonna do a Crysis 4, then at least do a remake of the first one with this to show it all off.
 
Meanwhile Crytek sues the developers who use the CryEngine, lol. Good luck getting someone to use your raytracing after that, lol.
 
From what I understand, while an ASIC is better for ray tracing, it has limitations like the inability to access a lot of memory and the need for specialized code. Correct me if I'm wrong on this, because no ray tracing expert has come out and explained it. But ray tracing is decades old and has been used by the industry for movies and photos, and everyone uses the CPU to do it, not an ASIC. I believe Imagination Technologies made workstation/server-like hardware with dual ASICs that slightly outperformed Xeons, but it cost more and was again specialized. Tensor cores are still specialized, because what else can they do?

If an ASIC is better at ray tracing, then why hasn't the industry migrated over?

There are many issues... first, accuracy. GPUs are faster at quick and dirty work but are not great at hyper-accurate work. A company like Pixar rendering a movie wants 100% per-pixel accuracy they won't get from GPU or even ASIC acceleration. (When the double- and triple-check math comes into play, your ASIC would end up being a long-instruction-set processor anyway.) So a program like Pixar's RenderMan can use CUDA for previs work... but final renders are going to be done on CPUs, as the GPU output is just not going to be perfectly pixel accurate. So GPUs make great artistic tools, since artists can see real-time output that is good enough to compose scenes etc... but final renders still get sent to CPU render farms. (Also keep in mind most previs work is done at 1080p, or perhaps 1440p or 4K... but most pro stuff is rendered at 8K and has been for a while now. No GPUs are doing that in real time, and they run into severe RAM issues.)

So the next issue becomes RAM... a CPU render farm can be fed all the RAM you can throw at it, whereas GPU-based render solutions are going to be capped in terms of addressable RAM. THIS is why the Radeon SSG parts are so popular with television / video commercial type companies... where they get into complicated previs work and run into memory issues. Companies like SpinVFX, who have won Emmy awards for work on Game of Thrones, do AMD PR work for EPYC and have used Radeon SSGs for previs work. For the most part, if you're rendering 8K full-CGI scenes, it's still much more accurate and faster (and cheaper) to use CPU farms.

I guess the bottom line is that building a tracing ASIC that could do the high-quality work required would need to address massive amounts of RAM cleanly... and would have to include a lot more instructions than your typical ASIC. So the advantages over stock EPYCs/Xeons with their very good memory controllers wouldn't likely be that great.

(For the record, I'm no expert on tracing either.)
 
If these results can be competitive with RTX, Nvidia is gonna have a very bad day.
Hell, just give me the option to buy a non-RTX flagship and I'll bite (something that beats the 1080 Ti by a significant margin and doesn't double the price). An extra $500-$600 for the current jump is steep IMO. They can keep the raytracing...
 
....People keep talking about Tensor cores like they are the same thing as the RT cores. I was under the impression that Tensor cores are low-precision matrix units: 4x4 FP16 multiplies with FP32 output, designed specifically for AI processing. RT cores are something new, designed to accelerate ray tracing by handling some of the more intricate computations needed for calculating ray bounces and intersections. Are they the same unit or separate? I thought separate; maybe I read wrong???

You are right, they are different.
 
nVidia has a big launch event, RTX, real-time raytracing.

Everyone wants it.

Everyone who doesn't have it is jealous... be it other hardware vendors, game engine devs, or gamers.

AMD says "we are gonna do that too... soon" (getting on the bandwagon)

Microsoft, Sony start talking about how the next-gen consoles will include it. (more on the bandwagon)

Game devs are all interested too: how can I get this in my game??

Unreal Engine has support. (the caravan has needs, we got you)

(at this point, the ball is rolling and isn't likely to be stopped anytime soon)

What are the rest of the game engine developers going to do? You guessed it: add it to their game engines as well!

Crytek: "We gotta get a ray tracing demo out, sell more licenses for our engine!" And to make it even more attractive, make it appear that even 2 year old GPU's can do it! This is the sweet spot for game dev's.. they target that hardware for their games to try and maximize the hardware audience that can run it.

This is all we really know for sure.

A Vega 56 doing ray tracing... we already know how big of a performance hit a 2080 Ti takes when enabling ray tracing (granted, the Frostbite engine has a shit DX12 pipeline...). What we don't know: how many rays were in this demo? How does that compare to the few other raytraced demos or games we've seen so far? How does this demo perform on other GPUs, including those with RT cores?

Without that information, all we've got is a pretty sales pitch (that isn't directed at us...).
 
You know what we need? We need that Quake 2 Ray Tracing mod to use this method to see how it compares to Nvidia's RTX.


You're in luck: a couple of years ago someone made a proprietary OpenGL implementation of a pathtraced engine for Quake 2 that runs entirely on traditional shader cores and can be configured to render entirely using pathtracing, or to use hybrid techniques with rasterized ambient occlusion.
https://amietia.com/q2pt.html

I've tried both that and the RTX-accelerated Vulkan Quake 2 engine on my 2080, and aside from the Vulkan version benefiting visually from "free" denoising on the Tensor cores, there's a huge performance difference when the engines are configured similarly (number of rays/bounces).

That's for an entirely pathtraced implementation though, which is a bit extreme and not representative of actual games that use raytracing for just a few effects.
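For anyone wondering why full pathtracing is so brutal: the cost is basically pixels x samples x bounces. A toy Python sketch under those assumptions (the trace() stub is just illustrative, not either engine's real code):

```python
import random

# Toy sketch of why a fully pathtraced engine is so expensive: the work scales
# roughly with pixels x samples-per-pixel x bounces.
WIDTH, HEIGHT = 1920, 1080
SAMPLES_PER_PIXEL = 1
MAX_BOUNCES = 4

def trace(depth=0):
    # A real path tracer would intersect the scene here, shade the hit, and
    # recurse along a sampled bounce direction; this just models the recursion.
    if depth >= MAX_BOUNCES or random.random() < 0.2:  # crude Russian roulette
        return 0.0
    return 0.5 * trace(depth + 1)

ray_segments = WIDTH * HEIGHT * SAMPLES_PER_PIXEL * MAX_BOUNCES
print(f"~{ray_segments / 1e6:.1f} M ray segments per frame at 1 spp")  # ~8.3 M
```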
 
nVidia has a big launch event, RTX, real-time raytracing.

Everyone wants it.

How about reality....

- Nvidia realizes they have lost another round of consoles... MS and Sony are both going to be powering their consoles with Navi.
- Nvidia realizes they just lost the Google game streaming GPU server contract to AMD's 7nm Vega parts. The MI60 powered the Project Stream test... and AMD likely has the contract for all the upcoming Google GPU server clusters that will power the full service when they announce it.
- Nvidia, with their close ties to the game developer industry, KNOWS that ray tracing is being worked into the MS and Sony APIs... for games coming to the AMD-powered PS5 / MS next-gen.

Nvidia designed their new GPUs for AI server compute first... Volta added Tensor cores. The market at the time, however, gave Nvidia no reason to put those chips into consumer GPUs. So when its follow-up came around... they decided to strike first and start talking about ray tracing, even though we all know it was half baked and not ready. Zero games used RTX at launch... and even now support amounts to a couple of games. Because the reality is... the tracing games are coming, and they have been in development for a good while already, but they are aimed directly at PS5 / Xbox-next hardware. (Navi)

The RTX launch was nothing but a marketing move intended to preempt a bit of upcoming AMD thunder. Yes, NV is still the most popular end-user gamer card. But Nvidia is losing all the upcoming streaming contracts. If streaming takes off, NV is going to feel the burn.
 
How about reality....

- Nvidia realizes they have lost another round of consoles... MS and Sony are both going to be powering their consoles with Navi.
- Nvidia realizes they just lost the Google game streaming GPU server contract to AMD's 7nm Vega parts. The MI60 powered the Project Stream test... and AMD likely has the contract for all the upcoming Google GPU server clusters that will power the full service when they announce it.
- Nvidia, with their close ties to the game developer industry, KNOWS that ray tracing is being worked into the MS and Sony APIs... for games coming to the AMD-powered PS5 / MS next-gen.

Nvidia designed their new GPUs for AI server compute first... Volta added Tensor cores. The market at the time, however, gave Nvidia no reason to put those chips into consumer GPUs. So when its follow-up came around... they decided to strike first and start talking about ray tracing, even though we all know it was half baked and not ready. Zero games used RTX at launch... and even now support amounts to a couple of games. Because the reality is... the tracing games are coming, and they have been in development for a good while already, but they are aimed directly at PS5 / Xbox-next hardware. (Navi)

The RTX launch was nothing but a marketing move intended to preempt a bit of upcoming AMD thunder. Yes, NV is still the most popular end-user gamer card. But Nvidia is losing all the upcoming streaming contracts. If streaming takes off, NV is going to feel the burn.
But Turing was 10 years in the making...
 
You're in luck: a couple of years ago someone made a proprietary OpenGL implementation of a pathtraced engine for Quake 2 that runs entirely on traditional shader cores and can be configured to render entirely using pathtracing, or to use hybrid techniques with rasterized ambient occlusion.
https://amietia.com/q2pt.html

I've tried both that and the RTX-accelerated Vulkan Quake 2 engine on my 2080, and aside from the Vulkan version benefiting visually from "free" denoising on the Tensor cores, there's a huge performance difference when the engines are configured similarly (number of rays/bounces).

That's for an entirely pathtraced implementation though, which is a bit extreme and not representative of actual games that use raytracing for just a few effects.
Whatever Crytek did, it seems to denoise the image; otherwise it would look like this Quake 2 demo.
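For anyone unfamiliar with what denoising buys you: the idea is to trade per-pixel ray noise for smoothing. A toy Python sketch with a plain box filter (real denoisers, like the Tensor-core AI denoiser, are far more sophisticated than this):

```python
import numpy as np

# Toy sketch of denoising a low-sample raytraced image: average each pixel
# with its neighbours, trading per-pixel Monte Carlo noise for blur.
def box_denoise(img, radius=1):
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

noisy = np.random.rand(8, 8).astype(np.float32)   # stand-in for a 1 spp render
print(noisy.std(), box_denoise(noisy).std())      # filtered version is smoother
```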
 
"RT cores" is rather a misnomer, as I don't believe there is any such thing. There are FP32, FP16, and Tensor cores. The FP units may be used for anything; the Tensor cores are used for DLSS and denoising.

RT isn't exactly a set of cores; it is fixed-function hardware, much like the texture or geometry units, that does quite different stuff from the rest of the hardware on a GPU and is definitely separate silicon.

That is also why NV doesn't say a card has this many RT cores; they say this card can do this many rays per second.
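To give an idea of the kind of work that fixed-function hardware offloads: BVH traversal is basically millions of ray vs axis-aligned-box tests per frame. A rough Python sketch of the textbook "slab" test (the actual hardware details aren't public; shader cores can run this too, just with far fewer tests per clock):

```python
# Standard ray vs axis-aligned-bounding-box slab test, done at every BVH node.
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    tmin, tmax = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# Ray from the origin along +x against a unit box spanning x in [1, 2].
# (1e9 stands in for 1/0 on axes where the direction component is zero.)
print(ray_hits_aabb((0, 0, 0), (1.0, 1e9, 1e9), (1, -0.5, -0.5), (2, 0.5, 0.5)))  # True
```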
 
There are many issues... first, accuracy. GPUs are faster at quick and dirty work but are not great at hyper-accurate work. A company like Pixar rendering a movie wants 100% per-pixel accuracy they won't get from GPU or even ASIC acceleration. (When the double- and triple-check math comes into play, your ASIC would end up being a long-instruction-set processor anyway.) So a program like Pixar's RenderMan can use CUDA for previs work... but final renders are going to be done on CPUs, as the GPU output is just not going to be perfectly pixel accurate. So GPUs make great artistic tools, since artists can see real-time output that is good enough to compose scenes etc... but final renders still get sent to CPU render farms. (Also keep in mind most previs work is done at 1080p, or perhaps 1440p or 4K... but most pro stuff is rendered at 8K and has been for a while now. No GPUs are doing that in real time, and they run into severe RAM issues.)

GPUs support the same IEEE double-precision floating point math standard as CPUs (NVIDIA and AMD alike). The claim that GPUs are not as accurate as CPUs hasn't been accurate for several major GPU generations.

Oops:
https://nvidianews.nvidia.com/news/...logy-for-accelerating-feature-film-production
 
GPUs are faster at quick and dirty work but are not great at hyper-accurate work.
Why would a GPU not be pixel perfect compared to CPUs? Do you mean that because of memory limitations the engine has to throw less stuff at it, or limit the floating point arithmetic, or something? Just curious; I always thought switching between CPU and GPU rendering resulted in the same image...
 
Whatever Crytek did, it seems to denoise the image; otherwise it would look like this Quake 2 demo.
The Crytek demo is showing ray traced reflections... not lighting like in that Quake 2 demo. The evidence is the bollards on the right side of the street near the end: they would be casting two shadows from the two green facade light fixtures, but instead they cast what looks like a single shadow-mapped shadow coming from an invisible light source near the center of the entryway.
 
Why would a GPU not be pixel perfect compared to CPUs? Do you mean that because of memory limitations the engine has to throw less stuff at it, or limit the floating point arithmetic, or something? Just curious; I always thought switching between CPU and GPU rendering resulted in the same image...

Because the geometry rendering a GPU is designed to do is not ray tracing. They are two very different types of rendering. Pixar's RenderMan is one of the best known ray tracing engines, used for Pixar movies. They have another program called Flow which uses GPUs to provide real-time rendering. The issue is that Flow renders don't look like RenderMan renders. There are features RenderMan can do that they haven't so far been able to fake with a GPU render. To put it simply, a full ray tracing engine like RenderMan has features that are just not going to be faster, or even possible, on a GPU; they require a general-purpose processor.

https://renderman.pixar.com/news/renderman-xpu-development-update

Having said that, Pixar is working on a project called XPU... which is intended to take advantage of both CPU and GPU resources for final renders. But there are a lot of barriers... not least of all that for something like a Pixar-quality movie, a single frame can easily require 70 GB of RAM to render.

However, to your specific question as to why GPUs aren't good for final renders (and even with XPU they won't be used for the entire pipe): it comes down to the way GPUs calculate math. I know math should be math, but it isn't. IEEE 754 is the standard for floating point math. GPUs haven't always conformed to this standard; it's only the last few generations of cards that are somewhat compliant.

Here is some NV documentation on the subject. The bottom line is GPUs are simply not accurate in the same way. The GPU folks haven't come up with some wonder cores capable of running circles around x86 programmable cores; they have instead taken a compute core and simplified it. They work faster... but they have never done that work at the same level of precision. Nvidia and AMD have both been adding higher-precision modes over the last few generations with things like FP64 etc... but their implementations still mostly cut a few corners, which is unimportant for games or previs work... but for final 70 GB renders that demand zero errors, they simply are not going to be capable of doing the work alone. (It will be interesting to see Pixar's XPU stuff in action though... if they have found a way to use those GPU cores for setup or some part of final renders, it might be interesting. But people expecting 1000-fold speed increases in final renders vs CPU farms are dreaming.)
https://docs.nvidia.com/cuda/floating-point/index.html

4.5. Differences from x86
NVIDIA GPUs differ from the x86 architecture in that rounding modes are encoded within each floating point instruction instead of dynamically using a floating point control word. Trap handlers for floating point exceptions are not supported. On the GPU there is no status flag to indicate when calculations have overflowed, underflowed, or have involved inexact arithmetic. Like SSE, the precision of each GPU operation is encoded in the instruction (for x87 the precision is controlled dynamically by the floating point control word).

5.4. Verifying GPU Results
The same inputs will give the same results for individual IEEE 754 operations to a given precision on the CPU and GPU. As we have explained, there are many reasons why the same sequence of operations may not be performed on the CPU and GPU. The GPU has fused multiply-add while the CPU does not. Parallelizing algorithms may rearrange operations, yielding different numeric results. The CPU may be computing results in a precision higher than expected. Finally, many common mathematical functions are not required by the IEEE 754 standard to be correctly rounded so should not be expected to yield identical results between implementations.

When porting numeric code from the CPU to the GPU of course it makes sense to use the x86 CPU results as a reference. But differences between the CPU result and GPU result must be interpreted carefully. Differences are not automatically evidence that the result computed by the GPU is wrong or that there is a problem on the GPU.

Computing results in a high precision and then comparing to results computed in a lower precision can be helpful to see if the lower precision is adequate for a particular application. However, rounding high precision results to a lower precision is not equivalent to performing the entire computation in lower precision. This can sometimes be a problem when using x87 and comparing results against the GPU. The results of the CPU may be computed to an unexpectedly high extended precision for some or all of the operations. The GPU result will be computed using single or double precision only.
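A tiny example of the reordering point from that doc. This is float32 in Python/numpy, purely illustrative (fused multiply-add is another source of differences, not shown here):

```python
import numpy as np

# Mathematically equivalent orders of the same float32 additions give
# different answers once parallel code reassociates them.
x, y, z = np.float32(1e8), np.float32(-1e8), np.float32(1.0)

print((x + y) + z)   # 1.0
print(x + (y + z))   # 0.0 -- the 1.0 is absorbed before the big values cancel
```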
 
The trick with this Crytek demo (and the older PowerVR stuff) is that it's all sharp reflections. The coherent rays are a whole lot faster per-ray, and noise isn't much of an issue even at 1 spp. Rougher materials either mean blurring the result after tracing (cheap but doesn't look great) or getting rid of the ray coherency and low-noise properties to some extent.

I think voxel cone tracing is usually just better, but for some aesthetics (like in the Crytek demo) raytraced reflections can be reasonably cheap and very high quality.
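For reference, a perfect mirror reflection is one fully determined ray per hit, r = d - 2(d·n)n, which is why those rays stay coherent and low-noise; rough materials need many jittered rays around r (or a blur afterwards). A quick Python sketch of the mirror case (illustrative, not any engine's code):

```python
import numpy as np

# One reflection ray per hit for a perfect mirror: r = d - 2*(d.n)*n.
def mirror_reflect(d, n):
    d, n = np.asarray(d, dtype=float), np.asarray(n, dtype=float)
    return d - 2.0 * np.dot(d, n) * n

d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # incoming direction (unit length)
n = np.array([0.0, 1.0, 0.0])                   # surface normal
print(mirror_reflect(d, n))                     # [0.707..., 0.707..., 0.]
```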
 
GPUs support the same IEEE double-precision floating point math standard as CPUs (NVIDIA and AMD alike). The claim that GPUs are not as accurate as CPUs hasn't been accurate for several major GPU generations.

Oops:
https://nvidianews.nvidia.com/news/...logy-for-accelerating-feature-film-production

Welcome to [H] btw. :) GPUs are mostly IEEE 754 compliant. Pixar does no doubt use CUDA for Flow... which does previs work, and is awesome. I'm not saying GPUs are useless... just that for final renders, CPUs are still going to have to do the heavy lifting unless GPUs add some super-accurate compute modes. If they do, I'm not so sure they would still have any advantage over simply adding more CPU cores. Pixar should be releasing their XPU stuff this year... it will be interesting to see if they can make CPU+GPU work for final renders and how much performance there is to find there.
 
I found a comparison of the Titan V & Titan RTX in Battlefield V:
https://www.overclock3d.net/news/gp..._rtx_2080_ti_when_ray_tracing_battlefield_v/1

The Titan V is still using the Tensor cores for denoising, but lacking acceleration from the RT cores limits performance to around RTX 2070 levels with RTX on in BF:V.

So obviously offloading BVH calculations (figuring out what the rays hit) to the RT cores does free up quite a lot of shader resources in this instance, but we don't know how good Nvidia's DXR fallback layer for Volta is.

Certainly by the numbers, RTX-accelerated cards can cast more rays in less time, but as Crytek's demo shows, rays per second matters less than the engine implementation when only a few effects are actually raytraced in a scene that's mostly regular shader effects.

The success of raytracing in games will come down to developers finding ways to get the most visual "pop" for the smallest number of rays cast while continuing to rely on traditional techniques for the bulk of the rendering.
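Some rough numbers on the ray budget for a single effect (illustrative Python, not benchmarks):

```python
# Ray budget for one raytraced effect, e.g. one reflection ray per pixel at
# 1080p60, versus the headline "Giga rays per second" marketing figures.
pixels = 1920 * 1080
fps = 60
rays_per_pixel = 1          # one reflection ray, no extra bounces

print(pixels * fps * rays_per_pixel / 1e9)   # ~0.12 Grays/s needed

# Even a card quoted at 10 Grays/s only helps if the rest of the frame
# (shading, denoising, BVH updates) leaves time to trace at all -- which is
# why the implementation matters more than the headline rays/s number.
```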
 
How about reality....

- Nvidia realizes they have lost another round of consoles... MS and Sony are both going to be powering their consoles with Navi.
- Nvidia realizes they just lost the Google game streaming GPU server contract to AMD's 7nm Vega parts. The MI60 powered the Project Stream test... and AMD likely has the contract for all the upcoming Google GPU server clusters that will power the full service when they announce it.
- Nvidia, with their close ties to the game developer industry, KNOWS that ray tracing is being worked into the MS and Sony APIs... for games coming to the AMD-powered PS5 / MS next-gen.

Nvidia designed their new GPUs for AI server compute first... Volta added Tensor cores. The market at the time, however, gave Nvidia no reason to put those chips into consumer GPUs. So when its follow-up came around... they decided to strike first and start talking about ray tracing, even though we all know it was half baked and not ready. Zero games used RTX at launch... and even now support amounts to a couple of games. Because the reality is... the tracing games are coming, and they have been in development for a good while already, but they are aimed directly at PS5 / Xbox-next hardware. (Navi)

The RTX launch was nothing but a marketing move intended to preempt a bit of upcoming AMD thunder. Yes, NV is still the most popular end-user gamer card. But Nvidia is losing all the upcoming streaming contracts. If streaming takes off, NV is going to feel the burn.
This post seems to be rooted in some alternate reality. Do you really think MS and Sony wouldn't prefer more powerful Nvidia GPUs in their crapboxes if Nvidia was willing to stoop as low as AMD does in giving their chips away at break-even? Notice how the chips supplied to the consoles are never even a blip on AMD's financials?

As for raytracing in the context of consoles, it's also wishful thinking - more buzzword surfing a la "4K" and "VR", when the crapboxes were too underpowered to pull off legit 4K60, or anything beyond low-poly VR at 90 FPS. And the PS5 / Xbox Two will be too inept for legit raytracing at anything beyond 320p30. The reality is that the next consoles will be struggling just to finally render legit 4K60 without checkerboarding and uprezzing cheats - never mind raytracing on top.

All the wishful thinking over the years that AMD chips in the consoles would somehow translate to AMD GPUs on PC gaining a performance edge on console ports? It never amounted to anything beyond fan fiction on forums, for the simple reason that hardware-level stuff is abstracted away from developers, and they don't care about the hardware underneath or low-level optimizing for it.
 
This post seems to be rooted in some alternate reality. Do you really think MS and Sony wouldn't prefer more powerful Nvidia GPUs in their crapboxes if Nvidia was willing to stoop as low as AMD does in giving their chips away at break-even? Notice how the chips supplied to the consoles are never even a blip on AMD's financials?

As for raytracing in the context of consoles, it's also wishful thinking - more buzzword surfing a la "4K" and "VR", when the crapboxes were too underpowered to pull off legit 4K60, or anything beyond low-poly VR at 90 FPS. And the PS5 / Xbox Two will be too inept for legit raytracing at anything beyond 320p30. The reality is that the next consoles will be struggling just to finally render legit 4K60 without checkerboarding and uprezzing cheats - never mind raytracing on top.

All the wishful thinking over the years that AMD chips in the consoles would somehow translate to AMD GPUs on PC gaining a performance edge on console ports? It never amounted to anything beyond fan fiction on forums, for the simple reason that hardware-level stuff is abstracted away from developers, and they don't care about the hardware underneath or low-level optimizing for it.

You're right that being in the next-gen consoles gives AMD no advantage in PC GPU terms. You're right that APIs are APIs, and console ports will run equally well (or not so well) on every GPU, whether Nvidia, AMD or Intel (assuming the next-gen console APUs are not going to bake something unique into their chiplet design).

The point, however, is that Nvidia's Tensor-core-powered tracing lives or dies by how tracing is implemented in the next-gen consoles. Like it or not, AAA developers don't duplicate work unless it opens a new market in a major way. No one is going to code a shader-based traced lighting component into their console games in 2021 and then go and code a PC-focused NV Tensor-core implementation. OK, a few might if NV throws money at them. More likely, however, if AMD's console part doesn't include some form of tensor hardware and any tracing, marketing or not, is instead going to be done via shaders... well, NV is selling a lot of silicon that is going to sit and spin. Sure, their cards will still be able to do the shader method of "ray tracing" raster hybrid rendering... they might even be faster than AMD and Intel at doing it. Still, if the consoles don't include tensor hardware... then game use of the Tensor cores is basically DOA. Like it or not, developers are going to target the consoles as they have always done.
 
The sad thing is that a lot of people seem to think RT cores are required for ray tracing, or that Nvidia said they were, so they are getting way too excited about this demo and declaring RT cores a scam. If the technique Crytek is using can get good results and run decently on all hardware then that would be fantastic, but it's also reasonable to think that it won't look as good as what RTX cards can produce and will still cause a performance hit.
From GamerNexus's videos on the topic, I think Nvidia has itself to blame for that.
 
The Titan V is still using the Tensor cores for denoising, but lacking acceleration from the RT cores limits performance to around RTX 2070 levels with RTX on in BF:V.

Interesting. If not having RTX-like fixed-function hardware has the overall performance impact of "going down" to the next lower performance tier, I would much prefer that hit on standard hardware to better performance on fixed-function. If all we can expect (and this is speculation) is a one-tier downgrade, we're simply talking about a one-year product roadmap difference: the low-end performance from 2018 would become the medium tier in 2019 on standardized hardware, equivalent to low-end fixed-function 2019 performance, and so on each year. That's a "small" tax to pay that would be recovered each year, for the benefit of everyone being able to run it rather than just benefiting Nvidia's fixed-function hardware.

Of course, that's the "for now" scenario. Chances are AMD will come up with their own RTX-like, DXR-accelerating fixed-function hardware, and by 2021 we'll have two hardware architectures that work with the DXR standard. Fast forward to 2025 or so and, after a few generations, both AMD and Nvidia will have converged on mostly similar hardware that will eventually syncretize into a new "raytracing shader" type.

This post seems to be rooted in some alternate reality. Do you really think MS and Sony wouldn't prefer more powerful Nvidia GPUs in their crapboxes if Nvidia was willing to stoop as low as AMD does in giving their chips away at break-even?

So what you're really saying is that AMD had the more attractive product offering for the intended market; therefore, MS/Sony wouldn't prefer anything Nvidia offers, because Nvidia can't compete with AMD on building satisfactory hardware for these "crapboxes". Also, past evidence doesn't dictate future performance. The potential advantage is there, in millions of users' worth of install base. Just because it hasn't materialized so far doesn't mean it never will.

It's all about perspective, pal.
 
I think Crytek is pushing back a little... I'm not saying RTX ray tracing is a sleight-of-hand trick... but Crytek just pulled a proverbial rabbit out of Nvidia's ass... lol
 
TBH, when I first saw the demo, my mind immediately compared it to BFV's raytracing reveal and Crytek's demo looks better IMO.

I agree, but the Crytek demo is a canned demo, while the BFV ray tracing reveal was from the actual game, if I recall correctly. I want to see how CryEngine ray tracing looks in a game and how it performs on a variety of hardware. Crysis 3 is still quite demanding today without any kind of ray tracing, of course, and it uses CryEngine. Their ray tracing will have to be seriously optimized to look good and be playable on a wide variety of hardware.
 
How about reality....

- Nvidia realizes they have lost another round of consoles... MS and Sony are both going to be powering their consoles with Navi.
- Nvidia realizes they just lost the Google game streaming GPU server contract to AMD's 7nm Vega parts. The MI60 powered the Project Stream test... and AMD likely has the contract for all the upcoming Google GPU server clusters that will power the full service when they announce it.
- Nvidia, with their close ties to the game developer industry, KNOWS that ray tracing is being worked into the MS and Sony APIs... for games coming to the AMD-powered PS5 / MS next-gen.

Nvidia designed their new GPUs for AI server compute first... Volta added Tensor cores. The market at the time, however, gave Nvidia no reason to put those chips into consumer GPUs. So when its follow-up came around... they decided to strike first and start talking about ray tracing, even though we all know it was half baked and not ready. Zero games used RTX at launch... and even now support amounts to a couple of games. Because the reality is... the tracing games are coming, and they have been in development for a good while already, but they are aimed directly at PS5 / Xbox-next hardware. (Navi)

The RTX launch was nothing but a marketing move intended to preempt a bit of upcoming AMD thunder. Yes, NV is still the most popular end-user gamer card. But Nvidia is losing all the upcoming streaming contracts. If streaming takes off, NV is going to feel the burn.

A marketing stunt to take away... what, exactly, from AMD? Six months and a new part launch later, and they still have no competitor in the ring in the heavyweight class... regardless of raytracing. nVidia's RTX is only competing with itself...

nVidia GPUs for compute vs gaming are not the same silicon; you are theorizing on thin ice.
 