AMD Comments on DirectX Raytracing Support

AlphaAtlas

[H]ard|Gawd
Staff member
Joined
Mar 3, 2018
Messages
1,713
Earlier today, WCCFtech spotted something interesting on 4gamer. Following the "Next Horizon" event, the Japanese site interviewed David Wang, Senior Vice President for the Radeon Technologies Group. While he didn't answer specific questions about AMD's "Navi" GPU, he did comment on AMD's ray tracing efforts. For now, it appears that AMD will focus on offline CG production through ProRender. But he also said that "Utilization of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end." Some of the finer details in the interview were lost in translation, but the site seems to think there's a good chance DXR support will show up in AMD's Navi GPUs, and WCCF thinks DXR support is tied to the DirectX 12 feature levels AMD currently supports. A cleaned-up version of the machine-translated passage is below:

In the interview, Wang did not answer the question about the next-generation GPU, but he did comment on "DirectX Raytracing" (hereinafter DXR), the real-time ray tracing technology for games being promoted by NVIDIA and Microsoft, prefacing his remarks as personal opinion. "AMD will definitely respond to DirectX Raytracing" eventually, he said, but for the moment the company will focus on speeding up offline CG production environments centered on AMD's Radeon ProRender, which is offered free of charge. He added, "Utilization of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end." Interpreted straightforwardly, that makes DXR support in AMD's current GPUs look unlikely, but the possibility that Navi, the upcoming next-generation GPU, supports it "from the top to the bottom" cannot be denied.
 
Well, considering no games support ray tracing at all right now, that sounds like a good idea? It's not like they're in a huge rush to get it out.
AMD staying out of the game may make developers think it's not worth implementing, though. It's still the chicken and egg problem.
 
Makes sense; most of their sales are low-to-mid range, where performance doesn't allow ray tracing. Maybe when a Vega 56 equivalent is around $200, then it would make sense.
 
AMD staying out of the game may make developers think it's not worth implementing, though. It's still the chicken and egg problem.
Given there's no 1050/1060 equivalent RTX card I'd argue it's DOA for now.
Raytracing is a feature for the top 10-15% (maybe even less).
Yes, it's nice and everything, but it costs money to implement, a large part of your customers won't even be able to turn it on, and given the teasers it will either not really look that good or tank your FPS so hard you might as well play on an HD TV at 30 Hz.

I consider myself [H]ard with a strong sense for price/performance, and even if I could easily afford a 2080Ti I'm not seeing why I should.
I'm holding out for a decent 21:9/32:9 @ 100+ Hz display and a GPU to drive it; I don't see myself splashing $1,200 on a GPU that won't be able to hit those FPS at UWQHD/4K "megawide" when I turn on DXR.

Until then I'll make my 390 work as hard as it can.
 
For the next few years at least, ray tracing seems like it will be in the same category as SLI or CrossFire. It will only work in a small handful of games that support it; when it is supported it will be pretty amazing; and in order to experience it in those handful of titles you have to pay a massive premium.
 
The big thing is that given AMD's GPU lethargy, AMD simply cannot afford to do ray tracing yet. Nvidia could have released a non-RTX Turing GPU that would have clocked in at approaching twice the performance of the 1080Ti in today's raster-based games. AMD can't even approach the performance of the 1080Ti.

For the next few years at least, ray tracing seems like it will be in the same category as SLI or CrossFire. It will only work in a small handful of games that support it; when it is supported it will be pretty amazing; and in order to experience it in those handful of titles you have to pay a massive premium.

One generation, probably. While it's possible for a hypothetical 'RTX 2060' and lower to be released, we haven't yet seen the full effects of developers' efforts to integrate ray tracing into their engines and soon-to-be-released games. That makes it hard to say for sure.

However, I will say that I'm remaining optimistic, because on the other hand it would make sense for Nvidia to wait to release an RTX 2060, which should clock in at GTX 1070 (Ti) performance levels, until stock of the older cards has dwindled, and also until they have hard numbers from released games showing that said RTX 2060 could actually be useful with DLSS and RT turned on.
 
For the next few years at least, ray tracing seems like it will be in the same category as SLI or CrossFire. It will only work in a small handful of games that support it; when it is supported it will be pretty amazing; and in order to experience it in those handful of titles you have to pay a massive premium.

What a weird comparison. SLI/crossfire never worked "amazing" imho. Not with all the latency and micro stuttering. It was never going to work, really.
 
What a weird comparison. SLI/crossfire never worked "amazing" imho. Not with all the latency and micro stuttering. It was never going to work, really.

I'd say that it did, on and off. But it was certainly never consistently amazing.
 
I'd say that it did, on and off. But it was certainly never consistently amazing.

Pretty sure the only consistent thing was the latency, hehe. What good are more FPS if it feels sluggish? Games are not movies.
 
The big thing is that given AMD's GPU lethargy, AMD simply cannot afford to do ray tracing yet. Nvidia could have released a non-RTX Turing GPU that would have clocked in at approaching twice the performance of the 1080Ti in today's raster-based games. AMD can't even approach the performance of the 1080Ti.

OK, this is pure BS. No, they could not have. It's funny how much people are willing to drink the marketing Kool-Aid. The 2080 is NOT a unique gaming-only design. No GPU company, Nvidia, AMD, Intel, or anyone else that comes along, is designing a "gaming" GPU. Never again; not going to happen.

The "RTX" cores are nothing more than Google tensor cores. The "ray tracing" cores Nvidia claims to have just happen to divide evenly into the number of tensor cores on the chip, and for a reason. Nvidia has slightly redesigned the standard tensor cluster detailed by Google in their TensorFlow API. The "tracing cores" are simply a tensor block unit. The controlling-block redesign is for big iron, not games.

Nvidia, AMD, and Intel are all taking their big-iron-designed GPU coprocessors and dropping them into consumer parts... they are binned. The best go to the server parts; the rest go into cards like the 2080 Ti and down. RTX is a marketing invention. Which, I will hand it to NV, was wise... finding a consumer use for tensor cores means they don't have to fuse them off.

If anyone is still buying the "we created RT cores" BS... just read some architectural papers on the never-consumer-released Volta chips. Look at that: Tensor cores... err, I mean RT cores. lmao

AMD's Navi will no doubt have the same capabilities... everyone will be slapping tensor hardware into their cards from this point forward. Intel will do the same when their chips see the light of day in a year or two. Google has done a good job making the TensorFlow API the de facto standard.
 
OK, this is pure BS. No, they could not have. It's funny how much people are willing to drink the marketing Kool-Aid. The 2080 is NOT a unique gaming-only design. No GPU company, Nvidia, AMD, Intel, or anyone else that comes along, is designing a "gaming" GPU. Never again; not going to happen.

...so you're refuting the existence of the 1080Ti?

The "RTX" cores are nothing more then google tensor cores.

Well, since the RTX cards have separate 'Tensor' and 'RTX' cores, I can't see this being true...
 
...so you're refuting the existence of the 1080Ti?

Well, since the RTX cards have separate 'Tensor' and 'RTX' cores, I can't see this being true...

The chip the 1080 is based on was also designed for a different market. It simply predates tensor cores. Nvidia put those on Volta, which never got put in a consumer card because 30% of the chip would have sat there useless... except for all the budget AI users who would have found a way to unlock them, cutting into the real profits.

The 2080 chip has made its way to only the top end of the consumer pool... and NV is willing to let the AI guys buy them now, because they know full well they are about to get a lot more competition, from AI companies and others (including Intel) producing lower-cost, server-class tensor hardware.

Again, have you read any Volta papers? There is no such thing as an "RTX" core. It's simply the master unit of the tensor hardware.

The proof is in Nvidia's own marketing material:
"Tensor Cores are specialized execution units designed specifically for performing the tensor / matrix operations that are the core compute function used in Deep Learning. Similar to Volta Tensor Cores, the Turing Tensor Cores provide tremendous speed-ups for matrix computations at the heart of deep learning neural network training and inferencing operations. Turing GPUs include a new version of the Tensor Core design that has been enhanced for inferencing. Turing Tensor Cores add new INT8 and INT4 precision modes for inferencing workloads that can tolerate quantization and don’t require FP16 precision. "

72 RT Cores
576 Tensor Cores

Do some math: 576 / 72 = 8. That's how many tensor cores there are per matrix-block unit on the chip.

If you go and read the AI-focused documents on using Turing, there is zero mention of any new fantastic RT core. If it was something new and cool, they would be all over giving those users access. (They don't talk about some new RT core... they talk about added precision modes.)
 
The chip the 1080 is based on was also designed for a different market.

Can you support this assertion with evidence?

The rest of your complaint is also based on wild speculation. Please support your bullshit.
 
It already has one. It's called Volta.

I get your point, but Volta still has the HPC-focused Tensor units. I'm specifically using the idea of a GP102 scaled up to TU102 size, with updated SPs. That should clock in at about twice the raster performance, and Nvidia could certainly have produced such a GPU.

My bet as to why they didn't revolves largely around the lack of outside competition (the thing would approach three times the performance of Vega 64), the fact that the HPC market for a massive low-precision-only GPU with GDDR probably isn't that big, and the fact that the market for that level of gaming performance probably also isn't that big. Combined, the ROI looks like it probably just isn't there for Nvidia.
 
So it's tensor cores and tensor cores?

No, it's tensor cores in a block, as always. Nvidia added the ability to do INT8 and INT4 math. It has less precision than FP16... but for a lot of newer AI cases that's fine. The speed-up makes up for the loss of accuracy.

Turns out if you are using the matrix hardware to calculate rays... you also see huge gains doing twice as much lower-accuracy math.

The matrix math that powers AI and tensor cores... is the EXACT same type of math used to calculate rays.
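To make that concrete (this is just an illustration of the math, not a claim about how any particular GPU schedules it): transforming a ray into another coordinate space is a 4x4 matrix times a 4-vector, i.e. a pile of multiply-adds, which is exactly the kind of work matrix-style hardware is built to chew through. Minimal sketch in plain C++, with made-up values:

```cpp
#include <array>
#include <cstdio>

// A ray as origin + direction, in homogeneous coordinates.
struct Ray {
    std::array<float, 4> origin;     // w = 1 (point)
    std::array<float, 4> direction;  // w = 0 (vector)
};

using Mat4 = std::array<std::array<float, 4>, 4>;

// 4x4 matrix * 4-vector: four dot products, i.e. 16 multiply-adds.
static std::array<float, 4> mul(const Mat4& m, const std::array<float, 4>& v) {
    std::array<float, 4> out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * v[c];
    return out;
}

// Transforming a ray into another space (e.g. world -> object space
// before an intersection test) is just two matrix-vector products.
static Ray transform(const Mat4& m, const Ray& r) {
    return { mul(m, r.origin), mul(m, r.direction) };
}

int main() {
    // Hypothetical transform: translate by (2, 0, 0).
    Mat4 translate = {{{1, 0, 0, 2},
                       {0, 1, 0, 0},
                       {0, 0, 1, 0},
                       {0, 0, 0, 1}}};
    Ray r{{0, 0, 0, 1}, {0, 0, -1, 0}};
    Ray t = transform(translate, r);
    std::printf("origin: %.1f %.1f %.1f, dir: %.1f %.1f %.1f\n",
                t.origin[0], t.origin[1], t.origin[2],
                t.direction[0], t.direction[1], t.direction[2]);
}
```

Batch a few million of those per frame and you can see why hardware that cranks out small fused multiply-adds every clock looks attractive for it.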
 
AMD staying out of the game may make developers think it's not worth implementing, though. It's still the chicken and egg problem.


Like AMD makes an impact. With Nvidia's 70% market share, you know if AMD had ray tracing first, everyone would be blowing it off like Mantle, TrueAudio, TressFX, etc. Nvidia has it, and all of a sudden it's a pot of friggin' gold and AMD is hampering development. Please...


Give me a balls-to-the-wall raster card with the ability to support ray tracing; I could give a shit less about image quality and "photorealistic graphics." If I want that, I'll go outside.
 
Then you claim that Nvidia doesn't design gaming GPUs anymore?

Well, it's sort of like saying Ferrari designs a great work runner. I mean, there's no reason you can't use it for that, and they will even sell you a cheapo version... but that wasn't what the designers had in mind.
 
Like AMD makes an impact. With Nvidia's 70% market share, you know if AMD had ray tracing first, everyone would be blowing it off like Mantle, TrueAudio, TressFX, etc. Nvidia has it, and all of a sudden it's a pot of friggin' gold and AMD is hampering development. Please...

Nvidia supported the DirectX standard on all of these, except the audio wackiness of course. AMD also supported these. They just failed to win developer support, and that's on them (and their driver guy ;) ).

Give me a balls-to-the-wall raster card with the ability to support ray tracing; I could give a shit less about image quality and "photorealistic graphics." If I want that, I'll go outside.

That would be called a 2080Ti. Unless you're waiting for a red-team version, then you might as well just check out now. Intel will catch up to Nvidia in the GPU space before AMD does.
 
Like AMD makes an impact. With Nvidia's 70% market share, you know if AMD had ray tracing first, everyone would be blowing it off like Mantle, TrueAudio, TressFX, etc. Nvidia has it, and all of a sudden it's a pot of friggin' gold and AMD is hampering development. Please...


Give me a balls-to-the-wall raster card with the ability to support ray tracing; I could give a shit less about image quality and "photorealistic graphics." If I want that, I'll go outside.
Agreed, but nobody blew off Mantle; they just implemented it and waited to release their implementations until it was part of a Khronos standard (Vulkan).
 
Well, it's sort of like saying Ferrari designs a great work runner. I mean, there's no reason you can't use it for that, and they will even sell you a cheapo version... but that wasn't what the designers had in mind.

Nvidia puts almost all of their GPUs in both GeForce and Quadro parts, and they put most into Tesla parts. I can't see them being designed without many different applications in mind, one of the primary ones being gaming, which is still the majority of their revenue.
 
OK, let's say for the sake of argument that your theory is true and there are no real RT cores, just tensor cores.

Why is Turing orders of magnitude faster at ray tracing than Volta when it has fewer tensor cores?
 
OK, let's say for the sake of argument that your theory is true and there are no real RT cores, just tensor cores.

Why is Turing orders of magnitude faster at ray tracing than Volta when it has fewer tensor cores?

Nvidia explains it right in their marketing material,
"add new INT8 and INT4 precision modes for inferencing workloads that can tolerate quantization and don’t require FP16 precision."

Volta tensor cores only allow the higher-precision FP16 modes, which you can use for ray tracing... but INT8 and INT4 are a lot faster, if not as accurate. And if you're talking about accelerating games, it hardly matters if a small percentage of rays are miscalculated; better to calculate twice as many per clock.

There are cases where "AI"-type uses don't require extremely accurate results either. The byproduct is that NV can use INT8 and INT4 to calculate more rays, less accurately, in real time.
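Just to illustrate the quantization trade-off being described (a toy sketch in plain C++, purely illustrative; the scale choice and everything else here is my own assumption, not anything from Nvidia's pipeline):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Toy symmetric INT8 quantization: map [-max_abs, +max_abs] onto [-127, 127].
// Illustrative only -- real inference stacks pick scales per tensor/channel.
static int8_t quantize(float x, float scale) {
    float q = std::round(x / scale);
    q = std::min(std::max(q, -127.0f), 127.0f);
    return static_cast<int8_t>(q);
}

static float dequantize(int8_t q, float scale) {
    return static_cast<float>(q) * scale;
}

int main() {
    const float max_abs = 2.0f;           // assumed dynamic range of the data
    const float scale   = max_abs / 127.0f;

    const float inputs[] = {0.1234f, -1.75f, 0.9999f, 1.999f};
    for (float x : inputs) {
        int8_t q   = quantize(x, scale);
        float back = dequantize(q, scale);
        std::printf("%+.4f -> int8 %+4d -> %+.4f (error %.4f)\n",
                    x, q, back, std::fabs(x - back));
    }
    // Each INT8 operand is half the width of FP16, so twice as many fit per
    // register or cache line -- the throughput-for-accuracy trade-off above.
}
```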

As I have admitted, it's great marketing... and a good long-term use for tensor cores as they become more widely adopted in general. NV may play a role in popularizing tensor core use for games. But "RTX" is hardly some patented, NV-only tech. RTX, like Volta before it, is just hardware designed to accelerate an open-source Google AI API. So, bringing it back to the topic: yes, AMD will likewise accelerate tensor math... they have already stated their server Navi parts will conform to the TensorFlow 1.11 API, so the speculation is pointless; AMD has already confirmed it.
 
Nvidia explains it right in their marketing material,
"add new INT8 and INT4 precision modes for inferencing workloads that can tolerate quantization and don’t require FP16 precision."

Volta tensor cores only allow the higher-precision FP16 modes, which you can use for ray tracing... but INT8 and INT4 are a lot faster, if not as accurate. And if you're talking about accelerating games, it hardly matters if a small percentage of rays are miscalculated; better to calculate twice as many per clock.

There are cases where "AI"-type uses don't require extremely accurate results either. The byproduct is that NV can use INT8 and INT4 to calculate more rays, less accurately, in real time.

As I have admitted, it's great marketing... and a good long-term use for tensor cores as they become more widely adopted in general. NV may play a role in popularizing tensor core use for games. But "RTX" is hardly some patented, NV-only tech. RTX, like Volta before it, is just hardware designed to accelerate an open-source Google AI API. So, bringing it back to the topic: yes, AMD will likewise accelerate tensor math... they have already stated their server Navi parts will conform to the TensorFlow 1.11 API, so the speculation is pointless; AMD has already confirmed it.

You are confusing DLSS math with ray tracing math... this is from Volta:
http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf


FP32
FP16
INT8
INT4

All in Volta's Tensor cores.

The new thing in Turing is RT cores...not Tensor cores.

Or to put it visually.

Volta:
[Volta architecture diagram]


Turing:
[Turing architecture diagram]


EDIT:
Think hard... your claim would make it virtually impossible to implement ray tracing and DLSS at the same time... You might want to tell that to the developers of, e.g., Atomic Heart (which supports ray tracing and DLSS).
 
You are confusing DLSS math with ray tracing math... this is from Volta:
http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf


FP32
FP16
INT8
INT4

All in Volta's Tensor cores.

The new thing in Turing is RT cores...not Tensor cores.

Or to put it visually.

Volta:
[Volta architecture diagram]

Turing:
[Turing architecture diagram]

EDIT:
Think hard... your claim would make it virtually impossible to implement ray tracing and DLSS at the same time... You might want to tell that to the developers of, e.g., Atomic Heart (which supports ray tracing and DLSS).

Damn you ruined it. I was curious to know how far he would go. :D:D:rolleyes::rolleyes:
 
You are confusing DLSS math with ray tracing math... this is from Volta:
http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf


FP32
FP16
INT8
INT4

All in Volta's Tensor cores.

The new thing in Turing is RT cores...not Tensor cores.

Or to put it visually.

Volta:
[Volta architecture diagram]

Turing:
[Turing architecture diagram]

EDIT:
Think hard... your claim would make it virtually impossible to implement ray tracing and DLSS at the same time... You might want to tell that to the developers of, e.g., Atomic Heart (which supports ray tracing and DLSS).

https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/

No confusion at all.

"
Turing Tensor Cores
Tensor Cores are specialized execution units designed specifically for performing the tensor / matrix operations that are the core compute function used in Deep Learning. Similar to Volta Tensor Cores, the Turing Tensor Cores provide tremendous speed-ups for matrix computations at the heart of deep learning neural network training and inferencing operations. Turing GPUs include a new version of the Tensor Core design that has been enhanced for inferencing. Turing Tensor Cores add new INT8 and INT4 precision modes for inferencing workloads that can tolerate quantization and don’t require FP16 precision."

I can't believe anyone can look at that diagram and not see that the "RT Core" scribbled on the bottom was clearly added by a marketing type.

"RT Cores" are a marketing myth.

As for DLSS and ray tracing using the same hardware: yes, they will be using the exact same tensor hardware. Yes, they will affect each other's performance. The proof will be in the pudding when we have actual games that use both. My guess is that turning both on will cause a larger percentage performance hit than running one or the other; I fully expect that if either one alone hits performance by, say, 20%, turning both on will cause much more than a simple additive hit.
In the months to come, when someone actually gets a decent game that can be tested, we'll have to see. From everything I have read in their marketing material... and from their ray tracing programming demos, which define matrix variables and are clearly doing matrix math, it seems pretty obvious their tracing stuff is using tensor cores.

If you don't think ray tracing would benefit from tensor cores, just do some basic Google searching and reading on ray tracing and matrix math. It's very easy to find hand-calculated matrix math going back to the '60s for things like optical lens design.
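That optics reference does check out, for what it's worth: paraxial "ray transfer" (ABCD) matrices have been used to trace rays through lenses by hand for decades; every element is a 2x2 matrix and tracing is just matrix multiplication. A tiny self-contained sketch, my own illustration with made-up numbers:

```cpp
#include <cstdio>

// Paraxial (ABCD) ray transfer matrices: a ray is (height y, angle theta),
// and each optical element is a 2x2 matrix applied to that state.
struct Ray2 { double y, theta; };
struct Mat2 { double a, b, c, d; };

static Ray2 apply(const Mat2& m, const Ray2& r) {
    return { m.a * r.y + m.b * r.theta,
             m.c * r.y + m.d * r.theta };
}

static Mat2 freeSpace(double dist)  { return { 1.0, dist, 0.0,          1.0 }; }
static Mat2 thinLens(double focal)  { return { 1.0, 0.0, -1.0 / focal,  1.0 }; }

int main() {
    // A ray parallel to the axis at height 10 mm hits a 100 mm thin lens,
    // then travels 100 mm: it should land on the axis (the focal point).
    Ray2 r{10.0, 0.0};
    r = apply(thinLens(100.0), r);
    r = apply(freeSpace(100.0), r);
    std::printf("y = %.3f mm, theta = %.3f rad\n", r.y, r.theta);  // y ~= 0
}
```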

All Nvidia has added over Volta is the ability to calculate with lower precision... which, although it has AI and deep learning uses, is no doubt perfect for accelerating real-time RT... it just wasn't the main, or at least the only, reason Nvidia added INT8/INT4 to their tensor cores.
 
I can't believe anyone can look at that diagram and not see that the "RT Core" scribbled on the bottom was clearly added by a marketing type.

"RT Cores" are a marketing myth.

Right now, it's your myth brother.
 
Ray tracing is the future, we know that (well, maybe even path tracing), but we are far from being there 100%. Games are made with rasterization in mind first, period. Until games are "tracing" games 100% from the ground up, and until video cards can run it in the midrange, it won't be anything but a niche of a niche, simple as that. Remember, the first batch of "ray tracing" in games is a hybrid method, and it only ray traces things like shadows and reflections. It doesn't ray trace everything just yet. It's not fully ray traced; it's hybrid, and it's contained. So we are far, far off from truly ray-traced game engines.

I'm glad the hardware sorta exists now; someone has to get the ball rolling, but one should not expect this to go mainstream for many years.
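For anyone wondering what "hybrid" means in practice: a frame in these first DXR titles is still mostly a normal rasterized frame, with ray dispatches bolted on for one or two effects and composited back in. Rough outline below; these are stand-in stubs of my own, not any engine's real code (the real thing goes through D3D12 command lists and HLSL shaders):

```cpp
#include <cstdio>

// Rough outline of how the first wave of "hybrid" DXR titles structure a frame:
// the scene is still rasterized as usual, and ray tracing is only dispatched
// for specific effects (reflections, shadows), then composited back in.
static void rasterizeGBuffer()     { std::puts("raster: depth/normals/albedo (G-buffer)"); }
static void rasterizeDirectLight() { std::puts("raster: direct lighting as usual"); }
static void traceReflectionRays()  { std::puts("DXR:    rays only for reflective surfaces"); }
static void traceShadowRays()      { std::puts("DXR:    rays only toward selected lights"); }
static void compositeAndPostFX()   { std::puts("raster: composite RT results + post-processing"); }

int main() {
    // One frame: mostly rasterization, with ray tracing bolted on for two effects.
    rasterizeGBuffer();
    rasterizeDirectLight();
    traceReflectionRays();   // skipped entirely if DXR is off or unsupported
    traceShadowRays();
    compositeAndPostFX();
}
```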
 
Ray tracing is the future, we know that (well, maybe even path tracing), but we are far from being there 100%. Games are made with rasterization in mind first, period. Until games are "tracing" games 100% from the ground up, and until video cards can run it in the midrange, it won't be anything but a niche of a niche, simple as that. Remember, the first batch of "ray tracing" in games is a hybrid method, and it only ray traces things like shadows and reflections. It doesn't ray trace everything just yet. It's not fully ray traced; it's hybrid, and it's contained. So we are far, far off from truly ray-traced game engines.

I'm glad the hardware sorta exists now; someone has to get the ball rolling, but one should not expect this to go mainstream for many years.
Kinda makes me sad...one day a couple decades of games will be unplayable except on legacy hardware because somebody decided raster+hack was better (less work) than ray/path tracing, and now we're phasing out raster hardware–it'll still be required for certain things (desktop, image manipulation, etc.), but probably the amount of die space dedicated to it will be much less except for workstation raster cards.
 
Kinda makes me sad...one day a couple decades of games will be unplayable except on legacy hardware because somebody decided raster+hack was better (less work) than ray/path tracing, and now we're phasing out raster hardware–it'll still be required for certain things (desktop, image manipulation, etc.), but probably the amount of die space dedicated to it will be much less except for workstation raster cards.

By that time it's possible rasterization could move off of the GPU entirely and become part of CPUs as innate processors. The GPU could transform to an entirely different rendering method beyond rasterization. Rasterization will always be needed, but I wonder what will be doing it in the future as all this evolves. I'm talking like 10+ years out.
 
Kinda makes me sad...one day a couple decades of games will be unplayable except on legacy hardware because somebody decided raster+hack was better (less work) than ray/path tracing, and now we're phasing out raster hardware–it'll still be required for certain things (desktop, image manipulation, etc.), but probably the amount of die space dedicated to it will be much less except for workstation raster cards.

It wasn't that someone "just decided"; it's that consumer real-time 3D graphics wouldn't have been possible any other way. See the 3dfx history video I've posted before.
 
It wasn't that someone "just decided"; it's that consumer real-time 3D graphics wouldn't have been possible any other way. See the 3dfx history video I've posted before.
I know, but it still makes me sad.
Thankfully gems like this should still be possible to emulate even on stripped down GPUs.

Though I prefer F-Zero X.
 
Given there's no 1050/1060 equivalent RTX card I'd argue it's DOA for now.
Raytracing is a feature for the top 10-15% (maybe even less).
This is exactly what happened back in 2006 when the G80 was released alongside DX10: only the 8800 GTX and 8800 GTS supported it, and even in 2007, when the 8400 GS through the 8600 GTS were released, those GPUs supported DX10 but ran DX10 games (BioShock, 2007) like complete crap.

Granted, ray tracing is a bit different, but when only the top GPUs support it, and those GPUs cost an extreme premium, developers and customers alike are not going to be jumping up and down for that feature, and will be more interested in general performance rather than an esoteric feature which has yet to prove itself.
 