Rumor: AMD Bringing Ray Tracing Support to Navi in December

I know you all love the Wccftech stuff...

https://wccftech.com/rumor-amd-bringing-ray-tracing-support-to-navi-in-december-driver-release/

"AMD has announced recently that it will be introducing a plethora of new driver features come December, as part of the company's annual major driver release for the holidays. This, combined with some very interesting chatter in the techsphere by industry insiders has sparked rumors that AMD is in fact finally bringing DirectX Ray Tracing support to Radeon Navi graphics cards this December.

Ray Tracing Code Spotted in AMD Drivers, Full Support Rumored for December
Officially, AMD's Navi supports ray tracing in software but does not have specialized hardware built-in for ray tracing acceleration. However, while specialized hardware can help accelerate ray tracing it is not a requirement to support Microsoft's DirectX Ray Tracing feature."
 
We'll see if it turns out to be true.

However, I am not so sure it's 100% true that AMD doesn't have "hardware" on board to accelerate ray tracing. AMD is using a very different shader design with Navi.

The cool thing about RDNA is how the cores are split to basically mimic the previous generation perfectly, while retaining the new layout, which really has yet to be exploited. I don't know the inner workings of AMD's closed-source Windows driver... I assume, however, that so far the Navi drivers are mostly operating at wave64 and running basically in GCN legacy mode. If their driver update adds ray tracing, I would imagine those bits of math will be crunched using RDNA wave32, and the performance may well be very close to, if not better than, what NV is cooking with their overly complicated RT/Tensor core stuff. There have to be some major efficiency wins if you can get shaders to crunch the math in the same logic core/cache space. I'm not sure wave32 gets them there... but I imagine it should make RT on Navi a lot less FPS-busting than people may be expecting. As I understand the way AMD has set its cores up, they should be able to calculate 4 bits of RT work per clock at wave32 vs. 1 on previous-gen AMD (which we can assume would suck very hard) or NV without the RT core stuff.

Cool if true... and I think it pretty much has to be. I really don't believe AMD is working on some other hardware for the PS5/next Xbox; if RDNA can run RT stuff on shader cores 2-4x faster than current shaders, that is good enough to claim it's viable. (And hell, it might even be.)
 

There’s nothing special about “wave32” or “wave64”. It’s basically Pascal level tech. Games generally run off FP32.

They have to perform RT through DX just like nVidia. While I hope they have a more efficient way I’m skeptical until I actually see it, and would not lean that way by default.
 
AMD RT will probably have to wait until a mid-2020 hardware release. Not saying December won't happen, just that it's not likely.
 
It is possible. Remember, the next Xbox and PS5 are supposed to have hardware ray tracing. Those will probably be released around November next year, so you know AMD has ray tracing hardware ready to go, or will soon.
 
There’s nothing special about “wave32” or “wave64”. It’s basically Pascal level tech. Games generally run off FP32.

They have to perform RT through DX just like nVidia. While I hope they have a more efficient way I’m skeptical until I actually see it, and would not lean that way by default.

GCN issued one wave64 every 4 clock cycles.
RDNA can issue one wave64, or two wave32s, every cycle.
RDNA also moves to a WGP (work group processor) layout, replacing the previous compute unit setup. A WGP pairs 2 CUs, increasing the memory bandwidth available for compute work.
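
A quick toy calculation of what that issue-rate difference means, taking the numbers above at face value (the clock speed and instruction count are made-up assumptions for illustration; this ignores caches, memory, and scheduling entirely):

```python
# Toy single-wave instruction latency comparison, GCN vs. RDNA.
# Assumptions (illustrative only): 1.7 GHz clock, a chain of 1000 dependent
# instructions executed by a single wave.

CLOCK_GHZ = 1.7
INSTRUCTIONS = 1000

gcn_cycles = INSTRUCTIONS * 4    # GCN: one wave64 instruction issued per 4 clocks
rdna_cycles = INSTRUCTIONS * 1   # RDNA: one wave32/wave64 instruction per clock

for name, cycles in (("GCN", gcn_cycles), ("RDNA", rdna_cycles)):
    microseconds = cycles / (CLOCK_GHZ * 1e9) * 1e6
    print(f"{name:5}: {cycles} cycles = {microseconds:.2f} us")

# RDNA gets through the same dependent chain in a quarter of the cycles,
# which is where the "per clock" framing above comes from.
```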

Waves, warps... yeah, it all sounds the same on the surface, and in many ways it is. No doubt RDNA is a massive improvement over GCN and Vega. There is no way turning on shader-based RT on those older cards would go well. On Navi, however, if they implement it properly, it is potentially passable. The way AMD has built its work group clusters, it should be possible in their code to run a lot of low-precision (FP16 or below) math in combination with a higher-precision shader in the same cache space within one clock cycle... whereas the older-generation NV cards that have had shader RT turned on need to use multiple cycles... and have to create soft cache space to do the same.

It's not that shaders can't be good at RT work... it's that the versions of the CUs we have gotten until now are not really designed to do it. Lower-precision math still takes a cycle. Yes, the latest NV chips, as well as Navi, can perform wave/warp-32 math in a cycle, which is a bump up. NV got to that specific idea earlier, hence being able to turn shader RT on going at least one generation back. AMD has still made some interesting changes to the way its cache operates.

I have no idea if AMD's implementation will work better... we'll just have to wait and see if they do in fact add RT code for current Navi. There is also a possibility all this driver excitement relates to Big Navi, which is due to be released soon. I doubt it is the case, but it is possible Big Navi does in fact have some secondary RT hardware.
 
Hope there’s some truth to this. Time for AMD to get in the game.

IMO, it's in AMD's best interest to ignore/downplay ray tracing until they have a part with HW support.

You notice in all the Navi and Super reviews, no one tested ray tracing. RT is largely ignored.

But if AMD supported it in drivers, it would invite testing and comparison against NVidia cards with HW support, which will no doubt be a poor showing for AMD, and more of a selling point to go buy an NVidia card.

OTOH, if AMD releases a high end Navi with RT HW around the same time, that would change, as they would have a serious RT capable part on offer.
 
It's obvious AMD has something cooking with ray tracing (PS5 has confirmed RT support); the only question is when and on what hardware.

I don't think December is crazy, considering PS5 comes out in 1 year and developers need some runway to develop their games (most likely there are already NDA'd dev kits w/ RT working).

And Nvidia enabled RT on Pascal so we know a software/shader based approach is at least feasible, even if not with the best performance.

So I can believe Navi in December. Maybe AMD will release big Navi first with competitive (or even winning) RT support and then patch small Navi a bit later like they did with RIS.
 
It would be nice to eventually get DXR support on Vega too, even though performance wouldn't be fantastic, seeing as Pascal and Volta got it. It would be enough for light RT effects; not everything needs to be pushing full GI/AO all the time. FWIW, Vega 64 can do around 45-60 fps in Metro at 1080p Extreme with screen-space PT GI/AO in ReShade, so Vega can trace, but with compromises in fidelity.

Regardless, can't wait to see how Navi does whenever AMD's DXR driver drops.
 
AMD announces full raytracing support. Requirements: Qualifying AMD GPU and add-on RTX capable card from Nvidia.
 
It is possible. Remember, the next Xbox and PS5 are supposed to have hardware ray tracing.

Stahp. Proper RT will require a powerful dedicated GPU on PC, just like proper 4K60 required it.

Every new console gen, MS and Sony just latch onto the PC gaming buzzwords of the moment and throw them in the bulletpoint list, but the hardware can never actually deliver. I guess it's working since people keep repeating this nonsense. Those peashooter boxes will still struggle just to finally drive 4K60 properly without compromises or checkerboarding b.s. in anything above low settings.
 
IMO, it's in AMD's best interest to ignore/downplay ray tracing until they have a part with HW support.

You notice in all the Navi and Super reviews, no one tested ray tracing. RT is largely ignored.

But if AMD supported it in drivers, it would invite testing and comparison against NVidia cards with HW support, which will no doubt be a poor showing for AMD, and more of a selling point to go buy an NVidia card.

OTOH, if AMD releases a high end Navi with RT HW around the same time, that would change, as they would have a serious RT capable part on offer.

Sure, but it's not clear that ignoring RT is currently working for them either. I want them to get involved so we can get some actual info on their implementation. All we have now is a patent and wild speculation.
 

But a software implementation isn't going to provide real info on anything.

They really need to deliver Big Navi with RT HW ASAP.
 
Man, has NV really snowed people. Separate "RT" hardware is a terrible idea, for the same reasons we don't buy math coprocessors for our CPUs anymore. The main advantage of a coprocessor is to solve a deficiency in existing chip designs. NV needed something consumer-related to do with the AI coprocessor they have on their GPUs these days.

Ultimately we DO NOT want separate RT hardware. As long as it functions as a coprocessor it's going to be too slow to be any good. This is clearly already the case, with only NV's absolute top-of-the-line GPUs really offering viable RT performance.

Shader-based RT computation is the future. Caches are shared, communication lines are shared. The problem currently is that shader cores are not designed to process tons of low-precision work, and they waste cycles. Imagine calculating a bunch of rays and needing to transport that data across a road. Current shaders are like pickup trucks: they can carry that data across the road, but it doesn't matter whether the load is tiny or maxed out, the truck still crosses the road at the same speed, full or empty. What AMD's patents hint at, and what is the real solution to calculating hundreds of thousands of very small bits of data like rays, is to say: instead of running 4 trucks down this road at a time, we are going to run 16 bikes. They can't carry as much, but they can each carry a bit of ray data no problem.
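
Here's a tiny sketch of that truck-vs-bike point in Python, with invented batch sizes, just to show how narrower waves leave fewer lanes idle when the work shows up in small, uneven bundles (it ignores scheduling, memory, and everything else):

```python
# Toy lane-utilization comparison: packing small ray bundles into wide waves.
# The batch sizes are made up purely for illustration.
import math

ray_batches = [7, 13, 50, 3, 22, 64, 9, 31]   # rays ready to dispatch, per bundle

def utilization(batches, wave_width):
    """Lanes doing useful work vs. lanes issued when each bundle is rounded
    up to whole waves of the given width."""
    lanes_issued = sum(math.ceil(b / wave_width) * wave_width for b in batches)
    lanes_used = sum(batches)
    return lanes_used, lanes_issued

for width in (64, 32):
    used, issued = utilization(ray_batches, width)
    print(f"wave{width}: {used}/{issued} lanes busy ({used / issued:.0%})")

# wave64 rounds every small bundle up to 64 lanes (the empty truck);
# wave32 leaves far fewer lanes idle on exactly the same work.
```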

We'll see what AMD cooks up... we know something is coming next year without a doubt. Even if it sucks, we know they have something for the next-gen consoles. Long term, though, having one uniform set of cores doing all the calculation is the only real path forward. Trying to graft bigger and bigger RT coprocessors onto GPUs at the expense of standard shaders is a terrible solution. At some point you extend your standard cores to handle what the coprocessor does... it is the life cycle of every coprocessor the industry has ever had.
 

If it's inside the die itself is it really separate hardware? If so, that's like calling L1, L2 and L3 cache separate hardware.

No offense, but your post just sounds like you hating on NV RTX.
 

He has all kinds of crazy theories about RTX HW that seem rooted much more in his dislike of NVidia than in reality.

Back in reality, AMD's patents show the same kind of ray intersection HW that NVidia has. This is the critical piece that anyone serious about real-time ray tracing requires. It gives roughly a 10x speedup in the expensive ray intersection/bounding-box calculations.

Without purpose-built intersection-detection HW, you bottleneck hard at this step.
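
For anyone wondering what that intersection hardware actually offloads, here is a minimal, purely illustrative software version of the ray vs. bounding-box "slab" test that BVH traversal hammers on constantly; the dedicated units do this (plus ray/triangle tests) in fixed-function logic instead of burning shader instructions on it:

```python
# Minimal ray vs. axis-aligned bounding box (AABB) slab test: the inner-loop
# check a BVH traversal repeats millions of times per frame.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """origin, inv_dir, box_min, box_max are (x, y, z) tuples;
    inv_dir holds 1/direction per axis."""
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv_d
        t1 = (hi - o) * inv_d
        if t0 > t1:
            t0, t1 = t1, t0              # keep near/far slab order
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False                 # slabs never overlap: miss
    return True

# Example: a ray fired along +X at a small box two units ahead of it.
print(ray_hits_aabb((0.0, 0.0, 0.0),
                    (1.0, float("inf"), float("inf")),    # direction (1, 0, 0)
                    (2.0, -0.5, -0.5), (3.0, 0.5, 0.5)))  # prints True
```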
 
If it's inside the die itself is it really separate hardware? If so, that's like calling L1, L2 and L3 cache separate hardware.

No offense, but your post just sounds like you hating on NV RTX.

L1, L2, and L3 all serve the same core.

What NV is offering with RT/Tensor cores is no different than having an integrated GPU on a CPU. They are not working on the same data sets. Sharing a die doesn't make it the same physical calculation unit.
 

It strikes me as much more like adding AES instructions to the CPU. It can of course be handled with existing instructions, but can be further improved with dedicated hardware. Now, whether or not you get the balance of die cost correct is a real challenge, but it's not inherently bad to solve fixed problems with fixed hardware.
 
I think it should be noted that fixed function hardware is usually (read: almost always) faster than generalized hardware when performing the same task.

NV's tensor cores are not fixed function. They're simply an alternate processor intended to accelerate AI. It's not like their RT/Tensor bits are fixed function. They are AI processors being repurposed to calculate a shit-ton of simple math (rays). Yes, shader cores today are not really designed for that type of math (low precision, stupid fast); in fact, for years both NV and AMD have been going in the opposite direction, building cores for general computation. They have stopped cutting corners so they could do more precise math... adding more precision is what is required if they are to be used in things like high-end ray tracing in something like Pixar's RenderMan, which, as an example, is only now bringing GPU computation online in any real way.

People thinking I'm hating on NV: trust me, I'm not. Their current implementation is a very, very smart way to sell their current hardware. I know saying they aren't designing GPUs for everyday consumers and gamers sounds like I'm ragging on them, but I'm not. They have pushed and really driven the AI market for years now. Yes, around the time they started designing Volta it really looked like they would have NO real competition in the legacy GPU space, likely ever again. AMD was all but spent, Intel seemed not really all that concerned beyond iGPUs, and as for a completely new player entering the market, good luck. So why not design one chip that they could riff on and turn out products for all markets with one design? Design and proofing are stupidly expensive. It makes perfect sense for NV to design one chip to rule all markets. And frankly, AMD lover or not, right now NV has the fastest GPUs, no freaking doubt... and they are also on top in the AI market. With basically the exact same part.

Now, Volta they could just skip for the consumer market... but Turing was needed to replace their very old consumer lineup. It was also not really all that much better than the previous generation for consumers. So it's clear they put together a skunkworks software program and said: figure out things we can do with our second-gen tensor unit. That hardly makes them evil... and just look at the other news: they are also founding their own game development studio to remaster old games to use those hardware bits as well. They came up with a handful of things... and one of those was partial ray tracing. It's an ingenious use of those cores... but that doesn't change the fact that, long term, shader cores are the better option.

My feeling is that at some point NV will be making enough money from AI alone that they will design a true AI-only chip that will likely still have GPU-style shader cores but focus more on the tensors. Both consumer GPU and AI chips today from NV suffer in that they could both be a lot faster and/or cheaper if NV split them, which I expect at some point they will. I could be completely wrong, but I suspect 4 or 5 years down the road Nvidia, Intel and AMD will all be doing RT and it will all be on shader cores. With Intel entering the game and NV having a reason to focus on consumer hardware in a way they haven't had to in years, frankly they will go with the most efficient, easiest-to-fab-in-large-numbers solution, which is one chip with uniform hardware. Having to fab a massive die with 2 completely different processors on board, both of which are subject to fabrication defects, is expensive and not something you can afford to do when you have lots of competition.

Anyway, I'm not trying to reply with a book on this. I think I have made my point... consider it or toss it, up to you. I'm not a pure NV hater, but you can judge that if you like as well.
 
It strikes me as much more like adding AES instructions to the CPU. It can of course be handled with existing instructions, but can be further improved with dedicated hardware. Now, whether or not you get the balance of die cost correct is a real challenge, but it's not inherently bad to solve fixed problems with fixed hardware.

No, I agree it's not bad. Hey, back in the day a math coprocessor was the shit. That doesn't mean it wasn't better done on the CPU once fab technology caught up and the chip designers built the coprocessor in.

NV had the hardware there anyway for their AI clients. Finding ways to use it to calculate rays frankly was ingenious. Credit where it's due... ray tracing was coming down the pike and NV found a way to jump in on it way early.

In the end, though, IMO tracing is going to actually work when it's on the same hardware bits... in the same way that those early coprocessors were way, way better than having no coprocessor, but still far inferior to the integrated units that came after. And I'm not claiming Navi is for sure going to be a killer RT part... there's still a good chance this rumor relates to Big Navi. At some point, though, the designers will solve this by finding ways for their current cores to calculate an ass-load of rays on the same hardware. It's possible even NV themselves will be going that way sooner than people realize. Post-Ampere, it seems NV is planning to go chiplet, which solves their dual-market, one-chip issue nicely. The next couple of years are going to be interesting, no doubt.
 
NV's tensor cores are not fixed function. They're simply an alternate processor intended to accelerate AI. It's not like their RT/Tensor bits are fixed function. They are AI processors being repurposed to calculate a shit-ton of simple math (rays). Yes, shader cores today are not really designed for that type of math (low precision, stupid fast); in fact, for years both NV and AMD have been going in the opposite direction, building cores for general computation. They have stopped cutting corners so they could do more precise math... adding more precision is what is required if they are to be used in things like high-end ray tracing in something like Pixar's RenderMan, which, as an example, is only now bringing GPU computation online in any real way.

People thinking I'm hating on NV: trust me, I'm not.

LOL! Here we go again.

Not only are you ragging on them, but you have made up your own utterly bonkers theory to do it.

You are claiming that NVidia is lying to everyone about the existence of fixed function RT cores.

By your reckoning, NVidia is so blatantly bankrupt of scruples that, in order to sell Tensor cores, they are secretly pretending that they have dedicated RT cores when in fact they don't; they are just reusing Tensor cores to do the work.

Now, of course, you are the only one brilliant enough to see through NVidia's nefarious lies.

But sure, trust you, you aren't hating on NV... :rolleyes:
 
Is there a different reason you’ve been spreading misinformation for months about raytracing running on tensor cores?

Yes, it just happens that the only GPUs they sell that do ray tracing happen to have tensor cores... I'm sure the two are completely unrelated. I am looking forward to that 1650 Super coming with ray tracing. (I'm joking, of course; yeah, officially they do denoise.)

There is nothing nefarious about it... just some marketing mumbo jumbo. I don't really care if they want to call reduced-precision tensor cores "RT cores". They can market them that way all they like. I'm not knocking them... it was smart to find a consumer use for a big chunk of their die that would otherwise be doing nothing at all.

NV uses the tensor cores proper for denoise. They use an "RT core" for ray generation... but like everything else in tech, people slap marketing names on all sorts of things. RT cores are nothing more than a tensor CCX unit. The data is fed in, crunched, and then noise is removed with a matrix math algorithm, which is what Google designed its tensor hardware to do: solve matrix math problems, which is exactly what ray tracing is. Optics designers have been using tensors to calculate optical paths since the '60s. Now, with tensor core processors, it's possible to calculate hundreds of thousands of rays at a time.

NV marketing suggesting the RT core hardware was designed for gamers first is, yeah, full of it... but what marketing department isn't? Their software guys have done a good job... even if you aren't NV's #1 fan, you can recognize they have done some good work. But no, the hardware guys didn't sit down and design a GPU with a big chunk of the die consisting of tensors for gamers. They have done a good job of finding one potentially useful thing to do with them. DLSS was an interesting idea, but in reality it is beaten by much simpler sharpening, unless some future version really improves things. I do believe ray tracing will be better achieved with shader hardware vs. tensors, RT cores, or whatever the fuck NV wants to call their bit of silicon. I could be completely wrong, and I guess we'll know when AMD and Intel start talking about their actual hardware. It's possible they will both have tensor hardware for denoising and special magic RT cores as well... we'll find out soon enough I'm sure. lol
 

How does your conspiracy theory explain why the fastest tensor GPU can't hold a candle to the mighty 2060?

I know this isn't the first time you've been shown hard evidence that you're completely off base, but who knows... maybe this time it'll stick!

[Image: fp16-final-1.png]


https://lambdalabs.com/blog/titan-rtx-tensorflow-benchmarks/

[Image: Port-Royal.png]


https://bjorn3d.com/2019/01/3dmark-...g-benchmark-is-here-every-capable-gpu-tested/
 

Does the Titan V's Ray Tracing performance actually prove that the RT cores and Tensor cores in Turing are different?
 
Their pretty lengthy white paper on Turing also shows RT cores as separate.

The real test is when AMD enables ray tracing on Navi, which doesn't have specific RT hardware. Whether a 5700 XT matches a 2070 in ray tracing or not, we'll know for sure.
 
Does the Titan V's Ray Tracing performance actually prove that the RT cores and Tensor cores in Turing are different?

Well there’s also the fact that the company that made the chip says they’re different. A basic understanding of how RT works and how tensors work would also tell you they’re different.

It's funny that the random guy on the internet making things up doesn't need to provide proof of anything, but actual data is questioned.
 
Does the Titan V's Ray Tracing performance actually prove that the RT cores and Tensor cores in Turing are different?

Quite definitively. His whole argument is that NVidia is using Tensor cores to do Ray Tracing (AKA intersection testing), and that RT cores don't actually exist. Yet in the Ray Tracing benchmark, the lowly 2060 FE (240 Tensor cores) outscores the Titan V which is an Absolute Tensor core beast (640 Tensor cores).

If his theory were true, then the Titan V should trample the 2060FE into the dust on RT performance.

Their pretty lengthy white paper on Turing also shows RT cores as separate.

Chad has been shown the white papers before, and he was shown the performance difference where a lowly 2060FE benchmarks higher than the Tensor Beast Titan V.

Also there is the fact that NVidia would, for no reason at all, be telling huge, detailed lies to an entire industry full of experts, and would have been found out long ago.

Completely pointless lies, I might add. If NVidia were actually using Tensor cores for ray intersection testing, they would just state that directly, and their reason for including them would be secured without needing to fabricate a sham about separate RT cores.

From every angle, this weird conspiracy-like theory has zero evidence, ample counter-evidence, and "flat earther"-level rationalizing.


The real test is when AMD enables ray tracing on Navi, which doesn't have specific RT hardware. Whether a 5700 XT matches a 2070 in ray tracing or not, we'll know for sure.

See above. We know for sure right now, and this will do nothing to convince a zealot who already believes something that has no evidence and ample counter-evidence.

If the 5700 XT can't handle real-time ray tracing, he will claim it's because it lacks Tensor cores, so the bonkers theory can hold. If, by some miracle (not happening), a 5700 XT did manage to do real-time ray tracing, he would just claim AMD is smarter than NVidia and figured out how to do it without Tensor cores.

AMD has been clear; their slide is used in this story.

It shows RDNA being used for non-realtime, shader-based ray tracing.

Next it shows "Next Gen RDNA" with hardware-based ray tracing being used for real-time gaming.
[Image: AMD ray tracing roadmap slide (11052655607s.jpg)]
 
How does your conspiracy theory explain why the fastest tensor GPU can't hold a candle to the mighty 2060?

I know this isn't the first time you've been shown hard evidence that you're completely off base, but who knows... maybe this time it'll stick!

Not sure what your point is... Volta is first-generation Tensor, and as you can see it does just fine for AI training with its high clocks and massive amount of RAM. And most of the AI benchmarks still don't fully utilize NV's tensor enhancements for lower-precision modes. In the future, without a doubt, the Turing consumer parts will likely start beating the previous Volta parts.

As it is, RTX Turing mostly equals Volta, with 10% fewer tensor cores, GDDR6 instead of Volta's faster HBM, and a slightly smaller RAM pool. At the end of the day, faster RAM is a big advantage when it comes to churning through hundreds of images a second.

[Image: fp16-final-1.png]


Their actual data (images/sec):

Model          Titan RTX   Titan V
ResNet50             540       539
ResNet152            188       181
InceptionV3          342       352
InceptionV4          121       116
VGG16                343       383
AlexNet             6312      6746
SSD300               248       245

If your point is "look, the Volta part sucks at RT gaming"... yeah, it does; its tensor cores are first-gen and the CCX setup doesn't allow for proper ray calculation at extremely low precision. Turing can calculate tensor math down at INT8.
 

For understanding this response, I suggest this background material:
https://arstechnica.com/science/2018/12/why-does-flat-earth-belief-still-exist/
"Our latest video looks at what can motivate people to believe the impossible."

https://arstechnica.com/science/201...-whats-inflated-flat-earth-believers-in-2019/
Every fringe theorist needs an amplifier—used to be the penny press; today it's the Web.

...

But flat Earthism requires more than this. It also demands a deep distrust of the scientific establishment, and of authority in general. For its adherents, mainstream science isn’t just wrong; rather, it’s part of a vast, malicious conspiracy. To be a flat Earther is to believe that governments and the media are not to be trusted, while a small number of podcasts and YouTube videos dare to tell it like it is. (Marshall, in his Guardian article, noted that the majority of attendees at the Birmingham event joined the flat Earth bandwagon because of YouTube.) The link to conspiratorial thinking makes sense; after all, you’d need a conspiracy to explain the overwhelming dominance of the mainstream message, and the suppression of the “truth.” This gives it a certain kinship, perhaps, with creationism, climate change denial, and the anti-vaccine movement.
 
If your point is "look, the Volta part sucks at RT gaming"... yeah, it does; its tensor cores are first-gen and the CCX setup doesn't allow for proper ray calculation at extremely low precision. Turing can calculate tensor math down at INT8.

Yes, that's exactly the point. Volta does suck at RT, because it doesn't have hardware acceleration.

Now back to your theory. What in the world do low-precision tensor matrix operations have to do with BVH traversal or ray intersection calculations? Maybe provide some technical basis for your speculation.
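
To put the question in concrete terms, here is a rough, purely illustrative sketch of the two kinds of inner loops being conflated; one is the dense multiply-accumulate work tensor units are built for, the other is the compare-and-branch traversal work intersection units are built for (this is not how either unit is actually programmed):

```python
# Two very different inner loops, side by side (illustrative only).

# 1) Tensor-style work: a small dense matrix multiply-accumulate, D = A*B + C.
def mma_4x4(a, b, c):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) + c[i][j]
             for j in range(4)] for i in range(4)]

# 2) RT-style work: walk a toy BVH, testing the ray against each node's box
#    and only descending into subtrees the ray can actually touch.
def traverse(node, ray, hit_test):
    """hit_test(ray, box) returns True when the ray intersects the box."""
    if not hit_test(ray, node["box"]):
        return []                         # prune the whole subtree
    if "tri" in node:                     # leaf: report its triangle
        return [node["tri"]]
    hits = []
    for child in node["children"]:
        hits += traverse(child, ray, hit_test)
    return hits

# The first loop is regular, predictable multiply-add math; the second is
# data-dependent branching and pointer chasing. They stress very different
# kinds of hardware.
```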
 
Well there's also the fact that the company that made the chip says they're different. A basic understanding of how RT works and how tensors work would also tell you they're different.

The point I was making is that the Titan V's ray tracing performance being lower than the 2060's ray tracing performance isn't any kind of proof that the Tensor cores are different from the RT cores. Architecture changes, the way the cores are placed in the SM unit, and the Tensor cores being a lot better in Turing vs. Volta could easily account for the differences. (Nvidia themselves have said the Turing Tensor cores are a lot better than the Volta Tensor cores.)

There are better ways to prove the differences, some of which you have mentioned.
 
Quite definitively. His whole argument is that NVidia is using Tensor cores to do Ray Tracing (AKA intersection testing), and that RT cores don't actually exist. Yet in the Ray Tracing benchmark, the lowly 2060 FE (240 Tensor cores) outscores the Titan V which is an Absolute Tensor core beast (640 Tensor cores).

If his theory were true, then the Titan V should trample the 2060FE into the dust on RT performance.

I am not saying Chad is right or wrong, but I am saying the performance doesn't tell us whether the cores are different or not. Nvidia themselves have said that the Tensor cores are way better in Turing than in Volta. Little changes in architecture and the way the SM units are built and addressed would make a huge difference in performance, depending on the workload and whether the software was written to take advantage of said changes.
 