AMD Ray Tracing

We'll never know because the devs of RTX titles will not allow RT to run on non-RTX cards.
 
I wonder how Google's TPUs will fare with "ray tracing" and gaming. They are streaming games now, and I would imagine they don't want to buy the competition's GPUs.
 
The best way forward for ray tracing is for AMD to get on board. It's the future, and they should do pretty well at it, considering how strong the compute performance is on AMD hardware.
 
Microsoft's blog post on MSDN about DXR indicates the tech is fundamentally a compute load, and thus technically doesn't require any extra hardware or GPU engines to run. They even went as far as encouraging developers to pick it up and use it with any in-market GPUs to see what they could do with it.
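For what it's worth, the capability query itself is vendor-neutral: whether DXR lights up is just a driver cap you ask D3D12 about. A minimal sketch (C++ against d3d12.h; the helper name is mine):

```cpp
#include <d3d12.h>

// Ask the runtime/driver whether DXR is exposed at all. On cards whose
// drivers don't advertise it, this reports NOT_SUPPORTED, even though
// the workload itself is "fundamentally a compute load" and Microsoft
// also shipped a compute-based fallback layer for experimentation.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false; // older runtime: DXR not even queryable
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```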

This is also evidenced by the first BFV RTX demo actually being developed on Titan Vs, which didn't have any RT hardware (though they did have Tensor cores for denoising, if that was even used at the time). When DICE first demoed Rotterdam with RT, the build wasn't using any of the new hardware in RTX GPUs.

I imagine that with AMD having some spare, unrealized compute, they may be able to create an okay implementation. We currently can't gauge how much the RT cores are accelerating the tech, so there's no way to tell how much performance can be realized with both raster and RT running on general-purpose shaders.
 
AMD has been working with Microsoft for some time now to implement partial ray tracing as part of the DX suite. All RT for games is partial. AMD just refers to the processing needed for RT as "compute"; so far no marketing terms have been issued as consumer catchphrases. There's a bunch of material out there about RT, like a video from last March.
 
Partial ray tracing has been attempted for a long time now. Anyone remember Watch Dogs 2 and "NVIDIA Hybrid Frustum Traced Shadows"? That's no more than a form of ray tracing helped by conservative rasterization and voxel rendering, both of which required a hardware implementation (launched with the GTX 900 series at Tier 1 and expanded to Tier 2 with the GTX 1000 series) and were then added to the DX12 feature set. It's the basis of DXR, so in any instance it requires more than just "compute": it requires a hardware implementation to be used correctly, which is what Nvidia is doing right now with the Tensor cores enabled at Tier 3 for maximum compatibility/performance.

Theoretically, Vega is the only current AMD GPU with conservative rasterization capable of Tiers 1-3, along with rasterizer ordered views (however, for whatever reason it is not implemented in the driver YET) and WDDM 2.5 (required for ray tracing). Nvidia has supported this since Maxwell, and even Intel has supported it at Tier 3 since Skylake. Is AMD keeping this for the professional market only? Waiting for more driver refinement? Saving it to launch as a Navi feature? Is it the performance penalty? Who knows, but they should be able to ray trace with a Vega 64. Nvidia, however, has been far ahead in this department for years and has been trying to push it; there are a couple of games that utilize those technologies, The Division and Shadow of the Tomb Raider, for example.
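For the curious, the tiers being thrown around above are queryable caps, not marketing. A minimal sketch (C++/D3D12) of checking the conservative rasterization tier and rasterizer ordered views on whatever card you have:

```cpp
#include <cstdio>
#include <d3d12.h>

// Print the caps this post is talking about: conservative rasterization
// tier (NOT_SUPPORTED / TIER_1 / TIER_2 / TIER_3) and rasterizer ordered
// view (ROV) support.
void PrintRasterCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        std::printf("Conservative rasterization tier: %d\n",
                    opts.ConservativeRasterizationTier);
        std::printf("ROVs supported: %s\n",
                    opts.ROVsSupported ? "yes" : "no");
    }
}
```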
 
Partial ray tracing has been attempted for a long time now. Anyone remember Watch Dogs 2 and "NVIDIA Hybrid Frustum Traced Shadows"? That's no more than a form of ray tracing helped by conservative rasterization and voxel rendering, both of which required a hardware implementation (launched with the GTX 900 series at Tier 1 and expanded to Tier 2 with the GTX 1000 series) and were then added to the DX12 feature set. It's the basis of DXR, so in any instance it requires more than just "compute": it requires a hardware implementation to be used correctly, which is what Nvidia is doing right now with the Tensor cores enabled at Tier 3 for maximum compatibility/performance.

Theoretically, Vega is the only current AMD GPU with conservative rasterization capable of Tiers 1-3, along with rasterizer ordered views (however, for whatever reason it is not implemented in the driver YET) and WDDM 2.5 (required for ray tracing). Nvidia has supported this since Maxwell, and even Intel has supported it at Tier 3 since Skylake. Is AMD keeping this for the professional market only? Waiting for more driver refinement? Saving it to launch as a Navi feature? Is it the performance penalty? Who knows, but they should be able to ray trace with a Vega 64. Nvidia, however, has been far ahead in this department for years and has been trying to push it; there are a couple of games that utilize those technologies, The Division and Shadow of the Tomb Raider, for example.
Or maybe they realize that there's currently no point in putting the man-hours into developing drivers for it yet on the consumer side.
 
If AMD wants to do ray tracing in hardware, I would hope they just leave it alone until they can fix their other problems first. Ray tracing is such a "popular" feature that there are exactly 0 games (not tech demos) using it.
 
Or maybe they realize that there's currently no point in putting the man-hours into developing drivers for it yet on the consumer side.

AMD has historically spent man-hours on features that either went unsupported or didn't work right. I think this is more a case of them not wanting a feature enabled on their hardware when it wouldn't work correctly.

They have this feature for their professional market, so it would be easy for them to enable it for their consumer-level products. The question for them is: how would it look? And wouldn't this cause them more issues than it's worth?

Yes, but Nvidia pays good money.

False

If AMD wants to do ray tracing in hardware, I would hope they just leave it alone until they can fix their other problems first. Ray tracing is such a "popular" feature that there are exactly 0 games (not tech demos) using it.

If they want to sell professional-level cards, they might want to hurry up and release hardware support for it soon. On the consumer side, I don't think they can afford to do both right now. Who knows, maybe the CPU division will start feeding some of its money to the GPU division.
 
So here's the thing.

DICE is not enabling non-RTX owners to use RT in BFV. Even though DXR is completely fine being used on any DX12-compatible GPU, DICE is using an RTX-specific codepath. This means if you own a 1080 Ti or Titan V, you will not be able to use RT, even if you have the power to run it (albeit much slower than RTX cards). This also means no direct comparisons are possible between RTX cards and existing GTX cards. I'm not sure if this is the case with ALL the RTX games demo'd so far, but DICE has come out and said this specifically.

So Nvidia is treating this like PhysX. It's exclusive to specific products, even though it's more than capable of running on other hardware.

Source

Read it and weep.

Even though DirectX Raytracing is a simple software extension that can run on existing hardware, you won't be able to enable it on non-RTX cards. Why? Money.
 
If they want to sell professional-level cards, they might want to hurry up and release hardware support for it soon. On the consumer side, I don't think they can afford to do both right now. Who knows, maybe the CPU division will start feeding some of its money to the GPU division.

Not at all. Until they figure out a lot more variables, they should stay far away from it. AMD cannot sell high-end cards, so even if AMD has a superior product, it is not going to sell.
The dynamic of having no games makes it a no-brainer.
Here is the thing that is really bothersome to me, and why AMD probably isn't going to win anything here even if they did support ray tracing in hardware.

When your competitor's flagship GPU is 754 mm² and you're struggling with your power consumption, are you really going to make hardware that competes?

And if they do decide to start on a GPU that does ray tracing, it is going to take 3 years before it is done.
 
Not at all. Until they figure out a lot more variables, they should stay far away from it. AMD cannot sell high-end cards, so even if AMD has a superior product, it is not going to sell.
The dynamic of having no games makes it a no-brainer.
Here is the thing that is really bothersome to me, and why AMD probably isn't going to win anything here even if they did support ray tracing in hardware.

When your competitor's flagship GPU is 754 mm² and you're struggling with your power consumption, are you really going to make hardware that competes?

And if they do decide to start on a GPU that does ray tracing, it is going to take 3 years before it is done.

I'm not going to pretend to know how long it takes to engineer the core ASIC design that NVidia uses; their marketing claims 10 years, but then again, 10 years ago we had a vastly different process.

I'm not talking about gaming cards, I'm talking about the pro market; at this time RT is big with professional cards. What is AMD's selling point for their pro cards versus the competition, price? They can sell high-end cards. The mining craze IMO hurt their image more than it helped: everyone now just assumes AMD's cards are overpriced and under-competitive (which isn't 100% true). They just need to fix their image now that the craze has died down a bit.
 
Read it and weep.

Even though DirectX Raytracing is a simple software extension that can run on existing hardware, you won't be able to enable it on non-RTX cards. Why? Money.

DICE said they would implement it for a competitor to nVidia if one existed. It took 4 Titan Vs to run at 1/3 the fps of a single 2080 Ti.

You’re right, it does take labor (money), and there’s no reason to implement it for cards that will run at 5 fps. Every architecture takes some tweaking.
 
Read it and weep.

Even though DirectX Raytracing is a simple software extension that can run on existing hardware, you won't be able to enable it on non-RTX cards. Why? Money.

Did NV just put a nail in its own RTX line's coffin?
 
Read it and weep. Even though DirectX Raytracing is a simple software extension that can run on existing hardware, you won't be able to enable it on non-RTX cards. Why? Money.

NOPE.

DICE said they would implement it for a competitor to nVidia if one existed.

This. I can't point to where the heck I read it, but I remember this piece of news well - they did state that they only have one path, for NV, right now, because only NV cards can process DXR code. When competitors have cards that process DXR, they'll add their special code to process DXR too. This is in no way different from how games have had specific AMD and NV graphics paths for years now, each accelerating things its own way and exploiting the hardware's different capabilities. We don't just run games in DX9, or DX11, or DX12 mode; both AMD and NV run that standard code with their own proprietary code. That's what RTX is: NV's way of processing DXR. AMD's is Radeon Rays currently; I'm sure they'll rebrand it for the consumer market.
 
NOPE.



This. I can't point to where the heck I read it, but I remember this piece of news well - they did state that they only have one path, for NV, right now, because only NV cards can process DXR code. When competitors have cards that process DXR, they'll add their special code to process DXR too. This is in no way different from how games have had specific AMD and NV graphics paths for years now, each accelerating things its own way and exploiting the hardware's different capabilities. We don't just run games in DX9, or DX11, or DX12 mode; both AMD and NV run that standard code with their own proprietary code. That's what RTX is: NV's way of processing DXR. AMD's is Radeon Rays currently; I'm sure they'll rebrand it for the consumer market.

DXR can run on any DX12-capable hardware. AMD (as well as Pascal and Volta, by Nvidia's own admission) can run DXR. AMD can also run their own form of ray tracing, called "Radeon Rays". It may be slow, but then again, it may not. We'll never know.
 
DXR can run on any DX12-capable hardware. AMD (as well as Pascal and Volta, by Nvidia's own admission) can run DXR. AMD can also run their own form of ray tracing, called "Radeon Rays". It may be slow, but then again, it may not. We'll never know.

They use nVidia tool sets that make RT easier to implement and better utilize nVidia hardware. It's not like flipping a bit. They would have to go through the entire process of tuning the software for AMD.

There’s absolutely no reason AMD couldn’t use the DXR paths to prove they are just as capable if they wanted to with a demo.

Why does everything have to be a conspiracy anymore?
 
They use nVidia tool sets that make RT easier to implement and better utilize nVidia hardware. It's not like flipping a bit. They would have to go through the entire process of tuning the software for AMD.

There’s absolutely no reason AMD couldn’t use the DXR paths to prove they are just as capable if they wanted to with a demo.

Why does everything have to be a conspiracy anymore?

Because Nvidia has a very real history of this sort of crap.
 
Just run ProRender on an AMD card in Blender or a host of other 3D programs and see how fast it really is. Except it isn't fast, or should I say, fast enough for games. I am sure AMD is working on better hardware acceleration for RT; they have ProRender in a number of top-tier 3D programs as it is.
 
Because Nvidia has a very real history of this sort of crap.

Well...

I see it as something similar. I don’t think they are technically blocking anyone else. I see it as nVidia’s version of Mantle. Can it run on other hardware? Sure! Will it run so shitty it isn’t worth it? Yes sir!

At least until AMD catches up. Personally, I think they should rush 7nm all-rasterized cards, since that's the majority of consumers... the 2070 is a 455 mm² die with the rasterized performance of a 300 mm² die.
 
Well...

I see it as something similar. I don’t think they are technically blocking anyone else. I see it as nVidia’s version of Mantle. Can it run on other hardware? Sure! Will it run so shitty it isn’t worth it? Yes sir!

AMD for the most part has only ever developed open APIs that they want the industry at large to use, and not only doesn't block entry but readily gives them away for free.
In the exact case you're bringing up (Mantle), AMD only designed it because there wasn't an industry-standard low-level API for GPUs. It wasn't until DX12 came about that there was any other standard that included at least some of the features in Mantle, at which point AMD themselves dropped Mantle.
For further proof, AMD gave Mantle to the Khronos group, and much of what Mantle was is now the basis for Vulkan, a 100% free and open API. nVidia has never done anything remotely like that. They purchase or create APIs to use themselves or to lock others in; instead of creating or fostering open formats, they willfully and intentionally create their own standards specifically for the purpose of creating monopolies. Ageia and PhysX are among the major deaths they're responsible for.
Need another example? You can look at nVidia and their choice to create G-Sync, while AMD partnered with basically everyone else to create the open, royalty-free FreeSync format.

Is AMD pure? No, because none of these companies are, and I'm cynical enough to believe that even if I don't have evidence, I'm sure they've done something that wasn't friendly to the consumer. But it is very clear that they have done significantly/tangibly less than nVidia.
 
Because Nvidia has a very real history of this sort of crap.

The only example I can think of is when Batman: AA disabled anti-aliasing when it detected an AMD card. It makes no sense for nVidia to try to make ray tracing vendor-locked, because it will smother adoption.
 
The only example I can think of is when Batman: AA disabled anti-aliasing when it detected an AMD card. It makes no sense for nVidia to try to make ray tracing vendor-locked, because it will smother adoption.

Yeah, the same with Gsync, PhysX, CUDA, etc. It makes no sense for Nvidia to try to vendor-lock these things as it will smother adoption!



/S
 
Oh wow, OK... here we go. I don't know how some of these things are getting lost in communication:

There’s absolutely no reason AMD couldn’t use the DXR paths to prove they are just as capable if they wanted to with a demo.

AMD is already one of the parties that support DXR. They've been confirmed by Microsoft as working with them on it. There's a 0% chance they won't support DXR, because DXR is not owned by Nvidia; it's a Microsoft technology like DX. You think either of the GPU manufacturers is not going to support a standard like DXR? Like DX? Nope. Not happening. So it's not just that there's no reason not to support it; more like there's no chance they won't.

Because Nvidia has a very real history of this sort of crap.

NV does have a history of that sort of crap, yes. But this is not an NV technology. It's a Microsoft open-ish standard (open only on Windows, and on the versions they decide, which is mainly Win10). NV can't really limit where Microsoft chooses to make DXR work, and it's in Microsoft's best interest for DXR to work everywhere. Microsoft doesn't answer to NV.

AMD gave Mantle to the Khronos group, and much of what Mantle was, is now the basis for Vulkan, a 100% free and open API. nVidia has never done anything remotely like that.

This is 100% true - Vulkan is mostly Mantle with extras and some retooling. Don't paint AMD as a saint, though: their DX11 driver is complete garbage compared to Nvidia's (overhead, thy name is AMD DX11), so it was in their own best interest to popularize a low-level API where they could excel. Mantle was 100% AMD-based. Once it caught Khronos' attention, they gave it away, knowing they stood a better chance. Vulkan then happened, and that prompted Microsoft's DX12 to actually happen. So yeah, they definitely kicked this evolution into high gear, but don't for a second pretend they did it for an altruistic betterment of the market. They did it because it was their best shot at having good performance vs. NV.

Yeah, the same with Gsync, PhysX, CUDA, etc. It makes no sense for Nvidia to try to vendor-lock these things as it will smother adoption!

I don't understand why you don't get this, KazeoHin. DXR IS A MICROSOFT TECHNOLOGY, not an Nvidia one. Nvidia would definitely love to lock AMD out of it, but they can't, because it's not theirs to control. DXR is a Microsoft DirectX standard. Nvidia's RTX is just their middleware layer that talks to DXR. From game code, to DXR, to RTX, to the actual GPU. RTX just translates DXR for NV's hardware. AMD will do the same thing, either with Radeon Rays or whatever they make in the end for consumers.
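To make the layering concrete, here's a minimal sketch (C++/D3D12; assumes the raytracing state object and shader tables are already built, which is the bulk of the real work). Note there isn't a single RTX-specific call in it:

```cpp
#include <d3d12.h>

// Game code only ever talks to the vendor-neutral DXR API. Whatever sits
// underneath it (RTX on NV, something else on AMD) is the driver's problem.
void TraceFrame(ID3D12GraphicsCommandList4* cmdList,
                ID3D12StateObject* rtPipeline,
                const D3D12_DISPATCH_RAYS_DESC& rays)
{
    cmdList->SetPipelineState1(rtPipeline); // DXR pipeline state object
    cmdList->DispatchRays(&rays);           // Microsoft's API, any vendor
}
```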
 
Yeah, the same with Gsync, PhysX, CUDA, etc. It makes no sense for Nvidia to try to vendor-lock these things as it will smother adoption!

G-Sync and PhysX are sideshows. Ray tracing fundamentally changes the foundations of 3D rendering. That's like saying Nvidia tried to vendor-lock unified shaders.
 
So yeah, they definitely kicked this evolution into high gear, but don't for a second pretend they did it for an altruistic betterment of the market. They did it because it was their best shot at having good performance vs. NV.

Whether they did or they did not do things for altruistic reasons is secondary. Secondary to fully reading what I said. Especially if you're going to question what I said.
 
Whether they did or they did not do things for altruistic reasons is secondary. Secondary to fully reading what I said. Especially if you're going to question what I said.

Um...

I was agreeing with you. Not questioning what you said. AMD has certainly done more good for the market than NV, the latter has been quite consumer-hostile for a few years now. I was only pointing out that it doesn't mean they're not doing it for selfish reasons.
 
Um...

I was agreeing with you. Not questioning what you said. AMD has certainly done more good for the market than NV, the latter has been quite consumer-hostile for a few years now. I was only pointing out that it doesn't mean they're not doing it for selfish reasons.

And then you still couldn't be bothered to read my entire post, in which I state:

Is AMD pure? No, because none of these companies are, and I'm cynical enough to believe that even if I don't have evidence, I'm sure they've done something that wasn't friendly to the consumer. But it is very clear that they have done significantly/tangibly less than nVidia.

So, if you're going to state my beliefs, read the whole post. This wasn't an issue about "factual information". It had to do with how you represented me and my position. So, good job quoting twice and then still missing the information?
 
I don't understand why you don't get this, KazeoHin. DXR IS A MICROSOFT TECHNOLOGY, not an Nvidia one. Nvidia would definitely love to lock AMD out of it, but they can't, because it's not theirs to control. DXR is a Microsoft DirectX standard. Nvidia's RTX is just their middleware layer that talks to DXR. From game code, to DXR, to RTX, to the actual GPU. RTX just translates DXR for NV's hardware. AMD will do the same thing, either with Radeon Rays or whatever they make in the end for consumers.

DXR is indeed a Microsoft technology, and it can run on any DX12-capable GPU. I'll repeat myself: DXR is a feature that will work on any DX12-capable GPU. So why does DICE limit RT to just RTX cards?
 
So, if you're going to state my beliefs, read the whole post. This wasn't an issue about "factual information". It had to do with how you represented me and my position. So, good job quoting twice and then still missing the information?

I read your post and still agree with what you said. I don't see where your animosity comes from; I never misrepresented your facts, just elaborated on how Vulkan came to be, supporting what you originally said. But OK, you keep trying to argue if that is what you want! #whiteflag
 
DXR is indeed a Microsoft technology, and it can run on any DX12-capable GPU. I'll repeat myself: DXR is a feature that will work on any DX12-capable GPU. So why does DICE limit RT to just RTX cards?

They never said they limited DXR to NV hardware; they said RTX wouldn't work on vendors other than Nvidia. That's quite logical. DXR would still work on AMD GPUs, but it'll be slower until AMD provides them with their API layer. DICE can't add a ray tracing layer specific to AMD if AMD doesn't provide the middleware to talk to their GPUs properly, or even have cards in the market to process it. Don't forget DXR is a DX12 technology, and DX12 isn't just plug-and-go like DX11: it's a standard, but you have to code specifics for your hardware, since it's close to the metal.
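And to be clear about what a "vendor-specific path" usually means in practice: it's conceptually just a branch on the adapter, something like this sketch (C++/DXGI; the enum and function names are mine):

```cpp
#include <dxgi.h>

enum class GpuVendor { Nvidia, Amd, Intel, Other };

// Engines commonly sniff the adapter's PCI vendor ID and pick a tuned
// codepath per vendor; an "RTX-only" RT path is just such a branch.
GpuVendor DetectVendor(IDXGIAdapter1* adapter)
{
    DXGI_ADAPTER_DESC1 desc = {};
    adapter->GetDesc1(&desc);
    switch (desc.VendorId) {        // PCI-SIG vendor IDs
        case 0x10DE: return GpuVendor::Nvidia;
        case 0x1002: return GpuVendor::Amd;
        case 0x8086: return GpuVendor::Intel;
        default:     return GpuVendor::Other;
    }
}
```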
 
DXR is indeed a Microsoft technology, and it can run on any DX12-capable GPU. I'll repeat myself: DXR is a feature that will work on any DX12-capable GPU. So why does DICE limit RT to just RTX cards?

Has AMD released a DXR-capable driver yet?
 
Comparing Nv-based RT to the AMD "flavor" is like comparing apples to eggs: completely different things. If Nv does it a certain way and MSFT "once again" backs them to do it the way they please, while systematically "crippling" AMD hardware directly or indirectly because they will ensure Nv gets "precedence" for primary support, then for AMD to put any real effort toward a DX-based RT feature is already a lost cause, IMO.

With Vulkan for RT-type stuff, AMD is very much playing their own "ball game". AMD TRIES to do things that are "open for everyone", but Nv constantly undermines anything that isn't something they basically have full control over (which I understand, to a point).

Vulkan is "neat" because there is now some tooling that can do DX12 within Vulkan. Not native, so not the greatest thing, but at least AMD keeps the door "open". Yet anything AMD has a "hand in", Nv basically wants nothing to do with, at least that is the way it appears to me. And of course MSFT wants full control over everything as well, so they can dole out "favored partner" status, or at least so they can clothe what they are doing in the "for security reasons" BS blanket.

With Vulkan they have much less "say" in how it should be done; Khronos tries to be much less vendor-biased (compared to what OpenGL was), and you either use it as it was designed to be used or "F off". Compare that with MSFT, who for many years have not taken the hardware/software that shows the best usage of features, but "cater" to specific companies (like Intel and Nv).

I honestly doubt there will ever be a true direct comparison of RT (path tracing, voxels, etc.) or most any feature, primarily because Nv wants to do "parlour tricks" to always appear "the fastest", where AMD tries to be more "full fat", which obviously has its detriments of various types (look at tessellation as a prime example). AMD was "first" to build for it hardware-wise in mass-market discrete GPUs for many years; Nv "rushes in" because MSFT says they will put it into the next DX version. The problem is they allowed Nv to do it one way, automatically crippling AMD in the way it was/is done.

Nv does not want to appear "slow" when more than once they have been slower than AMD (or others) even at their own proprietary tech; not good for business, obviously.

Nv wants to do it one way, AMD tries to do it a different way regardless of the API being used. It's hard to play ball when they all want their own ballfield: AMD wants others to play on the same field for the "love of the game"; Nv wants the entire stadium to themselves because "why should we share anything, you pricks".
 
AMD does not have the hardware to do RT, and until they do, any DXR support will be reduced to watching a slideshow.
 