What if Nvidia made an RT accelerator card?

Stoly

By now it's clearly apparent that ray tracing is a HUGE performance killer.

It seems to me Nvidia needs to at least double RT/Tensor core performance to bring it to acceptable levels.

But even with a die shrink, there may not be enough die space available to cram in enough RT/Tensor cores.

So I was thinking: why not an RT accelerator card?

About a third of the Turing die space is used (wasted?) on RT/Tensor cores, so a separate RT card would have several advantages:


1. Turing would either have more room for extra CUDA cores, or could be made smaller and therefore cheaper.
2. People wouldn't have to pay for features they don't want.
3. People who do want RT could get the performance they need.
4. Nvidia could cram in at least twice the RT/Tensor cores for a much-needed RT performance increase, and the card could still be smaller and cheaper.
5. You could SLI RT cards for even more performance.

While this may not be very attractive to gamers, I'm sure 3D content creators would jump on it right away.
 
It wouldn't work, for the same reason that dedicated PhysX cards failed.

Developers won't spend time/money supporting a niche market, just as they haven't really supported mGPU at all (with a few exceptions like Tomb Raider).
 
Ask NovodeX about their PPU PhysX... no, ask Ageia and BFG about their PhysX accelerator... no, ask Nvidia what happened to GPU PhysX.

But seriously, if they were to make one, it would be obsolete in a few years, because if the tech takes off it will be in GPUs, not standalone cards. Who wants to pay for two pieces of hardware when you can buy one?

Now, if it had started off as a standalone RT accelerator...
My 2¢.
 
It wouldn't work, because you need tight integration of the RT cores with the GPU itself.
Besides, RT in games is slow not because these cards can't calculate enough ray intersections, or because they can't denoise fast enough, but because each ray that hits a surface needs to run a shader program, and there simply aren't enough CUDA cores, so there isn't enough shader performance. The same shader performance so many people complain is lacking on these cards for rasterization...

In the case of BFV, for example, you first rasterize the whole scene, which is already ridiculously shader-heavy, and then on top of that you pile on tons of additional shader workload. No wonder performance tanks.
This is also why the Titan V performs so well in this game: there simply isn't that much ray-intersection calculation needed.
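
To put some rough numbers on that, here's a quick back-of-envelope sketch. Every figure below is an assumption picked purely for illustration, not a measurement:

```python
# Rough back-of-envelope, not a benchmark: all numbers are assumptions chosen
# only to illustrate the point above (shading, not intersection, is the limit).

pixels_1080p   = 1920 * 1080   # ~2.07 million pixels
fps_target     = 60
rays_per_pixel = 1             # e.g. one reflection ray per pixel, BFV-style hybrid

rays_needed = pixels_1080p * fps_target * rays_per_pixel
print(f"rays needed per second: {rays_needed / 1e6:.0f} M")            # ~124 M rays/s

# Nvidia's marketing figure for a 2080 Ti-class card is on the order of
# 10 "gigarays" per second of intersection-test throughput (assumed here):
rays_available = 10e9
print(f"intersection headroom: ~{rays_available / rays_needed:.0f}x")  # ~80x

# So the fixed-function BVH/triangle testing is nowhere near the limit.
# The expensive part is everything that happens per hit: material/hit shaders
# plus denoising, all running on the same CUDA cores that rasterization
# already saturates, and with poor SIMD utilisation because rays diverge.
```

Even with generous assumptions the intersection hardware has an order of magnitude of headroom; the extra per-hit shading and denoising on the CUDA cores is what you actually feel.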

So NV could have used far fewer RT cores with little performance impact. Why didn't they? Simple: they needed to do it this way (one RT core per SM) because the RT core is large, and it would have sent complexity through the roof, at both the hardware level and the driver level, if some blocks had these cores and some didn't. Also, the idea is to eventually abandon rasterization completely and use path tracing to render everything, so they are actually being very forward-thinking.

Does this answer your question and put these ideas to rest once and for all?
 
Maybe a better idea: what if Nvidia dropped rasterization completely and made a ray-tracing *only* card?
Similar to my previous post: it wouldn't make much sense, because RT is mostly shader-based.
Removing the rasterization parts would not buy much, because GPUs are mostly tons of shaders plus some other logic, in this case the ROPs for rasterization. It may even be that those hardware resources are used to speed up RT. Loading the geometry for RT (building the acceleration structures) requires a lot of calculation on the GPU side, and if they could somehow use existing resources to speed that up, they surely did.

Also, at this point an RT-only card wouldn't sell to gamers at all, and removing everything a GPU doesn't need for RT would probably not free up much space.

It is amazing that some people cannot imagine and accept that NV actually knows what it is doing... they are not AMD :ROFLMAO:
 
Actually, a separate card for PhysX was a good, working idea, and if it had taken off it could have evolved into a separate core type inside GPUs, a separate chip on the package, or even something mounted on motherboards...

NV bought Ageia, killed off the real PhysX, which had more effects than their GPU PhysX, and then ran the physics calculations on shaders, which completely tanked shader performance... and people complained that it reduced their framerates, and about the exclusivity of PhysX to NV cards...

So comparing RTX to PhysX is stoo... silly :)
 
and people complained that it reduced their framerates, and about the exclusivity of PhysX to NV cards...
So comparing RTX to PhysX is stoo... silly :)
Well, ray tracing at launch tanked performance (in BFV) and it also only works on Nvidia cards. Maybe there are parallels.
 
Well, ray tracing at launch tanked performance (in BFV) and it also only works on Nvidia cards. Maybe there are parallels.
There are four types of people:
1. do not know but think they know and yell
2. do not know and do not yell because they know they do not know
3. do not know, then do research and when they know a lot start yelling
4. do not know, then do research and do even more research to be sure and then explain what they know when they see that what they know is enough to be helpful despite not knowing everything yet

I try my hardest to be only the 4th type: Arhat, the one who has gained insight into the true nature of existence and has achieved (technological) nirvana :>

Comparing RTX to PhysX on this scale is level 0 XD
 
Actually, a separate card for PhysX was a good, working idea, and if it had taken off it could have evolved into a separate core type inside GPUs, a separate chip on the package, or even something mounted on motherboards...

NV bought Ageia, killed off the real PhysX, which had more effects than their GPU PhysX, and then ran the physics calculations on shaders, which completely tanked shader performance... and people complained that it reduced their framerates, and about the exclusivity of PhysX to NV cards...

So comparing RTX to PhysX is stoo... silly :)
I wasn't implying they are similar technologies. A standalone card for a feature that isn't yet widely implemented or supported has happened once before, and it failed.
 
There are four types of people:
1. do not know but think they know and yell
2. do not know and do not yell because they know they do not know
3. do not know, then do research and when they know a lot start yelling
4. do not know, then do research and do even more research to be sure and then explain what they know when they see that what they know is enough to be helpful despite not knowing everything yet

I try my hardest to be only the 4th type: Arhat, the one who has gained insight into the true nature of existence and has achieved (technological) nirvana :>

Comparing RTX to PhysX on this scale is level 0 XD
And then you have people who think that the more they type, the better the point they are trying to make.
 
XoR_ I love ray-tracing and I want it to succeed. I think eventually in some form it will, since it's basically the Holy Grail of computer graphics.

The point I was making was that it is currently an Nvidia-only feature, and that doesn't bode well for wide market adoption.

You can look at any of those Nvidia features over the years, which still work okay and get some base level of support but aren't a focus for developers: PhysX, SLI, Surround, 3D Vision, etc.

I know Microsoft has implemented ray-tracing in DirectX12 and there is a Vulkan extension, but we really need it on all cards from all vendors if it's going to last.
 
XoR_ I love ray-tracing and I want it to succeed. I think eventually in some form it will, since it's basically the Holy Grail of computer graphics.

The point I was making was that it is currently an Nvidia-only feature, and that doesn't bode well for wide market adoption.

You can look at any of those Nvidia features over the years, which still work okay and get some base level of support but aren't a focus for developers: PhysX, SLI, Surround, 3D Vision, etc.

I know Microsoft has implemented ray-tracing in DirectX12 and there is a Vulkan extension, but we really need it on all cards from all vendors if it's going to last.
Nvidia has a history of introducing things that do not succeed.
 
XoR_ I love ray-tracing and I want it to succeed. I think eventually in some form it will, since it's basically the Holy Grail of computer graphics.

The point I was making was that it is currently an Nvidia-only feature, and that doesn't bode well for wide market adoption.

You can look at any of those Nvidia features over the years, which still work okay and get some base level of support but aren't a focus for developers: PhysX, SLI, Surround, 3D Vision, etc.

I know Microsoft has implemented ray-tracing in DirectX12 and there is a Vulkan extension, but we really need it on all cards from all vendors if it's going to last.


We should see it with Navi on the AMD side... I still think ray tracing has another 2-3 years before it's even considered a viable graphics option, but at least we're finally getting there.
 
So there are really 5 types of people? :)
No, only four
Then you have the masses of mindless zombies, trolls, and AI-generated comments.

Very soon to be DLSS
DLSS is actually a very good example of an NV feature that will soon replace GPU PhysX as the NV-locked feature people make fun of XD

XoR_ I love ray-tracing and I want it to succeed. I think eventually in some form it will, since it's basically the Holy Grail of computer graphics.

The point I was making was that it is currently an Nvidia-only feature, and that doesn't bode well for wide market adoption.

You can look at any of those Nvidia features over the years, which still work okay and get some base level of support but aren't a focus for developers: PhysX, SLI, Surround, 3D Vision, etc.

I know Microsoft has implemented ray-tracing in DirectX12 and there is a Vulkan extension, but we really need it on all cards from all vendors if it's going to last.
Ray tracing is gonna succeed eventually, in this or some other form. It is kinda like tessellation, but this time with MS support right from the start, which makes me more optimistic about wider adoption.

AMD could also work on DXR-compliant hardware, but the reality is that they are at least 2 years behind NV in GPU development, so they simply cannot afford it at this time. I hope they will support it eventually.

And I really hope PlayStation 5 will support some form of ray-tracing.
 
I think you have to look at RTX in general as BETA at the moment. Nvidia needed to get it in the hands of developers and consumers as data collection for optimization purposes prior to a node shrink. Getting the ball rolling on adoption was also important. I think RTX at 7nm is going to be pretty awesome.
 
There are four types of people:
1. do not know but think they know and yell
2. do not know and do not yell because they know they do not know
3. do not know, then do research and when they know a lot start yelling
4. do not know, then do research and do even more research to be sure and then explain what they know when they see that what they know is enough to be helpful despite not knowing everything yet

I try my hardest to be only the 4th type: Arhat, the one who has gained insight into the true nature of existence and has achieved (technological) nirvana :>

Comparing RTX to PhysX on this scale is level 0 XD


Let's just agree to disagree about that one...
 
I think you have to look at RTX in general as BETA at the moment. Nvidia needed to get it in the hands of developers and consumers as data collection for optimization purposes prior to a node shrink. Getting the ball rolling on adoption was also important. I think RTX at 7nm is going to be pretty awesome.

More or less. It shows massive improvements with limited implementations, i.e. those that stop well short of 'full' ray tracing. The big thing is that the improvements are real, the hardware is real, every engine is tackling it, and, well, AMD and Intel are both on board with respect to hardware support.

3D Vision

...was the only way to roll. It's only 'dead' because it has been replaced by VR, which is superior, but wasn't at all possible during the time that 3D Vision was used.

G84
You got me.

Fermi
Ran a bit hot, and was the last time Nvidia pushed compute-focused GPUs down into mainstream tiers. It was also wildly successful, and AMD's efforts at the time were about average. Hard to complain really.

Gameworks

Works extremely well, and it's done through the API, so it works on other vendors' hardware too.

nForce
Was brilliant, but was killed off essentially by Intel 'closing' their chipset compatibility. Still probably the best audio implementation until HDMI came along and obviated the need for DD5.1 encoding.

Very soon to be DLSS

I only see DLSS getting better, and it already works very well. I expect it will transition toward something we use all the time, particularly with super-high-resolution display outputs like VR, where consistency is more important than peripheral detail.


As for the OP: what the 'RT accelerator' idea misses is that such a solution adds a level of latency to the rendering pipeline that is simply unacceptable for real-time graphics. Think SLI, but even worse than the poorest SLI modes. SLI can still be relevant, but an RT accelerator would require another level of interconnect, because you're not just sharing per-frame data, you're sharing per-pixel data.
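
To make that concrete, here's a rough sketch of the bus traffic such a card would generate; the per-ray payload sizes are assumptions, chosen only to show the order of magnitude:

```python
# Rough sketch of why a separate RT accelerator card is painful: the per-ray
# sizes below are assumptions, not taken from any real implementation.

pixels = 1920 * 1080
fps    = 60

ray_request = 32   # bytes per ray: origin, direction, flags (assumed)
hit_result  = 32   # bytes per hit: position, normal, material id, etc. (assumed)
bounces     = 2    # each bounce is another round trip over the bus

bytes_per_frame = pixels * (ray_request + hit_result) * bounces
print(f"~{bytes_per_frame * fps / 1e9:.0f} GB/s just for ray traffic")   # ~16 GB/s

# PCIe 3.0 x16 tops out around 16 GB/s per direction, so one ray per pixel with
# two bounces at 1080p60 already saturates the link, before textures, BVH
# updates, or the rasterized G-buffer the rays are spawned from. And every
# round trip adds latency inside a ~16 ms frame budget, which is the
# "SLI, but worse" problem described above.
```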
 
3D Vision
G84
Fermi
Gameworks
nForce
Very soon to be DLSS

Just off the top of my head...

You can thank 3D Vision for contributing to the development of VR.
G84: one of Nvidia's best sellers.
Fermi: well, maybe the GTX 480, but Nvidia quickly reacted with the GTX 580.
Gameworks: the most popular developer middleware.
nForce: I guess you mean the original nForce, but Nvidia struck back with a vengeance with nForce 2.
DLSS: OK, I'll give you this one, but I really hope Nvidia delivers eventually.
 
Because if they offloaded it to a separate card, no one would buy it, and they would have no way to force it down people's throats and make them pay the added premium for graphics cards like they did this cycle.
 
 