Why does raytracing suck so much, is it because of nvidia's bad hardware implementation?


Peppercorn

I think everyone agrees that adding 50% bloat to the die with next to zero return for the end user's money is a brutal design choice. If/when the time is right, I think AMD's implementation will be more elegant and efficient. Maybe some sort of CPU/GPU hybrid approach?
 
Note: It's been some years since I've written any 3d rendering engine, and I did so as a hobby, so do your own research on anything I say.

There is nothing wrong with Nvidia's implementation of ray tracing, and AMD is not going to release a version that magically performs dramatically faster. Throwing the CPU at it is likely not going to help at all due to the nature of the computations (afaik). Multiplying the number of shader cores available would likely be far more efficient.

Trying to explain while avoiding a computer science lecture:

Ray tracing is inherently and extremely computationally demanding. It is not a new technology; it has been known for decades. In fact, it's much easier to program (to me at least) than other techniques such as rasterization (the old familiar triangles and polygons) or scanline rendering. In the '90s, ray tracing was called "the Holy Grail of 3D rendering".

The problem is in how ray tracing works. Rather than the baked-in, static lighting we're used to with rasterization, ray tracing essentially breaks a scene down to the level of individual light paths. For every pixel you see, rays are cast from the camera into the scene, bounced off objects, and traced back toward the light sources, EVERY SINGLE FRAME. When you move in real life, the angle between the objects you're seeing and the light sources (light bulbs, the sun, etc.) is constantly changing, causing a shift in perceived brightness and color. Multiply out your screen resolution (1920x1080 is roughly 2,000,000 pixels), consider how many rays are being cast per pixel per frame, then multiply that by how many frames per second you expect the game to run at, and you should see the problem. And that's just scratching the surface, before thinking about how light interacts and scatters, or about moving and static objects casting shadows and reflections. Rasterized games don't have that problem because the lighting doesn't change as you move through a scene, or if it does, it's happening in a pre-calculated (baked-in) manner rather than being computed in real time.
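To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python; the samples-per-pixel and bounce counts are made up purely for illustration, not what any particular game or GPU actually does:

[code]
# Back-of-envelope ray budget at 1080p/60, with illustrative sample/bounce counts.
width, height = 1920, 1080
fps = 60
samples_per_pixel = 2      # rays launched per pixel per frame (assumed)
bounces_per_ray = 3        # shadow/reflection hops each ray may take (assumed)

pixels = width * height                    # ~2.07 million
rays_per_frame = pixels * samples_per_pixel * bounces_per_ray
rays_per_second = rays_per_frame * fps

print(f"{pixels:,} pixels -> {rays_per_frame:,} rays/frame -> {rays_per_second:,} rays/second")
# ~2,073,600 pixels -> ~12.4 million rays/frame -> ~746 million rays/second,
# and every one of those rays has to be tested against the scene geometry.
[/code]

Even with those modest made-up numbers you're into hundreds of millions of rays per second before you've shaded anything.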

Hopefully that makes sense. There are many great technical references available online if you want to learn how it actually works.
 
I think everyone agrees that adding 50% bloat to the die with next to zero return for the end user's money is a brutal design choice. If/when the time is right, I think AMD's implementation will be more elegant and efficient. Maybe some sort of CPU/GPU hybrid approach?
The Tensor and RT cores only take up about 17% of the die space. And I've been getting plenty of return from it, enjoying the games that include ray tracing, in addition to the 35% boost in rasterized performance over my Pascal Titan X.
 
If/when the time is right, I think AMD's implementation will be more elegant and efficient.

AMD's implementation... isn't. They'll do ray tracing in hardware eventually, they might even enable it to run on their current GPUs (we're waiting on their drivers, as usual), but it's going to be at least a year out. By then we can expect Nvidia's second generation of hardware ray tracing.
 
Now was a perfect time for them to experiment with RTX cards. They have enough of a performance advantage that it was safe to waste a bunch of die space on playing with new technology.

It obviously wasn't a great choice if your goal was to make the fastest card humanly possible. As of right now they have the fastest cards on the market and a bunch of real-world ray tracing results. They got the best of both worlds, and I'm sure the experience they are getting will help when/if they decide to push ray tracing mainstream.

I don't think any recent tech developments/choices are in the least bit surprising when you start looking at products from a CEO's standpoint instead of an enthusiast's.
 
I would say that ray tracing in current Nvidia hardware is a failure for the following reasons, most of them related to Nvidia proving that the money they make is more important than usable features for the gaming community.

1. segmented performance across all of the current models
2. pricing of the hardware is segmented as well, which stops developers dead in their tracks.
3. Nvidia's failure to implement ray tracing in software/gaming engines.
4. by the time ray tracing is used in most titles, the hardware is already obsolete

The last point is all Nvidia's fault: slow adoption of their hardware means that they are responsible for the software implementation (games). That might be all right if you can get excited about Quake 2, but I guess that does not draw in the crowds to change things around (mid-20s fps at 4K on a 2080 Ti and a 9900K is in itself pretty telling about what ray tracing is for Nvidia users).
 
Now was a perfect time for them to experiment with RTX cards. They have enough of a performance advantage that it was safe to waste a bunch of die space on playing with new technology.

It obviously wasn't a great choice if your goal was to make the fastest card humanly possible. As of right now they have the fastest cards on the market and a bunch of real-world ray tracing results. They got the best of both worlds, and I'm sure the experience they are getting will help when/if they decide to push ray tracing mainstream.

I don't think any recent tech developments/choices are in the least bit surprising when you start looking at products from a CEO's standpoint instead of an enthusiast's.

I would wager that customers don't appreciate having to foot the bill for their prototype experiment and bloated die, nor have I witnessed very many at all advocating from the CEO's point of view rather than the consumer's. Yes, "they" got the best of both worlds, but that doesn't translate at all to the interests of consumers (at least they thought it would, but as the poor RTX sales have illustrated, consumers don't seem to be as willing to be taken advantage of as the CEO calculated). I bet fewer than 1% use it in the handful of games on the market. If current games are too demanding, then future games will be even more so. It looks like the wrong time for ray tracing and the wrong implementation.
I would like to see the majority of the work done on the CPU, where there is an abundance of resources already available in the new era of multicore CPUs, and not with a large chunk of unnecessary silicon for consumers to pay for. Sure, it makes sense for Nvidia to go that route since that's their only option, but it doesn't make sense for consumers. I think we'll see some much more efficient and cost-effective solutions in the near future.
 
Now was a perfect time for them to experiment with RTX cards. They have enough of a performance advantage that it was safe to waste a bunch of die space on playing with new technology.

It obviously wasn't a great choice if your goal was to make the fastest card humanly possible. As of right now they have the fastest cards on the market and a bunch of real-world ray tracing results. They got the best of both worlds, and I'm sure the experience they are getting will help when/if they decide to push ray tracing mainstream.

I don't think any recent tech developments/choices are in the least bit surprising when you start looking at products from a CEO's standpoint instead of an enthusiast's.

This. I've actually been pleasantly surprised by the ray tracing features of my 2080 Ti. Everyone complained about how slow it is, but I find that even at 4K, Shadow of the Tomb Raider (currently the only game I care to play that supports it) is actually playable with everything dialed up to the max, and the ray-traced shadows do look pretty amazing. It's not blazing fast, but it's not unbearably slow either, for a single-player game.

Also, the Quake II demo, which is apparently 100% ray traced, was pretty amazing in terms of how much better it makes the game look, without changing any assets at all. Imagine what it would be like if they did this to other games - Half Life, for instance.
 
segmented performance across all of the current models

So you're new to consumer electronics...

pricing of the hardware is segmented as well, which stops developers dead in their tracks.

Developers are putting DXR in every major engine.

Nvidia's failure to implement ray tracing in software/gaming engines.

I can't imagine you being more hypocritical. Should I dredge up comments from you about Nvidia's 'The Way It's Meant To Be Played'? You're saying you want more of that?

by the time ray tracing is used in most titles, the hardware is already obsolete

It's likely that it'll be slower, but we're also seeing developers get better at their hybrid approaches. Which is good, because that will be needed for the next decade to maintain support across hardware generations, most specifically the upcoming consoles that will lack significant hardware ray tracing grunt (if they have any at all!) and will start looking very under-spec'd compared to the visuals available on PCs soon after release.
 
Also, the Quake II demo, which is apparently 100% ray traced, was pretty amazing in terms of how much better it makes the game look, without changing any assets at all. Imagine what it would be like if they did this to other games - Half Life, for instance.

That's a ridiculous statement. Quake 2 came out in December of 1997. You could put it on any modern game engine and be pretty amazed in terms of how much better it makes the game look with or without overpriced hardware.
 
The Tensor and RT cores only take up about 17% of the die space. And I've been getting plenty of return from it, enjoying the games that include ray tracing, in addition to the 35% boost in rasterized performance over my Pascal Titan X.

The problem is that there are fewer than 10 games with support, out of the tens of thousands of video games. Generation to generation, Nvidia used to deliver the same ~35% boost (give or take) and charge similar (sometimes slightly higher, sometimes slightly lower) pricing. This time the price is absurdly higher for essentially the same 35% you would normally expect after two years with Pascal. Granted, I'm not the CEO, but this sure seems like a money grab to me.
 
The problem is that there are fewer than 10 games with support, out of the tens of thousands of video games. Generation to generation, Nvidia used to deliver the same ~35% boost (give or take) and charge similar (sometimes slightly higher, sometimes slightly lower) pricing. This time the price is absurdly higher for essentially the same 35% you would normally expect after two years with Pascal. Granted, I'm not the CEO, but this sure seems like a money grab to me.

I think without RT, GPUs would become like CPUs: lasting 5+ years with no real reason to upgrade. That was always my guess as to why they pushed it out. I kind of feel that way regardless with my current GPUs. For the first time I am actually content, lol.
 
I think without RT, GPUs would become like CPUs: lasting 5+ years with no real reason to upgrade. That was always my guess as to why they pushed it out. I kind of feel that way regardless with my current GPUs. For the first time I am actually content, lol.

I don't think that at all... There's always a market for a faster video card. If you can push higher refresh rates with modern game engines at 1080p into mainstream pricing (sub-$250), people will upgrade. There's been considerable movement generation to generation in video cards. The fact that the Turing generation has 35% more rasterization performance shows that there is still room to move. Nobody is complaining about the performance. It's the price that Nvidia is trying to extract for that performance that's the issue.
 
So you're new to consumer electronics...



Developers are putting DXR in every major engine.



I can't imagine you being more hypocritical. Should I dredge up comments from you about Nvidia's 'The Way It's Meant To Be Played'? You're saying you want more of that?



It's likely that it'll be slower, but we're also seeing developers get better at their hybrid approaches. Which is good, because that will be needed for the next decade to maintain support across hardware generations, most specifically the upcoming consoles that will lack significant hardware ray tracing grunt (if they have any at all!) and will start looking very under-spec'd compared to the visuals available on PCs soon after release.

Deflection and whataboutism do not change the facts.

You twist it around and don't discuss any points; your first reply is a personal attack that does not address the point.

Your second reply does not address it either, because developers can and will do things, but does that cover the load? It does not, in any way, shape or form.

Your third reply is another personal attack that does not discuss the failure to implement software, whereas software that uses GameWorks is so widespread it makes you wonder if you even know what you are talking about.

The bold part is just Nvidia marketing speak for "we're slow because you forgot to pay us stupid money for technology we're not implementing in any real fashion soon."

Four games. You are describing four games being there for people that can run at a low enough resolution to get decent framerates if they didn't shell out $1,200. Talk about fucking slow. This does not mean you can't enjoy them if you purchased the card, but if you look at GameWorks, that was never held back.
 
If AMD rather than Nvidia had been the first to offer dedicated RT hardware and had managed to convince a few game studios to implement RT in their games, they would have been subject to the same difficulties Nvidia had, if not worse. Their hardware would also have been encumbered with massive, expensive dies with dedicated RT functions, yet would still fall short on the performance front. The only difference is, their defenders would be cheering them on as predictably (and hypocritically) as can be, and all the arguments in this thread would be reversed. This thread would likely be titled 'Ray Tracing, AMD's bold move forward'. It's a pattern here. It's not what the criticisms are (legit or otherwise) but WHO they may involve, AMD or Nvidia, that determines how and where the criticisms are directed.
 
It is laughable when they use a 22-year-old game to demonstrate RTX, this late in its cycle. And it runs like a modern game in terms of fps; it's safe to say running proper RT on a modern game is pretty far off. I can see it maybe being important in the future, but not now. End users pay a 40% premium for a feature that is pushed on them; at least release a non-RT line.
 
I would bet that if there were a poll around here asking whether you would have bought an 1180 Ti for $799 or the 2080 Ti for $1,199, the only difference being the RTX features added to the 2080 and the 1180 having the same 35% bump in everything else, very few people would have bought the RTX line.
 
It is laughable when they use a 22-year-old game to demonstrate RTX, this late in its cycle. And it runs like a modern game in terms of fps; it's safe to say running proper RT on a modern game is pretty far off. I can see it maybe being important in the future, but not now. End users pay a 40% premium for a feature that is pushed on them; at least release a non-RT line.
What makes you think hardware DXR has anything to do with price?
And where does this 40% come from?
Another number pulled straight out of the a*s in this topic...
 
I would bet that if there were a poll around here asking whether you would have bought an 1180 Ti for $799 or the 2080 Ti for $1,199, the only difference being the RTX features added to the 2080 and the 1180 having the same 35% bump in everything else, very few people would have bought the RTX line.
The 2080 Ti has been available for $999 for a long time now.
And where does your price reduction come from anyway?

You have absolutely no idea how prices are calculated and why they are what they are.
The price increase happened in the same generation hardware DXR was added, and you immediately jump to conclusions.
Better go see a neurologist or something, because I fear your tensor cores are malfunctioning...
 
The 2080 Ti has been available for $999 for a long time now.
And where does your price reduction come from anyway?

You have absolutely no idea how prices are calculated and why they are what they are.
The price increase happened in the same generation hardware DXR was added, and you immediately jump to conclusions.
Better go see a neurologist or something, because I fear your tensor cores are malfunctioning...

My pricing is Founders Edition to Founders Edition. I'm assuming that Nvidia has some reason other than sheer greed.
 
I would bet that if there were a poll around here asking whether you would have bought an 1180 Ti for $799 or the 2080 Ti for $1,199, the only difference being the RTX features added to the 2080 and the 1180 having the same 35% bump in everything else, very few people would have bought the RTX line.
Absolutely. We all knew RT performance was crap since day one. But people still needed upgrades, especially for higher-resolution gaming. What other options did we have? What could I have bought as an upgrade to my 1070? I would have preferred a 1080 Ti at or near its $650 original MSRP, but those were no longer around at less than 2080 prices. I waited for the Radeon VII to see what it could do, but did not want a loud vacuum cleaner humming near me. So I had to opt for a 2080, which was around the same price. I could not care less about its RT features; I only bought it for its higher performance over my last card.
 
What makes you think hardware DXR has anything to do with price?
And where does this 40% come from?
Another number pulled straight out of the a*s in this topic...
1080 Ti vs. 2080 Ti is, or at least was, about a 40% increase in price. Pretty sure you can just look at the prices of both cards and see it is more or less 40%, give or take. I can only guess the 980 Ti to 1080 Ti jump was more similar in price; while the 1080 Ti was more expensive for me, it was not by a lot. I don't know what other reason they have to nearly double the price of the same tier of graphics card in one generation; it seems rather special. At best I can get a 2080 Ti for 1,400-1,500 USD as the lowest price in Norway... importing is pointless because the government screws you over on tax and processing. That is pretty much the same as at release, unfortunately.
 
Deflection and whataboutism do not change the facts.

None of that here.

You twist it around and don't discuss any points; your first reply is a personal attack that does not address the point.

I hit all of your 'points', and added that your first point shows an extreme naivete toward basic consumer electronics markets.

Your second reply does not address it either, because developers can and will do things, but does that cover the load? It does not, in any way, shape or form.

'Cover the load'? Perhaps your lack of specificity is the root of the problem.

Your third reply is another personal attack that does not discuss the failure to implement software

It's bald-faced hypocrisy, so yes, I called that out, especially since the software is implemented in many engines and is being implemented in the rest.

The bold part is just Nvidia marketing speak for "we're slow because you forgot to pay us stupid money for technology we're not implementing in any real fashion soon."

Modern games running well with hybrid ray tracing is 'not implementing in any real fashion soon'?


And you wonder why you're being called out?
 
To be brutally honest, I would have expected Nvidia to take a much more hybrid, software-focused approach to ray tracing and AMD to lean much more on the hardware side of the coin, going by their combined histories and AMD's "habit" of wanting to keep every tool in every product they can (so it can be a race car, a work truck, a battle tank, etc.).

To give an example: tessellation. AMD/ATI had the hardware in their own products for years before MSFT put it into the DX spec and allowed Nvidia (and others as well) to take a hybrid approach if they liked. AMD was "forced" to play with Nvidia's ball, instead of Nvidia and the others using what AMD had built and shipped, I believe, 4-5 years ahead of time (a very, very deep design) compared to the much cheaper way of doing it that Nvidia eventually used. The Radeons can still do the full-out "proper" tessellation, and in the few games that use the non-"biased" version, the level-of-detail difference is truly night and day, "this is a Radeon" (in a very positive way).

Then you get things like Nvidia "helping" with Square's Final Fantasy benchmark, where a bunch of crap is running that should not be, so Nvidia is able to use "shit on the card that serves no other purpose" to handle all that crap while the rest of the hardware/software runs the actual game. The Radeons are "forced" to run the "shit" they were not, and cannot be, built to "understand," which robs them of a bunch of performance AND increases power use.

AMD/ATI has done ray tracing just as long as, if not longer than, Nvidia, mainly because they are "older" and tend to be the "trailblazer," whereas Nvidia for the most part (to my mind) tends to burst into the party to show off their half-assed, redneck way of doing something somebody else is doing or will be doing, convincing everyone that everything else is "not that impressive" (even when their product is more or less EXACTLY the same performance level, and the other product is once again overbuilt instead of built "just enough": higher-grade capacitors, VReg design, dropping the speed of certain things to promote extra-long service life...).

anyways.....

LOL......

Raytrace is not "new" however is very "expensive" to use the feature, Nv way of doing it is of course :cheating: just like they do with Tessellation, or that stupid FF benchmark, that is, they only comput X of the image as raytrace the rest of rasterize to keep performance up and hopefully at the same time power/compute cost as low as possible even though the final image is much "richer"

Nvidia only put a fraction of the hardware in, just like APUs initially started with a very small percentage of the die for graphics and are now up to 75% of the die in some cases... I believe a hybrid approach is the "right way," as otherwise the ray tracing hardware would likely have to be made much like "shaders," which can act as either pixel or vertex shaders; the RT hardware would likely (most likely?) at some point become able to "cross-compute" as either a ray or a "shade" (rasterize).

Maybe that is all that is needed: a chip that programs the shaders on the fly to be either a ray tracer or a rasterizer, without requiring a chunk of the "core" to be made into something that can only ever be one or the other.

I see Nvidia as "cheaping out" via the hybrid RT they are doing (however, there is very logical reason behind the why they did such) I believe the next logical step would be a new class of shader that could do Rasterize, Pathfinding, Raytracing, Voxels or whatever "on the fly"

GPUs are, after all, ASICs, so all they need is another chip (or a few) to tell the ASIC what it must do (the reason GPUs are wickedly faster for mining/hashing is this very ability)... Given the amount of stuff Nvidia has chopped and changed since around their GTX 500 generation, AMD is the "most likely" to do the "multi-shader" approach, or whatever, "properly" (maybe); then again, Nvidia is "known" to butt in line to show off their new "just buy it" toy.
 
The Tensor and RT cores only take up about 17% of the die space. And I've been getting plenty of return from it, enjoying the games that include ray tracing, in addition to the 35% boost in rasterized performance over my Pascal Titan X.

I think it's closer to 30% of the die space.

Anyway, for a first iteration it's remarkably good. I recall seeing Larrabee demos back in the day, and it performed pretty badly at the "high resolution" of 1024x768. Even prior to Turing, four Titan Xp cards were needed to run the Star Wars reflections demo at 20-something fps, whereas a single RTX 2080 Ti can run it at 4K at 30+ fps.
 
AMD's implementation... isn't. They'll do ray tracing in hardware eventually, they might even enable it to run on their current GPUs (we're waiting on their drivers, as usual), but it's going to be at least a year out. By then we can expect Nvidia's second generation of hardware ray tracing.
That comment alone about "I think AMD's implementation... blah blah" is a clear flag that this is just another rant from an AMD fanboy; nothing to see here.
 
I would say that ray tracing in current Nvidia hardware is a failure for the following reasons, most of them related to Nvidia proving that the money they make is more important than usable features for the gaming community.

1. segmented performance across all of the current models
2. pricing of the hardware is segmented as well, which stops developers dead in their tracks.
3. Nvidia's failure to implement ray tracing in software/gaming engines.
4. by the time ray tracing is used in most titles, the hardware is already obsolete

The last point is all Nvidia's fault: slow adoption of their hardware means that they are responsible for the software implementation (games). That might be all right if you can get excited about Quake 2, but I guess that does not draw in the crowds to change things around (mid-20s fps at 4K on a 2080 Ti and a 9900K is in itself pretty telling about what ray tracing is for Nvidia users).

1. I guess, but you really can't expect them to go mainstream on RTX; it's the first gen, after all.
2. See above.
3. It's already in the most popular engines, like Unity, Unreal, and Frostbite (who uses CryEngine?), so I expect several titles to be RTX-enabled. Granted, RTX kind of sucks in many of them, but then again, it's first gen.
4. That's the price to pay for being an early adopter.
 
I would say that ray tracing in current Nvidia hardware is a failure for the following reasons, most of them related to Nvidia proving that the money they make is more important than usable features for the gaming community.

1. segmented performance across all of the current models
2. pricing of the hardware is segmented as well, which stops developers dead in their tracks.

Segmentation is how the GPU business always works. How else do you think it could work?

3. Nvidia's failure to implement ray tracing in software/gaming engines.
4. by the time ray tracing is used in most titles, the hardware is already obsolete

3: False. Support is being added to major gaming engines. Huge changes like this don't occur overnight. You have to start somewhere.
4: That might be 5 or more years from now. Most cards are "obsolete" for the leading edge by that time.

This entire post seems (willfully?) ignorant to the reality of making a major technology change.
 
#1
You talk about ray tracing in the title, but then your topic is about Nvidia GPUs...
It seems like you are a bit confused about the concepts.

Ray tracing is a rendering method; it can be done on a CPU, on a GPGPU, or on dedicated hardware. The GPU you are talking about has dedicated hardware with the intention of making ray tracing faster.

Ray tracing is very math-heavy, which is why it was never used for gaming before: the old rasterization method was faster, at least for simple scenes.
But rasterization has its own limitations, and over the years we have added many features to it to help it grow without going crazy on resource demand.
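To give a feel for what "math heavy" means per ray, here is a minimal sketch of a standard ray/triangle intersection test (Möller-Trumbore) in Python. Dedicated RT hardware exists to accelerate exactly this kind of work, plus the traversal of the acceleration structure that decides which triangles to test in the first place:

[code]
# A minimal ray/triangle intersection test (Möller-Trumbore), just to show
# the kind of arithmetic every single ray has to do against scene geometry.

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:                 # ray is parallel to the triangle
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None    # distance along the ray, or None for a miss

# One ray against one triangle; a real scene tests millions of rays per frame
# against an acceleration structure built over millions of triangles.
print(ray_hits_triangle((0, 0, -1), (0, 0, 1), (-1, -1, 0), (1, -1, 0), (0, 1, 0)))  # -> 1.0 (hit)
[/code]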


Anyway, why does the implementation suck?
Every time I see this question, or people not understanding why the top-end graphics card can't run the newest game at ultimate settings, it shows how young this person is in tech.
This is not a new, unique thing.
It was the exact same thing when the first Voodoo cards came out.

Normal Quake running on a fast CPU was a lot faster than GLQuake running on the same CPU with a Voodoo card, but the latter looked better.
Would you still want to run your games solely on the CPU today? No, that would be crazy.
But if people back then had the same here-and-now focus you are presenting, it would have had them believe that 3D acceleration cards were stupid and a waste of money because they were not the fastest solution.

This is simply the first generation of something new coming out, with benefits and drawbacks, and over time the drawbacks are going to get smaller and smaller and the benefits bigger and bigger.

And we will all look back on the people freaking out on forums, dishing on ray tracing technology and thinking it would never take off, as the same kind of crazy people that believed 3D acceleration would never take off.



Simply said, it's a chicken-and-egg game. The first seed has been planted; let it grow before you get yourself all riled up.
 
I would bet that if there were a poll around here asking whether you would have bought an 1180 Ti for $799 or the 2080 Ti for $1,199, the only difference being the RTX features added to the 2080 and the 1180 having the same 35% bump in everything else, very few people would have bought the RTX line.

I think the question could answer itself.

There's the GTX 1660 Ti vs. the RTX 2060. It's too early on the Steam survey to know which has a bigger share (the 2060 is higher on the list for now), but in a few months we'll have a better picture.
 
3: False. Support is being added to major gaming engines. Huge changes like this don't occur overnight. You have to start somewhere.

Can't wait to see RTX Temple Run...

Overnight or not, RTX's one-year anniversary is coming up and no one but Unity and Unreal has shown any interest...
 
Yeah, but you can stretch it any way you want: console-only, newer-revision hardware (not Navi). Just a few more days until that event; after that we might know more.
All that AMD said was that if they implemented ray tracing, it would not be a segmented feature; all of the cards would be able to do the same amount.

Clearly some kind of absurd misinterpretation on your part.

Higher RT performance will require more transistors, thus more die area, and thus a higher price.

Trying to devote the same amount of die area to RT performance across a product line would make low-end cards too expensive and high-end cards under-powered.

RT performance will be segmented just like raster performance is. It really couldn't be otherwise.
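A rough, textbook-style sketch of that cost argument (the wafer cost, defect density, and die sizes below are all made up for illustration, and the yield model is the simple Poisson approximation):

[code]
# Why bigger dies cost disproportionately more: fewer candidate dies per
# wafer, and worse yield at the same time. All numbers are illustrative.
import math

WAFER_DIAMETER_MM = 300
WAFER_COST = 6000          # assumed cost per wafer
DEFECTS_PER_CM2 = 0.1      # assumed defect density

def cost_per_good_die(die_area_mm2):
    r = WAFER_DIAMETER_MM / 2
    dies_per_wafer = (math.pi * r**2) / die_area_mm2 \
                     - (math.pi * WAFER_DIAMETER_MM) / math.sqrt(2 * die_area_mm2)
    yield_rate = math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100)  # Poisson yield
    return WAFER_COST / (dies_per_wafer * yield_rate)

for area in (300, 450, 750):   # small, big, and huge dies (illustrative sizes)
    print(f"{area} mm^2 die -> ~${cost_per_good_die(area):.0f} per good die")
[/code]

With these made-up inputs, a die 2.5x the area costs roughly 4-5x as much per good chip, which is why extra RT silicon can't be sprinkled uniformly across a whole product stack.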
 
It "sucks" cause the price of entry is more then I want to spend on it. Call me whatevs, but if one could buy a RTX card in low to mid 200s, the market would forget who AMD was. However, majority of buyers still are at the price point maybe lower.
 
It "sucks" cause the price of entry is more then I want to spend on it. Call me whatevs, but if one could buy a RTX card in low to mid 200s, the market would forget who AMD was. However, majority of buyers still are at the price point maybe lower.
Most people already did forget
[Image: NVDA vs. AMD GPU market share chart]


That is the result of rehashing the same GPU architecture over and over again for seven years straight... when it was pretty much beaten the first time it appeared...
 