Why does raytracing suck so much, is it because of nvidia's bad hardware implementation?

Discussion in 'Video Cards' started by Peppercorn, Jun 7, 2019.

Thread Status:
Not open for further replies.
  1. Peppercorn

    Peppercorn Limp Gawd

    Messages:
    259
    Joined:
    Dec 8, 2016
    No need to compare it to anything. The added bloat is another Nvidia tax on their already overpriced cards. If raytracing were usable without tanking performance, an argument could be made as to how useful the implementation is. I guess. But as it stands, the ballooning of the die is something consumers are paying for, even though raytracing on all but the highest-end card is virtually useless without serious compromise to the experience. And even then, it's really only good for screenshots and stills. Nvidia's attempt at raytracing made it clear that the industry is not ready, nor are they.
     
  2. Aireoth

    Aireoth 2[H]4U

    Messages:
    2,918
    Joined:
    Oct 12, 2005
    It's expensive and I don't own a 4K monitor? Or maybe you just wish AMD had a 1080 Ti killer lying around, let alone a 2080 Ti.

    RT was always a first-gen gimmick; it will be something one day, but you're looking for the result of the bottom of the ninth when the first pitch has just been thrown.

    No need to compare, my butt. If you're the first out, you're the one that gets to show how it can be done, you're the only one that can really do it, and everyone else gets to stare at your rear for a while.
     
    Maddness and amenx like this.
  3. idiomatic

    idiomatic [H]Lite

    Messages:
    72
    Joined:
    Jan 12, 2018
    It's got to keep you up at night if you are designing consoles... when will everything shift to ray tracing? Will it ever? Will it happen the year after we launch our hardware and make us look like idiots? If we do a full ray-tracing console, will it be a bit too early and not hold up?

    Will VR screw us?
     
  4. noko

    noko [H]ardness Supreme

    Messages:
    4,342
    Joined:
    Apr 14, 2010
    AMD implementations? How will AMD address DXR - huge monolithic chips like Nvidia, or something chiplet-like, such as DXR coprocessors? There are different ways of designing a bridge, so to speak; as long as it gets the intended traffic across, all is good. I do not see AMD going the big-die route. Now if AMD can give options - say you buy the GPU for $400 and, if you want DXR (ray tracing), pay another $200 or whatever for a coprocessor card or a coprocessor on the card itself - you can have your cake and maybe eat it with some extra frosting later. Another interesting option is an interposer with 3 HBM stacks (768-bit bus) and one DXR coprocessor stack, or even multiple ASICs for DXR. Who thinks AMD will shift to very large dies? I don't.

    With smaller dies you get more per wafer, which means you don't have to consume as much wafer production for your product - TSMC may not have the capacity to supply the chips needed when they are large, but can when they are small. DXR coprocessors could also be manufactured at different fabs, on different process nodes, etc. It is a flexible design that can bring far more computational ability to bear.
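    To put rough numbers on that (back-of-the-envelope: the 300 mm wafer is standard, the die areas are just illustrative, and the formula is the classic first-order estimate that ignores defect yield):

    Code:
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
        """First-order estimate: wafer area / die area, minus a
        correction for the partial dies lost around the wafer edge."""
        radius = wafer_diameter_mm / 2.0
        return int(math.pi * radius ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

    print(dies_per_wafer(754))  # ~69  candidates: one TU102-sized monolithic die
    print(dies_per_wafer(450))  # ~125 candidates: a mid-size GPU with no DXR hardware
    print(dies_per_wafer(150))  # ~416 candidates: a small DXR coprocessor die

    Smaller dies also yield better, since one defect kills less silicon - that is the other half of the chiplet argument.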

    Intel was able to do ray tracing on their CPUs last decade:
    https://everipedia.org/wiki/lang_en/Quake_Wars:_Ray_Traced/

    All the coprocessor has to do is compute how much intensity the light has per pixel, the color bounce/shift from other objects, shadow/GI information, caustics, refraction... then send that to the rasterizer to weigh the textures/colors against the light information. It is not rendering the scene, and it is not looking at textures other than normal-map/bump-map information for its calculations, meaning the bandwidth does not have to be that great. As far as the rasterizer is concerned, it is a super-dynamic lightmap, shadow map, etc.
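    A minimal sketch of that split, in Python-as-pseudocode (every name here is hypothetical - there is no such driver API - it only illustrates the dataflow):

    Code:
    import numpy as np

    WIDTH, HEIGHT = 1920, 1080

    def dxr_coprocessor_pass(scene, lights):
        """Hypothetical coprocessor job: per-pixel light intensity,
        color bounce/shift, shadow/GI terms, caustics, refraction.
        It reads geometry plus normal/bump data only - no full
        textures - so the link back does not need huge bandwidth."""
        light_buffer = np.ones((HEIGHT, WIDTH, 3), dtype=np.float32)
        # ...ray traversal, bounces and shadow rays would go here...
        return light_buffer

    def rasterizer_pass(albedo, light_buffer):
        """To the rasterizer the result is just a super-dynamic
        lightmap: weigh the textured color by the light terms."""
        return np.clip(albedo * light_buffer, 0.0, 1.0)

    albedo = np.random.rand(HEIGHT, WIDTH, 3).astype(np.float32)
    frame = rasterizer_pass(albedo, dxr_coprocessor_pass(None, None))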

    There is something I thought Nvidia would do: render the lighting at, say, 30 FPS and interpolate it to the actual frame rate. It won't be as accurate, but it would probably be hard to tell the difference as well. Just some thoughts - a toy sketch of the idea follows.
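    Blending two low-rate light buffers could be as simple as this (a toy sketch; a real engine would reproject with motion vectors, and would have to extrapolate from the latest buffer, since the "next" one does not exist yet):

    Code:
    def blend_light_buffers(prev_light, next_light, t):
        """Linear blend between two ray-traced light buffers computed
        at a low cadence (e.g. 30 FPS) to approximate the lighting of
        in-between frames rendered at full rate. t runs from 0 to 1."""
        return (1.0 - t) * prev_light + t * next_light

    # Rasterizing at 90 FPS over a 30 FPS lighting cadence means two
    # approximated frames (t = 1/3 and t = 2/3) per ray-traced result.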
     
    amenx likes this.
  5. Alienslare

    Alienslare Limp Gawd

    Messages:
    160
    Joined:
    Jan 23, 2016
    It will definitely take more time to implement; the way I see it, DirectX is behind schedule.
     
  6. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    YOU!!!

    Do you STILL claim (falsely) that RTX is 50% of the die?!


    Either you LIED...or you are so ignorant about the topic you should not post...so which is it?


    And you even posted this:
    If you think screen-space reflections are better than raytracing...you must be trolling?
    This is FarCry 5:


    This is Battlefield V:


    Posting crap is easy....backing it up...much harder...


    You might want to read up....RTX is not what is making the Turing dies "large"...
     
    Last edited: Jun 9, 2019
    GoldenTiger and amenx like this.
  7. dgz

    dgz [H]ardness Supreme

    Messages:
    5,320
    Joined:
    Feb 15, 2010
    Is that video supposed to be impressive? To me ray tracing is more about global illumination. Everything else is secondary.
     
  8. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Nope, just debunking OP's wild claims...read the thread.
     
    GoldenTiger likes this.
  9. noko

    noko [H]ardness Supreme

    Messages:
    4,342
    Joined:
    Apr 14, 2010
    To have RTX it takes more than just tensor and ray tracing cores; you also have to have the logic, communication lanes, etc. to support the new features.

    GP102 (used in 1080 Ti), 16 nm node, die area 471 mm², 3584 shader units
    TU102 (used in 2080 Ti), 12 nm node, die area 754 mm², 4352 shader units enabled

    4352/3584 = 1.21 -> the 2080 Ti has 21% more shaders, texture units, etc. than the 1080 Ti

    Die-size-wise, 2080 Ti vs 1080 Ti is 754/471 = 1.60 times the size. 21% more shaders but 60% bigger overall. The memory controller and support for 11 GB of memory should be about the same. So where did all the increase in die size come from?

    TU102's 12 nm node has a potential transistor density about 20% higher than the GP102's 16 nm node. In other words, if the GP102 were built on the 12 nm process you would be looking at a die size of about 377 mm², making the real difference 754/377, a factor of 2!

    If one were to add shaders/texture units to the GP102 (1080 Ti) to equal the 2080 Ti - no tensor or ray tracing core crap - and put it on TSMC's 12 nm node, what size would it be?
    • 1.21 x 471 mm² ≈ 570 mm² on the 16 nm process
    • on the 12 nm process: 570 - (0.2 x 570) = 456 mm² - smaller than the current 1080 Ti GPU
    Of course Nvidia improved the processing ability of the shaders for Turing, so nothing is exactly proportional - but the raytracing ability added a hell of a lot of complexity to the chip, way more than the ~21% increase in shader resources would explain. Yes, if one just looks at the tensor and RT cores in isolation they may not appear to increase the size that much, but those are not just isolated blocks of transistors; the whole chip needs to support their functions, adding even more transistors, caches, buffers etc. to make it all work.

    Now the math indicates around a 50% increase in die size, a factor of 2 overall versus a shrunk GP102, to support all the new features. The arithmetic is collected below.
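    Collecting the steps in one place so they are easy to check (die areas and shader counts as quoted above; the ~20% density gain for 12 nm is the assumption the whole estimate hinges on):

    Code:
    gp102_area, gp102_shaders = 471.0, 3584   # 1080 Ti, TSMC 16 nm
    tu102_area, tu102_shaders = 754.0, 4352   # 2080 Ti, TSMC 12 nm

    shader_ratio = tu102_shaders / gp102_shaders  # 1.21 -> +21% shader resources
    area_ratio = tu102_area / gp102_area          # 1.60 -> +60% die area

    # Assume the 12 nm node packs ~20% more transistors per mm^2:
    gp102_shrunk = gp102_area * (1.0 - 0.20)      # ~377 mm^2 "12 nm GP102"
    pascal_big = gp102_shrunk * shader_ratio      # ~458 mm^2: Pascal widened to
                                                  # 4352 shaders, no RT/tensor HW
    print(tu102_area / pascal_big)                # ~1.65: the leftover overhead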
     
  10. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Look at what they changed in Turing...the SMs really got reworked: unified cache, double the cache size...I know people like to whine about everything today...but whining about the die-size impact of raytracing...that is a retarded whine...the Turing CUDA cores are 20% more efficient than Pascal's (at the same Hz)...I guess that happened via "free transistors"?
    Besides, look e.g. at the TU116-400-A1 (it has no RT cores and no Tensor cores)...RT cores are pretty small compared to the rest of the stack.
    And we are nowhere near the retarded claim of "50% die space"...
     
    Armenius and GoldenTiger like this.
  11. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,348
    Joined:
    Apr 22, 2006
    Another way to look at it: shaders/mm².

    I did a comparison of 10 series, 16 series, 20 series.

    There appears to be about a 15% penalty for plain Turing (16 series, without RTX), and about a 30% penalty for Turing RTX...

    So I get approximately a 15% penalty for the RTX units themselves, over the 16 series without them.

    It seems no matter how we look at it, the RTX "Tax" is not that large.

    I am not sure where the assumption that RTX units used something like half the die came from.

    Edit: Extra note about the 16nm -> 12nm process change: ignore it. It's marketing. Check transistors/mm²; it's ~25 million/mm² for both Pascal and Turing. Numbers below.
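    The same check in numbers, using commonly cited die sizes, CUDA core counts and transistor counts for one Pascal/Turing pair without RTX and one with (treat all figures as approximate):

    Code:
    # (die area mm^2, CUDA cores, transistors in billions) - commonly cited figures
    chips = {
        "GP106 (GTX 1060)":    (200.0, 1280,  4.4),  # Pascal, 16 nm
        "GP104 (GTX 1080)":    (314.0, 2560,  7.2),  # Pascal, 16 nm
        "TU116 (GTX 1660 Ti)": (284.0, 1536,  6.6),  # Turing, no RT cores
        "TU104 (RTX 2080)":    (545.0, 3072, 13.6),  # Turing, with RTX
    }

    for name, (area, cores, xtors) in chips.items():
        print(f"{name}: {cores / area:.2f} shaders/mm^2, "
              f"{1000.0 * xtors / area:.1f}M transistors/mm^2")

    # GP106 -> TU116: ~15% fewer shaders/mm^2 (Turing rework, no RTX)
    # GP104 -> TU104: ~31% fewer shaders/mm^2 (Turing rework + RTX units)
    # Density sits around 22-25M transistors/mm^2 on both 16 nm and "12 nm".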
     
    Last edited: Jun 9, 2019
    Armenius and GoldenTiger like this.
  12. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Same place as "raytracing looks bad" came from...some cesspool of FUD...I have seen claims like this before, ranging from 1/3 of the die to 1/2 of the die...ignorance is the new black, it seems.
    And those 15% do seem to speed up RT by significantly more than 15% ;)
     
    Armenius and GoldenTiger like this.
  13. noko

    noko [H]ardness Supreme

    Messages:
    4,342
    Joined:
    Apr 14, 2010
    Yes indeed, Nvidia did some cool stuff, no doubt. The chip size grew far more than the performance gained in regular non-raytraced games, but there are a number of new features, like the ability to do floating point and integer math at the same time, plus half precision and so on. Add in the raytracing and AI and you have one expensive chip to make, but it works for the most part.

    I don't see AMD going the big-chip route; more likely the chiplet route with some screaming-fast separate chips, which could keep the cost down. I think it will be very interesting to see how big Navi does it, if at all.
     
    Armenius and Maddness like this.
  14. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    I am curious why AMD cannot (it seems) do what NVIDIA did for the Pascal SKUs...enable shader-based DXR on any of their current GCN architectures.
     
    Maddness likes this.
  15. noko

    noko [H]ardness Supreme

    Messages:
    4,342
    Joined:
    Apr 14, 2010
    They probably could, except it might be rather embarrassing if they did :D. If they could allow multiple cards to interact better to do raytracing, like dedicating one card to ray tracing, that would be cool. Even ProRender, AMD's own ray tracing software, runs better on Nvidia :LOL: without even using the ray tracing hardware, ouch! Still, the bottom line is yes, you can do raytracing, or some of it, with Nvidia Turing, but it does come with a price tag in cost as well as performance.
     
    Armenius likes this.
  16. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,348
    Joined:
    Apr 22, 2006
    I agree. I don't think AMD should offer DXR drivers until they have something with usable DXR performance.

    NVidia does it because it helps up-sell RTX cards, not because DXR drivers are of much real use on Pascal, so for NVidia the poor DXR performance of Pascal doesn't hurt.

    AMD has nothing to up-sell, and in fact, DXR drivers on Vega would probably only help sell RTX cards, so for AMD, poor DXR performance does hurt.
     
  17. blackmomba

    blackmomba n00b

    Messages:
    53
    Joined:
    Dec 5, 2018
    What about PhysX - is that a better comparison?

    Bits and pieces demoed here and there..
     
  18. Pulsar

    Pulsar Gawd

    Messages:
    793
    Joined:
    Jan 16, 2001
    What about when "3D accelerators" came into the picture? Hardware anti-aliasing? Hardware T&L? Bits and pieces demoed here and there too, until it became widely implemented. I don't own an RTX card and honestly, I wasn't expecting stellar ray tracing performance from a 1st-gen product. Somewhere down the line, they'll figure out how to do ray tracing even faster and more efficiently, but someone has to get the ball rolling first.
     
  19. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    You do know what raytracing is, don't you?
    AFAIR PhysX is one of the most used physics APIs?
    (Most people think PhysX is only GPU-accelerated, but that just means they need to read up)
     
    Armenius, GoldenTiger and Araxie like this.
  20. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    507
    Joined:
    Oct 15, 2014
    How exactly is the industry supposed to get ready without taking a first step? No matter when raytracing is introduced, it will always be expensive compared to rasterization. No magic in the future will change that, so I'd rather they invest early so I can experience RT nirvana before I die.
     
  21. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,460
    Joined:
    Feb 1, 2005
    Who says it has to be expensive? You got brainwashed by Nvidia into paying through the nose.
     
  22. Aireoth

    Aireoth 2[H]4U

    Messages:
    2,918
    Joined:
    Oct 12, 2005
    I hate the price increase, but there is no valid argument for your position. This isn't some brainwash job.

    All new tech is expensive for early adopters as companies look to recover their R&D costs; we pay now for your future enjoyment of better hardware. You are welcome.
     
  23. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Really?
    The sole reason WHY we have used hacks is that we have lacked the COMPUTATIONAL power to do true raytracing.
    I hope you are trolling...the alternative is that you are too ignorant to debate the topic?
     
    GoldenTiger and DooKey like this.
  24. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    It's like raytracing is making the fanboys post even more retarded stuff than normal...this thread is a fine example...I sense much fear of the DXR ;)
     
    GoldenTiger, Aireoth and Araxie like this.
  25. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,348
    Joined:
    Apr 22, 2006
    Expensive is a relative term. I don't think the RTX 2060 is that expensive.

    Though I would have phrased it as "there will always be a cost". There will also always be a shortage of software initially.

    So I agree with MangoSeed: best to get started as soon as reasonable. That way, when the second generation arrives, more refined and at a better price, there will actually be some games that use it, and developers will have stepped further up the RT optimization learning curve.
     
    Armenius likes this.
  26. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    If you follow what developers are posting one thing is clear: RayTracing (DXR) is here to stay.
     
  27. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,460
    Joined:
    Feb 1, 2005
    Expensive compared to launch prices of:

    GTX 460 - $229
    GTX 560Ti - $240
    GTX 660Ti - $300
    GTX 760 - $249
    GTX 960 - $199
    GTX 1060 6GB - $300

    So you're looking at $50 more than their previous mainstream parts right out of the gate. Granted, that's not as egregious as the high-end pricing (looking at you, $1199 2080 Ti FE), but $250-300 is really the cutoff for people looking at mainstream cards. The 1660 Ti is a much better card in that space. If the 2060 were a $299 card and the 1660 Ti a $249 card, I think they would sell a lot better.

    Also, the used pricing of Pascal cards still puts a damper on sales. A $250 1080 runs circles around the 2060 and the RTX features of the 2060 aren't fast enough to actually be used in any meaningful way.

    At the end of the day, I'm pro new technology. I just don't think I should have to pay a premium to be a beta tester...
     
  28. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    You post like someone unaware of the rising cost per die as process nodes shrink.

    The cost curve bottomed out a couple of generations ago...multi-patterning, tooling, design...all are going up.

    So you are whining about the wrong metric...good job!
     
    Armenius likes this.
  29. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,460
    Joined:
    Feb 1, 2005
    I'm whining about the only metric....$$$$$. Thanks for playing though.
     
  30. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    And it will keep going up...due to physics...so keep whining forever from now on...but don't blame the wrong reason :rolleyes:
     
  31. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,460
    Joined:
    Feb 1, 2005
    If the price keeps going up, eventually nobody is going to buy it. I don't think I'm way off base thinking that "mainstream" is around $250-300 at the top end, and more realistically $150-200. It doesn't matter what the die size is. It doesn't matter what the R&D cost is. It doesn't matter if it shows you fancy lighting or not. When you get above $250-300 you're moving past what people will pay (usually in the form of whatever Acer, Dell, etc. puts in their OEM boxes). The market for the $800-1000 computer is a lot bigger than for the $1500-2000 one.
     
  32. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Sure, we can choose to try to ignore physics.
    The last time that happened was Intel in the early 2000s...no 10 GHz P4s.
     
    Armenius and GoldenTiger like this.
  33. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,460
    Joined:
    Feb 1, 2005
    I don't know what you're arguing about anymore. Physics isn't the consumer's problem. The consumer wants to spend $X, and when Nvidia provides a product that's well above $X for whatever reason, the consumer won't buy, or will buy in smaller quantities.
     
  34. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    You post like someone who thinks manufacturers are above the laws of physics...just saying.

    We didn't get 10 GHz Pentiums, Moore's law is dead, and fab costs are rising...those are facts you have to live with...like it or not.
     
    Armenius and GoldenTiger like this.
  35. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,460
    Joined:
    Feb 1, 2005
    Once again, PHYSICS IS NOT THE PROBLEM OF THE CONSUMER. If Nvidia or AMD can't put a compelling product in place of what already exists at a price point consumers are willing to pay, they will stick with what they already have. For comparison, see the slowdown in mobile phone sales as prices rise. People don't care WHY cell phone prices are rising (BOM, etc. - similar to your argument). They only care that prices are rising, and they choose not to upgrade as often.
     
  36. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Again, this is how the world looks...if you are AMD, Intel or NVIDIA.

    You keep whining...see what good that will do...you cannot divide by zero no matter how many fluffy feelings you feel entitled to...it doesn't matter what you FEEL is the right price...reality trumps your fluff.

    Now whine on....
     
    GoldenTiger likes this.
  37. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    507
    Joined:
    Oct 15, 2014
    Sigh. Math and physics say so.
     
    Armenius and GoldenTiger like this.
  38. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    IMHO this thread is dead.
    The OP posted a load of BS and seems unwilling to correct his false claims...aka he ran away from the pile of feces he dumped.
     
    GoldenTiger likes this.
  39. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    507
    Joined:
    Oct 15, 2014
    Phone sales are slowing down because they’ve hit a hard wall on diminishing returns. As a result people are upgrading less often and manufacturers are raising prices to milk the people who are still buying.

    We have many, many years to go before that happens with graphics. The upcoming console cycle will reset the bar again and people will upgrade.

    The current pricing situation is a little bit off the curve though. We need some killer midrange products to bring things back in line. And the only way that happens is through more competitive parts from AMD.
     
    Armenius likes this.
  40. Factum

    Factum [H]ard|Gawd

    Messages:
    1,863
    Joined:
    Dec 24, 2014
    Armenius likes this.