Why does raytracing suck so much? Is it because of Nvidia's bad hardware implementation?

Discussion in 'Video Cards' started by Peppercorn, Jun 7, 2019.

Thread Status:
Not open for further replies.
  1. DooKey

    DooKey [H]ardness Supreme

    Messages:
    7,930
    Joined:
    Apr 25, 2001
    Pretty much par for the course when new rendering techniques/standards come out. You have to start somewhere.
     
  2. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,250
    Joined:
    Jun 13, 2003
    See: every DirectX release. Vulkan. Mantle never got that far.

    The chicken-and-egg problem is real; Nvidia is getting railed on for jumpstarting the process by making the investment and taking the risk.

    AMD... is just going to lose marketshare by choosing to not keep up.
     
    GoldenTiger, Armenius and Cr4ckm0nk3y like this.
  3. Furious_Styles

    Furious_Styles [H]ard|Gawd

    Messages:
    1,292
    Joined:
    Jan 16, 2013
    Eh, just to be clear, I'm not arguing for AMD or anything here. I'm about as neutral in that fight as one could be. I consider DX12 to be pretty much a failure as well.
     
  4. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,250
    Joined:
    Jun 13, 2003
    I'm not arguing against them; I just remain disappointed in them. We know they can do better.

    With respect to DX12, I feel that many, including myself, underestimated just how much of a change it represented (alongside Vulkan / Mantle). All three are really very similar, and are really very different from any DirectX or OpenGL (or other) API before them on the desktop. EA is on their fourth or fifth Frostbite-based game with an unflattering DX12 implementation, for example, while Doom (2016) shows us just what the potential of a low-overhead API can be.

    As with DX12 / Vulkan, it seems that we'll need to wait for the engine gurus to do near-complete renderer rewrites to get the performance that the technology promises into shipping consumer applications.
     
    Armenius likes this.
  5. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    As has been said repeatedly, and should be obvious, this kind of change takes time. But going forward, it looks like having it will soon be the norm. It should be extremely clear by now that ray tracing is here to stay.

    Supporting it now: less than 10.
    Announced: more than 10.

    https://www.rockpapershotgun.com/2019/06/11/nvidia-rtx-ray-tracing-dlss-games-confirmed-2/
    Note the list is already incomplete by at least one: Doom Eternal (Vulkan).
    https://www.dsogaming.com/news/doom-eternal-will-support-ray-tracing/

     
  6. Furious_Styles

    Furious_Styles [H]ard|Gawd

    Messages:
    1,292
    Joined:
    Jan 16, 2013
    When do you think DX12 will really show its promise? In 2020 perhaps?
     
  7. Xero717

    Xero717 n00b

    Messages:
    54
    Joined:
    Feb 28, 2011
    I wasn't really impressed by the Quake II RTX demo, and at 4K it was a stuttering mess on a 2080 Ti. Turned it down to 1080p and it ran alright.

    I'm sure ray tracing is the future; it's just not ready yet. The 2080 Ti is a great chip, but the price and tech just aren't where they should be.
     
    Maddness likes this.
  8. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,250
    Joined:
    Jun 13, 2003
    In games that have been written for it, it already has. See Gears of War 4, for example, which ran better than expected years ago, even at 4K.
     
    Armenius likes this.
  9. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,250
    Joined:
    Jun 13, 2003
    If you were comparing it to literally anything else, you were doing it wrong.

    Quake II RTX is full ray tracing; there isn't any rasterization going on, and that's why it's impressive. AAA releases will be doing hybrid ray tracing and rasterization for the foreseeable future and will see much better performance when developed with ray tracing in mind, on top of much better visuals.
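    To make the full-ray-tracing vs. hybrid distinction concrete, here is a minimal C++ sketch of the two frame structures. The stage functions and ray counts are purely illustrative assumptions, not any engine's actual pipeline.

    ```cpp
    // Minimal sketch (hypothetical stages, not any engine's real code) contrasting a fully
    // path-traced frame with the hybrid rasterization + ray tracing frame most AAA titles use.
    #include <cstdio>

    // Stub stages; a real renderer would do the heavy lifting here.
    void rasterizeGBuffer()      { std::puts("raster: depth/normals/albedo"); }
    void shadeWithRasterLights() { std::puts("raster: direct lighting"); }
    void traceRays(const char* what, int raysPerPixel) {
        std::printf("trace:  %s (%d ray(s)/pixel)\n", what, raysPerPixel);
    }
    void denoise(const char* what) { std::printf("filter: denoise %s\n", what); }

    // Quake II RTX style: everything in the image comes from rays, so the ray budget is huge.
    void fullyPathTracedFrame() {
        traceRays("primary visibility + bounces", 4);   // illustrative ray count
        denoise("whole frame");
    }

    // Typical hybrid title: rasterize most of the image, trace only selected effects.
    void hybridFrame() {
        rasterizeGBuffer();
        shadeWithRasterLights();
        traceRays("reflections", 1);                    // often at reduced resolution too
        traceRays("shadows", 1);
        denoise("reflection/shadow buffers");
    }

    int main() {
        std::puts("-- full path tracing --");
        fullyPathTracedFrame();
        std::puts("-- hybrid --");
        hybridFrame();
    }
    ```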
     
    noko, Armenius and Maddness like this.
  10. ChadD

    ChadD 2[H]4U

    Messages:
    3,930
    Joined:
    Feb 8, 2016
    Traced light paths involve a lot of low-precision floating-point math (FP8 and FP16). You can calculate it on a higher-precision part like a CPU or a standard GPU "core" / "shader unit", but it's not very efficient.

    The NV solution was to take the tensor hardware they were already baking in for AI and use it to calculate the ray math, which is very logical. Optical lens-making calculations going back 30+ years have used tensor matrix math. With their latest, second-generation tensor hardware they allow for lower-precision math. It's logical to use that bit of hardware. Of course, it means using it eats up a lot of internal chip cache bandwidth; it's a ton of math taxing a shared cache that isn't all that big. I'm sure NV will tune it some more in their next chips, perhaps making some improvements to their cache system as AMD has done with RDNA.

    As for AMD, we are not exactly sure what their hardware solution will be... we do know from their own slides that their ultimate goal is to do full-scene ray tracing in the cloud. They are betting on streaming to bring real, true ray tracing to games. They do have an in-between plan with the consoles and Navi+ next year, and I believe we can already guess what their solution will look like. RDNA introduces wave32 computation, doing much the same thing NV is doing with tensors running at lower precision to perform more "simple" math, such as ray calculation (but also other lower-precision shader routines). Previous AMD cards (Vega/Polaris) use wave64 only... which, in cases of branching math (such as ray calculation), becomes very inefficient, potentially calculating very little per clock as the core waits on results to move forward.

    RDNA runs dual compute units that can run at wave32. So, much like NV running their tensor cores at FP8 and FP16 to calculate 2-4x the ray paths vs. a full tensor, AMD could use the standard shader method... and by running them at half precision, two at a time, they can in theory do four pieces of math per clock where their old cards would have been doing one. The advantage over the NV solution (at least on paper) is that all this math happens in the same engine that calculates regular shaders, so it should have a much lower impact on cache performance.

    The main issue with using shaders to calculate rays (as NV's drivers allow on, say, a 1080) is that the shaders bog down with lots of low-precision ray calculation. It's not efficient at all. AMD's RDNA may well solve that; AMD may only need to supply proper drivers to enable hardware ray tracing on RDNA. (I suspect their Navi+ with ray tracing support won't be much more than a driver update... right now, why provide that when it would only highlight how bad cards like Vega would be?) I find this method much more realistic... considering Sony and MS are saying ray tracing is coming to their next consoles. I don't suspect they will be adding tensor hardware... it's much more logical to let the current shader cores calculate rays. (The trick is speeding them up to make it happen... and what AMD is talking about with dual compute pipes capable of running at half precision should do exactly that.)
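    To make the "lots of simple math" point concrete, here is a minimal CPU-side C++ sketch of the textbook slab test a ray runs against an axis-aligned bounding box while walking a BVH. It is generic illustrative code, not NVIDIA's or AMD's implementation, and it says nothing about what precision either vendor actually uses; the takeaway is just that each individual test is a handful of multiplies and compares, repeated millions of times per frame.

    ```cpp
    // Textbook ray vs. axis-aligned bounding box "slab" test (illustrative only).
    // Each test is a few multiplies, subtractions, and comparisons; the cost of ray
    // tracing comes from running enormous numbers of these per frame.
    #include <algorithm>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    bool rayHitsBox(Vec3 origin, Vec3 invDir, Vec3 boxMin, Vec3 boxMax) {
        // Entry/exit distances along each axis (invDir = 1/direction, precomputed per ray).
        float tx1 = (boxMin.x - origin.x) * invDir.x, tx2 = (boxMax.x - origin.x) * invDir.x;
        float ty1 = (boxMin.y - origin.y) * invDir.y, ty2 = (boxMax.y - origin.y) * invDir.y;
        float tz1 = (boxMin.z - origin.z) * invDir.z, tz2 = (boxMax.z - origin.z) * invDir.z;

        float tmin = std::max({std::min(tx1, tx2), std::min(ty1, ty2), std::min(tz1, tz2)});
        float tmax = std::min({std::max(tx1, tx2), std::max(ty1, ty2), std::max(tz1, tz2)});
        return tmax >= std::max(tmin, 0.0f);  // hit if the intervals overlap in front of the ray
    }

    int main() {
        Vec3 origin{0, 0, 0};
        Vec3 invDir{1.0f / 0.577f, 1.0f / 0.577f, 1.0f / 0.577f};  // ray along the diagonal
        std::printf("hit: %d\n", rayHitsBox(origin, invDir, {1, 1, 1}, {2, 2, 2}));
    }
    ```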
     
    Last edited: Jun 14, 2019
    noko, andrewaggb, 4saken and 3 others like this.
  11. cjcox

    cjcox [H]ard|Gawd

    Messages:
    1,132
    Joined:
    Jun 7, 2004
    Raytracing sucks because it's raytracing. There isn't a magic bullet to make this work fast. It has only recently become (barely) possible in real time (by throwing more hardware at it), and it's usually not "complete" ray tracing even so.
     
    Armenius and Araxie like this.
  12. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    391
    Joined:
    Oct 15, 2014
    Please stop repeating this unless you can provide proof. Nvidia’s technical documentation clearly states that ray traversal and intersections are done using fixed function hardware.

    The internet really sucks.
     
    GoldenTiger and Snowdog like this.
  13. ChadD

    ChadD 2[H]4U

    Messages:
    3,930
    Joined:
    Feb 8, 2016
    Go do some reading of the NV white papers. RT cores are just tensors running at lower precision. (If you need further proof, go find some benchmarks of Volta cards running with RTX on.) I am not saying using tensors is a bad thing; I don't see anyone else shipping a consumer part with tensor cores. (Also, what difference does it make if we call the bit doing the math an "RT core" or an FP8/16 tensor matrix... other than, I guess, having to admit NV doesn't design anything for games alone anymore, which shouldn't be a surprise to anyone; no one designs for games alone anymore.)

    Marketing just really sucks.
     
    4saken and IdiotInCharge like this.
  14. Alienslare

    Alienslare Limp Gawd

    Messages:
    141
    Joined:
    Jan 23, 2016
    Same here, a lot of the blame lands directly on DirectX. Sometimes it seems as if Microsoft does this on purpose because of its Xbox console.
     
  15. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    10,250
    Joined:
    Jun 13, 2003
    It's mostly that supporting a platform as dynamic as the PC is just hard. Consoles are static and much easier to develop for, despite the lack of hardware resources.
     
    GoldenTiger and Maddness like this.
  16. ChadD

    ChadD 2[H]4U

    Messages:
    3,930
    Joined:
    Feb 8, 2016
    PC game development requires a crystal ball that you don't need if you're developing for a console.

    Game development takes 4-5 years, so early in development the game devs have to rub their crystal balls and guess what a mid-range PC is going to look like. Aim too high and you end up with a Crysis that looks insane and wonderful and that no one buys because they can't run it. Aim too low and you end up with a Daikatana that looks 2 or 3 years out of date.

    Developing for a console is much easier. It's also why we have gotten so many console ports over the last several years. Now that consoles are more PC-like... their targets are set by their specs.

    One advantage to streaming (to sort of get back to ray tracing)... developers will be able to aim at some serious server performance targets. It may trickle down to high-end PC gaming rigs, getting us games that really push limits on home hardware. Perhaps the average mass-market gamer won't be able to play those games without turning to streaming, but high-end gaming rigs might be able to pull off good performance locally. A future of Crysis-like, make-your-rig-cry games might be just a few years away. lol
     
    IdiotInCharge likes this.
  17. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    391
    Joined:
    Oct 15, 2014
    Yes, I've already read them, and the Ray Tracing Gems paper, which is how I know you haven't. Please stop spreading misinformation.
     
    Factum, Trimlock, GoldenTiger and 3 others like this.
  18. Nenu

    Nenu [H]ardened

    Messages:
    18,723
    Joined:
    Apr 28, 2007
    Any chance you can do the list with expected dates?
    That would be nice to see.
     
  19. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    Oh great, your personal pet conspiracy theory again.

    This is pure nonsense. You are arguing that NVidia is lying to everyone, pretending there are dedicated RT cores, when in reality they don't exist and they are just re-using Tensor cores.

    WTF would they do that? They wouldn't be able to keep that hidden very long, and so far no industry experts or competitors have called them on it.

    NVidia describes in reasonable detail what the very specific RT cores do on page 36 of this whitepaper:
    https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf


    Note that they even break down the two different specialized units in each core.
    RT cores are Not Tensor processors. These are two completely different dedicated HW units. They are clearly separate in the SM as well:

    [Image: Turing SM block diagram from NVIDIA's GeForce Editors' Day (Aug 2018) slides]


    Note that a Turing SM has 4 independent sub-cores. Each sub-core contains 2 Tensor cores.

    Sub-cores do NOT contain RT cores.

    Instead, it is clearly shown and documented that RT cores sit outside the sub-cores so they can operate independently, in parallel.

    In short, everything you claim is completely disproven by the whitepaper you claim you read.
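    For reference, here is a rough software sketch of the BVH traversal loop that this hardware accelerates. The two inner tests (ray vs. box, ray vs. triangle) correspond to the two specialized units the whitepaper describes; everything else here (the structs, the stubs, the stack loop) is an illustrative assumption, not NVIDIA's actual implementation.

    ```cpp
    // Rough, illustrative software sketch of BVH traversal. On Turing, the box tests and
    // triangle tests are what the whitepaper attributes to the RT core's two units; on
    // cards without RT cores this whole loop runs as a (large) shader program instead.
    #include <cstdio>
    #include <vector>

    struct Ray  { float origin[3], dir[3]; };
    struct Node {
        float boxMin[3], boxMax[3];
        int   left  = -1, right = -1;      // child node indices, or -1 for a leaf
        int   firstTri = 0, triCount = 0;  // triangles referenced by a leaf
    };

    // Stubs standing in for the two fixed-function operations.
    bool rayVsBox(const Ray&, const Node&)           { return true;  }
    bool rayVsTriangle(const Ray&, int /*triIndex*/) { return false; }

    // Classic stack-based traversal: test a node's box, descend on hit, test triangles at leaves.
    bool traverse(const Ray& ray, const std::vector<Node>& bvh) {
        std::vector<int> stack{0};                    // start at the root
        while (!stack.empty()) {
            int idx = stack.back(); stack.pop_back();
            const Node& n = bvh[idx];
            if (!rayVsBox(ray, n)) continue;          // bounding-box evaluation
            if (n.left < 0) {                         // leaf: run ray/triangle tests
                for (int t = 0; t < n.triCount; ++t)
                    if (rayVsTriangle(ray, n.firstTri + t))  // triangle intersection
                        return true;
            } else {
                stack.push_back(n.left);
                stack.push_back(n.right);
            }
        }
        return false;
    }

    int main() {
        std::vector<Node> bvh(1);                     // single-leaf placeholder tree
        Ray ray{};
        std::printf("hit: %d\n", traverse(ray, bvh));
    }
    ```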
     
    Armenius, DooKey, Factum and 6 others like this.
  20. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    6,804
    Joined:
    Feb 22, 2012
    While I agree with you, I am not sure why he even cares. They could use any sort of sorcery they want, and if it's the best, I'll buy it.

    If AMD creates a card twice as good I'll buy that too. I couldn't care less whether it's made from 7nm, dog shit, or pixie dust; all that matters is its performance / cost.
     
    Armenius, Algrim, GoldenTiger and 3 others like this.
  21. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    Exactly. I am not sure why he thinks it would matter so much that NVidia would need to hide it.

    If NVidia actually did get Tensor cores to do the bounding box tests and intersection tests, they would just say so, and everyone would be fine with that.

    But there is no reason to lie, and it is crystal clear from the whitepaper that these are completely different HW units, specialized for different functions, in different locations.

    It's rather baffling how/why he arrived at his unique conclusion.
     
  22. ChadD

    ChadD 2[H]4U

    Messages:
    3,930
    Joined:
    Feb 8, 2016
    Not sure why you guys think it bothers me... that NV is clearly using Tensor FP8 mode to calculate rays. YES, read what you posted about their white paper... about BVH acceleration and how it works. I don't care what marketing word NV wants to use... AMD, Intel, Apple, Samsung, and every tech company since forever has come up with really stupid marketing names.

    Whatever NV is doing behind the driver is anyone's guess. It's not open source... no game developer, no OS developer, nor anyone but NV knows what happens after the high-level (and, in Vulkan and DX12, some low-level) calls hit the NV driver. Just because they slap a rectangle at the bottom of the core diagram and slap the name "RT core" on it hardly means it's not part of the tensor unit. (Which it clearly is, lol.)

    Regardless, it's off topic and isn't important. I am not sure where I threw any shade on NV at all... I simply pointed out why their solution isn't perfect for real-time ray tracing... and unless someone can show me a benchmark where turning on max ray tracing in ANY shipping game doesn't tank performance by 30-50%, I'll change my mind right now and bow down to their fantastic engineering. ;)

    My ONLY point in bringing up NV's solution, whether we call it RT cores, low-precision tensors, full tensors for denoise, or whatever mumbo-jumbo marketing speak we want to use, WAS that AMD's solution appears to potentially be different but much the same.

    The issue with RT is that it requires lots of fast, low-precision math. We need to calculate tens and perhaps hundreds of thousands of rays in a scene... those do not need to be calculated at FP64. A CPU sucks at real-time ray tracing because it's designed to do more complicated math than ray tracing requires. Instead of doing 100 really complicated calculations, tracing requires tens of thousands of very simple ones. (CPUs don't have fancy ways to split their 4 or 8 or 32 real cores into double the number of half-precision units.) Regardless of whether any of us agree on calling NV's solution RT cores or Tensor+ or whatever other name you like... their solution is clear. They are breaking the math up into small bits, throwing them at an FP8/16 matrix, which they are calling BVH acceleration, and doing a metric ton of simple math really quickly. I HAVE NEVER claimed their solution was terrible... it's ingenious; their marketing, however, is pure BS.

    From what I have read about AMD's new RDNA stuff... it sounds to me like they are trying to solve the same issue. They are simply doing it with the shader cores directly, by allowing those shaders to operate at lower precision. So instead of doing 1,000 calculations at wave64 they could potentially do 2,000 calculations in the same clock at wave32 precision, which for ray tracing would still be more precision than is required. (However, NV does use tensors, per their marketing and engineering folks, for denoise work.) So perhaps, calculating at wave32, AMD wouldn't need as much extra denoise work vs. the even lower-precision calculation NV is apparently doing up front.

    It's all speculation, and we'll see when AMD starts talking about actual hardware ray tracing down the road. Looks to me like they will be using shaders that can operate at lower precision, which solves the problem in a different yet very similar way to NV's. I could be wrong; perhaps tensors and/or some other form of RT core is coming to Navi+ down the road... but that doesn't make a ton of sense to me, as it would be more than a simple "+" part, I would think.
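    To put rough numbers on the ray counts involved, here is a quick back-of-the-envelope C++ sketch; the per-pixel ray budget and frame rate are purely illustrative assumptions, not anyone's published figures.

    ```cpp
    // Back-of-the-envelope ray count: every pixel gets at least one ray, plus extra rays
    // for effects like shadows and reflections, and each of those rays is a small,
    // independent calculation. The inputs below are illustrative assumptions only.
    #include <cstdio>

    int main() {
        const long long width = 1920, height = 1080;  // 1080p frame
        const long long raysPerPixel = 3;             // e.g. 1 primary + 1 shadow + 1 reflection
        const long long fps = 60;

        long long raysPerFrame  = width * height * raysPerPixel;
        long long raysPerSecond = raysPerFrame * fps;

        std::printf("rays per frame : %lld (~%.1f million)\n", raysPerFrame,  raysPerFrame  / 1e6);
        std::printf("rays per second: %lld (~%.2f billion)\n", raysPerSecond, raysPerSecond / 1e9);
    }
    ```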
     
    Last edited: Jun 14, 2019
    noko likes this.
  23. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    At least I didn't waste as much time reading that nonsense as you did writing it. You really don't have a clue about this.
     
    Dayaks, DooKey, Factum and 2 others like this.
  24. ChadD

    ChadD 2[H]4U

    Messages:
    3,930
    Joined:
    Feb 8, 2016
    Last I will post on this... but in my mind, NV's engineering guys have been adding tensor cores to their parts for two generations now for AI. They were tasked with finding ways to make them work for games... so NV didn't have to just fuse them off on consumer parts, wasting a ton of money. So they threw a ton of ideas around, and ray tracing and DLSS made the most sense to pursue as things that could use tensor hardware. Great ideas, and I don't hate NV; their software/hardware engineers are clearly great... and yes, both those technologies are promising even if neither is perfect today.

    I just think the NV marketing dept has heard the QQing about how NV chips are no longer designed for gamers first. I mean, they didn't even sell gaming cards based on Volta, right? So calling the tensor mode doing BVH boundary math (which is a freaking math matrix, exactly what tensors are designed to accelerate) an "RT core" makes it sound like Turing was actually designed for gamers first. Which it wasn't. And it doesn't freaking matter. It's all marketing. Who cares whether it was designed for servers and AI work or not. Clearly NV's marketing wasn't wrong, though... it does seem to bother some people to suggest Turing isn't a gamer-first design. (And it doesn't matter.)
     
    Last edited: Jun 14, 2019
  25. ChadD

    ChadD 2[H]4U

    Messages:
    3,930
    Joined:
    Feb 8, 2016
    We likely won't know for 6+ months, until AMD or Sony or MS or someone starts talking about AMD's ray tracing solution. :)
     
  26. Dadebraafsie

    Dadebraafsie n00b

    Messages:
    47
    Joined:
    Feb 27, 2017
    AMD will be doing rays whenever the market's up for it.
    Even the 2080 Ti can't do current titles in full raytracing mode; it will have to get better by many factors.
    That being said, you have to start somewhere, just like AA was a huge performance hit when it came out.
     
    Maddness likes this.
  27. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    391
    Joined:
    Oct 15, 2014
    This I can agree with, but please don't pass it off as fact. The math behind raytracing is relatively easy to understand, but it's murder on current GPUs. If Nvidia or anyone else found a way to use tensors to trace rays fast, they would be shouting it from the heavens.
     
    Factum likes this.
  28. Imhotep

    Imhotep Gawd

    Messages:
    767
    Joined:
    Feb 12, 2014
    It is done exactly as described by ChadD. Marketing wants you to believe they spent 10 years developing it. Titan V does ray tracing with an equal performance loss to Turing, yet it has no RT cores... :D
    That is because it's done by Tensor cores, just like on Turing.
    Look at this: https://wccftech.com/titan-v-high-fps-battlefield-v-rtx/
    Turing is faster simply because it's a newer, tweaked arch, most likely with a few more tensor cores.
    Oh, and the Radeon VII is much sexier than the 2080 Ti or Titan RTX ;)
     
    Last edited: Jun 15, 2019
    ChadD likes this.
  29. Alienslare

    Alienslare Limp Gawd

    Messages:
    141
    Joined:
    Jan 23, 2016
    It's not as dynamic as we think. It feels like too much, but it shouldn't be. Developers follow the swing of DirectX, and that goes for hardware manufacturers as well. There hasn't been anything truly revolutionary, just gradual pinches and then the occasional dramatic one.

    And one thing I know: when a GPU series can support DX11 but not DX12, it feels ridiculous that it isn't hardware ready. The best example for me was Intel's Project Larrabee, where they explained that just by updating drivers it started supporting a newer DirectX version, which shows it's all software based, not hardware.
     
  30. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    This is how conspiracy theories start. People don't understand something, so they make up stories that fit their worldview, rather than challenge it. I just don't get what challenged you so much about how RTX actually works that you had to retreat to the alternate story you are making up.

    The blind leading the blind. :rolleyes:

    Not that WCCF is any kind of source, but did you read your own link?
    "The Titan V is probably this capable in ray tracing even without RT cores due to its sheer power. It has more shader units than the Titan RTX"

    As explained everywhere, in the absence of RT cores a large shader program is run, and nothing has more shaders than the Titan V. Also, BFV is a hybrid title that runs a variable amount of RT code.

    Now compare the lowly 2060 vs. the Titan V in the Port Royal RT benchmark. They are comparable, and the 2060 is just as fast, if not faster, despite having a fraction of the tensor cores the Titan V has.
    https://www.dsogaming.com/news/nvid...-mark-port-royal-rtx-2060-as-fast-as-titan-v/
     
    Last edited: Jun 15, 2019
  31. Factum

    Factum [H]ard|Gawd

    Messages:
    1,717
    Joined:
    Dec 24, 2014
    The amount of hogwash posted is hilarious... Raytracing must really have ruffled some feathers... but watch the tune change when AMD supports DXR ;)
     
    GoldenTiger likes this.
  32. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    Now that IS marketing.

    Both consoles are coming out with RT support next year; that means AMD is working on it, but they aren't ready yet.

    So they have to pretend it's the market that isn't ready, when they simply don't have the product.
     
    GoldenTiger likes this.
  33. Imhotep

    Imhotep Gawd

    Messages:
    767
    Joined:
    Feb 12, 2014
    The tune will not change. The AMD crowd does not care for ray tracing at $1,200 and 30 FPS... :D
     
  34. Maddness

    Maddness [H]ard|Gawd

    Messages:
    1,167
    Joined:
    Oct 24, 2014
    I see Doom Eternal is also going to support ray tracing. That, and support from the likes of Cyberpunk 2077, means it is starting to gain momentum.
     
  35. Imhotep

    Imhotep Gawd

    Messages:
    767
    Joined:
    Feb 12, 2014
    Are you high? How is the market ready? 4-5 games, and no one really uses this feature yet... haha
     
  36. Maddness

    Maddness [H]ard|Gawd

    Messages:
    1,167
    Joined:
    Oct 24, 2014
    If that's directed at me, then no, I don't take drugs, although sometimes it would probably help. Doom Eternal, Cyberpunk 2077, and the new Wolfenstein are some of the most anticipated upcoming games. I would say it definitely will gain traction.
     
    GoldenTiger likes this.
  37. Imhotep

    Imhotep Gawd

    Messages:
    767
    Joined:
    Feb 12, 2014
    Well, then you contradicted yourself. The market will be ready someday, and that is when AMD will support it... :D
     
  38. MangoSeed

    MangoSeed Limp Gawd

    Messages:
    391
    Joined:
    Oct 15, 2014
    That's fine, but it doesn't explain willful ignorance about how the tech actually works. The AMD crowd, as you call them, are free to stand on the sidelines and hate all day long.
     
    GoldenTiger and Araxie like this.
  39. Pieter3dnow

    Pieter3dnow [H]ardness Supreme

    Messages:
    6,790
    Joined:
    Jul 29, 2009
    I resent that characterization.
    Ray tracing is fine; if you want it, go for it, enjoy your RTX 2060 or RTX 2070 at low resolution and a lower frame rate. If that makes any of you happy, cool.

    What I resent even more is that calling it "ray tracing" is misleading, where it should be named:
    Nvidia ray tracing: the way you are meant to pay us (R).
    "Fast enough for ray tracing"? In reality the segmented performance is clearly holding it back. Yet with something like tessellation, where was the segmentation between cards within the same generation?
     
    noko likes this.
  40. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,809
    Joined:
    Apr 22, 2006
    As far as "The Market" not being ready, when does it ever make sense to sit back and let your competitor define the market? It's not a desirable position to be in. To suggest otherwise is spin.

    The simple reality is that AMD doesn't have HW ready, so they have to ignore it or spin it, and they are trying to do both. As soon as they have HW ready they will bring it to market, regardless of the state of "The Market".

    What matters much more than the sheer number of titles are the killer apps, or in this case the killer games.

    From what I see, Cyberpunk 2077 is the first killer app for ray tracing. It's probably the most anticipated game coming out in the next year. After the CP 2077 announcement I have seen several people say that they are going to get an RTX card now, and Doom Eternal will likely be another. That one will be interesting because they are using Vulkan and claim they are doing RT better than everyone else.

    If NVidia is essentially giving you similar perf/$ to AMD with the bonus of free RTX, and games like CP 2077 and Doom Eternal are coming to market with RT support, AMD is missing the boat. Only spin suggests otherwise.
     
Thread Status:
Not open for further replies.