RTX 2060 Super or RX 5700 XT or RX 5700 ?

Discussion in 'Video Cards' started by Swagata, Oct 20, 2019.

  1. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,486
    Joined:
    Jun 13, 2003
    Without being specific with respect to unit counts, I am generally talking about end results, not the cores themselves. I do understand that Nvidia will have to scale different parts of the GPU to emphasize RT performance more, and that as a result rasterized performance will naturally increase as well.
     
  2. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,719
    Joined:
    Apr 22, 2006
    If you are talking about a 10X performance increase in RT, then you need 10X more RT cores and 10X more shader cores, which means 10X more transistors. That is totally impossible.

    The max plausible transistor increase for RTX 3000 might be around 30%, not 1000%. You aren't even going to get a 50% increase in RT performance out of that transistor budget, let alone 10X (1000%).
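
    Rough back-of-the-envelope numbers to show the size of the gap (the ~18.6B figure is the approximate TU102 / 2080 Ti die count; the 30% ceiling is speculation, not a roadmap):

    # Illustration of the transistor-budget argument, approximate figures only.
    tu102_transistors = 18.6e9

    plausible_next_gen = tu102_transistors * 1.3   # ~30% bigger budget
    ten_x_everything   = tu102_transistors * 10    # 10X RT cores AND shaders

    print(f"~30% budget: {plausible_next_gen / 1e9:.0f}B transistors")
    print(f"10X scaling: {ten_x_everything / 1e9:.0f}B transistors")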

    Though there is already a thread speculating on RTX 3000; that is a better place to discuss this.
     
    GoldenTiger and IdiotInCharge like this.
  3. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,420
    Joined:
    Jul 26, 2005
    I don't think the 3080 Ti would have a 512-bit bus. It's too expensive. Besides, Nvidia is so much more memory efficient that it doesn't really need as much bandwidth. 384-bit GDDR6 should be plenty.
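
    Rough math on why 384-bit GDDR6 already goes a long way, assuming the 14 Gbps chips current Turing cards use (faster bins would only widen the numbers):

    def peak_bandwidth_gbps(bus_bits, gbps_per_pin):
        # Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
        return bus_bits / 8 * gbps_per_pin

    print(peak_bandwidth_gbps(384, 14))   # 672 GB/s on a 384-bit bus
    print(peak_bandwidth_gbps(512, 14))   # 896 GB/s, but with a pricier PCB and memory PHY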
     
    IdiotInCharge likes this.
  4. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,420
    Joined:
    Jul 26, 2005
    Denoising is done by the tensor cores, not the shaders.

    AFAIK RT core performance is "linear", meaning it can increase with both more cores and higher clock speed. The "heavy" part of RT is precisely the ray tracing itself (duh!!); the shaders don't actually do much more than they do without RT.
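
    A toy sketch of that "linear" point; the rays-per-core-per-clock figure is a placeholder I made up, not an NVIDIA spec:

    # Illustrative only: ray-test throughput scaling with core count and clock.
    def rt_throughput(rt_cores, clock_ghz, rays_per_core_per_clock=1.0):
        return rt_cores * clock_ghz * rays_per_core_per_clock   # in Grays/s

    baseline = rt_throughput(68, 1.6)    # roughly a 2080 Ti-class configuration
    doubled  = rt_throughput(136, 1.6)   # 2x the cores -> ~2x the throughput
    print(baseline, doubled)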

    I won't claim any performance target for RTX as there are plenty of factors: core count, clock speeds, and architectural improvements, among other things.

    But I don't think your figures are realistic.

    We'll see in a few months...
     
  5. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,719
    Joined:
    Apr 22, 2006
    I would say denoising was intended to be done on the tensor cores.

    AFAICT, almost no one is doing that.

    Most are opting for their own custom temporal denoising done on the shader cores.

    Even the Q2 RTX update mentions it's doing temporal denoising.
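
    For anyone unfamiliar, the core of a temporal denoiser is basically an exponential moving average across frames. This is my own bare-bones sketch of that idea (real denoisers also reproject the history with motion vectors and clamp it against the current frame), not code from Q2 RTX or any shipping game:

    import numpy as np

    def temporal_accumulate(noisy_frame, history, alpha=0.1):
        # Blend the current noisy ray-traced frame into the accumulated history.
        # This is plain shader-style math; no tensor cores required.
        return alpha * noisy_frame + (1.0 - alpha) * history

    # Toy usage: a noisy constant signal converges toward its true value (1.0).
    history = np.zeros((4, 4, 3), dtype=np.float32)
    for _ in range(32):
        noisy = 1.0 + np.random.normal(0.0, 0.2, size=history.shape)
        history = temporal_accumulate(noisy, history)
    print(history.mean())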
     
  6. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,420
    Joined:
    Jul 26, 2005

    Wow, that would mean a whole lot of wasted space if that's true, as that is the "raison d'être" of the tensor cores. Maybe something is broken? I mean, tensor cores are supposed to be orders of magnitude faster for denoising than shaders.
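
    For a sense of scale, here are rough Turing peak rates for a 2080 Ti-class part (approximate figures, and peak matrix throughput only translates into denoiser speed if the denoiser is actually a neural network):

    shader_fp32_tflops = 13.4    # CUDA cores, FP32, approx.
    tensor_fp16_tflops = 110.0   # tensor cores, FP16 matrix math, approx.
    print(f"peak ratio: ~{tensor_fp16_tflops / shader_fp32_tflops:.0f}x")
    # Roughly an order of magnitude in raw peak rate, and only for work expressed
    # as dense matrix math; a hand-written temporal filter on the shaders never sees that ratio.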
     
  7. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,719
    Joined:
    Apr 22, 2006
    Yeah, the two reasons were supposed to be denoising and DLSS. Neither of which has really delivered.

    Control has the best DLSS so far, and it turns out they aren't running a network on the tensor cores in that version either, but a coded algorithm, which presumably also runs on the shader cores.

    Tensor cores have really been massively oversold.
     
    cybereality and Dayaks like this.
  8. Jandor

    Jandor Limp Gawd

    Messages:
    296
    Joined:
    Dec 30, 2018
    It's been established by developers that the RTX architecture is a failure because it's highly inefficient, probably made in a rush and bolted onto an already developed Turing architecture. There is a cache problem of some sort that keeps you from getting the full performance of all the cores working together. That said, Nvidia still did great, since their Turing part was well ahead of AMD's architecture, and hardware ray tracing for gaming wasn't even on AMD's radar, nor was it really their goal. Mind that the DXR-enabled driver for Pascal cards only shows how bad those cards are at the specific features Nvidia uses to run ray tracing on them. AMD may be a much better fit by using FP16 algorithms, which could put the Vega 56/64 cards quite close to the RTX 2060. Nvidia may completely rework its ray tracing architecture and implement it into the CUDA cores, which would bring flexibility. The new DXR version from M$ tends to go in that direction, with new APIs balancing work between raster and ray tracing, which Turing is incapable of.

    So, putting it all together, the real question is whether you need to spend more than $300 on a graphics card at all if you are not ready to spend much more in less than 6 months to jump into the next high end of gaming. There is better value in a card that only has to last 6 months if you just want to be in the mid-range of gaming by then. Meaning some $200 Ampere card will be much better in 6 months than any RTX 2060 S at $400 today. Same for the RX 5700/XT: too expensive for what they'll be worth in 6 months.
     
  9. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,719
    Joined:
    Apr 22, 2006
    This is your second hyperbolic and completely unsubstantiated post in the thread.

    You really need some solid evidence to back your outlandish claims.
     
    wyqtor, Maddness, GoldenTiger and 3 others like this.
  10. oldmanbal

    oldmanbal 2[H]4U

    Messages:
    2,071
    Joined:
    Aug 27, 2010
    5700 if you're going to flash the BIOS, otherwise the 5700 XT is the king right now. Anything else up to a 2080 Ti pounds sand value-wise, and since the new consoles are all going to be AMD, it will continue to eat up PC ports, like RDR2 just showcased.