RTX 3xxx performance speculation

Discussion in 'nVidia Flavor' started by Nebell, Oct 25, 2019.

  1. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006
Sure, but I am trying a bit of a thought experiment and streamlining a bit, and it doesn't look like Tensor cores really play much part, but yeah, they do eat some portion of die space.

I just wanted to address the point a lot of people bring up: that there will be a big boost in RT performance next release, with some thinking the RT performance hit will become negligible.

But in reality that doesn't seem feasible. RT performance, even for "pure" RT effects, still hits the shaders hard, and there is no free lunch: any extra transistor budget poured into RT performance is that much less of the transistor/performance budget put into shader performance.

    I really think an across the board approach is best.
     
    Manny Calavera likes this.
  2. spine

    spine 2[H]4U

    Messages:
    2,608
    Joined:
    Feb 4, 2003
    That may be true for the current architecture, but a future one may be less shader reliant.

    Turing feels like a bridge to a more RTX focused architecture.
     
    Manny Calavera and Maddness like this.
  3. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006
    How is an architecture going to be less shader reliant, when all games are primarily shader reliant, even RTX games?
     
    Armenius and GoldenTiger like this.
  4. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    7,306
    Joined:
    Feb 22, 2012
    Yeah, we can say that’s definitely not happening for the 3000 series. Maybe the 6000 series.

    I don’t expect much of a change because we would need RTX tech to be viable at the low/mid range. Anything under $500 can’t run RTX for a damn. There’s going to be no massive shift anytime soon.
     
    GoldenTiger and Snowdog like this.
  5. Lord_Exodia

    Lord_Exodia [H]ardness Supreme

    Messages:
    6,999
    Joined:
    Sep 29, 2005
    20% faster at Rasterization 80% faster at Ray Tracing.
     
  6. Stoly

    Stoly [H]ardness Supreme

    Messages:
    6,391
    Joined:
    Jul 26, 2005
An 80% performance increase in RT is not enough. You'll need at least twice the performance if you want to get anywhere near 60fps at 4K, which would be the holy grail.
     
  7. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    7,306
    Joined:
    Feb 22, 2012
    80% would get you there in a lot of RTX games.

I maintain 60Hz in basically all RTX games at 3440x1440, though I might have to knock down a setting or two. 4K is about 67% more pixels, so given the proper amount of VRAM, 80% faster RT would be great.

    I don’t know if I’d expect that high of gains though... but the tech is new, not impossible. A chiplet design for RT might do it but I haven’t seen any info hinting at that.
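A quick check of the pixel arithmetic here (an illustrative back-of-the-envelope sketch, not from the post): going from 3440x1440 to 4K works out to roughly 67% more pixels, so an ~80% RT speedup would plausibly cover the jump.

```python
# Back-of-the-envelope pixel math: how much extra work does the
# jump from ultrawide 1440p to 4K add per frame?
uw_pixels = 3440 * 1440    # 4,953,600 pixels
uhd_pixels = 3840 * 2160   # 8,294,400 pixels
extra = uhd_pixels / uw_pixels - 1
print(f"4K has {extra:.0%} more pixels than 3440x1440")  # → 67%
```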
     
    Last edited: Nov 5, 2019
  8. StormClaw

    StormClaw Gawd

    Messages:
    553
    Joined:
    Jun 10, 2009
When is the 3080 Ti coming out? Approximately?
     
  9. harmattan

    harmattan [H]ardness Supreme

    Messages:
    4,248
    Joined:
    Feb 11, 2008
If you mean Navi 12, e.g. the 3800 XT, it's definitely in the pipeline and breadcrumbs have been turning up in drivers and support docs for months now, as well as comments from Lisa Su (albeit I've seen nothing yet appear in shipping manifests). All things considered, I'd guess Jan-March is most likely; late this year is very unlikely at this point.

    Performance is anyone's guess, but with the rumored 4096 SPs (64 CUs) on the 3800 xt, I'd guess somewhere around or just above 2080 Super. As for pricing, I'm guessing ~$650.
     
  10. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    7,306
    Joined:
    Feb 22, 2012
He was asking about the nVidia 3080 Ti, which rumors put at Q2 2020.

    I hope big Navi isn’t that expensive... it better match nVidia in features if it is.
     
  11. Thunderdolt

    Thunderdolt Limp Gawd

    Messages:
    257
    Joined:
    Oct 23, 2018
    I'm looking forward to updated Titans.
     
    AceGoober likes this.
  12. Thunderdolt

    Thunderdolt Limp Gawd

    Messages:
    257
    Joined:
    Oct 23, 2018
    Are you high?

    Please, be honest.
     
  13. noko

    noko [H]ardness Supreme

    Messages:
    4,359
    Joined:
    Apr 14, 2010
I would predict Nvidia will do a more traditional launch, with a 3080 and 3070 first, holding off on big Ampere; then the 3060 3-6 months later, and the 3080 Ti 6-9 months later. Unless Samsung can make big EUV 7nm chips right off the bat with decent yields, or Nvidia beats AMD to chiplet designs for GPUs, as in a separate RT co-processor. It's not only a new node Nvidia has to overcome, but also the performance of the very large 12nm chips already being made. So Nvidia can brag (marketing gimmick) that the 3080 performs the same or slightly better than a 2080 Ti with a $400 cost savings, and wreak pure havoc on AMD with price cuts on the 2080 Super and 2070 Super on down. Big Ampere can retain the high prices Nvidia set with Turing unless AMD can pull a rabbit out of thin air.
     
  14. Lord_Exodia

    Lord_Exodia [H]ardness Supreme

    Messages:
    6,999
    Joined:
    Sep 29, 2005
I hear you, but it's not a matter of it being enough or not. Nvidia will have to either make a massive GPU with an incredibly large die to double their RTX performance, or make a co-processor dedicated to RTX computing. Both options are very expensive and quite a gamble. Ray tracing is not guaranteed to be the future. Momentum is in its favor, but nothing is 100% certain yet.

I anticipate they will add more RT cores on their next GPU by giving a larger chunk of the die to RT cores. I think they might take this approach until RTX becomes the norm and solidifies itself as the future methodology.

Then possibly by the next GPU generation they'll be dedicating 80-90% of the die to ray tracing as rasterization phases out.

The good thing with this approach is they can go the opposite way if ray tracing doesn't take off as they anticipate, and allocate fewer RT cores on the GeForce RTX 4000 series.

I expect Nvidia to take careful baby steps and allocate a little more die size to ray tracing each time.

    So I predict something like this

    RTX3000 +80% Ray Tracing Boost +20% Rasterization Boost
    RTX4000 +90% Ray Tracing Boost +10% Rasterization Boost
RTX5000 Dedicated to Ray Tracing. Rasterization still possible, but 95-100% of the GPU will be fully dedicated to ray tracing performance. At this point most/all games coming to market will be fully ray traced, and 4K 240Hz and 8K 120Hz will be the holy grail of gaming.
     
  15. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006

Misses the point. Even pure RT effects use shaders a LOT. RT HW just detects intersections. You still need to shade those pixels after you determine the intersection, and you still need to texture them.

If a pure RT frame is 40% RT testing and 60% shading, you could have a 200% RT boost and get less than a 40% FPS boost.
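The arithmetic behind that claim is essentially Amdahl's law. A minimal sketch, using the hypothetical 40/60 frame split above:

```python
# Amdahl's-law sketch: speeding up only the RT portion of a frame.
# The 40/60 split and the 3x RT speedup are illustrative numbers.
rt_frac, shade_frac = 0.40, 0.60
rt_speedup = 3.0  # a "200% boost" means RT work runs 3x faster

old_time = rt_frac + shade_frac               # normalized frame time = 1.0
new_time = rt_frac / rt_speedup + shade_frac  # shading time is unchanged
fps_boost = old_time / new_time - 1
print(f"FPS boost: {fps_boost:.1%}")  # → FPS boost: 36.4%
```

The shading portion caps the gain: even an infinite RT speedup would only yield a 1/0.6 - 1 ≈ 67% FPS boost on this split.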
     
  16. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,063
    Joined:
    Jun 13, 2003
    Ray tracing is absolutely the future. Refinements like path tracing included, but we'll get to ray tracing and similar before we get to whatever follows ray tracing.

It really depends on how well they can optimize things per generation; we might see more consolidation of cores like we saw with the 8800 GTX for DX10.

    Note that raster rendering likely isn't going away anytime soon, nor are shaders. Quite likely the architectures we'll be looking at in a decade will still be a combination of rasterization and ray tracing, with traditional rendering techniques shifting more toward fillrate for higher render resolutions as well as various scaling and anti-aliasing effects while ray tracing will largely take over for the real-time effects that shaders currently perform.
     
  17. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,063
    Joined:
    Jun 13, 2003
    Perhaps we should define a goal: maybe 4k120 / 8.3ms frametimes in 10bit HDR, regardless of the mix of rasterization and ray tracing?
     
    Manny Calavera, N4CR and noko like this.
  18. Thunderdolt

    Thunderdolt Limp Gawd

    Messages:
    257
    Joined:
    Oct 23, 2018
There is currently no roadmap for the other parts of the system (primarily the monitor and the video bus) to support 8K120. That would be 120Gbps of data, with various transfer overhead on top of that. 4K240 has the same data-rate issue (64Gbps), with the additional issue that no 240Hz panels exist. Heck, even a true 120Hz panel with response times fast enough that frames don't bleed into each other is still some time away.
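A rough sketch of the uncompressed-bandwidth arithmetic behind those figures, assuming 30-bit (10 bpc) color and ignoring blanking and encoding overhead (the 64Gbps figure quoted for 4K240 corresponds to 32 bits per pixel):

```python
# Uncompressed video bandwidth: width x height x refresh x bits-per-pixel.
# 30 bpp = 10-bit-per-channel HDR; real links also carry blanking/encoding
# overhead, so actual link requirements are somewhat higher.
def gbps(width, height, hz, bits_per_pixel=30):
    return width * height * hz * bits_per_pixel / 1e9

print(f"8K120: {gbps(7680, 4320, 120):.0f} Gbps")  # ~119
print(f"4K240: {gbps(3840, 2160, 240):.0f} Gbps")  # ~60
```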

    Over the next three generations from Nvidia, which is likely just 4 years, 4K120 is going to be the maximum goal for pixels per second even on their highest level cards. What will happen over the three generations is that the pixels are going to be massively higher quality - but there aren't going to be more of them.
     
    Manny Calavera and IdiotInCharge like this.
  19. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,063
    Joined:
    Jun 13, 2003
    I agree in general, but I also wouldn't disregard 5k and >8MP ultrawides with sub-120Hz maximum refresh rates as possibilities here. The panels are easy enough to make now, they're just waiting on a decent interconnect for gaming use so VRR may be employed.

    Also, OLED can do what we're looking for if LG is up to producing them or someone else gets their stuff together. I'd love to see a 5120 x 1600 43" OLED at 120Hz.
     
    noko likes this.
  20. harmattan

    harmattan [H]ardness Supreme

    Messages:
    4,248
    Joined:
    Feb 11, 2008
    Hah, completely thought Stormclaw was being coy with "3080ti", joking it was actually 5800xt. I didn't know that card was even rumored yet.
     
  21. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,713
    Joined:
    Feb 1, 2005
  22. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    7,306
    Joined:
    Feb 22, 2012
    Yawn. 6% faster? Come on nVidia hurry up with Ampere.
     
    AceGoober, Maddness and Geforcepat like this.
  23. Armenius

    Armenius I Drive Myself to the [H]ospital

    Messages:
    19,166
    Joined:
    Jan 28, 2014
    Dayaks, AceGoober, AlphaQup and 10 others like this.
  24. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006
Not for current 2080 Ti owners, but if they could deliver 10% more perf with a 2080 Ti Super and maybe offer an entry price of $899, that would be a big thing to a lot of non-owners.
     
    AceGoober, Riptide_NVN, Auer and 3 others like this.
  25. MangoSeed

    MangoSeed Gawd

    Messages:
    539
    Joined:
    Oct 15, 2014
    Q1 2020 is kinda late to be investing big bucks in Turing. Hopefully that rumor is just a bunch of crock.
     
    5150Joker, DF-1, Savoy and 1 other person like this.
  26. chameleoneel

    chameleoneel 2[H]4U

    Messages:
    3,021
    Joined:
    Aug 15, 2005
    I suspect that much like how geometry and pixel shaders were unified, RT will become a unified, dynamic feature, as well. So, RT won't be a completely separate duty.
     
  27. Thunderdolt

    Thunderdolt Limp Gawd

    Messages:
    257
    Joined:
    Oct 23, 2018
    This could all be nothing more than Nvidia waiting to let AMD shoot first.
     
  28. Lastan010

    Lastan010 Limp Gawd

    Messages:
    147
    Joined:
    Mar 2, 2017
I think the higher clock speed due to 7nm is going to be the game changer for the 3000 series, the same formula as when we went from the 9 series to the 10 series.
     
  29. Armenius

    Armenius I Drive Myself to the [H]ospital

    Messages:
    19,166
    Joined:
    Jan 28, 2014
    It's going to come from the increased transistor density, which at the same time makes it harder to cool, meaning clock speed may even go down compared to 16nm. We're talking about ~50 billion transistors in the same amount of die area the 2080 Ti has for 18.6 billion.
     
  30. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006
    I think you will be disappointed. 7nm (10nm Intel) hasn't really seen much clock speed boost. Intel actually went backwards. AMD got a few percent.
     
    5150Joker and IdiotInCharge like this.
  31. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    12,063
    Joined:
    Jun 13, 2003
    Intel's 10nm is a self-admitted romeo fox, and Nvidia usually gets more than AMD out of new processes...

    But yeah, we're not likely to see a large deviation.
     
  32. Skott

    Skott 2[H]4U

    Messages:
    3,986
    Joined:
    Aug 12, 2006
    Well normally I would say 25%-35% game dependent but if what Lisa Su says is true and AMD does go after Nvidia on the top gpu end next year then my normal estimation could be low. After all unlike Intel Nvidia will see this coming and could decide to move the bar much higher. Timing could matter though. Word is 3xxx comes first half 2020. Does AMD challenge first half or second half? Does Nvidia do their normal increase or fight AMD early? That is yet unknown. :::prays::: Please gpu gods let there be a gpu war!! We consumers need it badly!!
     
  33. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    7,306
    Joined:
    Feb 22, 2012
    We’re getting a 50 billion transistor GPU? Or is that just an example?
     
  34. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006
In practice:
16nm (and 12nm FFN) TSMC is ~25 million transistors/mm^2
7nm TSMC is ~40 million transistors/mm^2

40/25 * 18.6 = 29.8 Billion

But there is ZERO chance of even that. 1.6 times the transistors will cost close to 1.6 times as much money, on top of an already ludicrously expensive die.

I'd expect closer to 20 Billion to keep costs under control.
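The same scaling, spelled out (the density figures are the approximate numbers quoted above):

```python
# Density scaling: how many transistors fit in a TU102-sized die at 7nm?
# Density figures are the approximate values quoted in the post.
density_16nm = 25e6   # transistors/mm^2, TSMC 16nm / 12nm FFN
density_7nm = 40e6    # transistors/mm^2, TSMC 7nm
tu102_transistors = 18.6e9  # RTX 2080 Ti (TU102)

same_area_count = density_7nm / density_16nm * tu102_transistors
print(f"{same_area_count / 1e9:.1f} Billion")  # → 29.8 Billion
```

Same-area transistor count scales with density, but wafer cost per transistor barely drops on this node, which is the post's point about cost.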
     
  35. MangoSeed

    MangoSeed Gawd

    Messages:
    539
    Joined:
    Oct 15, 2014
    In that case let’s hope the rumors of bargain basement 7nm prices at Samsung are true.
     
  36. N4CR

    N4CR 2[H]4U

    Messages:
    3,969
    Joined:
    Oct 17, 2011
    Dem newfangled fuel injected voxel stuyff ya earin'me!?
     
  37. 5150Joker

    5150Joker 2[H]4U

    Messages:
    3,216
    Joined:
    Aug 1, 2005
Voxel, the mythical unicorn we've all dreamed about. I think everyone is putting too much emphasis on RT's importance in the near future. Unless the PS5 can do RT that blows your panties off at 60 fps, don't expect devs to put much effort into it for the next 5-10 years. Give me a gigantic leap in raster performance; I couldn't care less for RT right now or even 5 years from now. Shit, some of you old bastards on this forum won't even be alive by the time RT is truly ubiquitous lol!!

    If Ampere is another overpriced dud like Turing, I'm definitely going with whatever AMD has unless they really screw up too. Then I'll have to cry in a corner and hope Intel pulls a miracle out of its big blue ass.

    P.S. We should have a HardOCP betting pool on future gpu releases and the winner gets a nice chunk of change. My bet is AMD will catch up to nVidia by 2021-2022 across the board because of vulkan/dx12 and RT won't be as big of a deal as some think it will. The writing is already on the wall, nVidia can't pull driver magic anymore and they're only slightly better in efficiency now. Games like Call of Duty MW and RDR2 are indicative of what's to come from AAA devs.
     
    Last edited: Nov 11, 2019
    N4CR likes this.
  38. MangoSeed

    MangoSeed Gawd

    Messages:
    539
    Joined:
    Oct 15, 2014
    Nope still need to shade voxels.

    Shading is dynamic lighting + dynamic shadows + anything that isn't geometry (fog, clouds, fire, smoke etc) + post processing (dof, motion blur, AA)

    Raytracing doesn't reduce the need for shading in any way. It just changes what data you pass to the shader.
     
  39. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    9,499
    Joined:
    Apr 22, 2006
I agree efficiency is very close now, but not the power efficiency most point out (though that is also close now): the kind that really matters most in delivering enthusiast cards is perf/transistor, which helps on perf/$.

    If you look back at Vega vs Pascal:

Vega 64 (12.5 billion transistors) vs GTX 1080 (7.2 billion transistors): very similar performance, but AMD had to throw almost double the transistors at it to match NVidia. The Vega architecture was so far behind that there was no way they could aim for the top.

    Now look at Navi vs Turing:

5700 XT (10.3 billion transistors) vs RTX 2070 (10.8 billion transistors): very similar performance, and AMD has now completely erased the huge perf/transistor deficit.

    So it looks like the most even playing field in a LONG time. AMD can challenge for the top again. Though there are some caveats:

1) AMD doesn't have RT HW yet, so it isn't paying the "RT tax". Regardless of what anyone thinks of RT's importance, it is pretty much table stakes going forward, and it is expected that AMD's next high-end GPU will have RT HW, so AMD will start paying the "RT tax".
2) AMD is already benefiting from the move to 7nm; NVidia isn't. I don't expect huge gains from 7nm, but it is a benefit NVidia has "in the bank".

So AMD has a penalty yet to come (RT tax), and NVidia a benefit yet to come (7nm shrink).

Mitigating AMD's pending RT tax somewhat is that it looks like NVidia placed a bad bet on Tensor cores. They were supposed to be used for two things: denoising RT and DLSS. Both are largely a bust, so AMD can likely leave out Tensor cores and just have RT intersection-testing cores. Fewer wasted transistors.

My expectation for the immediate future (2020) is that AMD will draw blood first:

The RX 5800, or whatever it is called, will be RTX 2080-beating, perhaps a bit behind the 2080 Ti, but at a much better price, and it will include AMD RT HW. It will be extremely well reviewed and market success will follow.

    Though I expect that soon after NVidia will launch RTX 3000 on 7nm which will tip the balance back.

2021/2022 gets murkier. AMD might edge more wins in 2021 with a response to RTX 3000, which NVidia will be riding until 2022, when RTX 4000 arrives to shift the balance back.

Bottom line: I expect much more of a race than we have had in a while, primarily owing to AMD closing the perf/transistor gap, which allows them to aim high again.

    I really don't expect much from Intel in this time frame. I certainly won't be a beta test for Intel GPUs.
     
    DanNeely and 5150Joker like this.
  40. Auer

    Auer Gawd

    Messages:
    881
    Joined:
    Nov 2, 2018
    Besides pricing and 7nm AMD still brought a knife to a gunfight.

    Top 3 cards are still Nvidia.

I think it's going to be a while still before we see a sub-$800 AMD card with 2080 Ti-beating specs. AMD finally came close to the 1080 Ti this year...

    RTX will play a role if price differences level out between AMD and Nv.