AMD's next Navi GPUs could have the specs (and ray tracing) to beat Nvidia

Discussion in 'HardForum Tech News' started by Zarathustra[H], Mar 29, 2019.

  1. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    27,711
    Joined:
    Oct 29, 2000
    From the rumor mill over at Techradar we have a story about what the future of AMD GPUs may look like. It suggests that later this year we may see Navi 10, which will likely be a mid-range offering and might be used in consoles. More mid-range cards aren't exactly exciting.

    The interesting mention is Navi 20, which they say might feature ray tracing technology and be faster than Nvidia's 2080 Ti. The downside? It is suggested to be due a year after Navi 10, so we are talking late 2020? This sounds nice and all, and I am happy for Nvidia to have some high-end competition, but beating what Nvidia has now with a product that won't be available for over a year does not equal beating Nvidia.
     
    scojer, lostin3d and anthrex like this.
  2. Derangel

    Derangel [H]ard as it Gets

    Messages:
    17,331
    Joined:
    Jan 31, 2008
    If Nvidia keeps up the same 30%-ish gen-to-gen increase we saw with Turing, then having a Navi 20 that is better than the 2080 Ti wouldn't be that horrible. Maybe somewhere within 15-25% of the new top card. That said, I tend to treat all rumors of AMD GPU performance as nothing more than wild speculation. There have been too many cases of people spreading rumors and "leaks" claiming amazing performance from AMD cards over the last few generations, which makes it rather hard to believe anything.
     
    N4CR, lostin3d, Bawjaws and 4 others like this.
  3. Oldmodder

    Oldmodder Gawd

    Messages:
    658
    Joined:
    Aug 24, 2018
    Wouldn't it be stupid of AMD to try and do some form of hardware RT, just to have 2 different platforms?
    But if they have some form of general shiny multicolored sprinkles that can make the world a better place, I would take that in a heartbeat.
     
  4. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    9,625
    Joined:
    Jun 13, 2003
    That they'll have hardware RT is pretty much a given. Whether it'll be remotely competitive...

    Well, RT itself is very simple from a hardware (and software) perspective. AMD already has the software side figured out, and has for some time. But to be effective, they'll not only have to implement RT, they'll also need something like the denoising and upscaling that DLSS performs to help keep performance reasonable.
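
    For what it's worth, the "simple from a software perspective" part: the core of ray tracing is just intersection tests like the one below. This is a minimal Python sketch of the classic ray-sphere quadratic test, not anyone's actual implementation; the hard part vendors fight over is doing billions of these per second and cleaning up the noisy result.

    ```python
    import math

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the distance to the nearest hit, or None if the ray misses.

        `direction` is assumed normalized. Solves |o + t*d - c|^2 = r^2 for t.
        """
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c          # a == 1 for a normalized direction
        if disc < 0:
            return None                  # ray misses the sphere entirely
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None      # ignore hits behind the origin

    # Unit sphere 5 units down the -z axis, ray fired straight at it
    print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
    ```

    The whole debate is about how much of this (and the BVH traversal around it) gets fixed-function hardware versus running on the ordinary shader ALUs.
    
    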
     
  5. sirmonkey1985

    sirmonkey1985 [H]ard|DCer of the Month - July 2010

    Messages:
    21,166
    Joined:
    Sep 13, 2008
    Welp, we can only hope it performs well. At least having both camps using it may drive more developers to adopt the technology, and thus more improvements on the hardware side. Only time will tell though.
     
  6. Uvaman2

    Uvaman2 2[H]4U

    Messages:
    3,000
    Joined:
    Jan 4, 2016
    I am not sure how much official information has been out about Navi, but what I have gleaned is that it is a SMALL chip aimed at efficiency... NOT a "crown" contender.
    The way I see Navi going for a "crown" is if it's invisible-mGPU capable in some kind of chiplet modular configuration that no one expected.
    So barring a miracle, expect mid-range at best, no crown.
     
    gigaxtreme1 likes this.
  7. Nobu

    Nobu 2[H]4U

    Messages:
    2,910
    Joined:
    Jun 7, 2007
    Navi10 definitely will be (small, that is). No solid info on 20 yet afaik.
     
  8. Grimlaking

    Grimlaking 2[H]4U

    Messages:
    2,755
    Joined:
    May 9, 2006
    Why would no one expect that? Haven't we been discussing exactly that? Multiple Navi chiplets running in unison, each with xxx number of core xxx and xxx and xxx type. Wait, this is starting to look like a Pornhub advertisement.

    My point is that I and many others fully expect AMD to take the crown by applying lessons learned from the CPU side of the house. To not use that engineering advantage would be a mistake.

    Yes they will need to address the memory space in a new way to be competitive on the memory bandwidth front. But I think they can do this.

    And yes I just bought at 2080. Sigh...
     
  9. Uvaman2

    Uvaman2 2[H]4U

    Messages:
    3,000
    Joined:
    Jan 4, 2016
    Heheh

    I hope they will do invisible mGPU... However, there have been NO credible leaks on this (that I know of, anyway)... It makes me think it ain't happening... That said, invisible mGPU (with good scaling, obviously) would be such a massive, definitive, competition-blown-out-of-the-water advantage that it would be worth keeping under the tightest of lids... But AMD's lids aren't usually so tight that we wouldn't even have a hint by now (?).
     
  10. Bawjaws

    Bawjaws Limp Gawd

    Messages:
    434
    Joined:
    Feb 20, 2017
    So, to be cynical for a moment: the cards that AMD might be releasing in a year's time might be better than the cards that Nvidia released six months ago? And they might also include a feature that Nvidia were derided for including?

    Sounds like progress, to be sure.

    But more seriously: whatever the rumour mill churns out, we need AMD to do what they can to close the gap at the high end. I don't expect them to close it completely in one or even two generations, but if they can make steady progress then that's okay by me. We just need to temper expectations and wait for verifiable data to emerge (and, really sadly, that won't be on HardOCP :( )
     
  11. harmattan

    harmattan [H]ardness Supreme

    Messages:
    4,204
    Joined:
    Feb 11, 2008
    Exactly. This is what it's come to? We're supposed to be excited about AMD putting out a card in ~Q3 2020 that has the same features and performance as a card that NV released in Q3 2018? If this is the case, they better plan on releasing it at or below $700, since nV will almost certainly have something much faster out by then.

    Color me MEH. And this is from someone who is an AMD fan and loves his VII.
     
    c3k likes this.
  12. c3k

    c3k 2[H]4U

    Messages:
    2,087
    Joined:
    Sep 8, 2007
    I'd really like Navi to be good. I mean, I...REALLY...want Navi to be good.

    Low cost, low power, great performance. Easy peasy. ;)

    But, if Navi 10 gets announced with specs and it underwhelms, then the evil green will get my green. My GPUs are too long in the tooth. I passed on the 20xx generation in hopes that AMD would bring some gaming goodness: Navi is gonna make me decide which way to go. (I've got 2 or 3 GPUs ready to be updated. A friggin' GTX670 (!) is still in my HTPC. C'mon, man.)
     
  13. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,064
    Joined:
    Feb 1, 2005
    Midrange cards can be exciting depending on the price...If I could get 2060 performance for ~$200-250 I'd bite.
     
  14. RussianJ

    RussianJ [H]Lite

    Messages:
    67
    Joined:
    Feb 10, 2012
    Just give us a comparable performance card at slightly less msrp and I’ll be happy. Need something on the market to lower these prices
     
  15. Ziontrain

    Ziontrain n00b

    Messages:
    13
    Joined:
    Sep 13, 2018
    Seems like AMD is always 1-2 years behind Nvidia nowadays. They need to try the "beats 2080ti" and release it around the same time!
     
  16. techguymaxc

    techguymaxc [H]Lite

    Messages:
    91
    Joined:
    Jul 6, 2016
    There are really only 2 possibilities here.

    1) This RT solution will consist entirely of software that leverages the existing general-purpose shader ALUs.
    or
    2) There will be some fixed-function hardware in Navi dedicated to accelerating portions of the RT pipeline, but those portions will be broken and likely disabled - because AMD.
     
  17. Auer

    Auer Limp Gawd

    Messages:
    500
    Joined:
    Nov 2, 2018
    I really don't think AMD will release anything much cheaper than Nv.

    Things are already pretty well priced for the 1080p crowd.

    And anything in the 2070 segment and above is not going to drop significantly.
    Because it keeps selling at current prices.

    A lot of gamers I personally know have had no problems coughing up for 2080's etc...
     
  18. sirmonkey1985

    sirmonkey1985 [H]ard|DCer of the Month - July 2010

    Messages:
    21,166
    Joined:
    Sep 13, 2008
    The mistakes made with Vega set AMD years behind. Up until AMD put all their eggs in the Vega basket, they were usually 3-5 months ahead of Nvidia's releases after the HD3k series.

    Hopefully Navi isn't another dead-end architecture.

    Also agree with you, Auer. While those rumored prices were nice, I don't see them ending up anywhere close to them if they do in fact beat what Nvidia is currently offering.
     
  19. Krenum

    Krenum [H]ardForum Junkie

    Messages:
    15,364
    Joined:
    Apr 29, 2005
    I think by the time this comes out, Nvidia will have already launched Ampere.
     
  20. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,558
    Joined:
    Apr 22, 2006
    It feels like another set of rumors setting up for massive disappointments.


    Then you just have SLI/CF problems again, and I seriously doubt AMD will solve this problem before NVidia. Multiple CPUs have always been relatively easy; there are not really any lessons to be learned there.
     
  21. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    9,625
    Joined:
    Jun 13, 2003
    This is not an absolute; using the same interposer as in HBM (likely alongside HBM), interconnects can be much higher bandwidth, to the point that the driver could theoretically present multiple GPU dies as a single device. Now, I don't think this is beyond any company involved from a technical perspective, but the approach is likely more amenable to AMD's efforts, so we'd likely see it from them first.

    Making multiple CPUs work well for highly parallelizable code has taken work, but that is mostly behind us. Making them work for less parallelizable code is a nightmare, and game development is specifically one area of computer science that is slowly chewing through the problem. Fortunately, the rendering pipeline is one part that is nearly infinitely parallelizable.
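
    That last point in concrete form: every pixel of a frame can be shaded independently of every other pixel, so the work splits across any number of workers with no synchronization between them. A toy sketch (the shader formula and frame size are made up; real pipelines obviously do far more per pixel):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    WIDTH, HEIGHT = 64, 48

    def shade(pixel):
        # Stand-in for a real pixel shader: any pure function of (x, y)
        x, y = pixel
        return (x * 31 + y * 17) % 256

    def render(workers=4):
        pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # map preserves pixel order; no inter-pixel communication needed,
            # which is why the worker count can scale almost arbitrarily
            return list(pool.map(shade, pixels))

    frame = render()
    print(len(frame))  # one shaded value per pixel: 64 * 48 = 3072
    ```

    The multi-GPU mess isn't in the pixels; it's in sharing scene data and compositing frames across devices fast enough.
    
    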
     
    N4CR likes this.
  22. ChadD

    ChadD 2[H]4U

    Messages:
    3,820
    Joined:
    Feb 8, 2016
    Well, it sounds like Navi won't be a chiplet. It sounds like it will be more of a traditional GPU design.

    Having said that, if it is a chiplet, that is the point of chiplets... not needing any OS software to see more than one.

    AMD is using this with Ryzen to basically make 2 Ryzen chiplets run as one chip. The OS won't see one 4-core Ryzen and another 4-core Ryzen... it will talk to the control chip on the silicon, which will say "I have 8 cores." The same theory will one day work with GPU designs if Navi or its follow-up goes that route. One control chip talking to 2 or more GPU chiplets... the OS won't see multiple GPUs, just one device reporting both bits of hardware as one.

    It has been pointed out to me that AMD has stated Navi is not a chiplet, which if true is disappointing. It seems that is where the future will be... with low-end cards running one chiplet and high-end running 2, perhaps even 3. At some point, though, I would expect AMD will go that way, using what they have learned to increase their GPU performance while also reducing their costs. (Chiplets are much cheaper, as you basically end up fabbing lots of smaller, less complicated parts.)
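
    The reporting behavior described above can be sketched in a few lines. This is a toy model of the control-chip idea, with every name invented for illustration; it is not AMD's design, just the "OS sees one device" contract:

    ```python
    class Chiplet:
        """One bit of silicon carrying some number of cores."""
        def __init__(self, cores):
            self.cores = cores

    class ControlChip:
        """The only component the OS talks to; it fronts all chiplets."""
        def __init__(self, chiplets):
            self.chiplets = chiplets

        @property
        def cores(self):
            # The OS queries one device and gets one aggregate core count,
            # rather than discovering N separate devices to coordinate
            return sum(c.cores for c in self.chiplets)

    package = ControlChip([Chiplet(4), Chiplet(4)])
    print(package.cores)  # 8 - one "chip" as far as the OS is concerned
    ```

    The debate downthread is whether this aggregation is as cheap for GPUs (where work on different chiplets must stay frame-synchronized) as it is for CPU cores.
    
    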
     
  23. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,558
    Joined:
    Apr 22, 2006
    Of course, because they haven't solved the software/synchronization issues.

    Simply building chiplets won't solve the GPU software problem. You can do CPU chiplets because CPUs are trivial to join together by almost any method imaginable. GPUs are not.

    Synchronizing gaming GPUs is such a massive mess that they still can't even do it in the driver to make it completely transparent to games. Games pretty much still need to be reworked to support CF/SLI.

    It will probably be done eventually, but you need to move significant intelligence (SW) into the central synchronizing/interconnect chip, and of course you need massive interconnect bandwidth. Even then the design will still be slower than a monolithic one of similar specs, so it will only really make sense for replacing massive chips.
     
    Last edited: Mar 30, 2019
  24. ChadD

    ChadD 2[H]4U

    Messages:
    3,820
    Joined:
    Feb 8, 2016
    That simply isn't true. What you are talking about is software solutions to tie together two separate bits of hardware, each talking to its own PCIe lanes. That is not what chiplets do... and tying CPUs together is not trivial. You have to develop things like a super-fast Infinity Fabric to make it possible. The other solution in the CPU world is to add a ton of L3 cache so the core complex units have a buffer for communication back and forth.

    A chiplet-design GPU does NOT involve software of any kind. A controller chip would be part of the package along with the actual GPU cores, as the Ryzen 3000s will have... 2 or more CPU chiplets with a control chip talking to both. The OS only actually talks to the control chip... any commands given to the cores are routed through the control chip.

    There is nothing stopping GPUs from using chiplet designs. At this point it's the only logical solution going forward. If people think the 2080 Ti is priced insanely, just wait until a traditional monolithic GPU with another 20% bump in transistor count replaces it. Sure, smaller fabs make the chips physically smaller, but it's still billions of transistors that all need to work. Yields on everyone's top-end SKUs are terrible... they are not taking fully functioning Turing chips and neutering them to sell 2070s; those are chips that are not fully functioning. Chiplets solve one of the biggest issues right now for chip companies: yields. It's far easier to fab a 1-billion-transistor part and another 700-million-transistor part and package them together than to try and bake 1.7 billion transistors into one functioning chip.

    I guess I'm saying NV is likely to go the same route, if not with their first 7nm design then the one after. The single-chip road only leads to more and more expensive parts, because yields on 100% functional silicon get worse and worse.
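
    The yield argument checks out on a napkin. A sketch using the simple Poisson defect model (yield = exp(-area * defect density)); the defect density and die areas below are illustrative guesses, not real foundry data:

    ```python
    import math

    D0 = 0.2  # assumed defect density, defects per cm^2

    def die_yield(area_cm2, d0=D0):
        """Poisson model: probability a die of this area has zero defects."""
        return math.exp(-area_cm2 * d0)

    def cost_per_good(area_cm2):
        # Silicon cost per *good* die scales as area / yield, because the
        # defective dies consumed wafer area too
        return area_cm2 / die_yield(area_cm2)

    monolith = cost_per_good(7.5)        # one big ~750 mm^2 GPU die
    chiplets = 2 * cost_per_good(3.75)   # two ~375 mm^2 chiplets instead

    print(f"monolithic: {monolith:.1f} area-units per good part")
    print(f"2x chiplet: {chiplets:.1f} area-units per good part")
    ```

    With these numbers the two-chiplet version comes out roughly half the silicon cost per good part: a defect kills only the small die it lands on, and you bin the other chiplet with a fresh partner instead of scrapping (or neutering) a whole big die. Packaging and interconnect costs, which this sketch ignores, eat into that margin.
    
    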
     
  25. steakman1971

    steakman1971 2[H]4U

    Messages:
    2,433
    Joined:
    Nov 22, 2005
    I want to believe. Actually, I don't really care who is better. I just know that Nvidia needs competition to spur innovation.
     
    Auer likes this.
  26. GoodBoy

    GoodBoy [H]ard|Gawd

    Messages:
    1,267
    Joined:
    Nov 29, 2004
    What's the saying about crying wolf over and over...

    Yeah, I've heard this same story for years.

     
  27. lostin3d

    lostin3d [H]ard|Gawd

    Messages:
    1,965
    Joined:
    Oct 13, 2016
    Totally agree with you on every point. If this is true, it only maintains what for AMD has become a 2+ year cycle of catching up to x80 Ti's. If they really want to be competitive, they need to get it down to ~12 months, with at least a 20% lower price for the same tier.
     
  28. N4CR

    N4CR 2[H]4U

    Messages:
    3,616
    Joined:
    Oct 17, 2011
    Navi won't be MCM, and it certainly will still be GCN. I'm not touching AMD again until next generation, when GCN goes to the freezing works.
     
  29. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,558
    Joined:
    Apr 22, 2006
    Wrong. There have been multiple-CPU designs tied together every way imaginable, including just plopped together on the same motherboard, and they just work. There is nothing to getting multiple CPUs to work together, no matter how they are connected.

    Again, you're incorrectly applying the trivial work of putting CPU cores together to GPUs, which have non-trivial actual problems to overcome.

    Sure there is. Unlike multi-CPU designs, multi-gaming-GPU designs have actual problems to overcome. Here you really do need insane bandwidth and ultra-low latency, and you need some rock-solid intelligence in the controller chip to manage it all transparently. Don't hold your breath on this one.
     
  30. ChadD

    ChadD 2[H]4U

    Messages:
    3,820
    Joined:
    Feb 8, 2016
    I guess we'll see what happens. I think you really misunderstand what a chiplet design is. Nothing like a chiplet has been done before on a CPU or a GPU. And really, there is nothing more complicated about a GPU compared to a general-compute CPU... no matter what the GPU companies would have you believe, they are just stripped-down cores doing far less accurate math. It's cool that GPU marketing likes to advertise tons of FLOPS of calculation... but it's in very inaccurate, low-precision modes. GPUs lack the precision modes of a general-compute CPU and don't have to transfer the same kinds of higher-precision 80-bit floats around.

    It's nice that GPU companies like to talk about tons of TFLOPS due to their 1000s of "cores," but FLOPS are not always equal. It's really not hard to move a bunch of fused 4-bit data around. GPUs don't have some miracle data transport on board; if that were the case, why would AMD not use that tech on their CPUs? lol. There are always things to overcome when you design something new, no doubt... of course I would expect AMD to solve them in one area first. So sure, I expect their CPU money-makers will be chiplet first, which we already know to be true. The GPU design after the successful launch of chiplet CPUs will, I have no doubt, be a chiplet design. (It's also possible they have a chiplet version of Navi designed for the console parts... but perhaps that is far too semi-custom.)
     
  31. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,558
    Joined:
    Apr 22, 2006
    There is nothing more complicated about actual GPU cores.

    There are a lot more complications connecting them together for real time game rendering.

    Which is why, after more than twenty years of multiple-GPU usage, multiple-GPU use in games is still a mess.

    Using chiplets won't magically solve all the problems, as so many assume.

    MCM/chiplets will happen, but there will be issues and glitches to solve along the way, many more than there were doing something similar with CPUs.

    I expect NVidia and AMD will both have their solutions in the market in a similar time-frame so it won't be any kind of deciding factor. NVidia is clearly researching along these lines, but the paper is specifically about compute loads and their design is still problematic for gaming.
     
  32. ChadD

    ChadD 2[H]4U

    Messages:
    3,820
    Joined:
    Feb 8, 2016
    You are mistaking external connections for internal ones.

    Chiplets don't communicate over PCIe. They are ON PACKAGE.

    You misunderstand what a chiplet is. You could say GPUs are already chiplet-like to a degree, as they have computation clusters now. All a chiplet design means is spinning those clusters out into multiple smaller bits of silicon which are then packaged together. Instead of baking one massive 2-billion-transistor part that has 64 clusters with 2048 "cores," a chiplet design would bake two 32-cluster parts minus the memory-controlling hardware... then a third part would replace the GPU controller bits (they ARE in the chips now; Nvidia calls theirs Falcon): https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-RISC-V-Next-Gen-Falcon

    A chiplet design would simply spin off that controller part onto its own bit of silicon, greatly reducing the complication in its fabrication. (Which is why the Ryzen controllers are on 14nm instead of 7nm; they don't need high-end fabs.) That controller already communicates with multiple core clusters. The only difference is the clusters would be housed in 2 or more chiplets... which would also be easier to fab and would, in theory at least, skyrocket yields, greatly reducing the number of semi-defective parts being sold as low- and mid-range parts.

    To be honest, NV is in a very good position to go this route as well, and almost no doubt will within the next 2-3 generations. It's getting harder and harder to fab multiple billions of transistors on one part.
     
    Uvaman2 likes this.
  33. Uvaman2

    Uvaman2 2[H]4U

    Messages:
    3,000
    Joined:
    Jan 4, 2016
    I agree it is a complicated issue, let alone invisible mGPU... mGPU is basically dead... I guess some mention chiplets thinking of a way of making a modular chip, in which you have a core to which you can add more pipeline chiplets, something like that?
    That would make sense too; it would have its advantages/drawbacks, if it's possible.
    Personally, I don't think much of Navi, other than probably/hopefully great value... I don't understand why people keep trying to paint unreal expectations onto it... This time it is not AMD painting unreal pictures (like you could make a point about them doing before)... they have been pretty on point as far as presenting their Ryzen line and the VII; it has been a roughly fair/decent representation.
    Now, it would be a pretty incredible leap if they did figure out invisible mGPU... could it be? Slim, slim, slim miracle chance?
    The thing that makes me think in its favor is the fact that Lisa moved most of the team to Navi, supposedly, while whatchamacallit worked with a starved team on Vega, supposedly?
    So, supposedly, most of the resources went to a mid-range cheap chip, and that's it?
    I guess it could be; a cool, efficient, cheap chip might not win a crown, but it can go in a lot more devices, that is for sure.
    Again, no leaks whatsoever, so invisible mGPU is but a dream. I think AMD would have leaked it by now... and if they managed not to by now, well shit, that is a much tighter ship than what it has been so far.
     
  34. Uvaman2

    Uvaman2 2[H]4U

    Messages:
    3,000
    Joined:
    Jan 4, 2016
    Well, according to wtftech, Intel got there first on invisible mGPU... unless the supposed leak of the Xe GPUs is bullshit.
     
  35. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,558
    Joined:
    Apr 22, 2006
    I know. But you are still leaving the chip, which increases latency and likely limits bandwidth, as the pad count would be enormous on a controller chip trying to feed full bandwidth to each chiplet.