AMD's next Navi GPUs could have the specs (and ray tracing) to beat Nvidia

Zarathustra[H]

From the rumor mill over on TechRadar we have a story about what the future of AMD GPUs may look like. It suggests that later this year we may see Navi 10, which will likely be a midrange offering and might be used in consoles. More midrange cards aren't exactly exciting.

The interesting mention is Navi 20, which they say might feature ray tracing technology and be faster than Nvidia's 2080 Ti. The downside? It suggests it is due a year after Navi 10, so we are talking late 2020? This sounds nice and all, and I am happy for Nvidia to have some high-end competition, but beating what Nvidia has now, with a product that won't be available for over a year, does not equal beating Nvidia.
 
If Nvidia keeps up the same 30%-ish gen-to-gen increase we saw with Turing, then having a Navi 20 that is better than the 2080 Ti wouldn't be that horrible. Maybe somewhere within 15-25% of whatever Nvidia has out by then. That said, I tend to treat all rumors of AMD GPU performance as nothing more than wild speculation. There have been too many cases over the last few generations of people spreading rumors and "leaks" claiming amazing performance from AMD cards, which makes it rather hard to believe anything.
 
Wouldn't it be stupid of AMD to try to do its own form of hardware RT, only to end up with two different platforms?
But if they have some form of general shiny multicolored sprinkles that can make the world a better place, I would take that in a heartbeat.
 
Wouldn't it be stupid of AMD to try to do its own form of hardware RT, only to end up with two different platforms?
But if they have some form of general shiny multicolored sprinkles that can make the world a better place, I would take that in a heartbeat.

That they'll have hardware RT is pretty much a given. Whether it'll be remotely competitive...

Well, RT itself is very simple from a hardware (and software) perspective. AMD already has the software side figured out, and has for some time. But to be effective, they'll not only have to implement RT, they'll also need something to handle the denoising (and DLSS-style reconstruction) that helps Nvidia keep performance reasonable.
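To illustrate what that denoising step conceptually does (a toy sketch only, not AMD's or Nvidia's actual approach, with made-up filter parameters): a low-sample-count ray-traced frame is basically the true signal plus heavy noise, and a guided spatial filter knocks the noise down.

Code:
    # Toy guided spatial filter of the kind a 1-sample-per-pixel ray-traced image
    # needs. Real denoisers (spatio-temporal or learned) are far more sophisticated;
    # the kernel radius and sigma below are made-up illustrative values.
    import numpy as np

    def denoise(noisy, guide, radius=2, sigma=0.5):
        """Weight each neighbour by how similar its guide value is to the centre pixel."""
        h, w = noisy.shape
        out = np.zeros_like(noisy)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                weights = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2) / (2 * sigma ** 2))
                out[y, x] = (weights * noisy[y0:y1, x0:x1]).sum() / weights.sum()
        return out

    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # stand-in for a noise-free guide (e.g. G-buffer albedo)
    noisy = clean + rng.normal(0.0, 0.3, clean.shape)     # "1 spp" Monte Carlo noise
    filtered = denoise(noisy, guide=clean)
    print("mean error before:", np.abs(noisy - clean).mean(), "after:", np.abs(filtered - clean).mean())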
 
Welp, we can only hope it performs well. At least having both camps using it may drive more developers to adopt the technology, and thus more improvements on the hardware side. Only time will tell, though.
 
I am not sure how much official information has been out about Navi, but what I have gleaned is that it is a SMALL chip aimed at efficiency... NOT a "crown" contender.
The way I see Navi going for a "crown" is if it's invisible-mGPU capable in some kind of chiplet/modular configuration that no one expected.
So barring a miracle, expect midrange at best, no crown.
 
I am not sure how much official information has been out about Navi, but what I have gleaned is that it is a SMALL chip aimed at efficiency... NOT a "crown" contender.
The way I see Navi going for a "crown" is if it's invisible-mGPU capable in some kind of chiplet/modular configuration that no one expected.
So barring a miracle, expect midrange at best, no crown.
Navi 10 definitely will be (small, that is). No solid info on Navi 20 yet, AFAIK.
 
I am not sure how much official information has been out about Navi, but what I have gleaned is that it is a SMALL chip aimed at efficiency... NOT a "crown" contender.
The way I see Navi going for a "crown" is if it's invisible-mGPU capable in some kind of chiplet/modular configuration that no one expected.
So barring a miracle, expect midrange at best, no crown.

Why would no one expect that? Haven't we been discussing exactly that? Multiple Navi chiplets running in unison, each with xxx number of cores of xxx and xxx and xxx type. Wait, this is starting to look like a Pornhub advertisement.

My point is that I and many others fully expect AMD to take the crown by applying lessons learned from the CPU side of the house. To not use that engineering advantage would be a mistake.

Yes they will need to address the memory space in a new way to be competitive on the memory bandwidth front. But I think they can do this.

And yes, I just bought a 2080. Sigh...
 
Why would no one expect that? Haven't we been discussing exactly that? Multiple Navi chiplets running in unison, each with xxx number of cores of xxx and xxx and xxx type. Wait, this is starting to look like a Pornhub advertisement.

My point is that I and many others fully expect AMD to take the crown by applying lessons learned from the CPU side of the house. To not use that engineering advantage would be a mistake.

Yes they will need to address the memory space in a new way to be competitive on the memory bandwidth front. But I think they can do this.

And yes, I just bought a 2080. Sigh...
Heheh

I hope they will do invisible mGPU... However, there have been NO credible leaks on this (that I know of, anyway)... It makes me think it ain't happening... That said, invisible mGPU (with good scaling, obviously) would be such a massive, definitive, competition-blown-out-of-the-water advantage that it would be worth keeping it under the tightest of lids... But AMD lids aren't usually so tight that we wouldn't even have a hint by now (?)
 
So, to be cynical for a moment: the cards that AMD might be releasing in a year's time might be better than the cards that Nvidia released six months ago? And they might also include a feature that Nvidia were derided for including?

Sounds like progress, to be sure.

But more seriously: whatever the rumour mill churns out, we need AMD to do what they can to close the gap at the high end. I don't expect them to do this completely in one or even two generations, but if they can make steady progress then that's okay by me. We just need to temper expectations and try to wait for verifiable data to emerge (and really sadly, that won't be on HardOCP :( )
 
So, to be cynical for a moment: the cards that AMD might be releasing in a year's time might be better than the cards that Nvidia released six months ago? And they might also include a feature that Nvidia were derided for including?

Sounds like progress, to be sure.

But more seriously: whatever the rumour mill churns out, we need AMD to do what they can to close the gap at the high end. I don't expect them to do this completely in one or even two generations, but if they can make steady progress then that's okay by me. We just need to temper expectations and try to wait for verifiable data to emerge (and really sadly, that won't be on HardOCP :( )

Exactly. This is what it's come to? We're supposed to be excited about AMD putting out a card in ~Q3 2020 that has the same features and performance as a card that NV released in Q3 2018? If this is the case, they had better plan on releasing it at or below $700, since NV will almost certainly have something much faster out by then.

Color me MEH. And this is from someone who is an AMD fan and loves his VII.
 
I'd really like Navi to be good. I mean, I...REALLY...want Navi to be good.

Low cost, low power, great performance. Easy peasy. ;)

But, if Navi 10 gets announced with specs and it underwhelms, then the evil green will get my green. My GPUs are too long in the tooth. I passed on the 20xx generation in hopes that AMD would bring some gaming goodness: Navi is gonna make me decide which way to go. (I've got 2 or 3 GPUs ready to be updated. A friggin' GTX670 (!) is still in my HTPC. C'mon, man.)
 
Midrange cards can be exciting depending on the price... If I could get 2060 performance for ~$200-250, I'd bite.
 
Just give us a comparable-performance card at a slightly lower MSRP and I'll be happy. We need something on the market to lower these prices.
 
Seems like AMD is always 1-2 years behind Nvidia nowadays. They need to try the "beats the 2080 Ti" card and release it around the same time!
 
There are really only 2 possibilities here.

1) This RT solution will consist entirely of software that leverages existing general-purpose shader ALUs (see the sketch below).
or
2) There will be some fixed-function hardware in Navi dedicated to accelerating portions of the RT pipeline, but those portions will be broken and likely disabled - because AMD.
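For anyone wondering what option 1 means in practice: it's just the per-ray math run on the regular shader ALUs with no fixed-function help. A minimal sketch of that math, a ray-sphere hit test with illustrative values only (a real implementation would live in a compute shader and traverse a BVH):

Code:
    # The kind of arithmetic a "software RT on shader ALUs" path grinds through per ray.
    import numpy as np

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the nearest hit distance along the ray, or None if the ray misses."""
        oc = origin - center
        a = np.dot(direction, direction)
        b = 2.0 * np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                       # ray misses the sphere
        t = (-b - np.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
        return t if t > 0.0 else None

    origin = np.array([0.0, 0.0, 0.0])
    direction = np.array([0.0, 0.0, -1.0])                                     # shooting down -Z
    print(ray_sphere_hit(origin, direction, np.array([0.0, 0.0, -5.0]), 1.0))  # hits at t ~= 4.0
    print(ray_sphere_hit(origin, direction, np.array([3.0, 0.0, -5.0]), 1.0))  # None (miss)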
 
I really don't think AMD will release anything much cheaper than Nv.

Things are already pretty well priced for the 1080p crowd.

And anything in the 2070 segment and above is not going to drop significantly, because it keeps selling at current prices.

A lot of gamers I personally know have had no problems coughing up for 2080's etc...
 
Seems like AMD is always 1-2 years behind Nvidia nowadays. They need to try the "beats the 2080 Ti" card and release it around the same time!

The mistakes made with Vega set AMD years behind. Up until AMD put all their eggs in the Vega basket, they were usually 3-5 months ahead of Nvidia's releases after the HD 3000 series.

Hopefully Navi isn't another dead-end architecture.

Also agree with you, Auer; while those rumored prices were nice, I don't see them ending up anywhere close to them if they in fact beat what Nvidia is currently offering.
 
I think by the time this comes out, Nvidia will have already launched Ampere.
 
The interesting mention is Navi 20, which they say might feature ray tracing technology and be faster than Nvidia's 2080 Ti.

It feels like another set of rumors setting up for massive disappointments.


Why would no one expect that? Haven't we been discussing exactly that? Multiple Navi chiplets running in unison, each with xxx number of cores of xxx and xxx and xxx type. Wait, this is starting to look like a Pornhub advertisement.

My point is that I and many others fully expect AMD to take the crown by applying lessons learned from the CPU side of the house. To not use that engineering advantage would be a mistake.

Yes they will need to address the memory space in a new way to be competitive on the memory bandwidth front. But I think they can do this.

And yes, I just bought a 2080. Sigh...

Then you just have SLI/CF problems again, and I seriously doubt AMD will solve this problem before NVidia. Multiple CPUs have always been relatively easy, there are not really any lessons to be learned there.
 
Then you just have SLI/CF problems again, and I seriously doubt AMD will solve this problem before NVidia.

This is not an absolute; using the same interposer as in HBM (likely alongside HBM), interconnects can be much higher bandwidth, to the point that the driver could theoretically present multiple GPU dies as a single device. Now, I don't think that this is beyond any company involved from a technical perspective, but the approach is likely more amenable to AMD's efforts, so we'd likely see it from them first.

Multiple CPUs have always been relatively easy, there are not really any lessons to be learned there.

Making multiple CPUs work well for highly parallelizable code has taken work, but that is mostly behind us. Making them work for less parallelizable code is a nightmare, and game development is specifically one area of computer science that is slowly chewing through the problem. Fortunately, the rendering pipeline is one part that is nearly infinitely parallelizable.
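To put a trivial picture on "nearly infinitely parallelizable": every pixel (or tile) can be shaded independently of every other, so a frame splits cleanly across however many workers, or GPU dies, you have. A toy sketch, with a made-up shade function rather than any real engine or driver API:

Code:
    # Each horizontal band of the frame is shaded by an independent worker;
    # nothing in one band depends on another, which is why this scales so well.
    from concurrent.futures import ProcessPoolExecutor

    WIDTH, HEIGHT = 256, 256

    def shade_band(rows):
        """Shade one horizontal band; each pixel depends only on its own (x, y)."""
        y0, y1 = rows
        return [[(x ^ y) & 0xFF for x in range(WIDTH)] for y in range(y0, y1)]

    if __name__ == "__main__":
        n_workers = 2                                        # pretend: two GPU dies
        band = HEIGHT // n_workers
        bands = [(i * band, (i + 1) * band) for i in range(n_workers)]
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            frame = [row for chunk in pool.map(shade_band, bands) for row in chunk]
        print(len(frame), "rows rendered by", n_workers, "independent workers")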
 
Then you just have SLI/CF problems again, and I seriously doubt AMD will solve this problem before NVidia. Multiple CPUs have always been relatively easy, there are not really any lessons to be learned there.

Well it sounds like Navi won't be a chiplet. It sounds like it will be more of a traditional GPU design.

Having said that, if it is a chiplet design, that is the point of chiplets... not needing to use any OS software to see more than one.

AMD is using it with Ryzen to basically make two Ryzen chiplets run as one chip. The OS won't see one 4-core Ryzen and another 4-core Ryzen... it will talk to the control chip on the package, which will say "I have 8 cores." The same theory will one day work with GPU designs if Navi or its follow-up goes that route: one control chip talking to 2 or more GPU chiplets... the OS won't see multiple GPUs, just one device reporting both bits of hardware as one (toy sketch at the end of this post).

It has been pointed out to me that AMD has stated Navi is not a chiplet design, which, if true, is disappointing. It seems that is where the future will be... with low-end cards running one chiplet and high-end cards running 2, perhaps even 3. At some point, though, I would expect AMD will go that way, using what they have learned to increase their GPU performance while also reducing their costs. (Chiplets are much cheaper, as you basically end up fabbing lots of smaller, less complicated parts.)
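Purely to illustrate the abstraction described above (a conceptual toy, not any real driver model; the Chiplet/Controller names are invented for the example): the OS-facing controller reports the combined resources and fans work out to the chiplets behind it.

Code:
    # Toy model of "one control chip, many chiplets, one device as far as the OS knows".
    class Chiplet:
        def __init__(self, cores):
            self.cores = cores
        def run(self, work_items):
            return [item * 2 for item in work_items]   # stand-in for real execution

    class Controller:
        """What the OS talks to: reports aggregate resources and hides the split."""
        def __init__(self, chiplets):
            self.chiplets = chiplets
        @property
        def cores(self):
            return sum(c.cores for c in self.chiplets)
        def run(self, work_items):
            n = len(self.chiplets)
            out = []
            for i, chiplet in enumerate(self.chiplets):
                out += chiplet.run(work_items[i::n])   # round-robin the work
            return out

    gpu = Controller([Chiplet(4), Chiplet(4)])
    print(gpu.cores)                # 8 -- a single device as far as the OS can tell
    print(gpu.run(list(range(6))))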
 
Well it sounds like Navi won't be a chiplet. It sounds like it will be more of a traditional GPU design.

Of course, because they haven't solved the software/synchronization issues.

Having said that, if it is a chiplet design, that is the point of chiplets... not needing to use any OS software to see more than one.

Simply building chiplets won't solve the GPU software problem. You can do CPU chiplets because CPUs are trivial to join together by almost any method imaginable. GPUs are not.

Synchronizing gaming GPUs is such a massive mess that they still can't even do it in the driver to make it completely transparent to games. Games pretty much still need to be reworked to support CF/SLI.

It will probably be done eventually, but you need to move significant intelligence (SW) into the central synchronizing/interconnect chip, and of course you need massive interconnection BW, and even then the design will still be slower than a monolithic one of similar specs, so it will only really make sense as a replacement for massive chips.
 
Of course, because they haven't solved the software/synchronization issues.

Simply building chiplets won't solve the GPU software problem. You can do CPU chiplets because CPUs are trivial to join together by almost any method imaginable. GPUs are not.

Synchronizing gaming GPUs is such a massive mess that they still can't even do it in the driver to make it completely transparent to games. Games pretty much still need to be reworked to support CF/SLI.

It will probably be done eventually, but you need to move significant intelligence (SW) into the central synchronizing/interconnect chip, and of course you need massive interconnection BW, and even then the design will still be slower than a monolithic one of similar specs, so it will only really make sense as a replacement for massive chips.

That simply isn't true. What you are talking about is software solutions to tie together two separate bits of hardware, each talking to its own PCIe lanes. That is not what chiplets do... and tying CPUs together is not trivial. You have to develop stuff like super-fast Infinity Fabric to make it possible. The other solution in the CPU world is to add a ton of L3 cache so the core complex units have a buffer for communication back and forth.

A chiplet-design GPU does NOT involve software of any kind. A controller chip would be part of the package along with the actual GPU cores, as the Ryzen 3000s will have... 2 or more CPU chiplets with a control chip talking to both. The OS only actually talks to the control chip... any commands given to the cores are routed through the control chip.

There is nothing stopping GPUs from using chiplet designs. At this point it's the only logical solution going forward. If people think the 2080 Ti is priced insanely, just wait until a traditional monolithic GPU with another 20% bump in transistor count replaces it. Sure, smaller process nodes make the chips physically smaller, but it's still billions of transistors that all still need to work. Yields on everyone's top-end SKUs are terrible... they are not taking fully functioning Turing chips and neutering them to sell 2070s; those are chips that are not fully functioning. Chiplets solve one of the biggest issues for chip companies right now: yields. It's far easier to fab a 1-billion-transistor part and another 700-million-transistor part and package them together than to try and bake 1.7 billion transistors into one functioning chip (rough numbers in the sketch below).

I guess I'm saying NV is likely to go the same route, if not with their first 7nm part then with the design after. The single-chip road only leads to more and more expensive parts, because the yields on 100% functional silicon get worse and worse.
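Just to put rough numbers on the yield point (back-of-envelope only; the defect density, die areas, and the simple Poisson yield model are all assumptions for illustration, not foundry data):

Code:
    # yield = exp(-defect_density * die_area), the classic Poisson approximation.
    import math

    D = 0.2   # assumed defects per cm^2

    def die_yield(area_mm2):
        return math.exp(-D * area_mm2 / 100.0)

    big    = die_yield(600)   # one monolithic 600 mm^2 GPU (assumed size)
    chip_a = die_yield(350)   # GPU-core chiplet (assumed size)
    chip_b = die_yield(250)   # second chiplet / controller die (assumed size)

    print(f"monolithic 600 mm^2 die : {big:.0%} of dies work")
    print(f"350 mm^2 chiplet        : {chip_a:.0%} of dies work")
    print(f"250 mm^2 chiplet        : {chip_b:.0%} of dies work")

    # Good chiplets from anywhere on the wafer can be paired up, so the share of
    # silicon area that ends up in a sellable package is roughly:
    print(f"usable silicon, monolithic: {big:.0%}  vs chiplets: {(chip_a * 350 + chip_b * 250) / 600:.0%}")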
 
I want to believe. Actually, I don't really care who is better. I just know that Nvidia needs competition to spur competition and innovation.
 
From the rumor mill over on TechRadar we have a story about what the future of AMD GPUs may look like. It suggests that later this year we may see Navi 10, which will likely be a midrange offering and might be used in consoles. More midrange cards aren't exactly exciting.

The interesting mention is Navi 20, which they say might feature ray tracing technology and be faster than Nvidia's 2080 Ti. The downside? It suggests it is due a year after Navi 10, so we are talking late 2020? This sounds nice and all, and I am happy for Nvidia to have some high-end competition, but beating what Nvidia has now, with a product that won't be available for over a year, does not equal beating Nvidia.

Totally agree with you on every point. If this is true, it only maintains what for AMD has become a roughly 2+ year cycle of catching up to x80 Ti's. If they really want to be competitive, then they need to get it down to ~12 months, with at least 20% less price for the same tier.
 
Navi won't be MCM; it certainly will still be GCN. I'm not touching AMD again until next generation, when GCN goes to the freezing works.
 
That simply isn't true. What you are talking about is software solutions to tie together two separate bits of hardware, each talking to its own PCIe lanes. That is not what chiplets do... and tying CPUs together is not trivial. You have to develop stuff like super-fast Infinity Fabric to make it possible. The other solution in the CPU world is to add a ton of L3 cache so the core complex units have a buffer for communication back and forth.

Wrong. There have been multiple CPU designs tied together every way imaginable, including just plopped together on the same motherboard, and they just work. There is nothing to getting multiple CPUs to work together, no matter how they are connected.

A chiplet-design GPU does NOT involve software of any kind. A controller chip would be part of the package along with the actual GPU cores, as the Ryzen 3000s will have... 2 or more CPU chiplets with a control chip talking to both. The OS only actually talks to the control chip... any commands given to the cores are routed through the control chip.

Again, you're incorrectly applying the trivial work of putting CPU cores together to GPUs, which have non-trivial actual problems to overcome.

There is nothing stopping GPUs from using chiplet designs.

Sure there is. Unlike multi-CPU designs, multi-gaming-GPU designs have actual problems to overcome. Here you really do need insane bandwidth and ultra low latency, and you need some rock solid intelligence in the controller chip to manage it all transparently. Don't hold your breath on this one.
 
Wrong. There have been multiple CPU designs tied together every way imaginable, including just plopped together on the same motherboard, and they just work. There is nothing to getting multiple CPUs to work together, no matter how they are connected.

Again, you're incorrectly applying the trivial work of putting CPU cores together to GPUs, which have non-trivial actual problems to overcome.

Sure there is. Unlike multi-CPU designs, multi-gaming-GPU designs have actual problems to overcome. Here you really do need insane bandwidth and ultra low latency, and you need some rock solid intelligence in the controller chip to manage it all transparently. Don't hold your breath on this one.

I guess we'll see what happens. I think you really misunderstand what a chiplet design is. No, nothing like a chiplet has been done before on a CPU or a GPU. And really, there is nothing more complicated about a GPU compared to a general-compute CPU... no matter what the GPU companies would have you believe, they are just stripped-down cores doing far less accurate math. It's cool that the GPU marketing companies like to advertise tons of FLOPS of calculation... but it's in very inaccurate, low-precision modes. GPUs lack the precision modes of a general-compute CPU and don't have to transfer the same kinds of higher-precision 80-bit floats around.

It's nice that GPU companies like to talk about tons of TFLOPS due to their thousands of "cores," but FLOPS are not always equal. It's really not hard to move a bunch of fused 4-bit data around. GPUs don't have some miracle data transport on board; if that was the case, why would, say, AMD not use that tech on their CPUs? lol. There are always things to overcome when you design something new, no doubt... and of course I would expect AMD to solve them in one area first. So sure, I expect their CPU money makers will be chiplet first, which we already know to be true. The GPU design after the successful launch of chiplet CPUs, I have no doubt, will be a chiplet design. (It's also possible they have a chiplet version of Navi designed for the console parts... but perhaps that is far too semi-custom.)
 
I guess we'll see what happens. I think you really misunderstand what a chiplet design is. No, nothing like a chiplet has been done before on a CPU or a GPU. And really, there is nothing more complicated about a GPU compared to a general-compute CPU... no matter what the GPU companies would have you believe, they are just stripped-down cores doing far less accurate math. It's cool that the GPU marketing companies like to advertise tons of FLOPS of calculation... but it's in very inaccurate, low-precision modes. GPUs lack the precision modes of a general-compute CPU and don't have to transfer the same kinds of higher-precision 80-bit floats around.

There is nothing more complicated about actual GPU cores.

There are a lot more complications connecting them together for real-time game rendering.

Which is why, after more than twenty years of multi-GPU usage, multiple-GPU use in games is still a mess.

Using chiplets won't magically solve all the problems, as so many assume.

MCM/chiplets will happen, but there will be issues/glitches to solve along the way, many more than there were doing the same with CPUs.

I expect NVidia and AMD will both have their solutions in the market in a similar time-frame, so it won't be any kind of deciding factor. NVidia is clearly researching along these lines, but the paper is specifically about compute loads, and their design is still problematic for gaming.
 
You are mistaking external connections for internal ones.

Chiplets don't communicate over PCIe. They are ON PACKAGE.

You misunderstand what a chiplet is. You could say GPUs are already chiplet-like to a degree, as they have computation clusters now. All a chiplet design means is spinning those clusters out into multiple smaller bits of silicon, which are then packaged together. Instead of baking one massive 2-billion-transistor part that has 64 clusters with 2048 "cores," a chiplet approach would bake two 32-cluster parts minus the memory-controlling hardware... then a third part would handle the GPU controller bits (they ARE in the chips now; Nvidia calls theirs Falcon): https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-RISC-V-Next-Gen-Falcon

A chiplet design would simply spin off that controller part onto its own bit of silicon, greatly reducing the complication of its fabrication (which is why the Ryzen controllers are on 14nm instead of 7nm; they don't need high-end fabs). That controller already communicates with multiple core clusters. The only difference is that the clusters would be housed in 2 or more chiplets... which would also be easier to fab and would, in theory at least, make yields skyrocket, greatly reducing the number of semi-defective parts being sold as low- and mid-range parts.

To be honest, NV is in a very good position to go this route as well, and almost no doubt will in the next 2-3 generations. It's getting harder and harder to fab multiple billions of transistors on one part.
 
There is nothing more complicated about actual GPU cores.

There are a lot more complications connecting them together for real-time game rendering.

Which is why, after more than twenty years of multi-GPU usage, multiple-GPU use in games is still a mess.

Using chiplets won't magically solve all the problems, as so many assume.

MCM/chiplets will happen, but there will be issues/glitches to solve along the way, many more than there were doing the same with CPUs.

I expect NVidia and AMD will both have their solutions in the market in a similar time-frame, so it won't be any kind of deciding factor. NVidia is clearly researching along these lines, but the paper is specifically about compute loads, and their design is still problematic for gaming.
I agree it is a complicated issue, let alone invisible mGPU... mGPU is basically dead... I guess some mention chiplets thinking of a way of making a modular chip, in which you have like a core to which you can add more pipeline chiplets, something like that?
That would make sense too; it would have its advantages/drawbacks if it's possible.
Personally, I don't think much of Navi, other than probably/hopefully great value... I don't understand why people keep trying to paint unreal expectations on it... This time it is not AMD painting unreal pictures here (like you could make a point about them doing before)... they have been pretty on point as far as presenting their Ryzen line and the VII... it has been roughly a fair/decent representation.
Now, it would be a pretty incredible leap if they did figure out invisible mGPU... could it be? A slim, slim, slim, slim miracle chance?
The thing that makes me think in favor is the fact that Lisa moved most of the team to Navi, supposedly, while whatchamacallit worked with a starved team for Vega, supposedly?
So, supposedly most of the resources went to a midrange cheap chip, and that's it?
I guess it could be; a cool, efficient, cheap chip might not win a crown, but it can go in a lot more devices, that is for sure.
Again, no leaks whatsoever, so invisible mGPU is but a dream; I think AMD would have leaked it by now... and if they managed not to by now, well shit, that is a much tighter ship than what it has been so far.
 
Well, according to wtftech, Intel got there first with invisible mGPU... unless the supposed leak of the Xe GPUs is bullshit.
 
You are mistaking external connections for internal ones.

Chiplets don't communicate over PCIe. They are ON PACKAGE.

I know. But you are still going off-die, which increases latency and likely limits bandwidth, as the pad count would be enormous on a controller chip trying to feed full bandwidth to each chiplet.
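Rough numbers on that pad-count worry (all assumed figures, just to show the scale):

Code:
    # Feeding each chiplet its full memory bandwidth over conventional package pins.
    # Target bandwidth and per-pin rate are assumptions (GDDR6-class signalling).
    TARGET_GB_PER_S = 450          # assumed bandwidth needed per chiplet, GB/s
    GBIT_PER_PIN = 8               # assumed per-pin data rate, Gbit/s
    CHIPLETS = 2

    data_pins_per_link = TARGET_GB_PER_S * 8 / GBIT_PER_PIN
    print(f"data pins per chiplet link : {data_pins_per_link:.0f}")
    print(f"controller-side data pins  : {data_pins_per_link * CHIPLETS:.0f} (before clocks, control, power)")
    # An interposer (as with HBM) sidesteps this by allowing thousands of tiny,
    # short traces instead of conventional package pins, which is why it keeps
    # coming up in this thread.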
 