confirmed: AMD's big Navi launch to disrupt 4K gaming

The default memory configuration for the whole Navi 2x series seems to be GDDR, not HBM

Power considerations suggest a doubling of the 5600 XT for Big Navi, which means 12 GB of VRAM is likely. There could be a top-end / pro card with 24 GB too.

And if I were to take a guess: if AMD designed Big Navi to support both GDDR and HBM, and if Big Navi is a hit and/or performs better than the RTX 3080 Ti, then AMD might be tempted to release a 16GB/24GB HBM card to take on the RTX 3090/Titan!


I don't see how Big Navi at around 505mm² can possibly perform better than a 3080 Ti. Nvidia is finally shrinking down to 7nm, and they are reportedly increasing the die size on flagships to over 800mm². It's brute-forced supremacy again. I expect Nvidia to have the fastest cards on the high end.
 
I don't see how Big Navi at around 505mm² can possibly perform better than a 3080 Ti. Nvidia is finally shrinking down to 7nm, and they are reportedly increasing the die size on flagships to over 800mm². It's brute-forced supremacy again. I expect Nvidia to have the fastest cards on the high end.

Nvidia is not shrinking down to 7nm. They are using Samsung's 8nm LPP, which is closer to Samsung's 10nm node than to TSMC's 7nm. If they actually released an 800mm² die, the card would cost $4,000 and turn a room into a sauna.
 
Nvidia is not shrinking down to 7nm. They are using Samsung's 8nm LPP, which is closer to Samsung's 10nm node than to TSMC's 7nm. If they actually released an 800mm² die, the card would cost $4,000 and turn a room into a sauna.


Just looked this up, and I guess that changes things. Is there confirmation Nvidia won't go with jumbo die sizes this time, though?
 
Nvidia is not shrinking down to 7nm. They are using Samsung's 8nm LPP, which is closer to Samsung's 10nm node than to TSMC's 7nm. If they actually released an 800mm² die, the card would cost $4,000 and turn a room into a sauna.
Looking at their cooler design, if that is the one for the 3080: a furnace inside the case. If the 3080 is using GA102 rather than GA104, it is basically the Ti version, which could explain the 3090 naming if that is what Nvidia goes with for their top gaming card. I suspect Nvidia is expecting RDNA2 to be competitive. The good news is that if Nvidia calls it the 3080, pricing should not be that much higher than the 2080 Super (hoping with fingers crossed).

I am not convinced Nvidia will release their top card until AMD releases theirs, or at least until they know more about how it performs. One possibility for the 3090: come out with the 3080 using the Samsung-node GA102, later refresh it as a Super keeping the better-yielding dies, and release the 3090 (different series, different node) next year as a TSMC version of the GA102(?) top card (HBM2e?). Nvidia using two foundries (if that is the case) can become very useful and stabilizing.

Now what does that do for the 3070, GA104? I can see Nvidia waiting, or releasing it at the same time as the 3080, maybe announcing it with the price unknown for a release a month later. There are just so many ways I can see Nvidia playing the Ampere launch. Basically: the 3080 is a cut-down Ti, the later Super is the full Ti version using Samsung GA102, and the 3090 on TSMC's 7nm process beats whatever AMD has offered shortly afterwards, or whenever availability allows.

In short, none of us really knows how this will play out over the next several months and into the first half of next year. Nvidia gave themselves a lot of wiggle room, while AMD locked up availability on TSMC's process node; that will only last so long, and it also locks them into a more fixed position.
 
Ouch...

I believe that AMD can do better, but have simply chosen not to (again).

I'm actually fine with a card that's marginally above the 2080 Ti in performance if it comes in at $700-800. It gets me to 60 FPS at 4K in most games and is notably faster than the 1080 Ti/2080/VII/2080 Super tier I've been stuck at for the past 4 years.

In any case, I'm not paying for Nvidia's flagship which, while I'm sure will be faster, will certainly be vomit-inducingly expensive. And we've learned over the past 2 years that Ray Tracing means basically zilch, so I'm 100% focused on the price/(rendering)performance proposition.
 
🤷‍♂️

Is it that important? Should I create a website and release source information to compete with someone else's website that releases source information?

Having a URL doesn't really give anymore legitimacy to one rumor over another.
Actually it kinda does.
 
I let a nasty fart and it seriously disrupted my 4k gaming for awhile.

Where are all of the facts??? 20 pages of rumor... You better start a new thread once real details emerge, it's cruel and unusual to make someone dig thru 20 pages of this crap to find actual news.
 
And we've learned over the past 2 years that Ray Tracing means basically zilch, so I'm 100% focused on the price/(rendering)performance proposition.
Chicken and egg; you have to pick one. Nvidia getting hardware out there with Turing means that their second release with Ampere, and AMD's first release with Big Navi / Navi 2, will be far more important.
 
Where are all of the facts??? 20 pages of rumor... You better start a new thread once real details emerge, it's cruel and unusual to make someone dig thru 20 pages of this crap to find actual news.

Unfortunately for now, we only have more rumors. Somehow have to keep this thread going till Nov or Jan, whenever it is that the reviews are out ...

The latest comes, once again, via a user on the website Chiphell.

They state that Big Navi could come in two variants featuring either 16 GB or 12 GB of VRAM. It does, however, raise an interesting question about the VRAM options for Big Navi.

https://www.thefpsreview.com/2020/08/06/big-navi-could-come-in-both-16-gb-and-12-gb-variants/

This latest rumor also corroborates another made back in May by well-known leaker _rogame. Back then, it was also stated that Big Navi would have at least two variants.

cc erek cybereality

 
Ouch...

I believe that AMD can do better, but have simply chosen not to (again).
Chosen not to, and can't afford the silicon: different reasons with the same outcome. AMD has a lot of large contracts (MS, Sony, Lenovo, ...). At the early stages of a new launch, when failure rates are higher and the process less refined, I am not sure they can afford the wasted silicon chasing the performance crown. If the rumors are true that AMD is struggling with the 4000-series APUs, and that OEMs are lodging complaints that AMD is not meeting their demand, paired with the increased orders from MS and Sony for their console chips, then AMD simply can't spare the silicon to go chasing those parts.

I am very interested to see what AMD launches to compete with the Quadros and Teslas. The early leaks for the Instinct cards at least promise a solid alternative for many in servers, and I would very much like to see AMD put something out that gives the Quadro lineup a proper challenge.
 
There are rumors of both GDDR6 and HBM on RDNA2 cards, so that doesn't indicate any such default memory configuration. 'Big Navi' is the 505mm² behemoth; the others are just smaller configurations on different dies.
I could see them using HBM on their upper-end workstation and all their server parts, but I don't see it being something the consumer parts really need. In gaming scenarios the added bandwidth doesn't add much, whereas in AI and other workstation/server applications the performance difference really does come through.
 
I let a nasty fart and it seriously disrupted my 4k gaming for awhile.

Where are all of the facts??? 20 pages of rumor... You better start a new thread once real details emerge, it's cruel and unusual to make someone dig thru 20 pages of this crap to find actual news.

Once there’s actual facts you can just go to nvidia.com and amd.com.
 
Chicken and egg; you have to pick one. Nvidia getting hardware out there with Turing means that their second release with Ampere, and AMD's first release with Big Navi / Navi 2, will be far more important.
Yeah, they needed hardware in the field. Without that, MS could not have worked with everybody they did to get DX12U out the way they did, they certainly wouldn't have been able to optimize the DXR libraries like they did, and DLSS wouldn't have made the advancements it has. I mean, look at its initial release and compare it to what 3.0 is offering; that is a huge leap forward. The RTX series cards laid the groundwork for a lot of really cool things, which could be fun for everybody going forward.
 
I could see them using HBM on their upper-end workstation and all their server parts, but I don't see it being something the consumer parts really need. In gaming scenarios the added bandwidth doesn't add much, whereas in AI and other workstation/server applications the performance difference really does come through.

AMD's workstation card is the MI100 which is CDNA based, not RDNA2.
 
They state that Big Navi could come in two variants featuring either 16 GB or 12 GB of VRAM. It does, however, pose an interesting notion about the VRAM options for Big Navi.

That implies a 384-bit bus on 12GB, and a 33% wider 512-bit bus (and bandwidth) on 16GB.

The thing is, an 8GB card could have the full 512-bit bus, and thus full bandwidth. So it would make a lot more sense to have 8GB and 16GB models if they are going with a 512-bit bus.
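The bus-width figures being traded here fall out of how GDDR6 is wired: one 32-bit channel per memory chip. A minimal sketch of that arithmetic, assuming the common 2020-era 1 GB (8 Gb) chip density, with the chip size left as a parameter since an 8 GB / 512-bit card would instead need sixteen 0.5 GB (4 Gb) chips:

```python
# Capacity vs. bus-width arithmetic for GDDR6 (one 32-bit
# channel per memory chip). Chip density is a parameter;
# 1 GB (8 Gb) chips were the common option at the time.

CHANNEL_WIDTH_BITS = 32  # per GDDR6 chip

def bus_width_bits(vram_gb, chip_gb=1.0):
    """Bus width implied by a VRAM size, one channel per chip."""
    chips = int(vram_gb / chip_gb)
    return chips * CHANNEL_WIDTH_BITS

print(bus_width_bits(12))              # 384-bit
print(bus_width_bits(16))              # 512-bit
print(bus_width_bits(8, chip_gb=0.5))  # 512-bit, with 4 Gb chips
```

So 12 GB and 16 GB of GDDR6 on full buses really do imply 384-bit and 512-bit respectively, and an 8 GB card with smaller chips can still reach the full 512-bit width.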
 
I don't see how Big Navi at around 505mm² can possibly perform better than a 3080 Ti. Nvidia is finally shrinking down to 7nm, and they are reportedly increasing the die size on flagships to over 800mm². It's brute-forced supremacy again. I expect Nvidia to have the fastest cards on the high end.


The RTX 3080 die is rumored to be 627mm².

What is AMD doing to match the Tensor cores/DLSS 2.0/3.0 and Nvidia's RT cores?
 
That implies a 384-bit bus on 12GB, and a 33% wider 512-bit bus (and bandwidth) on 16GB.

The thing is, an 8GB card could have the full 512-bit bus, and thus full bandwidth. So it would make a lot more sense to have 8GB and 16GB models if they are going with a 512-bit bus.

Redgamingtech's sources claim that Big Navi does not have a 512-bit bus:

Also – despite recent reports that Navi higher-end SKUs have both 12 and 16 GB RAM, I was told that the GPU doesn’t seem to have a 512 bit bus. Both Rogame and Jim from AdoredTV messaged me about this, which brings into question how the bus structure for RDNA 2 actually functions and what the memory layout is.

http://www.redgamingtech.com/amd-rdna-3-is-chiplet-based-and-rdna-4-in-development/
 
Contradictory rumors are nothing new. A top-end card with only a 256-bit bus (the other main 16GB option) doesn't make sense. The top card could be HBM, though that would mean a bus wider than 512 bits.

Still all just rumors.

The Linux drivers showed HBM2e. If there are two memory options and one of them is 16GB, I think the most logical assumption is that it is HBM2e.
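For scale, a back-of-envelope sketch of why an HBM2e option would moot the bus-width debate: each HBM stack carries a 1024-bit interface, so even two stacks exceed 512 bits. The stack count and per-pin rate below are illustrative numbers, not leaked specs:

```python
# Rough HBM bandwidth arithmetic: each stack exposes a
# 1024-bit interface. Stack count and per-pin data rate
# here are illustrative, not actual product specs.

BITS_PER_STACK = 1024  # HBM interface width per stack

def hbm_bandwidth_gbs(stacks, pin_rate_gbps):
    """Aggregate bandwidth in GB/s for a given stack count."""
    return stacks * BITS_PER_STACK * pin_rate_gbps / 8

# Hypothetical: two 8 GB HBM2e stacks at 3.2 Gbps/pin
# gives a 2048-bit bus and roughly 819 GB/s.
print(hbm_bandwidth_gbs(2, 3.2))
```

Under those assumed numbers, a two-stack 16 GB HBM2e card would comfortably out-bandwidth any plausible GDDR6 configuration without needing a 512-bit GDDR bus at all.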
 
Looks like Nvidia triggered the disruption themselves before AMD could do it

Maintaining the same price for the 3080 as the 2080 is impressive considering the increased power & cooling requirements

Digital Foundry has shared an early look at the performance of NVIDIA’s new “flagship” Ampere gaming GPU, the GeForce RTX 3080. According to Richard Leadbetter’s tests with Borderlands 3, Shadow of the Tomb Raider, Control, and other hit titles, the card is around 70 to 80 percent faster than its predecessor – the GeForce RTX 2080 – for high-preset, 4K gaming. It even manages to be nearly 100 percent faster in Quake II RTX.
 
Looks like Nvidia triggered the disruption themselves before AMD could do it

Maintaining the same price for the 3080 as the 2080 is impressive considering the increased power & cooling requirements

They are actually launching the 3080 FE for $100 less than the 2080 FE's launch price ($799).

It looks like there is no "FE Tax" this time.
 
The Linux drivers showed HBM2e. If there are two memory options and one of them is 16GB, I think the most logical assumption is that it is HBM2e.
I suppose we could be dealing with 4 stacks of HBM, one disabled for yields. Or HBM + GDDR6, with GDDR6 optional. Might as well put that HBCC to use.
 
I suppose we could be dealing with 4 stacks of HBM, one disabled for yields. Or HBM + GDDR6, with GDDR6 optional. Might as well put that HBCC to use.

Nvidia had to tack on a $150 cooler to cool GDDR6X and a large overclocked die. That's a lot of headroom in price to stick HBM2e on their high end.
 
Nvidia had to tack on a $150 cooler to cool GDDR6X and a large overclocked die. That's a lot of headroom in price to stick HBM2e on their high end.

If you really think that cooler costs $150, I have a couple of bridges to sell you. The cooler is odd, but there are no more fins, heat pipes, aluminum, or fans than in a typical GPU cooler. It's probably cheaper than the typical large tri-fan AIB cooler.
 
If you really think that cooler costs $150, I have a couple of bridges to sell you. The cooler is odd, but there are no more fins, heat pipes, aluminum, or fans than in a typical GPU cooler. It's probably cheaper than the typical large tri-fan AIB cooler.

What about the salaries of the thousands of engineers who designed it? :cool:
 
I agree, this rumor does seem off. If AMD prices a GPU at $500 to $600, then it is more likely competing with the 3070/3070 Ti, and it is not Big Navi.

Big Navi is likely to cost more than the 3080, is my guess

This is misinformation.

https://hardforum.com/threads/amds-...a-amperes-second-tier.1999515/post-1044713436

According to coretek’s AIB sources, AMD is changing its pricing plans for “Big Navi” due to NVIDIA’s sudden bout of generosity. Red team was going to release its (presumably flagship) 16 GB Radeon RX 6000 Series GPU at $599, but to better compete with green team, these cards may launch at $549 instead

https://www.thefpsreview.com/2020/0...following-geforce-rtx-30-series-announcement/
 
I agree, this rumor does seem off. If AMD prices a GPU at $500 to $600, then it is more likely competing with the 3070/3070 Ti, and it is not Big Navi.

Big Navi is likely to cost more than the 3080, is my guess



https://hardforum.com/threads/amds-...a-amperes-second-tier.1999515/post-1044713436



https://www.thefpsreview.com/2020/0...following-geforce-rtx-30-series-announcement/

I really wonder what makes people act as if every rumor is true. This is from Coreteks; he has a history of making up BS.
 
Yeah, I can't put my finger on why, but I felt the last Coreteks video was off. Just a hunch; I literally can't tell what's giving me those vibes.
 
More rumors:

As per Redgamingtech's guess (source):

Nov 2020 release: 6900 > 3080
Dec 2020 release: 6800, between 3070 & 3080

Mar 2021 release: 6700 ~ 3070

Prices will be set according to the market situation and performance vs. equivalent Nvidia cards

 