Big Navi VRAM specs leak: 16GB Navi 21 and 12GB Navi 22 go head to head with GeForce RTX 3080 and RTX 3090

How do we know they have 30% of TSMC 7nm? That kind of info is held very close.

There's been reporting all year about Apple's move to 5nm and AMD securing the capacity that was freed up. The numbers were 30,000 WPM out of 140,000, and then they picked up an additional 14,000 freed up by the Huawei sanctions. 44,000 out of 140,000 is 31%.
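Quick sanity check on that math, using the rumored wafer figures above (nothing here is confirmed allocation data, just the numbers from the reports linked below):

```python
# Back-of-the-envelope check of the rumored TSMC 7nm wafer allocation.
# All wafers-per-month (WPM) figures are the rumored numbers from the linked
# reports, not confirmed capacity data.
total_7nm_wpm = 140_000   # rumored total TSMC 7nm wafer starts per month
amd_base_wpm = 30_000     # capacity AMD reportedly secured after Apple moved to 5nm
amd_extra_wpm = 14_000    # capacity reportedly freed up by the Huawei sanctions

amd_share = (amd_base_wpm + amd_extra_wpm) / total_7nm_wpm
print(f"AMD share of TSMC 7nm: {amd_share:.0%}")  # -> AMD share of TSMC 7nm: 31%
```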

Initial info from Jan.

https://www.tomshardware.com/news/amd-to-become-tsmcs-largest-7nm-customer-in-2020-report
https://www.chinatimes.com/newspapers/20200727000134-260202?chdtv
https://www.techspot.com/news/83400-amd-set-become-tsmc-biggest-7nm-customer-2020.html

Additional info about MediaTek capacity.

https://www.hardwaretimes.com/amd-g...-tsmc-to-be-diverted-to-xsx-and-ps5-consoles/
 
I hope so too. I want to play 2077 at max settings, 1440p@144. RT would be a nice addition. I'm waiting to see how it all shakes out before I make a decision about what to buy, 3080 or Big Navi.
I'm more interested in the 5970X. I'm likely sticking with Nvidia for the GPU unless AMD really brings it to the table; the Nvidia software ecosystem has things I'm interested in beyond just the performance, CUDA being one of them, so I'll likely be snagging a 3090.
 
AMD has TSMC all to themselves. Stock "should" be strong.

Yes, stock is strong, but it is still bound by the rules of inventory lag.

Which means bulk buyers/system builders/AIB partners have to place an estimated demand with AMD, and AMD will then plan its order with TSMC with an appropriate buffer.

If there is sudden demand, like what happened with the 3080, PS5, and Xbox, there is no way AMD can cope, as there is a lag between placing a fresh order with TSMC and receiving the chips.

What AMD can do is juggle the orders around, rescheduling the higher-priority/higher-value chips and pushing the others back.

For example, as per RedGamingTech's sources (or guess?), AMD has prioritized the 6900 and 6800 over the 6700.

What this means is that AMD will likely release the cards priced ABOVE the 3080 and 3070 this year.
The cards equivalent to the 3070, 3060 Ti, or 3060 would likely be released from March to June of next year.
 
I'm more interested in the 5970X. I'm likely sticking with Nvidia for the GPU unless AMD really brings it to the table; the Nvidia software ecosystem has things I'm interested in beyond just the performance, CUDA being one of them, so I'll likely be snagging a 3090.

Everyone has their use case; I think people forget sometimes how different they can be.

I'm on X370 with no Zen 3 support, and I'm waiting for DDR5 before I buy another motherboard. Hell, I dropped to a 6-core when I upgraded. The 8-core lives in my media server, running Plex and converting some of my media to H.265. No need for CUDA or the extra VRAM; if there were a larger jump in raster performance, I'd consider the 3090.

Either way, lots of new hardware is coming out; it will be fun to see how it all shakes out over the next few months.
 
Everyone has their use case; I think people forget sometimes how different they can be.

I'm on X370 with no Zen 3 support, and I'm waiting for DDR5 before I buy another motherboard. Hell, I dropped to a 6-core when I upgraded. The 8-core lives in my media server, running Plex and converting some of my media to H.265. No need for CUDA or the extra VRAM; if there were a larger jump in raster performance, I'd consider the 3090.

Either way, lots of new hardware is coming out; it will be fun to see how it all shakes out over the next few months.

https://www.pcgamer.com/amp/a-linux-update-may-have-let-slip-amd-big-navis-mammoth-core/
 
No, "Navi 21", "Navi 22" etc are referring to the chip. Semi-conductors/chips are often named in descending order based on their size and place in the line up e.g. GP102 (Titan XP, 1080 ti), GP104 (1080, 1070, 1060), GP106 (1050 etc). The retail cards in which the chips sit will surely be called by ascending names e.g. 6600, 6600 XT, 6700,, 6700 XT, 6800...

Also keep in mind the design process.

21 would be the original design, nothing held back. 22 would be the follow-up design for a lower-tier chip based on the same arch.

For most GPUs (and chip designs in general), the first iteration is the full chip. After that come the cut-down versions, the power-optimized versions, etc.

As you say, NV and AMD are no different in this regard. Their newest designs that aren't a completely new arch are almost always scaled-down or optimized versions: lower-tier or "Super" variants, etc.
 
So... Big Navi reportedly has 5120 shaders, 128 ROPs / 320 TMUs, and a 256-bit bus, and the x70-class part has a 192-bit bus and 2560 shaders.

I can't believe that, unless the massive 128MB L3 cache turns out to be true.

And if it does, then we're talking a die the same size as GA102 to make this happen. The RDNA1 die is over 250 mm². Double that, keeping the same memory interface, and you get 450-ish mm².

And if you look at the Zen 2 die shots, the 2 × 16 MiB L3 cache takes up about 2 × 16.8 mm². Quadrupling that to 128 MiB requires 8 × 16.8 mm² ≈ 134 mm².

Adding that L3 cache on top of the compute die size puts RDNA2 dangerously close to 600 mm².
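Putting that back-of-the-envelope math in one place (all inputs are the rumored/approximate figures cited above, so treat the result as speculation, not a measurement):

```python
# Rough die-size sketch for Big Navi with a rumored 128 MiB cache, using the
# figures cited above (Navi 10 is ~251 mm^2; one Zen 2 16 MiB L3 slice is
# ~16.8 mm^2). This is back-of-the-envelope speculation, not a measurement.
doubled_compute_mm2 = 450                       # ~2x Navi 10's shaders, same memory interface
zen2_l3_slice_mm2 = 16.8                        # one 16 MiB L3 slice on the Zen 2 die shot
cache_mm2 = (128 // 16) * zen2_l3_slice_mm2     # 8 slices for 128 MiB -> ~134 mm^2

estimate_mm2 = doubled_compute_mm2 + cache_mm2
print(f"L3 area:       ~{cache_mm2:.0f} mm^2")      # ~134 mm^2
print(f"Estimated die: ~{estimate_mm2:.0f} mm^2")   # ~584 mm^2, uncomfortably close to 600
```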


Are we sure they didn't just go with the Xbox Series X solution (384-bit for 12GB and 512-bit for 16GB?) and keep their die size closer to 500 mm²?
 
And if you look at the Zen 2 die shots, the 2 × 16 MiB L3 cache takes up about 2 × 16.8 mm². Quadrupling that to 128 MiB requires 8 × 16.8 mm² ≈ 134 mm².

Adding that L3 cache on top of the compute die size puts RDNA2 dangerously close to 600 mm².
Maybe not, as some are saying the cache uses some kind of Infinity Fabric. Not chiplets, but a half-step.
 
Maybe not, as some are saying the cache uses some kind of Infinity Fabric. Not chiplets, but a half-step.

Fair, but then you might as well go HBM cache.

AMD uses Infinity Fabric to interconnect like parts to reduce real-world latency, but on a GPU, an L3 cache in a separate chip is kinda pointless. It just adds to your build costs for a part that will barely sell in the hundreds of thousands.

Intel already ditched the idea of eDRAM for their high-end integrated graphics, so it can't be a good idea for something even lower volume from AMD!

They will either (1) put this on the same chip, (2) just use a single HBM chip mapped to L3, or (3) just use the Xbox solution and go back to a 512-bit bus (320-bit as in the Series X/S).
 
Fair, but then you might as well go HBM cache.

AMD uses Infinity Fabric to interconnect like parts to reduce real-world latency, but on a GPU, an L3 cache in a separate chip is kinda pointless. It just adds to your build costs for a part that will barely sell in the hundreds of thousands.

Intel already ditched the idea of eDRAM for their high-end integrated graphics, so it can't be a good idea for something even lower volume from AMD!

They will either (1) put this on the same chip, (2) just use a single HBM chip mapped to L3, or (3) just use the Xbox solution and go back to a 512-bit bus (320-bit as in the Series X/S).
Yeah, I'm very curious to see. It seems like they wouldn't waste a huge die like that if it was completely memory-constrained and could have gotten by with a smaller die for the same performance. Looking forward to more information as well as real-world benchmarks.
 
Everyone's saying the mystery cache is cheaper than a wider bus, and while it's not necessarily HBM, that's kind of implied, too.

If they can make 192- and 256-bit busses work and it's cheaper, more power to them. Who cares why or how.
 
Everyone's saying the mystery cache is cheaper than a wider bus, and while it's not necessarily HBM, that's kind of implied, too.

If they can make 192- and 256-bit busses work and it's cheaper, more power to them. Who cares why or how.
Yeah, I hope it works out well, because that gives board manufacturers more leeway and a lot less routing to deal with :). If the performance is good, I don't care how it's done; we won't know until we see actual benchmarks, though. It should make AIBs happy (less complex boards and signals for them to worry about).
 
Fair, but then you might as well go HBM cache.

AMD uses Infinity Fabric to interconnect like parts to reduce real-world latency, but on a GPU, an L3 cache in a separate chip is kinda pointless. It just adds to your build costs for a part that will barely sell in the hundreds of thousands.

Intel already ditched the idea of eDRAM for their high-end integrated graphics, so it can't be a good idea for something even lower volume from AMD!

They will either (1) put this on the same chip, (2) just use a single HBM chip mapped to L3, or (3) just use the Xbox solution and go back to a 512-bit bus (320-bit as in the Series X/S).

The eDRAM used by Intel was slow; it's far inferior to the eSRAM counterpart found in the Xbox One. AMD is likely taking the memory controller off the die, much as they did with Ryzen, and with it they plan to include the cache. All of it will still be on the package, and having the I/O off the GPU will be much cheaper, as it wouldn't shrink much moving to 7nm, and they can manufacture it on a larger node, much as they are doing with Ryzen.
 
The eDRAM used by Intel was slow; it's far inferior to the eSRAM counterpart found in the Xbox One. AMD is likely taking the memory controller off the die, much as they did with Ryzen, and with it they plan to include the cache. All of it will still be on the package, and having the I/O off the GPU will be much cheaper, as it wouldn't shrink much moving to 7nm, and they can manufacture it on a larger node, much as they are doing with Ryzen.

This is probably the plan for RDNA3 (the I/O, not the cache, which scales well with process shrinks and needs to be close to the logic), but almost certainly not RDNA2.
 
The eDRAM used by Intel was slow; it's far inferior to the eSRAM counterpart found in the Xbox One. AMD is likely taking the memory controller off the die, much as they did with Ryzen, and with it they plan to include the cache. All of it will still be on the package, and having the I/O off the GPU will be much cheaper, as it wouldn't shrink much moving to 7nm, and they can manufacture it on a larger node, much as they are doing with Ryzen.
This would make sense. It also separates some parts, so if the I/O die is defective it doesn't take out an entire GPU, or vice versa. This helped them a lot with Ryzen to get better yields, so it seems like it could be a step in that direction. I guess time will tell if this is the case or something different, but it wouldn't surprise me.
 
This is probably the plan for RDNA3 (the I/O, not the cache, which scales well with process shrinks and needs to be close to the logic), but almost certainly not RDNA2.

RDNA3 will be full MCM. I expect RDNA2 will be a step toward that.
 
I don't think RDNA 3 will be MCM; probably the architecture after that will be. There's still plenty of life left in monolithic designs.
I think so, too, especially for APUs and embedded, but I've also heard it's gonna be full chiplet by RDNA3. Once we see how the "Infinity Cache" works we can make firmer assumptions, I'd wager.

It would be easy to crank out APUs, especially for mobile, that are monolithic, and then stacks for desktop graphics (and maybe discrete mobile, particularly for mobile workstation graphics).

Then again, it would be easy to maintain and market chiplets across the board with a vertical product pipeline...
 
Well, the same number of cards at half the demand is technically twice the supply...

That said, I'm sure AMD has a solid stack lined up; Lisa is a taskmaster, and she will see to it that heads roll if they botch this launch. Nvidia is less supply-constrained, as Samsung can keep a steady rollout to feed Nvidia; TSMC, on the other hand, works in bursts, so a steady supply is harder to maintain, though they can hit some solid numbers. I still think AMD has bitten off more than they should have for this winter, but I will be more than happy to be wrong on the subject.
 
Well, the same number of cards at half the demand is technically twice the supply...

That said, I'm sure AMD has a solid stack lined up; Lisa is a taskmaster, and she will see to it that heads roll if they botch this launch. Nvidia is less supply-constrained, as Samsung can keep a steady rollout to feed Nvidia; TSMC, on the other hand, works in bursts, so a steady supply is harder to maintain, though they can hit some solid numbers. I still think AMD has bitten off more than they should have for this winter, but I will be more than happy to be wrong on the subject.

There was a suggestion that AMD will prioritize the cards that are costlier than the 3080 and 3070 for this year.

The cheaper cards will be out by March or June of next year.
 
There was a suggestion that AMD will prioritize the cards that are costlier than the 3080 and 3070 for this year.

The cheaper cards will be out by March or June of next year.
Smaller volume, higher profit. And needed for console development... makes sense.
Unless AMD wants all the AAA titles for the new consoles built on Nvidia hardware...
 
There was a suggestion that AMD will prioritize the cards that are costlier than the 3080 and 3070 for this year.

The cheaper cards will be out by March or June of next year.
They'd be silly not to prioritize lower quantities at higher margins. This is why Nvidia put out the 3080/3090 first and not 3060s.
 
They'd be silly not to prioritize lower quantities at higher margins. This is why Nvidia put out the 3080/3090 first and not 3060s.
I think the reason Nvidia and AMD do this is that people who are on the fence about buying a high-end card are more tempted to do so if lower-end products aren't available right away. It also helps to put your best foot forward: you don't want to release the 3060 when AMD could be releasing a more powerful card. Sure, your card is the fastest for its price range, but nobody is paying attention when AMD's RX 6900 XT is the fastest card on the market. Consumers aren't very bright, and AMD/Nvidia know this.
 
I think the reason Nvidia and AMD do this is that people who are on the fence about buying a high-end card are more tempted to do so if lower-end products aren't available right away. It also helps to put your best foot forward: you don't want to release the 3060 when AMD could be releasing a more powerful card. Sure, your card is the fastest for its price range, but nobody is paying attention when AMD's RX 6900 XT is the fastest card on the market. Consumers aren't very bright, and AMD/Nvidia know this.
It's a lot of things. One: if you have limited quantity, why would you waste it on lower-end parts? Also, the high-end cards are designed first; the lower-end cards are then designed with lower specs and/or cut-down dies. And, as you said, there isn't as much excitement about the lower-end cards. They want to build hype with the high spec and then sell quantities of the low spec. I'm guessing Nvidia didn't have enough binned dies to release the 3090 first, so they released them in short succession. The fact that they put out a statement saying there would be limited quantities of 3090s helps bolster this idea.
 
They'd be silly not to prioritize lower quantities at higher margins. This is why Nvidia put out the 3080/3090 first and not 3060s.

The main reason to release higher-end cards first is volume. You simply don't need anywhere near the same volume of product for high-end parts because of the cost. By releasing high-end first, you get lower volume at higher margins, which lets you build up higher volumes of lower-margin products, so that when those products are released, supply meets demand, or at least comes closer to it. It's also about having time to ramp up production, especially if there are any production issues. It's easier and quicker to identify and fix production issues or deficiencies in more complex chips, so those issues don't affect, or are less likely to affect, the high-volume lines.

High-end cards are low-volume, high-margin items. However, the real money is made in the high-volume, lower-margin cards. These companies can afford some production issues in high-end lines because the volume is low to begin with. They cannot afford production issues in high-volume lines.
 
I've been looking to get an RTX 3080 since launch, but since I haven't even gotten a chance to try to purchase one, I guess it's another month of waiting to see what AMD has to offer. Maybe it pushes prices in consumers' favor. Competition is always good.
 
I've been looking to get an RTX 3080 since launch, but since I haven't even gotten a chance to try to purchase one, I guess it's another month of waiting to see what AMD has to offer. Maybe it pushes prices in consumers' favor. Competition is always good.


My local Best Buy has had the Founders Edition in stock for most of the weekend.

If you actually cared, you could have one by now. The uncut 3090 is, of course, out of stock everywhere.
 
My local Best Buy has had the Founders Edition in stock for most of the weekend.

If you actually cared, you could have one by now. The uncut 3090 is, of course, out of stock everywhere.

Wow, look at mister big shot over here.

I've been trying for the last two weeks with Distill and nada. My local Best Buy doesn't have jack shit. I called them on release day and they said they may have had a couple of cards, but those sold out immediately at opening. No stock has shown up since.
 
Wow, look at mister big shot over here.

I've been trying for the last two weeks with Distill and nada. My local Best Buy doesn't have jack shit. I called them on release day and they said they may have had a couple of cards, but those sold out immediately at opening. No stock has shown up since.
That's because Best Buy doesn't carry any 3080s in-store.

Haven't found anyone who said they bought one in-store, either.
 