Confirmed: AMD's Big Navi launch to disrupt 4K gaming

"No, AMD Infinity Cache Isn’t for RDNA 2 GPUs: It’s for Die-to-Die Interconnects" ~ says Areej of hardware times.

https://www.hardwaretimes.com/no-am...-interconnects/amp/?__twitter_impression=true


Today an AMD patent surfaced that describes what Infinity Cache really is. The important lines from the patent are as follows:

System-on-chip (SoC) architecture for use with central processing units (CPU) and graphics processing units (GPU), namely, SoC architecture that connects die-to-die, chip-to-chip, and socket-to-socket, used across different microprocessors to enable increased computing performance; network-on-chip, namely, technology that provides interfaces across microprocessor CPU and GPU cores, memory, hubs, and data fabric to enable microprocessor communications and increase computing performance and efficiency; microprocessor communication fabric, namely, data communication interconnect architecture responsible for collecting data, and command control interconnect architecture responsible for data sensor telemetry
AMD Patent
The Infinity Cache will be used in SoCs, namely designs with a CPU, GPU, and DRAM on the same package. The Infinity Cache will be used to improve the latency between the various dies, chips, and sockets. This makes it clear that IF won't be used on any Navi 2x GPU, including Big Navi, as it's a single-chip solution. Future GPUs with a chiplet (MCM) design may, however, see the use of this cache to reduce the latency penalty between the various GPU dies.
https://t.co/acLPZhsmpR?amp=1
 
This makes it clear that IF won't be used on any Navi 2x GPU, including Big Navi, as it's a single-chip solution.
Yeah I've heard that they're putting the cache on the die as well. Red Gaming said all of his sources are saying it's confirmed, whether or not it's on the die. I mean, they have to have something if they're not widening the bus or using faster memory.
 
Are you misremembering? The 5700XT was released last year, and it couldn't even compete in performance per watt with a 1080ti, a card released 3 years ago.

[Attached: three performance-per-watt comparison charts]

Knowing this trend, we can expect that AMD's next release will likely hit 20-series performance per watt.

You're missing the forest for the trees. Look at the 5700 (non-XT): it outperforms the 2060, its direct competitor, and is more efficient. Where all cards start to go off the efficiency curve is in pushing for the last few MHz the GPU is capable of. Navi is significantly more efficient than its predecessors. If RDNA2 allows for better efficiency, especially as Ampere seems to be somewhat less efficient, AMD could easily be ahead in performance per watt this generation.
 
If RDNA2 allows for better efficiency, especially as Ampere seems to be somewhat less efficient, AMD could easily be ahead in performance per watt this generation.

Rumor is that the reference edition of Big Navi will be more efficient than the Founders Edition of the RTX 3080, but AMD will give free rein to AIB board partners to push power and performance up on custom boards.
 
Just to rile up the NV boosters that may be around... has anyone considered that the board partners dropped the ball on the 3080 launch because they aren't that excited about being stuck with tons of stock when Big Navi is revealed?

I'm joking... all the rumors and theories are a lot of fun, though. I was disappointed by the big gap between the two announcements at first... but man, all the fun we would miss out on if they announced at the same time.
 
I await AMD's launch event, but I really want to see what they have to go up against the newly announced A6000; for me that is the card to beat, and AMD has really dropped the ball on the workstation and server segments to date. While their cards were/are price competitive, their power and heat make them a hard sell when compared against the Quadro/Tesla lineups, and their lack of AI capabilities to date has made them a complete non-option for a lot of applications. So outside of specific use cases where they work better in particular applications, the NVidia offerings were just generally better across the board.
 
Not surprising, considering that AMD abandoned the graphics market to save resources for Ryzen.
And my hope is that, now that Ryzen is on track, they can shift some resources back. I really want AMD to be able to hold their own here, but they have to give me a reason to buy into their platform, and that goes for developers as well. AMD might be in all the consoles, but all the developers for those consoles are running NVidia; AMD's hardware, software, and features have been a full generation behind for a long while, and it is going to take some time to win back those customers. I am skeptically rooting for AMD because I do want a strong set of options, and even if they can't take the performance crown, a solid offering would be welcome at this stage.
 
Oh, 100%. Even with Big Navi and the 6000 line, I don't expect AMD to beat Nvidia across the full product stack, and I wouldn't count it as a loss if they didn't. I do expect solid performance and price across the board, and for them to use the momentum and increased sales here to really challenge Nvidia in the next generation. Nvidia is not Intel; they haven't been sleeping for the past decade.
 
Nvidia is not Intel; they haven't been sleeping for the past decade.
Yeah, I mean the 2000 series wasn't great in terms of price/performance, but in terms of the features it introduced it was huge; it presented a lot of brand-new toys for developers, researchers, hobbyists, and gamers, and it has set the bar for this generation. For better or worse, it is going to be what everything is compared against for at least the next two generations.
 
and for them to use the momentum and increased sales here to really challenge Nvidia in the next generation
I think a lot of people are underestimating how big a role efficiency plays here. In hindsight, Nvidia telling people "you're gonna need a new PSU" should have been a bigger red flag. Sure, there are plenty of gamers who want the most, spare no expense, but even they're getting bit by the 3080 and 3090 power problems. Most gamers want something that "just works."

This is ten times more important on mobile, and I expect mobile gaming to be the next big battleground. If we have video cards capable of driving AAA titles with the dials turned up at 4K, people are going to want a laptop that can do the same at 1080p and run for more than a couple of hours.

Hardcore gamers want things to scale up, but the bulk of buyers want things to scale down, and from what we've heard so far, it looks like AMD's decisions have been very well laid out in advance.
 
This is ten times more important on mobile, and I expect mobile gaming to be the next big battleground.
100% agreed! The 3070 mobile looks promising, and my Dell rep has let me know they are announcing them shortly along with the G5 refresh. But I really do think that AMD is going to have a big advantage in mobile if they can get parts to the OEMs, which to date has been their biggest hindrance.
 
I got one of the new Ryzen laptops, with the 4500U I believe. CPU performance is good, but the APU is held back by its 512 MB frame buffer, so it tanks even at 1080p; with smaller window sizes it's not too bad. I didn't do much research; I was expecting a little more.
 
I got one of the new Ryzen laptops, with the 4500U I believe.
It doesn't help that the 4500U has both a 15W and a 25W variant, and there's no indicator of which is used unless you dig into the laptop's specifications; most of the time it's not even mentioned there either, and there's a pretty decent performance gap between the two parts.
 
Yeah, they kinda cut back on the graphics for the 4000 series; I don't know why. I have a 2700U and, granted, it's a 7-series APU, but it's impressive for what it is. Seems like a blown opportunity.
 
Yeah, they kinda cut back on the graphics for the 4000 series; I don't know why.
It's another case of them needing to hit a price point. The 4000 series gives the similarly priced Intel parts a good run, either keeping pace or just edging them out while staying price competitive, but it's completely counter to their marketing about gaming performance if all they are doing is keeping up with Intel in that regard.
 
Well that and they're trying not to compete with themselves on the 4700, 4800, and 4900, I think.

Still -- and I think this would necessarily be RDNA3 territory -- if they could make an APU that keeps pace with Xbox Series S performance, that would be the de facto APU for mainstream gamers. Not enthusiasts or anything. Just people who want enough to game.

Someone's going to do this. AMD is within spitting distance, but who knows if they'll be the company to make it happen. It could be Intel if their graphics are as good as some people say/hope. Nvidia will do it if the Arm deal goes through, but I can also see them setting up some kind of weird pseudo-console-PC hybrid with a walled garden. Bleh.

Make a laptop with the chops of an Xbox Series S, plus a desktop equivalent for OEMs, and the market will fall in line behind you.
 
It doesn't help that the 4500U has both a 15W and a 25W variant, and there's no indicator of which is used.
Intel is doing the same thing; it seems to be the going trend, which is crappy. They used to use different letters (U, H, G) to differentiate, but I guess if the same part can fill either slot, it's not really up to the CPU manufacturer to give it a name, as they don't know at the point of sale whether it's going to be 15W or 25W. The laptop or tablet manufacturer is the one that knows what it's going in, and is happy to let the consumer buy a 15W part thinking they are getting the full 25W 4500U. It's unfortunate, but it's where we are with both parties now.
 
Intel is doing the same thing; it seems to be the going trend, which is crappy.
They release a CPU, tell the manufacturers that it can operate over X frequency range at Y power settings, and let them build it into a device accordingly based on its ability to dissipate heat and its battery life requirements. Really, the rest is on the consumer to take a look at benchmarks and reviews, which should be standard procedure at this point, but it's still dirty pool in my book.
 
There also seems to be a patent filing (courtesy of an overclockers.co.uk forum post):

https://www.overclockers.co.uk/forums/threads/rdna-2-128mb-infinity-cache-rumours.18899110/

There's some rumours going around about a 128MB Infinity Cache.

I'd say it's very likely to be correct; a patent was published today from AMD called 'ADAPTIVE CACHE RECONFIGURATION VIA CLUSTERING', link here:
https://www.freepatentsonline.com/y2020/0293445.html

Basically, what I think it will allow is greater effective L1 cache capacity for GPUs. It does this by dynamically creating clusters of 'a plurality of compute units'. The dynamically created clusters allow a 'backup' pool of shared L1 cache to be accessed if the L1 cache of a CU (Compute Unit) isn't sufficient.

L1 cache is the fastest (lowest-latency) type of GPU cache, so the more L1 capacity there is, the less the slower L2 and L3 caches have to be used.
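To make the clustering idea concrete, here's a toy sketch in Python of how a clustered-L1 lookup could work. Everything in it (cluster size, the latency numbers, the fill policy) is my own guess to illustrate the concept, not AMD's actual design:

```python
# Toy model of the clustered-L1 idea. All names, sizes, and latencies
# here are illustrative assumptions, not AMD's implementation.

LOCAL_L1_LATENCY = 1    # hit in the CU's own L1 (hypothetical cycles)
REMOTE_L1_LATENCY = 4   # hit in a sibling CU's L1 within the cluster
L2_LATENCY = 20         # miss in the whole cluster, go to shared L2

class ComputeUnit:
    def __init__(self, cu_id):
        self.cu_id = cu_id
        self.l1 = set()  # addresses currently resident in this CU's L1

class Cluster:
    """A dynamically formed group of CUs that share their L1 slices."""
    def __init__(self, cus):
        self.cus = cus

    def load(self, cu, addr):
        # 1. Fastest path: the CU's own L1.
        if addr in cu.l1:
            return LOCAL_L1_LATENCY
        # 2. The 'backup' pool: a sibling CU's L1 in the same cluster.
        #    The line is served remotely and NOT copied into cu.l1;
        #    avoiding that replication is where the capacity win is.
        for sibling in self.cus:
            if sibling is not cu and addr in sibling.l1:
                return REMOTE_L1_LATENCY
        # 3. Cluster-wide miss: fetch from L2 and fill the local L1.
        cu.l1.add(addr)
        return L2_LATENCY

cluster = Cluster([ComputeUnit(i) for i in range(4)])
cu0, cu1 = cluster.cus[0], cluster.cus[1]
print(cluster.load(cu0, 0x100))  # 20: cold miss, filled into cu0's L1
print(cluster.load(cu1, 0x100))  # 4: served from cu0's slice, no duplicate
print(cluster.load(cu0, 0x100))  # 1: local hit
```

The point of step 2 is that a line already living somewhere in the cluster isn't duplicated into every CU's L1, so the combined slices behave like one bigger L1, which (as I read it) is the deduplication the paper is measuring.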


There's a possibility that the Infinity Cache may be related to a patent that AMD filed last year on Adaptive Cache Reconfiguration Via Clustering. Subsequently, the authors published a paper on the topic. It talks about the possibility of sharing the L1 caches between GPU cores.

Traditionally, GPU cores have their own individual L1 cache, while the L2 cache is shared among all the cores. The suggested model proposes that each GPU core be allowed to access the other cores' L1 caches. The objective is to optimize cache utilization by eliminating the data replicated in each slice of the cache.

The results are pretty amazing. Across a suite of 28 GPGPU applications, the new model improved performance by 22% on average (up to 52%) and energy efficiency by 49%. In the worst case, performance drops by 4%, and the area overhead is 0.09 mm²/core.

https://www.tomshardware.com/news/a...e-big-navis-rumored-mediocre-memory-bandwidth
 
Last edited:
3 SKUs rumored for RDNA 2

  1. 275W - 80 CU @ 256-bit (16 GB) ~= RTX 3080
  2. 225W - 40 CU @ 192-bit (12 GB) ~> RTX 3070
  3. 150W - 32 CU @ 128-bit (8 GB) ~< RTX 2080 Ti
EDIT:
custom 80 CU SKUs from AIBs likely to have unlimited power targets, targeting 3090 performance (& price)??

https://twitter.com/3DCenter_org/status/1313659345721200640?s=20

(So what happened to the 60 CU SKU? Did the short price gap between the RTX 3080 and RTX 3070 scupper AMD's plan for a 60 CU SKU, or was it never in the works?)
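For context on why everyone keeps pairing these bus widths with the big-cache rumor, here's the back-of-the-envelope bandwidth math. The 16 Gbps GDDR6 speed is my assumption, not part of the rumor:

```python
# Peak GDDR6 bandwidth in GB/s = (bus width in bits / 8 bits per byte)
# * per-pin data rate in Gbps. 16 Gbps is an assumed memory speed.
def gddr6_bandwidth_gbs(bus_width_bits, data_rate_gbps=16):
    return bus_width_bits / 8 * data_rate_gbps

for bus in (256, 192, 128):
    print(f"{bus}-bit: {gddr6_bandwidth_gbs(bus):.0f} GB/s")
# 256-bit: 512 GB/s, 192-bit: 384 GB/s, 128-bit: 256 GB/s
```

512 GB/s on the top SKU against the RTX 3080's 760 GB/s (320-bit GDDR6X at 19 Gbps) is exactly the gap a large on-die cache would have to cover, which is what the Tom's Hardware piece above is getting at.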
 
3. 150W - 32 CU @ 128-bit (8 GB) ~< RTX 2080 Ti
If true, that 150W part around the 2080 Ti performance point is going to be a game-changer if they can have sufficient numbers at launch and a good price. It would be a part that guarantees any 1080p user max settings, and probably the same for 1440p (the bulk of the monitors out there), and if priced appropriately it would allow somebody to build a decent budget rig that could support a 4K monitor down the line once those come down in price a bit. Unless NVidia counters it, that will be my go-to card for most of the PC builds I recommend to people, as they tend to want to spend around 800 CAD, sometimes including a monitor.
 

[Attachment: 10284640_575px.jpg]
From what I can tell it is a tad bit slower than the RTX 3080? Does that sound about right? Just need to know the pricing now.
Well, there's lots more we need to know. For example, we don't know the SKU or the benching system. It's just a teaser; wait till the 28th.
 
Doesn't help that directly comparing numbers like that is a really bad idea. Way too many variables.
110% agree. That was the point of my post. One review, the one Master_shake_ posted, had a completely different frame rate than the one I posted. It's why I recommended waiting for reviews.
 
I'm excited. 3080 performance with 16GB of RAM at a good amount less power would be nice, and then AIBs will produce cards next year with higher power limits and probably reach towards 3090 numbers, hopefully with just a bit of binning and a slightly higher TDP. I am hoping this doesn't more than double the price ;).
 
then AIBs will produce cards next year with higher power limits and probably reach towards 3090 numbers
With Nvidia blowing the top off the power ceiling for GPUs, I'll be surprised if someone doesn't make a 6000-series with an AIO clocked like mad. That's just a gut feeling, but it's based on the rumors that the lower-end Big Navi cards will have higher clocks than the top tier.

I'm also looking forward to ending the debate on what is, isn't, or might be the difference between Big Navi and Biggest Navi, and what parts they are actually talking about.
 
Am I the only one who cares if Big Navi uses the new 12-pin power plug? If AMD ever wants to be viewed as an innovator, they've got to adopt new ideas.

Honestly, I hope they use a 13-pin connector, since they are announcing near Halloween.
 
With Nvidia blowing the top off the power ceiling for GPUs, I'll be surprised if someone doesn't make a 6000-series with an AIO clocked like mad.
I just want to know if I should return my 3090 or not. :ROFLMAO: Return window ends on Oct 30th.
 
Lisa Su showed Big Navi in her hand, but which card did the result come from? :)
So there's a chance there's more GPU power to come.
 