Navi Rumors v2.0

Maybe they plan to use Infinity Fabric to make a dual-GPU card on one board, since the bandwidth available from PCIe 4.0 = Godzilla

Perhaps for niche content creation, but the lack of current game dev support for multi-GPU makes that application pretty pointless.

[that can and probably will change, but we have no idea when...]
 
Maybe they plan to use Infinity Fabric to make a dual-GPU card on one board, since the bandwidth available from PCIe 4.0 = Godzilla
I just don't see AMD releasing another dual-GPU gaming card for its consumers; it's easier for a consumer to buy two GPUs than for AMD to spend unnecessary resources designing a dual-GPU card. Maybe for the pro market, but AMD didn't release a dual-GPU Instinct.
 
I just don't see AMD releasing another dual-GPU gaming card for its consumers; it's easier for a consumer to buy two GPUs than for AMD to spend unnecessary resources designing a dual-GPU card. Maybe for the pro market, but AMD didn't release a dual-GPU Instinct.

The thing is, the HBM layout makes putting two (or more) GPUs on a single card significantly easier, at least until you have to cool them. And that could be done with a 240mm AIO with the GPU blocks in parallel.

The main remaining issue is the software...
 
The thing is, the HBM layout makes putting two (or more) GPUs on a single card significantly easier, at least until you have to cool them. And that could be done with a 240mm AIO with the GPU blocks in parallel.

The main remaining issue is the software...

It does not have to be a dual-GPU board; I would just like to see more hybrid AIO-cooled AMD GPUs...

Where is my 2080 Ti-killing Radeon 5900 with an integrated 240mm AIO...?!?
 
It does not have to be a dual-GPU board; I would just like to see more hybrid AIO-cooled AMD GPUs...

Same! I love the one on my 1080Ti.

Where is my 2080 Ti-killing Radeon 5900 with an integrated 240mm AIO...?!?

Two years into the future, and hopefully catching up to whatever replaces the 2080 Ti, because it'll be that time, and hopefully with DXR support in hardware and a mature software stack too.

Between then and now, AMD doesn't seem to have much, sadly.
 
https://videocardz.com/80883/amd-radeon-rx-5700-navi-series-feature-225w-and-180w-skus

All three custom cards that were showcased by ASRock at Computex are dual 8-pin designs. This puts them in Radeon RX 590 territory, which already had a TBP of 225W. The RX 590 features the third generation of Polaris architecture.

AMD Radeon RX 5700 series was demoed during a pre-Computex press conference. According to the numbers provided by the manufacturer, RX 5700 were on average 10% faster than NVIDIA’s GeForce RTX 2070 in a game called Strange Brigade.

[Image: ASRock Radeon RX 5700 Navi card]
 
I don't know what "Strange Brigade" is, but I'm going to buy a Navi, pick up a copy of that game, and ima gonna rock it. ;)

Okay, seriously: June 10th is coming up soon. It'll be good to see what AMD has for their release. It'll be better to see what the reviews show after Navi has been tested.
 
Oh, I got it all, just need the GFX card :)
Though in all honesty I did go overboard on the CPU and the motherboard.
AND! 32 GB more RAM.
AND! one of those new faster M.2 drives, as my computer booting in 16 seconds is too slow.
 
It'd be cool if the rumors were a little wrong and these cards are actually decent; I just don't see it. They have to fully get off of GCN and move on, and I think they know it. I'm really looking forward to Zen 2, but I don't think picking between a mid-range 20-series Nvidia card and Navi will be a hard choice, unfortunately.
 
It'd be cool if the rumors were a little wrong and these cards are actually decent; I just don't see it. They have to fully get off of GCN and move on, and I think they know it. I'm really looking forward to Zen 2, but I don't think picking between a mid-range 20-series Nvidia card and Navi will be a hard choice, unfortunately.

I don't get the GCN hate. As a pure raster uarch, it's efficient and flexible to purpose. The gaming cards, which unfortunately topped out at the RX 590, are as competitive as AMD decided to make them. Their compute cards are well respected. We can attribute more of AMD's product-line gaps to their business decisions than to GCN.
 
I don't get the GCN hate. As a pure raster uarch, it's efficient and flexible to purpose. The gaming cards, which unfortunately topped out at the RX 590, are as competitive as AMD decided to make them. Their compute cards are well respected. We can attribute more of AMD's product-line gaps to their business decisions than to GCN.


I don't hate them, they're just not as good as the competition. They were back in the 200-300 series days, but it's been a long time since then. I loved my Fury Nitro, great card.
 
It'd be cool if the rumors were a little wrong and these cards are actually decent; I just don't see it. They have to fully get off of GCN and move on, and I think they know it. I'm really looking forward to Zen 2, but I don't think picking between a mid-range 20-series Nvidia card and Navi will be a hard choice, unfortunately.

The problem is that, over the years, even if they had had anything other than GCN they would still have had to deal with a small R&D budget; the name does not matter, the budget does.

In the end they would always have moved on, but let's just say the move would have come a good deal sooner if they had had the budget. And that is more or less the story of how AMD has handled graphics.
https://www.anandtech.com/show/12363/amd-reassembles-rtg-hires-new-leadership

That was the most important thing: RTG now has a decent budget and new people, and hopefully the things that caused serious problems, like scaling to higher frequencies without needing a lot of power, are things of the past.

Even if in a few days' time we hear about Navi at E3, the design is rumoured to still have those same scaling problems.
 
I don't get the GCN hate.

I also don't get the GCN hate.

The problem is that, over the years, even if they had had anything other than GCN they would still have had to deal with a small R&D budget; the name does not matter, the budget does.

Oh, but they have. And they've iterated at the exact same pace Nvidia has:

2011 - Cape verde = GCN 1.1
2013 - Bonaire = GCN 1.2
2014 - Tonga = GCN 1.3
2016 - Polaris = GCN 1.4
2017 - Vega = GCN 1.5
2019 - Navi = GCN 1.6 (?)

2010 - Fermi = NV 1.0
2011 - Fermi (refresh) = NV 1.1
2012 - Kepler = NV 1.2
2014 - Kepler (refresh) = NV 1.3
2015 - Maxwell = NV 1.4
2017 - Pascal (refresh) = NV 1.5
2019 - Turing = NV 1.6

So, in the past 8 years, since 2011, they both have released 6 architectures. Some are bigger jumps (Bonaire, Polaris, Navi or Kepler, Maxwell, Turing), others are refreshes, but they've iterated very, very similarly. The only difference, is that AMD doesn't just communicate the architecture's name, it gives it an order in their history: GCN 1.1, 1.2, etc. Nvidia doesn't call them that, they just give you the architecture name, but make no mistake, you could just as well call those NV 1.1, 1.2, 1.3, etc after Fermi reset the whole architecture. Fermi was Nvidia's GCN (some would even say Geforce 8 was, but I'd say Fermi brought an overhaul to the whole thing).

It baffles me when people say, oh Polaris was just a small upgrade, from GCN 1.3 to 1.4, like that means anything. I bet most people don't even bother to read the architectural deep dives that the likes of Anandtech publish with each new GPU generation. GTX 500 and 700 did basically nothing to 400 and 600 series. Nvidia doesn't create a new architecture from the ground up each year, it takes years to do so (close to a decade actually, like we saw from Geforce 256 to 8000 and now Turing, which is not exactly the same case but DXR hardware is inaugurating a new architecture).

You can think of Navi as GCN 1.6, kind of, or you can think of it the way AMD desperately wants you to think of it: the next GCN, as in, the next decade of microarchitecture. It's not fully RDNA, just like the Geforce 8 series wasn't fully the architecture that Nvidia then iterated on during the 2010s. Fermi was the full "new base". Navi is their Geforce 8 equivalent moment: it sounds like a hybrid of GCN (aka we need this for now so things keep working and we don't break everything), and the next architecture (Arcturus?) will be their Fermi equivalent. You could say their Turing equivalent, but Turing is Nvidia's Geforce 8 moment, not Fermi. Turing is not their full new base, but a hybrid - which you can clearly see when they tell you here are the regular raster cores, here is the new RT stuff. Hybrid. Turing and "Super" (whatever that ends up being) are middle steps. 2010 brought us the last big jump in architectures, 2020 will do the same. It is pretty obvious when you look at history.

AMD's architectures have been less stellar than Nvidia's for sure over the past few years, but as Pieter3dnow says, they've done so with pennies in R&D, which is already pretty amazing on its own - but does nothing to make them actually competitive. Frustratingly, Navi is now expected to come in at 2070 levels, and I'm already reading people saying - meeeh, it doesn't topple the 2080 Ti. Who the F cares, especially when we're talking about generational differences, not absolute performance - which, let's recall, is an infinitesimal minority of the market. This is reminiscent of my brother, who, when he has no arguments to counter something, comes out of nowhere with something that has nothing to do with the issue, as if that is a valid counterpoint. The only solution is to ignore my brother, but you can't do that in the market, where appearance drives marketing, which drives marketshare.

So, bottom line:
Has AMD iterated just like Nvidia over the past decade? Yes.
Do GCN versions differ among each other just like Nvidia's architecture versions? Yes.
Do GCN small numbering changes reflect the generational jump? No.
Does getting the number 1 spot in performance say anything about each generational step? No.
 
If Navi is 250 mm^2 and competes with a 455 mm^2 2070 or a 545 mm^2 2080, that is a win for AMD.
My question is, does AMD plan to make a big Navi? I think the answer is no.
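
To put that area argument in rough numbers, here is a quick back-of-the-envelope sketch. It just uses the die sizes quoted above and assumes, per the rumor, that performance roughly matches the 2070; that is an assumption, not a measurement.

```python
# Back-of-the-envelope die-area comparison (sizes as quoted in the post above).
navi_mm2 = 250       # rumored Navi die size
rtx_2070_mm2 = 455   # figure used in the post above
rtx_2080_mm2 = 545

for name, area in [("RTX 2070", rtx_2070_mm2), ("RTX 2080", rtx_2080_mm2)]:
    print(f"Navi die would be {navi_mm2 / area:.0%} the size of the {name} die")
# -> ~55% of a 2070 and ~46% of a 2080, before accounting for the process
#    difference (7nm vs 12nm), which is responsible for much of that gap.
```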
 

ehhhmm... I'm sorry, but everything said in that wall-o-text is just wrong in every sentence.. GCN history goes this way (without going really technical or it will end up as another wall-o-text):

GCN 1.0 in 2012 started with:
*Tahiti (7900 series.. )
*Pitcairn (7800 series)
*Cape Verde (7700 series)

GCN 1.1 in 2013:
*Hawaii (R9 290 Series)

This was a minor modification to the uarch itself, but it was when they moved GCN to a scalable architecture, as everything was grouped and treated as shader engines.. with a dedicated geometry processor (the biggest bottleneck in Tahiti) per shader engine.. it was also when they added the newer PCI-E controller and DMA controller (for the bridgeless Xfire functionality). Everything else was just bruteforcing Tahiti into bigger numbers.. with crippled FP64 performance on the consumer parts..

GCN 1.2 in 2014
*Tonga (r9 285 and r9 380 series)

Again this added minor modifications to GCN; in fact Tonga only added a primitive delta color compression to further enhance their crappy geometry performance, and gained efficiency over a Tahiti of the same specs by using a narrower 256-bit bus and reducing the total memory from 3GB to 2GB.. in the end that's it, Tonga was an improved Tahiti (which sometimes performed even worse than Tahiti btw)..

GCN 1.2 refreshed in 2015.
*Fiji (Fury series..)

History repeats, short answer: bruteforcing of Tonga in huge numbers. Fiji added nothing beyond HBM and a dedicated hardware scheduler for async..

GCN 1.3 in 2016, this one is interesting..
*Polaris… it's just another refresh of Tonga ported to 14nm FinFET with more modern tech: support for HDR, HDMI 2.0, an updated memory controller, and updated geometry processor and delta color compression engines. Aside from that everything was exactly as Tonga/Fiji, and in fact the GCN ISA was identical to Hawaii.. the biggest changes with Polaris came via driver optimizations..

and... VEGA..

Vega was again another refresh; it is in fact just a refined Fiji, ported to 14nm and to Infinity Fabric, with more L2 cache and an updated memory controller.. every gain in performance on Vega came from the extra-long pipeline versus Fiji; damn, they wasted almost 4B transistors just to lengthen the pipeline to achieve those clocks, 1.7GHz (versus 1.05GHz in Fiji)… aside from that, it was tested and proved that Vega at the same clock as Fury performed exactly the same.. not worse, not better, just the same.. in fact a lot of "features" were never enabled by AMD, such as Rapid Packed Math, primitive (shader) discard, tile-based rasterization, and HBCC never worked.

About Nvidia GPUs.. damn, you should read every review of Nvidia GPUs since Fermi up to this date to refresh your knowledge; everything is just wrong. The changes from Fermi to Kepler were HUGE, they were totally different; Kepler was such a dramatic architecture that it allowed Nvidia to use their mid-range dies (GK104) to compete in the high-end market, and their low-end dies (GK106) to compete as mid-range, so much so that the mid-range 660/660 Ti it launched was able to outperform the older GTX 580 even being a truly low-end die.. the changes from Kepler to Maxwell were on yet another level in performance/efficiency, again very different to Kepler… agreed on Pascal being basically a refresh of Maxwell.. however Volta/Turing, even sharing A LOT of the architecture, are at the same time pretty much different architectures.. as different as what GCN was for AMD and still is up to the VII.. so to make it short, has AMD since 2012 launched a SINGLE architecture, modernized and refined in each revision? Yes... but it is the same GCN..
 
ehhhmm... I'm sorry, but everything said in that wall-o-text is just wrong in every sentence..
*SNIP*
so to make it short, has AMD since 2012 launched a SINGLE architecture, modernized and refined in each revision? Yes... but it is the same GCN..

A) Thank you for the corrections on the AMD timeline. I've been a Nvidia user for most of my gaming life, so I went with what I remembered plus the dates Wikipedia had.

However,

B) It's delusional to say that AMD had one architecture while Nvidia somehow has more differentiated architectures. I've been following Nvidia for the past 20 years, literally since the Geforce 256 came out in 1999. I can tell you you're 100% flat out wrong in assuming AMD has somehow had less architectural development than Nvidia. Proof of fact:

B1 - Anandtech's review of Fermi:

[screenshot: Anandtech's Fermi architecture review]

B2 - Anandtech's review of Kepler:

[screenshot: Anandtech's Kepler architecture review]

B3 - Anandtech's review of Maxwell:

[screenshot: Anandtech's Maxwell architecture review]

And finally Pascal basically shrinks Maxwell. Of course, the definition of "big" and/or "small" changes to an architecture depends on what we're looking at. However, saying that AMD barely changed their architectures in 8 years while Nvidia has done so repeatedly is beyond ignorant.

Navi is not, in any way, any smaller an architectural change than Fermi to Kepler, Kepler to Maxwell, Maxwell to Pascal or Pascal to Turing. They are ALL incremental changes to the same basic architecture. Nvidia has not changed anything massive since Geforce 8800. Notice Anandtech's evaluation of the G80 architecture:

[screenshot: Anandtech's G80 architecture review]

Notice, again from Anandtech's review (for years they really were the only decent place to find architecture commentary), the small changes from G80 to GT200:

[screenshot: Anandtech's GT200 architecture review]

Note their wise mention of Nvidia's - and AMD's - strategy: modular architectures that are partially updated throughout the years.

Now let's look at AMD's architectures:

1) Anandtech's review of GCN 1.0:

[screenshot: Anandtech's GCN 1.0 architecture review]

2) Anandtech's review of GCN 1.1:

[screenshot: Anandtech's GCN 1.1 architecture review]

3) Anandtech's review of GCN 1.2:

[screenshot: Anandtech's GCN 1.2 architecture review]

4) Anandtech's review of GCN 1.3 / GCN 4, updated naming that's finally more logical and less confusing numbering wise:

[screenshot: Anandtech's GCN 1.3 / GCN 4 architecture review]

So, judging from the past 13 years of GPU history I just demonstrated to you, you can see that we really only get an actually new architecture every decade or so. You can also see that the reviews of Nvidia and AMD's architectures sound very similar as to how they both do iterative upgrades and changes, with architectural overhauls happening really no more than once per decade. Last one we saw was unified shaders. Raytracing is poised to be the next big architectural shift. Turing started this with a hybrid design, RDNA is about to do the same. Thus, claiming that Nvidia somehow magically does bigger updates than AMD to their architecture is inaccurate. When you look beyond the 10 feet in front of your face and analyze these companies' histories, you see a very similar pattern of upgrades and architectural development, with each company leapfrogging the other constantly on a multi-year cadence.

You don't really need to agree with me or not - I just quoted more than a decade of analysis that proves my point. Do with it what you will.
 
So, judging from the past 13 years of GPU history I just demonstrated to you, you can see that we really only get an actually new architecture every decade or so.
*SNIP*
Awesome summary.

The issue with GCN bashing is that the architectural changes, however big or small they were, do not seem to do anything that is strongly reflected in performance or features for an end user like myself. For example, the Fury X has nearly identical performance to a Vega 64 when core clocks are the same and memory speed is set to give the same bandwidth, yet there were supposed to be tons of improvements in Vega, not to mention Vega took more than two years of development and has a few billion more transistors. 40% more to be exact... it is hard to imagine how they wasted so many transistors for nearly zero performance improvement... In comparison, Nvidia's architectural improvements are clearly visible in how the chips perform, and each new revision seems like a brand new thing even if it shares a lot with the previous generation.

Turing is definitely not a new architecture. It looks very much like an evolved Volta which also grew RT cores. But hey, the changes to the CUDA cores in Turing that enable concurrent INT/FP execution, reduced-precision operation and variable rate shading seem to be a far bigger improvement than anything AMD has done to their chips in years of GCN development. It might not be true, but it seems like that. Maybe this is only marketing, but even then it only shows AMD has bad marketing...
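
For reference, the transistor claim above checks out against the commonly cited public figures (8.9 billion for Fiji, 12.5 billion for Vega 10); here is a quick sketch of the arithmetic, with those counts taken from public spec sheets rather than from this thread.

```python
# Fiji vs Vega 10 transistor budget, using the commonly cited public figures.
fiji_transistors = 8.9e9     # Fury X (Fiji)
vega10_transistors = 12.5e9  # Vega 64 (Vega 10)

extra = vega10_transistors - fiji_transistors
print(f"Vega 10 adds {extra / 1e9:.1f}B transistors, "
      f"{extra / fiji_transistors:.0%} more than Fiji")
# -> ~3.6B extra transistors, ~40% more, largely spent on the longer pipeline
#    that gets Vega to ~1.5-1.7 GHz versus Fiji's ~1.05 GHz.
```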
 
. . .

Vega was again another refresh; it is in fact just a refined Fiji, ported to 14nm and to Infinity Fabric, with more L2 cache and an updated memory controller.. every gain in performance on Vega came from the extra-long pipeline versus Fiji; damn, they wasted almost 4B transistors just to lengthen the pipeline to achieve those clocks, 1.7GHz (versus 1.05GHz in Fiji)… aside from that, it was tested and proved that Vega at the same clock as Fury performed exactly the same.. not worse, not better, just the same.. in fact a lot of "features" were never enabled by AMD, such as Rapid Packed Math, primitive (shader) discard, tile-based rasterization, and HBCC never worked. . .

..
Rapid Packed Math works, primitive (shader) discard works per title, tile-based rasterization? And HBCC has always worked - no idea where you got those conclusions from. HBCC just does not serve that much purpose for gamers when you have enough installed VRAM to begin with.
 
If Navi is 250 mm^2 and competes with a 455 mm^2 2070 or a 545 mm^2 2080, that is a win for AMD.
My question is, does AMD plan to make a big Navi? I think the answer is no.

From a pure rasterization perspective it's still noticeably behind Nvidia. An OC'ed GTX 1080 should match this RX 5700 in performance, and that's just a 314 mm^2 die on 16nm that sips power (avg 165W gaming). If you die-shrunk the GTX 1080 to 7nm it would use significantly less power than the RX 5700 and the die would be significantly smaller.
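
For what it's worth, here is a very rough sketch of that hypothetical shrink. It assumes a 2x-3x logic density gain from 16nm to 7nm, which is a marketing-style best case; SRAM and analog scale far worse, so a real shrink would land somewhere in between, or above, the numbers below.

```python
# Very rough "GTX 1080 shrunk to 7nm" thought experiment.
# ASSUMPTION: 2x-3x density gain from 16nm to 7nm; real designs scale worse.
gp104_16nm_mm2 = 314
navi10_mm2 = 250  # rumored Navi die size, for comparison

for density_gain in (2.0, 3.0):
    shrunk = gp104_16nm_mm2 / density_gain
    print(f"At {density_gain:.0f}x density: ~{shrunk:.0f} mm^2 "
          f"vs ~{navi10_mm2} mm^2 rumored for Navi")
# -> somewhere between ~105 and ~157 mm^2 under these assumptions, which is
#    the point being made about the remaining efficiency gap.
```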
 
A) Thank you for the corrections on the AMD timeline.
*SNIP*
B) It's delusional to say that AMD had one architecture while Nvidia somehow has more differentiated architectures.
*SNIP*

You forgot to look at one little thing (but rather important).
The relationship between CUDA cores and SM
Tesla -> Fermi: From 8 CUDA cores per SM to 32 CUDA cores per SM.
Fermi -> Kepler: From 32 CUDA cores to 192 CUDA cores per SM, load/store/special units doubled in size (and not to forget: unified clock on die).
Kepler -> Maxwell: From 192 CUDA cores to 128 CUDA cores per SM, double the number of SMs compared to Kepler (and a LOT of rework in the SM structure).
Maxwell -> Pascal: Still 128 CUDA cores per SM, shared resources in the SM: 96KB shared memory, the instruction cache, 4 FP64 CUDA cores and 1 FP16x2 CUDA core. PolyMorph Engine moved out of the SM.
Pascal -> Turing: From 128 CUDA cores to 64 CUDA cores per SM, new L1 cache, unified memory subsystem, 2 SMs per TPC, 6MB of L2 (up from 3MB), RT cores.. and a lot of other reworks behind the scenes.

NVIDIA's SMs are vital to understanding their architecture...

Now if I look at AMD...when did they do something else than: 1 CU = 64 shaders + 4 TMU's?
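
To make the granularity point concrete, here is a tiny sketch using the per-SM counts listed above; the 2560-core total is just an arbitrary example size, not any particular product.

```python
# CUDA cores per SM across generations (numbers taken from the post above).
cores_per_sm = {
    "Fermi":   32,
    "Kepler": 192,
    "Maxwell": 128,
    "Pascal":  128,
    "Turing":   64,
}

total_cores = 2560  # arbitrary example GPU size
for arch, per_sm in cores_per_sm.items():
    print(f"{arch:8}: {total_cores // per_sm:3} SMs of {per_sm} cores each")
# GCN, by contrast, kept 1 CU = 64 shaders + 4 TMUs across this whole period,
# so the scheduling granularity never changed the same way.
```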
 
Class it as rumor:
https://forum.beyond3d.com/threads/...nd-discussion-2019.61042/page-34#post-2072062

Alright, got some info about Navi, don't ask about the source, but it's reliable as hell, and I trust it implicitly.

The highest SKU launching will be named RX 5700 XT, 40CU, 9.5TFLOPS, 1900MHz max clocks, with 1750MHz being the typical gaming clock. Power delivery is through 2X 6pin connectors.

The thing I have the hardest time believing is dual 6-pin power...

These days hardly anyone bothers with 6-pin connectors anymore.
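
Both halves of that rumor can be sanity-checked with napkin math: GCN-style FP32 throughput is CUs x 64 shaders x 2 ops per clock x clock, and a PCIe slot plus two 6-pin connectors has a nominal 225W ceiling. A quick sketch using the rumored numbers quoted above (the 64 shaders per CU is an assumption carried over from earlier GCN parts):

```python
# Sanity check of the rumored RX 5700 XT figures quoted above (rumors, not specs).
cus, shaders_per_cu, clock_ghz = 40, 64, 1.9   # 64 shaders/CU assumed, as in prior GCN parts
tflops = cus * shaders_per_cu * 2 * clock_ghz / 1000  # 2 FP32 ops per clock (FMA)
print(f"Peak FP32 at {clock_ghz} GHz: {tflops:.2f} TFLOPS")  # ~9.73, close to the rumored 9.5

# Nominal power ceiling implied by the rumored connector layout.
slot_w, six_pin_w = 75, 75
print(f"Slot + 2x 6-pin budget: {slot_w + 2 * six_pin_w} W")  # 225 W nominal
```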
 
You forgot to look at one little thing (but rather important)....
Now if I look at AMD...when did they do something else than: 1 CU = 64 shaders + 4 TMU's?

Actually, you would be incorrect in assuming I forgot about Nvidia's SM changes. You either misread my post or didn't bother with it at all, because, and I quote again:

[screenshot: Anandtech's GT200 architecture review]

Note their wise mention of Nvidia's - and AMD's - strategy: modular architectures that are partially updated throughout the years.

I'm not going to chew the information again because I have better things to do, but to prove my point:

1) Transforming and lighting was an actual new architecture in 1999 with Geforce 256. No longer done in software and requiring specific hardware, this originated what we now call a GPU. See here.
2) Unified shaders were a new thing in 2006 with Geforce 8800. What you see in every Nvidia architecture until Pascal is, from a panoramic point of view, alterations on the same thing. See here.
3) RT and Tensor cores were a new thing in 2018 with the Geforce RTX 2080. Neither was present in anything prior from Nvidia, and they were a completely new layer of hardware added on top of the previous stuff. See here.

Shuffling around SMs and their contained Cuda cores generation after generation is by definition, a partially updated architecture, not a new one. T&L, Unified shaders, RT/Tensor were new architectures that required a great overhaul of how their GPUs work. Note again, once every decade.

Claiming that AMD barely changed anything because they haven't changed their CU composition is supremely ignorant. You're dismissing, just to give one example, all the changes that happened with Polaris:

[slide: Radeon Technologies Group, Graphics 2016 - Polaris architecture changes]

AMD went through the same changes as Nvidia. They went through T&L, they went through unified shaders. They're now about to go through ray tracing hardware; whatever form that takes, it will actually be new again, because to perform properly they'll need a new hardware architecture. Navi will probably be the Turing equivalent: a hybrid of old and new, because this time the new breaks with everything we've rendered in the past 20 years. You can't just have ray tracing hardware, because then you wouldn't be able to rasterize any of the games we've had for 2 decades. Thus, Navi needs to be a hybrid raster/DXR design, just like Turing is. That doesn't mean they're changing/updating any less than Nvidia is.
 
Actually, you would be incorrect in assuming I forgot about Nvidia's SM changes.
*SNIP*

So they fixed their front end in what version?
 
So they fixed their front end in what version?

Just because they didn't fix feature A or B doesn't imply there weren't other relevant changes, or that there weren't changes at all as you implied earlier. You don't get to decide what a meaningful change constitutes. The tech industry does. If you don't like it, take it up with AMD/Nvidia/the tech press - they all seem to agree with what I've argued, judging from the 13 years of tech reporting I've briefly documented here.
 
Just because they didn't fix feature A or B doesn't imply there weren't other relevant changes, or that there weren't changes at all as you implied earlier. You don't get to decide what a meaningful change constitutes. The tech industry does. If you don't like it, take it up with AMD/Nvidia/the tech press - they all seem to agree with what I've argued, judging from the 13 years of tech reporting I've briefly documented here.

The major crux of GCN (besides power consumption) is the front end... tinkering with other parts still left the major bottleneck in place... and NVIDIA took advantage of this in their designs... they now have the performance lead and RT hardware in place.

You can literally see the path from their G80 to the RT-capable Turing today, and it will not stop here... rasterization performance will matter less and less.
Expect stuff that is not CUDA cores to become more and more the norm post-Turing.

And I won't even mention the elephant in the room: R&D budget.
 
Rapid Packed Math works, primitive (shader) discard works per title, tile-based rasterization? And HBCC has always worked - no idea where you got those conclusions from. HBCC just does not serve that much purpose for gamers when you have enough installed VRAM to begin with.


Wait, you are saying that PSD works on a per-title basis?!? Which "titles" have this enabled?

Most know that you and I are the big Vega users/fans around here, but that is news to me.

It actually pisses me off since Raja said it was coming to "everything" with a driver update that we never got. I made enough money from VEGA I didn't care too much, but if I were purely a gamer I would have been.


On a uarch point, what a lot of you are forgetting is that AMD (via GCN) has a ton of compute performance enabled on their consumer-facing cards. They allowed us to mod the BIOS up to Polaris (from there MS prohibited it for all 3 camps to be certified).

Nvidia gimps their compute performance. If you look at their compute dies, they draw a fair amount of power and perform in the realm of AMD's offerings, aside from 1-2 SKUs.


AMD should have given Polaris another 20-30 CUs with an updated memory controller on 12nm. That would have given them close to the performance crown, especially if they could have tweaked their delta color compression.

But Lisa Su knew that Zen needed to come first, and AMD is just now reaping those rewards.. It'll be interesting to see what the next release from both camps looks like.
 
With both Xbox (Scarlett) and the PS5 running Navi with ray tracing, I am starting to wonder if the Navi GPU announced tomorrow will have ray tracing...

It would be kind of strange if they could fit it in a console, but not in a dedicated GPU card. I suppose it could be timing. Navi GPU cards are shipping in July (?), and the new consoles are a year away.
 
With both Xbox (Scarlett) and the PS5 running Navi with ray tracing, I am starting to wonder if the Navi GPU announced tomorrow will have ray tracing...

Be it via software or hardware, I'll be baffled if it doesn't.
 
So my Radeon VII is 14.2 TFLOPS... I suppose it was a good choice not to wait, so far.

That's about where I expected 5700 to be... ~10 Tflops. It's nothing to write home about, certainly not at $500.
 
Be it via software or hardware, I'll be baffled if it doesn't.

While I know others disagree, I think if it's just software coming in at a fraction of RTX cards (like Pascal) then that does AMD more harm than good.

All slow RT does is advertise RTX cards that have the real thing. Which is obviously better for NVidia than AMD.
 
That's about where I expected 5700 to be... ~10 Tflops. It's nothing to write home about, certainly not at $500.

That is the current version of Navi, but down the line we might get other models. Supposedly the geometry engine on Navi has some improvements; we have yet to see what is going on.
Pricing is everything; as sabrewolf732 said, Vega performance for Vega prices 2 years later, not cool!
 
That is the current version of Navi, but down the line we might get other models. Supposedly the geometry engine on Navi has some improvements; we have yet to see what is going on.
Pricing is everything; as sabrewolf732 said, Vega performance for Vega prices 2 years later, not cool!

Yeah, at $499 this is pretty disappointing. $399 would be a DECENT price considering you can get a 64 now in the $399 range. $349 would be a real winner, especially considering the lack of RTX hardware.

Also, with the 2070 Super launching and rumored price drops, this is not looking good for Navi.
 
Performance matters more than a TFLOPS number; personally, I am not going to judge it until I see some benchmarks.

I guess we could hope they improved their efficiency. I also hope the $500 price is wrong.

Personally, I won’t buy another card without DXR support. The earliest I’d even buy is mid 2020 so AMD has some time.

Hoping Cyberpunk 2077 uses RT...
 