Navi vs Turing architecture comparison.

Good article... and interesting to see that NVIDIA keeps compute and adds ray tracing with Turing, while AMD removes a lot of compute and has no ray tracing with Navi.

Huh?

AMD improved their compute and made it more developer friendly. They restructured the pipeline and basically increased everything to do with compute inside each stream processor. The only thing they decreased is the SFUs, which don't sound very important for total performance in typical workloads. It's also completely backward compatible with stuff tailor-made for GCN.

All of the improvements mean Navi performs better with fewer total stream processors, even though the paper specs make it look like it should perform worse. This gives them a lot more room to scale up for a better, more expensive card. Nvidia has done the same thing with some of their cards: lower specs on paper, same or better performance in games.
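To put some rough numbers on that, here's a back-of-the-envelope sketch using the public shader counts and boost clocks (illustrative only; exact clocks vary by board and games never sustain peak):

```python
# Peak FP32 math from the public spec sheets:
# peak TFLOPS = shaders * 2 ops/clock (FMA) * clock
cards = {
    "RX 5700 XT (RDNA)": (2560, 1905),   # stream processors, boost MHz
    "RX Vega 64 (GCN)":  (4096, 1546),
}

for name, (shaders, mhz) in cards.items():
    tflops = shaders * 2 * mhz / 1e6
    print(f"{name:18s} {shaders} SPs, ~{tflops:.1f} peak FP32 TFLOPS")

# RX 5700 XT (RDNA)  2560 SPs, ~9.8 peak FP32 TFLOPS
# RX Vega 64 (GCN)   4096 SPs, ~12.7 peak FP32 TFLOPS
# On paper the older, wider GCN part "wins", yet the 5700 XT matches or beats
# it in games, i.e. far more delivered performance per stream processor.
```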
 

https://www.anandtech.com/show/14618/the-amd-radeon-rx-5700-xt-rx-5700-review/13
 
That article says it's a driver/support problem. There's no reason, specs-wise, that Navi shouldn't be wasting AMD's previous cards in these tests. Many of them don't even run, again due to driver stagnation. This is a trend with several of AMD's features in their driver. Hopefully the RDNA branding will make them more internally accountable for this stuff so that, long term, we won't have these issues anymore.

*It also shows that being backwards compatible in hardware is not a free pass to performance with older software.
 

I suspect the current lowish compute performance shown by Navi under OpenCL has to do with it basically being addressed as a GCN arch. Double-edged sword of building a new arch that is 100% backwards compatible: it is running software compiled for GCN, but it's not running at max potential. It would be nice if AMD was quicker on enabling compiler support; they should really have had updated stuff out the door at launch. ROCm, AMD's open compute stack, still hasn't had Navi support patched in.

It will be interesting to see what ROCm looks like in the next month or two with 2.7, which should have Navi support, and whether the low compute performance right now really is a side effect of backwards compatibility. I guess AMD figures the 5700 isn't positioned as a compute part. Hopefully 5800/5900 means new Radeon Pro parts and updates to the compute compilers for RDNA. They aren't selling RDNA Radeon Pros yet, so I guess we can't really bitch yet. lol
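For anyone who wants to poke at this themselves, here's a minimal sketch (assuming the pyopencl package and an AMD OpenCL runtime are installed) that dumps what the compute stack actually reports for the card; under ROCm/Mesa, Navi 10 typically shows up as a gfx1010 target:

```python
# Minimal device-info dump: a quick way to see which ISA target / driver the
# OpenCL stack is exposing for the card (assumes pyopencl is installed).
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        print("Platform:", platform.name)
        print("Device  :", dev.name)              # Navi 10 usually reports as "gfx1010" under ROCm
        print("Driver  :", dev.driver_version)
        print("CUs     :", dev.max_compute_units)
        print("OpenCL  :", dev.version)
```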
 
RDNA is for games... Vega is for enterprise. Nobody is buying Navi for compute. (Radeon VIIs were on sale).
 
That was a good read, thanks for that. Interesting how close the outcomes were. AMD uses far fewer transistors to match the performance (after taking tensor cores into account), so it's curious why it still seems to be less power efficient. Fewer transistors and 7nm... kind of weird. Hopefully improvements in 7nm production can reduce the power consumption some in the future.
 

If you compare the 5700 XT to the 2070, they are about the same in performance and transistors. Take out the RTX transistors and Nvidia is more efficient both in power and fps per transistor in rasterized workloads.
 
I was specifically talking about the 2070 Super from the article, sorry about that; I forgot to type Super in my post. They conclude it's within 2% speed, so I figured it was a pretty good comparison. Of course some games go one way or the other, as we've seen in the past. I typically type on my phone, so it's easy for me to make mistakes, have my phone autocorrect, or just miss letters :). My apologies. I also realize the article led me astray: it compares the transistor count for TU102, not for the 2070 Super. Looking up the transistor count for that, it appears they are pretty similar (if you take tensor cores into account).
Thanks for pointing out my mistake, it happens! (Probably more often than I care to admit).
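For what it's worth, here's the rough perf-per-transistor math using the public die figures (Navi 10 vs. the TU104 die the 2070 Super is cut from), treating the two cards as roughly equal in rasterized performance per the review above; purely illustrative:

```python
# Rough "relative performance per billion transistors" comparison.
# Die figures are the public full-die transistor counts; the 2070 Super is a
# cut-down TU104. Performance is treated as equal per the linked review (~2%).
navi10_bt = 10.3   # billion transistors, 7nm
tu104_bt  = 13.6   # billion transistors, 12nm
rel_perf  = 1.0    # assumed ~equal rasterized performance

print(f"Navi 10: {rel_perf / navi10_bt:.3f} perf per billion transistors")
print(f"TU104  : {rel_perf / tu104_bt:.3f} perf per billion transistors")
# Navi 10: 0.097 ...   TU104: 0.074 ...
# TU104's total includes RT and tensor core transistors that contribute nothing
# to rasterized frame rates, which is why "take out the RTX transistors" flips
# the comparison back toward Nvidia.
```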
 
If you remove the marketing fluff, RDNA is basically Pascal plus a much larger L1 cache. The execution engines (SMs/CUs), schedulers, wavefront size and cache hierarchy are remarkably similar. With RDNA, AMD has now caught up to Nvidia on efficiency per flop but is still behind on other efficiency metrics.
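A rough side-by-side of the headline execution-unit parameters being compared (a sketch from the public Pascal and RDNA whitepapers; numbers are approximate, and the big RDNA difference is the extra 128 KB L1 per shader array):

```python
# Pascal GP104 SM vs. RDNA workgroup processor (WGP = dual CU).
# Whitepaper-level numbers only; treat as approximate.
pascal_sm = {
    "fp32_lanes": 128,    # 4 partitions x 32 CUDA cores
    "schedulers": 4,      # one warp scheduler per partition
    "thread_group": 32,   # warp size
}
rdna_wgp = {
    "fp32_lanes": 128,    # 4 x SIMD32
    "schedulers": 4,      # one scheduler per SIMD32
    "thread_group": 32,   # native wave32 (wave64 kept for GCN-era code)
}

for key in pascal_sm:
    print(f"{key:12s}  Pascal SM: {pascal_sm[key]:>3}   RDNA WGP: {rdna_wgp[key]:>3}")
```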
 
Navi10 is only the beginning.
 
Turing is only the beginning.

Plot twist, Intel devastates AMD and nVidia.


But Volta was supposed to be the next big gaming thing, then Nvidia released Turing to gamers. It's been out a year... and at $800 it doesn't offer much performance over Pascal. Now Nvidia is going to be on their 3rd new architecture in just 2 years to try and compete...?

RDNA is here to stay, Turing is already dead.
 

I don’t really get attached to architectures... I am fine with Turing dying if something 30%+ faster comes out.

And you are correct, RDNA is here to stay. AMD will take RDNA, a decent arch, and run it into mediocrity like they did GCN.
 
But Volta was supposed to be the next big gaming thing

Citation please

It's been out a year... and at $800 it doesn't offer much performance over Pascal.

Citation please

Now Nvidia is going to be on their 3rd new architecture in just 2 years to try and compete...?

You mean embarrass AMD's best efforts?

RDNA is here to stay, Turing is already dead.

Nice troll.
 

But Vega was supposed to be the next big gaming thing (and Fury before that, and Polaris, and... well, you get the idea), then AMD released Navi to gamers. It's late to the party and at $400 doesn't offer any performance over Pascal's. Now AMD is going to be on their 3rd new architecture in like forever to try and compete...?

RDNA is here to stay, because AMD has nothing else to offer.

You see? It works both ways. Except this one is real :D:D:D:rolleyes::rolleyes:
 


You keep talking about the past and mocking AMD. That only works for people who are still in high school.

Clearly, you don't want to deal with reality and what is going on right now, so you keep talking about the past. Vega is EOL and everybody knows it, but you have to bring it up, because Vega vs Turing is all you have. As such, you don't have an argument here; you are just commenting because that is what you are stuck doing, because you know very little about the subject matter. Nvidia and Jensen don't have anything for Dr Lisa Su's RDNA... so I doubt you, or anyone else on this forum, does either.


Word games only work with children; I wish Kyle was around to tamp down all these soothsayers. In the real world, you have to deal with facts. Here is one more:

Assetto Corsa Competizione, the big AAA race simulator slated for a September release, will lack support for NVIDIA RTX real-time ray tracing technology, not just at launch, but for the foreseeable future. The Italian game studio Kunos Simulazioni, in response to a specific question on the Steam Community forums, confirmed that the game will not receive NVIDIA RTX support.

"Our priority is to improve, optimize, and evolve all aspects of ACC. If after our long list of priorities the level of optimization of the title, and the maturity of the technology, permits a full blown implementation of RTX, we will gladly explore the possibility, but as of now there is no reason to steal development resources and time for a very low frame rate implementation," said the developer, in response to a question about NVIDIA RTX support at launch.

This is significant, as Assetto Corsa Competizione was one of the poster boys of RTX, and was featured in NVIDIA's very first list of RTX-ready games under development.



Stoly, you will be gaming on RDNA soon. You'll be able to sell your RTX card and buy a cheaper big Navi and get more performance. That is, if your RTX has any resale value.
 
I knew you were going to post Assetto Corsa, and I couldn't agree more with them. Why would you want a racing game running at anything less than 60 fps?
IQ-wise, from the screenshots I've seen, RTX does look quite a bit better.

And I won't get RDNA anytime soon if ever, not even getting a console.
 
Citation please
Most of his points aren't easy to back up, but the one about performance is...
"Based on what we can test today, the GeForce RTX 2080 and GTX 1080 Ti perform equally in traditional games, but the GeForce RTX 2080 costs $100-plus more."
https://www.pcworld.com/article/330...0-vs-gtx-1080-ti-which-graphics-card-buy.html

And most reviews came to a similar conclusion: the 2080 was more expensive and didn't bring any more performance. The 2080 Ti brought more performance at a substantial price increase.
 

2080 is 5-10% faster than 1080 Ti and 2080S even more.
 

I think a big part of many being turned off by Turing was that the boost from engineering was only about 15% over Pascal in rasterized efficiency. The problem was that Tensor/RT ate up that 15%, and most don’t value RT (yet). In a sense, the rasterized performance is brute-forced through transistors, since there’s all that RTX dead weight.

It really was a ballsy move by nVidia.
 
2080 is 5-10% faster than 1080 Ti and 2080S even more.
I'm assuming that article was comparing when Turing was the new architecture. Yes, they have just recently released a new card, but the 2080 Super wasn't available as an upgrade for a 1080 Ti user at the time.
I was just saying that when it came out, it was more expensive to get the same level of performance. Normally you get more performance for the same price with a new generation. Like I said, that was about his only point that made any sense and could easily be verified.
 
Not really ballsy; they just knew AMD didn't have a competitive product and it was a great chance to put pressure on the market. Tensor cores may be dead weight, but it's still currently the fastest rasterizer ever produced. They could have swapped tensor cores for more raster, but they were only competing with themselves at this point and wanted to come to market with RT before AMD did.
 

I saw it as a large risk AMD could use to gain market share. They had to have seen the same risk. They could have continued with GTX and kept the status quo, but they went with the large risk, so I considered it ballsy, especially for a corporation.
 

If you consider their position, it's not much of a risk: they have a huge lead on AMD in market share and performance on the top end, so they have room to make these kinds of moves without impacting the bottom line. Even if it was a failure, people would still buy it because of the brand recognition, and they don't know any better.
 

According to Gamer X, the risk has been realized and Nvidia is getting gaped. Not that we’ve seen any evidence of that in any way, shape or form from him.
 
It is up to AMD to compete with Nvidia, make money, take market share and grow their market. While RTX has not been that compelling in the end result, it is not hurting them either, other than losing some sales due to higher prices, and AMD has been absent so long that even that picked up in the last quarter for Nvidia. Unless AMD releases some higher and lower end cards, it will be an Nvidia Christmas celebration moving into the new year. When Nvidia has another big jump (bigger than what we saw from Pascal to Turing, I would think, with a large node change) this will get very interesting. AMD will need some magic: a two-chip or more solution? A 60+ compute unit chip paired with another chip for I/O, RT(?) hardware, an encoder and whatnot? AMD may not be ready for that and will pump out a decent rasterizer card for the money; that would be OK but nothing too exciting.

I think I will be more interested in an HBM version that does ProRender exceedingly well, hopefully with better RT support and hefty amounts of RAM. Game-wise, the current generation and even the last generation are getting the job done at this time. I wish next-gen Threadripper would be revealed and become available; that, to me, will be more interesting for graphical work.
 

It wasn’t much of a risk in terms of performance leadership. Turing is much faster than Pascal and it was very unlikely that AMD would catch up in 2018.

The risk was that AMD would pull out another HD 4870 surprise in 2019 with amazing price/perf before nvidia could get to 7nm. Well that didn’t happen so nvidia is breathing easy right now.
 
We know that Nvidia has nothing to defend against RDNA with. So we wait.

The problem for many cheerleaders is that their team doesn't have anything exciting to cheer about, or anything worthwhile on the horizon.
 