AMD RDNA 2 gets ray tracing

Well, if we go only by what AMD have already told us, which is up to 50% better performance per watt, that is a very big performance hint.

So, it really seems like we can guess how well RDNA2 / Big Navi will perform by just guessing how high they are willing to go in TDP.

Stock GPUs typically top out at about 250W. If that's where they go, expect something up to 60% faster than the current 5700 XT, which means it would be trading blows with a 2080 Ti.

AMD have not been shy about upping the TDP a whole lot more in the past with fancy AIO coolers, though. If they are willing to go up to 350W again, as they were with one of the liquid-cooled Vega 64 cards (Frontier Edition or something?), we could be talking 125% faster than the 5700 XT, which would make it the fastest consumer GPU on the market, at least until Ampere hits.
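A quick back-of-the-envelope sketch of that scaling logic, assuming a roughly 225 W 5700 XT baseline and taking the "up to 50% better performance per watt" claim at face value at any board power; the baseline power and the linear power scaling are my assumptions, not AMD figures.

```python
# Back-of-the-envelope TDP scaling, not an AMD projection.
BASE_POWER_W = 225        # assumed stock 5700 XT board power
PERF_PER_WATT_GAIN = 1.5  # "up to 50% better perf/W" taken at face value

def projected_speedup(target_power_w: float) -> float:
    """Relative performance vs. a stock 5700 XT at the given board power."""
    return (target_power_w / BASE_POWER_W) * PERF_PER_WATT_GAIN

for tdp in (225, 250, 350):
    print(f"{tdp} W -> ~{(projected_speedup(tdp) - 1) * 100:.0f}% faster than a 5700 XT")
# 225 W -> ~50%, 250 W -> ~67%, 350 W -> ~133%; in the same ballpark as the
# "up to 60%" and "125%" guesses above, which likely assume a slightly
# different baseline power.
```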

It's going to be interesting to see where this one lands.

The performance of the raytracing is a big unknown.
Yep, I wonder about two things. For one, AMD is not having a big node change, yet is gaining a 50% performance-per-watt increase. The 36 CU PS5 is being shown doing 60 FPS at 4K, not checkerboarded, with RT reflections at 1080p. The 5700 XT's 40 CUs can't even come close to usable 4K, let alone with RT. I can't believe RDNA 2 is an utterly new uber design; it is most likely based on the original RDNA. So what is this secret sauce that it appears to be using?

This was shown last year in March:

[Attached image: oNDie.png]


I think some do not understand that Coretek mostly uses available patents, AMD's official slides/whitepapers, and source code from available programs like the open Linux drivers, and then he speculates; he will also consider any leaks that appear. So he is basically speculating as best he can with the available data. Anyway, 3D-stacked memory of some kind, either SRAM or another form, maybe MRAM, though I doubt that since TSMC has it on a 22nm process, unless they are working with AMD. Putting the most intensive memory operations in a fast, very large local pool/cache, which for textures and traversal of the BVH would free up the regular GPU RAM, could allow them to use cheaper, slower GDDR6 and a narrower bus while outperforming previous designs. I really don't know, but I think something big has to be incorporated to get AMD another 50% per-watt boost from basically the same process.
 
Yep, I wonder about two things. For one, AMD is not having a big node change, yet is gaining a 50% performance-per-watt increase. The 36 CU PS5 is being shown doing 60 FPS at 4K, not checkerboarded, with RT reflections at 1080p. The 5700 XT's 40 CUs can't even come close to usable 4K, let alone with RT. I can't believe RDNA 2 is an utterly new uber design; it is most likely based on the original RDNA. So what is this secret sauce that it appears to be using?

This was shown last year in March:

[Attached image: attachment 255791]

I think some do not understand that Coretek mostly uses available patents, AMD's official slides/whitepapers, and source code from available programs like the open Linux drivers, and then he speculates; he will also consider any leaks that appear. So he is basically speculating as best he can with the available data. Anyway, 3D-stacked memory of some kind, either SRAM or another form, maybe MRAM, though I doubt that since TSMC has it on a 22nm process, unless they are working with AMD. Putting the most intensive memory operations in a fast, very large local pool/cache, which for textures and traversal of the BVH would free up the regular GPU RAM, could allow them to use cheaper, slower GDDR6 and a narrower bus while outperforming previous designs. I really don't know, but I think something big has to be incorporated to get AMD another 50% per-watt boost from basically the same process.


Yeah, beats me too, but it would be stupid of them to make projections like that if they didn't think they could at least get close, when the products will be testable as soon as they launch.
 
Well, AMD has been known for over-hyping products, but a 50% per watt improvement is very specific, and I don't believe they would say that unless they were there or close (obviously in the best case scenario).

But we don't know what power levels we are talking about. We are not guaranteed 50% better performance overall unless we assume the same power usage. So it could be less, or even more, than 50%; we don't know.

They had to figure out something fresh. I don't think it's the same architecture with a few tweaks; it must be something more. Clearly we've seen the PS5 videos, and the next PC card should be at least there, probably much better (minus the Sony SSD magic, leave that for another thread).
 
Well, if you take the 50% performance per watt as the best-case scenario:

Using power consumption figures that I got from various reviews of both the 2080 Ti and the 5700 XT, and Metro Exodus benchmarks at 4K:

An RDNA 2 card using the same power as the 2080 Ti would be roughly 25% faster than the 2080 Ti.
 
Well, if you take the 50% performance per watt as the best-case scenario:

Using power consumption figures that I got from various reviews of both the 2080 Ti and the 5700 XT, and Metro Exodus benchmarks at 4K:

An RDNA 2 card using the same power as the 2080 Ti would be roughly 25% faster than the 2080 Ti.

Interesting. I did a similar calculation, but I just assumed the TDP Nvidia published was accurate. What numbers did you find for actual power use for the 2080 Ti?
 
Well, if you take the 50% performance per watt as the best-case scenario:

Using power consumption figures that I got from various reviews of both the 2080 Ti and the 5700 XT, and Metro Exodus benchmarks at 4K:

An RDNA 2 card using the same power as the 2080 Ti would be roughly 25% faster than the 2080 Ti.
I would not use 4K numbers due to the 5700 XT's bandwidth limitations, unless that carries over to RDNA 2 designs. Here are the numbers I used:

Gaming power usage from Tom's Hardware (their testing is pretty involved):
2080 Ti: 277 W
5700 XT: 217 W

Metro Exodus data at 1440p:
https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/18.html

5700 XT: 82 fps
2080 Ti: 116.4 fps

First is the projection of what the fps would be if an RDNA 2 part used 217 W, the same as the 5700 XT, with perf/W increased by 50% (a factor of 1.5) in Metro Exodus:

  • 1.5 × 82 fps = 123 fps -> faster than a 2080 Ti, which kinda explains how the 36 CU PS5 is able to do 4K at 60 fps

Now, if an RDNA 2 part with more CUs etc. than a 5700 XT maintains that 50% perf/W advantage, the projected performance at 275 W (basically 2080 Ti stock gaming power) in Metro Exodus is:

  • 275 W / 217 W × 1.5 × 82 fps ≈ 156 fps

This would put the RDNA 2 part at 156 fps / 116.4 fps ≈ 1.34, i.e. about 34% faster than a 2080 Ti in Metro Exodus at 1440p (no RT) at roughly the same power usage.
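For anyone who wants to rerun the numbers, here is a minimal sketch of that projection; it only uses the review figures already quoted in this post and takes the 1.5× factor at face value.

```python
# Reproduces the projection above using the figures quoted in this post:
# Tom's Hardware gaming power draw and TechPowerUp Metro Exodus 1440p fps.
XT_POWER_W = 217          # 5700 XT gaming power
XT_FPS = 82.0             # 5700 XT, Metro Exodus 1440p
TI_POWER_W = 275          # 2080 Ti stock gaming power (rounded)
TI_FPS = 116.4            # 2080 Ti, Metro Exodus 1440p
GAIN = 1.5                # claimed RDNA -> RDNA 2 perf/W improvement

fps_at_217w = GAIN * XT_FPS                            # ~123 fps, same power as 5700 XT
fps_at_275w = (TI_POWER_W / XT_POWER_W) * fps_at_217w  # ~156 fps at 2080 Ti power

print(f"Projected RDNA 2 @ 217 W: {fps_at_217w:.0f} fps")
print(f"Projected RDNA 2 @ 275 W: {fps_at_275w:.0f} fps "
      f"(~{(fps_at_275w / TI_FPS - 1) * 100:.0f}% faster than a 2080 Ti)")
```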

RDNA 1 did not appear to scale well with increased power:
https://www.techpowerup.com/review/xfx-radeon-rx-5700-xt-thicc-iii-ultra/32.html

The 5700 XT Thicc III drew 277 W (same as a 2080 Ti) and only gained 3 fps. That does not mean RDNA 2 will not have some overclocking room or scaling past, say, 275 W, but I think projections at 350 W and beyond would fall apart. This is one game; best to wait and see what the real performance is.

The other factor is how well the fixed-function RT works with the shaders, and I believe that can only be compared once the hardware is out, certainly against Ampere. It looks promising in my opinion from the PS5 reveal, but there is nothing to compare it to until Ampere. We just have to wait for the hardware, good reviews, and price points, and hopefully pick out what fits best for you.
 
Interesting. I did a similar calculation, but I just assumed the TDP Nvidia published was accurate. What numbers did you find for actual power use for the 2080 Ti?

Most of the review sites have the power use at around 275 watts: TechPowerUp, Guru3D, Tom's Hardware.
 

Yeah, I was going to use 1440p, but I thought the 4K figures, despite the 5700 XT's bandwidth, would give a more realistic estimate, especially when AMD uses the phrase "up to 50% better performance per watt".

I believe they will get most of the performance per watt in the consoles, as they will be built that way. Not sure you will get the same on the desktop cards.

The 50% performance per watt is coming from a combination of a new 7nm process and a more refined architecture. I believe the new process gives better yields and about 10% better performance per watt, so the other 40% of the improvement is coming from RDNA 2.

If they keep it under 300 W, we're probably looking at around a 30% improvement over the 2080 Ti.

As for picking the card that suits best: people in the other threads are suggesting that the 3080 Ti/3090 will be a 30% improvement over the 2080 Ti. I can't see it being that low myself. AMD got that jump moving Vega to 7nm without any real changes. Surely, if Ampere were just a die shrink of Turing it would get the same 30% improvement; actually it should get more, as it's being manufactured on the same improved 7nm process as RDNA 2.

I am ignoring ray tracing for the moment. We don't know enough about AMD's solution to make any sort of estimate on power use or performance.

We shall have to wait and see. It's definitely the most interesting launch in a good few years, with the consoles, AMD and Nvidia all releasing in a relatively short space of time.
 
The 50% performance per watt is coming from a combination of a new 7nm process and a more refined architecture. I believe the new process gives better yields and about 10% better performance per watt, so the other 40% of the improvement is coming from RDNA 2.

The MAJORITY of any performance benefit is most of the time due to changes in the PROCESS, not the architecture.
One of the few exceptions I can remember is Maxwell...but you are VERY mistaken.
It would be closer to 45% improvement from the process and 0-5% from the architecture.

Fuck I hate "Silly Season"...so much crap gets posted and too many people get away with posting with their heads up their rectums...
 
The MAJORITY of any performance benefit is most of the time due to changes in the PROCESS, not the architecture.
One of the few exceptions I can remember is Maxwell...but you are VERY mistaken.
It would be closer to 45% improvement from the process and 0-5% from the architecture.

Fuck I hate "Silly Season"...so much crap gets posted and too many people get away with posting with their heads up their rectums...

Yes, a die shrink always leads to bigger performance jumps than GPUs released without a die shrink get.

But AMD don't have a die shrink this time around; the process they are moving to is just an improved 7nm process. From all I can find out, moving from 7nm to 7nm+ offers 10% better performance per watt.

So if AMD have a 50% performance per watt increase over RDNA with RDNA 2, that means the architecture accounts for the majority of the performance per watt increase.

It was the same with Pascal to Turing. The 12nm process wasn't a big jump in power savings or performance per watt over the 14nm/16nm process, something like 25% power / 10% performance.
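One nuance: perf/W factors multiply rather than add, so a ~10% process gain inside a 50% total implies roughly a 36% architectural gain rather than a flat 40%. A small sketch of that split, using the figures discussed above as assumptions (not AMD's breakdown):

```python
# Multiplicative split of the claimed perf/W gain (assumed figures, not AMD's).
total_gain = 1.50    # claimed RDNA -> RDNA 2 perf/W improvement
process_gain = 1.10  # assumed 7nm -> improved-7nm contribution

arch_gain = total_gain / process_gain
print(f"Implied architectural perf/W gain: ~{(arch_gain - 1) * 100:.0f}%")  # ~36%
```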
 
The MAJORITY of any performance benefit is most of the time due to changes in the PROCESS, not the architecture.
One of the few exceptions I can remember is Maxwell...but you are VERY mistaken.
It would be closer to 45% improvement from the process and 0-5% from the architecture.

Fuck I hate "Silly Season™"...so much crap gets posted and too many people get away with posting with their heads up their rectums...
Well then, stop posting. The slight node change is not giving a 45% increase in performance/W, 7nm to 7nm-whatever.
 
Yeah, I was going to use 1440p, but I thought the 4K figures, despite the 5700 XT's bandwidth, would give a more realistic estimate, especially when AMD uses the phrase "up to 50% better performance per watt".

I believe they will get most of the performance per watt in the consoles, as they will be built that way. Not sure you will get the same on the desktop cards.

The 50% performance per watt is coming from a combination of a new 7nm process and a more refined architecture. I believe the new process gives better yields and about 10% better performance per watt, so the other 40% of the improvement is coming from RDNA 2.

If they keep it under 300 W, we're probably looking at around a 30% improvement over the 2080 Ti.

As for picking the card that suits best: people in the other threads are suggesting that the 3080 Ti/3090 will be a 30% improvement over the 2080 Ti. I can't see it being that low myself. AMD got that jump moving Vega to 7nm without any real changes. Surely, if Ampere were just a die shrink of Turing it would get the same 30% improvement; actually it should get more, as it's being manufactured on the same improved 7nm process as RDNA 2.

I am ignoring ray tracing for the moment. We don't know enough about AMD's solution to make any sort of estimate on power use or performance.

We shall have to wait and see. It's definitely the most interesting launch in a good few years, with the consoles, AMD and Nvidia all releasing in a relatively short space of time.
With TPU data, look at the 5700 XT at 1440p compared to the Vega 64 and then at 4K: the 5700 XT nosedives to the point where the Vega 64 is faster at 4K, being very much memory limited rather than GPU limited. Hopefully AMD has proper bandwidth in this generation of GPUs for 4K at the higher end.

https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/18.html
 
With TPU data, look at the 5700 XT at 1440p compared to the Vega 64 and then at 4K: the 5700 XT nosedives to the point where the Vega 64 is faster at 4K, being very much memory limited rather than GPU limited. Hopefully AMD has proper bandwidth in this generation of GPUs for 4K at the higher end.

https://www.techpowerup.com/review/amd-radeon-rx-5700-xt/18.html

I think we are both saying the same thing though: that 30% faster than the 2080 Ti should be achievable if AMD's 50% performance per watt claim is accurate.
 
Well then, stop posting. The slight node change is not giving a 45% increase in performance/W, 7nm to 7nm-whatever.

You again :rolleyes:

Please provide examples of a SKU where the uplift was caused mainly by architecture.

AFTER you provide the documentation you never posted about "serial raytracing"!
(You posted VERY sure of yourself...but never cared to back up your false claims)

I wonder how long the list of unsubstantiated claims you make can become? :)
 
Off the top of my head.

G71 to G80.
Kepler to Maxwell.
Pascal to Turing.

Now with data...subtract the node uplift from the performance.
That will give you Kepler -> Maxwell.

And I would love to see your figures for Pascal -> Turing...
 
Now with data...subtract the node uplift from the performance.
That will give you Kepler -> Maxwell.

And I would love to see your figures for Pascal -> Turing...

Pascal -> Turing was about 10% on rasterization, IIRC. Nvidia usually gets at least 10% out of an arch change. I think Turing's was being able to run mixed precision on the same cycle?

Anyway, I think you guys are putting too much faith in the efficiency-per-watt claims. Companies love to give out bogus numbers for that, and it's usually for some asinine use case no one actually performs.
 
You again :rolleyes:

Please provide examples of a SKU where the uplift was caused mainly by architecture.

AFTER you provide the documentation you never posted about "serial raytracing"!
(You posted VERY sure of yourself...but never cared to back up your false claims)

I wonder how long the list of unsubstantiated claims you make can become? :)
lol, :ROFLMAO:, that is super easy, on the exact same node even, TSMC 7nm.

The 5700 XT is 1% faster at 1080p and 1% slower at 1440p compared to the Radeon VII.
GCN Radeon VII: 300 W
RDNA 5700 XT: 225 W

How much more efficient is RDNA than the GCN 5th generation architecture on the exact same node?

  • At 1080p and 1440p they are basically equal
  • 300 W / 225 W = 1.33, so the Radeon VII takes 33% more power for the same gaming performance
  • RDNA for gaming at 1080p and 1440p is 33% more efficient than GCN 5th generation on the same node

You are very entertaining. ;)

That is with the 5700 XT having much less memory bandwidth and a third fewer shaders to boot!
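A minimal sketch of that comparison, using the board-power figures above and assuming roughly equal average fps at 1080p/1440p per the TPU numbers:

```python
# Same-node efficiency comparison: Radeon VII (GCN, 7nm, 300 W) vs.
# 5700 XT (RDNA, 7nm, 225 W), treating average gaming performance at
# 1080p/1440p as roughly equal per the TPU results cited above.
RADEON_VII_POWER_W = 300
RX_5700_XT_POWER_W = 225

# Equal performance at lower power -> the perf/W ratio is just the power ratio.
perf_per_watt_ratio = RADEON_VII_POWER_W / RX_5700_XT_POWER_W
print(f"RDNA vs. GCN perf/W on the same node: ~{(perf_per_watt_ratio - 1) * 100:.0f}% better")  # ~33%
```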
 
This is LITERALLY the only reason I won’t consider AMD cards at the moment. The number of AMD driver issues I’ve seen posted across several different hardware forums completely turns me off from their GPUs.
I hope then, in the same vein, that Nvidia not having a working Windows 10 driver for ages put you off, or that hardware failures did too.
A friend of mine is a newbie and hasn't had a single issue gaming on a 5700 XT since launch, and there are many like him, but they don't make loud noises and new threads every time.
I haven't had a major AMD driver issue since 2011.
But you don't listen to feedback like that, just the FUD train. Hell, the GeForce forums can be full of that shit too, but you guys don't scream it from the rooftops.
But don't worry, AMD drivers suxxxx and Nvidia/Intel are bestest because some angry fanboys said so.
 
{insert new AMD architecture here} will release. It will offer roughly the same performance as {insert Nvidia's current 2nd fastest product here} while generating more heat and noise, but it will cost only ${insert (Nvidia's price for equivalent performance, subtract 12%)}!
 
Well then, stop posting. The slight node change is not giving a 45% increase in performance/W, 7nm to 7nm-whatever.

Unless there are massive yield improvements producing higher quality silicon, which seems unlikely.

It would be profoundly stupid of AMD to lie about something so verifiable in the not too distant future, but I too am wondering how in hell they are going to pull off a 50% performance per watt improvement without a major node change. It seems too good to be true.
 
I hope then, in the same vein, that Nvidia not having a working Windows 10 driver for ages put you off, or that hardware failures did too.
A friend of mine is a newbie and hasn't had a single issue gaming on a 5700 XT since launch, and there are many like him, but they don't make loud noises and new threads every time.
I haven't had a major AMD driver issue since 2011.
But you don't listen to feedback like that, just the FUD train. Hell, the GeForce forums can be full of that shit too, but you guys don't scream it from the rooftops.
But don't worry, AMD drivers suxxxx and Nvidia/Intel are bestest because some angry fanboys said so.
No need to get all pissy about it. When you take the proportion of people who have Nvidia cards versus AMD and then compare how many more driver-issue threads I see from actual AMD users, it is telling. I had an R9 390 a few years back; I loved the hardware but was not a fan of the software back then either. I can say with certainty that during the year I had my R9 390 I had as many problems as I've had with Nvidia cards over 2-3 years. I'm not new to people bitching on forums; I understand how it works.
 
Unless there are massive yield improvements producing higher quality silicon, which seems unlikely.

It would be profoundly stupid of AMD to lie about something so verifiable in the not too distant future, but I too am wondering how in hell they are going to pull off a 50% performance per watt improvement without a major node change. It seems too good to be true.
This is what AMD put out about RDNA 2 in March 2020; look at the endnote for what was tested and how (briefly). The bottom of the first image says to see endnote RX-325. Since this was a financial presentation, misstatements would not be wise for the company.
https://ir.amd.com/static-files/321c4810-ffe2-4d6c-863f-690464c033a9


[Attached slides: 50perc.png, Method.png, EndNotes.png]


Increasing clock speed is one of the methods that normally decreases perf/W, but shifting the efficiency curve to the right with frequency appears to have been successful, since AMD lists raising frequency among the improvements above. It will be interesting when these cards launch and reviewers get a first stab at them. There is no real info on RT performance other than the PS5 material, which actually looks decent but is also limited. The levers listed are: improved IPC, higher frequency, decreased switching power. It is still listed as a 7nm node, with no specifics on whether it is 7nm+.
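To make the clock-speed point concrete: to first order, dynamic power goes roughly as C·V²·f, so chasing frequency alone normally costs perf/W unless the design or process moves the voltage/frequency curve. A toy illustration with made-up numbers (nothing here is an AMD figure):

```python
# Toy first-order model: dynamic power ~ C * V^2 * f, and higher clocks
# usually need more voltage.  All numbers below are made up for illustration.
def dynamic_power(cap: float, volts: float, freq_ghz: float) -> float:
    return cap * volts ** 2 * freq_ghz

base  = dynamic_power(1.0, 1.00, 1.8)   # baseline clock and voltage
boost = dynamic_power(1.0, 1.10, 2.1)   # ~17% higher clock, ~10% more voltage

perf_gain = 2.1 / 1.8
power_gain = boost / base
print(f"Perf: +{(perf_gain - 1) * 100:.0f}%, power: +{(power_gain - 1) * 100:.0f}%, "
      f"perf/W: {perf_gain / power_gain:.2f}x of baseline")  # perf/W drops to ~0.83x
```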
 
No need to get all pissy about it. When you take the proportion of people who have Nvidia cards versus AMD and then compare how many more driver-issue threads I see from actual AMD users, it is telling. I had an R9 390 a few years back; I loved the hardware but was not a fan of the software back then either. I can say with certainty that during the year I had my R9 390 I had as many problems as I've had with Nvidia cards over 2-3 years. I'm not new to people bitching on forums; I understand how it works.
Seems like most if not all reviewers said they had no significant AMD driver issues. When the 2080 Ti came out, many of the reviewers had hardware issues, the noteworthy Space Invaders-looking failure. I believe Kyle had two 2080 Tis fail here. Does that mean everyone had hardware issues? More so than the AMD driver issues that are eluding the reviewers? As for future cards, it is best to see what is actually delivered and tested, what people find, etc., and not come to premature conclusions about what it will be. While speculation with good information can be entertaining, rarely does it all pan out in the end. Now AMD is actually making the claim on the perf/W improvement, putting it on the line, and the PS5 games look very good for the first batch on a new console, including RT. Let's hope that AMD has a very successful launch as well as Nvidia -> we win in that case.
 
I think AMD likes to put out teasers; did anyone catch the one in endnote RX-325?
 
I think AMD likes to put out teasers; did anyone catch the one in endnote RX-325?
I assume this means that they tested GCN to RDNA and got a 50% perf/W boost.

But it also says "RDNA2 improvement based on AMD internal estimation", which is vague enough to mean anything.
 
I assume this means that they tested GCN to RDNA and got a 50% perf/W boost.

But it also says "RDNA2 improvement based on AMD internal estimation", which is vague enough to mean anything.
They indicated the test was with The Division 2 at 1440p Ultra settings, done 6/1/19. Some of the other testing in the report was done in 2020. Just enough info for one to say: wait, they had RDNA 2 back in June of 2019? How much has it improved since then? Yes, that is what they are saying based on their test of 6/1/19, an estimate of 50%; is that 48.5%, 51%? Now, being over a year later, hopefully with better drivers... Damn AMD and their teasers :).
 
No, what I'm saying is that they *didn't* test RDNA 2 (and that would make sense given the date).

It clearly states "RDNA2 improvement based on AMD internal estimation", meaning it's an estimate, maybe based on clock speed or shader cores or what they know about the design.
 
Seems like most if not all reviewers said they had no significant AMD driver issues. When the 2080 Ti came out, many of the reviewers had hardware issues, the noteworthy Space Invaders-looking failure. I believe Kyle had two 2080 Tis fail here. Does that mean everyone had hardware issues? More so than the AMD driver issues that are eluding the reviewers? As for future cards, it is best to see what is actually delivered and tested, what people find, etc., and not come to premature conclusions about what it will be. While speculation with good information can be entertaining, rarely does it all pan out in the end. Now AMD is actually making the claim on the perf/W improvement, putting it on the line, and the PS5 games look very good for the first batch on a new console, including RT. Let's hope that AMD has a very successful launch as well as Nvidia -> we win in that case.
Just to be clear, I never said Nvidia is without its faults. I simply said that, from what I've seen and personally experienced, issues are more common with AMD drivers. And issues don't need to be major to be an annoyance; continuous minor bugs are enough to drive me away from a product.

Just recently many (not all) AMD GPU users were having black screen and system crash issues in various games for months. The latest I've read is that while they've addressed most of the causes, making the issue much less widespread and frequent, it still happens.

I have zero brand loyalty. I'd love AMD's Big Navi to be successful. If the murmurs are accurate that Big Navi is +15% performance over a 2080 Ti and they release it at a good price (~$599), I'd be very interested. But if they can't back a strong piece of hardware with reliable drivers, then I don't want it.
 
No, what I'm saying is that they *didn't* test RDNA 2 (and that would make sense given the date).

It clearly states "RDNA2 improvement based on AMD internal estimation", meaning it's an estimate, maybe based on clock speed or shader cores or what they know about the design.
Very interesting take; the good news is we will find out whether it's true or not. The 50% better perf/W on basically the same node does sound too good to be true, but it's not impossible. It would definitely be bad for AMD to claim this the same year they launch RDNA 2 products if it were not true.
 
Now with data...subtract the node uplift from the performance.
That will give you Kepler -> Maxwell.

And I would love to see your figures for Pascal -> Turing...

G71 to G80: no node uplift.
Kepler to Maxwell: no node uplift.
Pascal to Turing: 14nm to 12nm, around a 10% node lift.

lol, and likewise I would love to see your figures for the node uplift from Pascal to Turing. It's 14nm to 12nm FinFET; it's basically the same process, just slightly improved. The same with AMD moving from 7nm to 7nm+: it's just a slightly improved process.

There were figures out there for this. Here is the 7nm to 7nm+ one:

https://www.techspot.com/news/80237-tsmc-7nm-production-improves-performance-10.html

The 14nm to 12nm improvement was roughly the same. I can't find the page on TSMC's site, but I am sure anyone else reading this thread will be able to confirm my figures.
 
The MAJORITY of any performance benefit is most of the time due to changes in the PROCESS, not the architecture.
One of the few exceptions I can remember is Maxwell...but you are VERY mistaken.
It would be closer to 45% improvement from the process and 0-5% from the architecture.

Fuck I hate "Silly Season"...so much crap gets posted and too many people get away with posting with their heads up their rectums...

You included ;). They aren't getting a 45% improvement from 7nm to 7nm+; that's just as asinine as you claim his statement to be. Maybe if you didn't contribute to whatever season you're claiming (it's trademarked, so I didn't want to use it in my sentence for fear of retribution) it wouldn't be such a bad season.

So if (this is a BIG if) the claimed 50% is accurate, it's not coming from just the process. More likely the 50% applies only to particular things (and/or really low-power parts) and it's really less than that in general, with some coming from process and some coming from architecture.
 
You included ;). They aren't getting a 45% improvement from 7nm to 7nm+; that's just as asinine as you claim his statement to be. Maybe if you didn't contribute to whatever season you're claiming (it's trademarked, so I didn't want to use it in my sentence for fear of retribution) it wouldn't be such a bad season.

So if (this is a BIG if) the claimed 50% is accurate, it's not coming from just the process. More likely the 50% applies only to particular things (and/or really low-power parts) and it's really less than that in general, with some coming from process and some coming from architecture.
The RDNA 2 endnote RX-325 claim is for The Division 2 at 1440p and is not specific about what it is being compared to. Endnote RX-362 for RDNA's 50% improvement is way more specific: a 40 CU Navi 10 compared to a Vega 64 giving a 50% performance/W boost. So I think you are right that this gives a lot of wiggle room on what is being compared; is it a mobile part? Still, it is RDNA 2 compared to RDNA; whether that holds relatively consistently at the high end, midrange, and mobile remains to be seen.
 
Yep, I wonder about two things. For one, AMD is not having a big node change, yet is gaining a 50% performance-per-watt increase. The 36 CU PS5 is being shown doing 60 FPS at 4K, not checkerboarded, with RT reflections at 1080p. The 5700 XT's 40 CUs can't even come close to usable 4K, let alone with RT. I can't believe RDNA 2 is an utterly new uber design; it is most likely based on the original RDNA. So what is this secret sauce that it appears to be using?

This was shown last year in March:

[Attached image: attachment 255791]

I think some do not understand that Coretek mostly uses available patents, AMD's official slides/whitepapers, and source code from available programs like the open Linux drivers, and then he speculates; he will also consider any leaks that appear. So he is basically speculating as best he can with the available data. Anyway, 3D-stacked memory of some kind, either SRAM or another form, maybe MRAM, though I doubt that since TSMC has it on a 22nm process, unless they are working with AMD. Putting the most intensive memory operations in a fast, very large local pool/cache, which for textures and traversal of the BVH would free up the regular GPU RAM, could allow them to use cheaper, slower GDDR6 and a narrower bus while outperforming previous designs. I really don't know, but I think something big has to be incorporated to get AMD another 50% per-watt boost from basically the same process.
There was an article floating around about it, but basically, in a nutshell, compute will be taking a back seat on AMD's gaming video cards. Instead, AMD will have a separate architecture for compute, much like Nvidia does. Nvidia did the same thing with Pascal.

This allows AMD to leave much of the high-precision compute hardware on the floor, which saves space and tons of power.
 
No need to get all pissy about it. When you take the proportion of people who have Nvidia cards versus AMD and then compare how many more driver-issue threads I see from actual AMD users, it is telling. I had an R9 390 a few years back; I loved the hardware but was not a fan of the software back then either. I can say with certainty that during the year I had my R9 390 I had as many problems as I've had with Nvidia cards over 2-3 years. I'm not new to people bitching on forums; I understand how it works.

I have an extra Ryzen 3600 rig and may just drop the Tri-X 290X in it to test the current driver, just for fun, as 375 watts is not too fun.
 
Navi GPU issues might not be driver issues but actually faulty hardware (or GPUs just unable to hit a specific boost frequency at a given voltage, etc.). This would explain why some users are plagued with the black screen issue while others never see it.
Similarly, Turing GPUs getting the space invaders failure might have been caused by an error in the drivers. It does not take long to fry hardware when you are dynamically switching clocks and voltages and create some kind of overvoltage situation for a short moment. It would not be the first time Nvidia has had a GPU-breaking driver issue, and obviously Nvidia would rather not let you know about such driver issues because their drivers have a generally good reputation.

What I am saying here is: everyone makes mistakes in drivers/hardware. I would not dismiss RDNA 2 just because some of their previous products had some driver issues, especially since most users do not report any issues.

edit://
I am not really aware of how these things really are, so take my speculation with a very large grain of salt.
 
If I were to guess, I would say that Big Navi will be roughly equivalent to the 3080 Ti, maybe faster in some games and slower in others, but below the 3090. Without getting into anything at all technical, if the 3090 rumour is true, and it sounds like it is, the only really good explanations are these:

1. They are dropping the 3080 Ti for just the 3090 and will later do Super versions.

2. They will still have a 3080 Ti but think, from whatever information they have cobbled together, that it might fall short of Big Navi, and they need a non-Titan variant to "keep" the speed crown.

Also, again not getting technical, but the PS5 seems to be able to do 4K 60 FPS in some new games and has 36 CUs. Since Big Navi is rumored to have 80, it would seem like it should be a beast.
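A naive upper-bound sketch of that CU-count argument; real GPUs don't scale linearly with CU count (clocks, bandwidth and power all intervene), so treat these ratios as ceilings, not predictions:

```python
# Naive CU-count scaling; an upper bound, not a performance prediction.
PS5_CUS = 36        # PS5 GPU
NAVI10_CUS = 40     # 5700 XT
BIG_NAVI_CUS = 80   # rumored Big Navi

print(f"Big Navi vs. PS5 GPU (CU count only):  up to {BIG_NAVI_CUS / PS5_CUS:.1f}x")     # ~2.2x
print(f"Big Navi vs. 5700 XT (CU count only):  up to {BIG_NAVI_CUS / NAVI10_CUS:.1f}x")  # 2.0x
```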
 
A big part of the game here is how much silicon each player is going to be willing to give to the consumer market.

As an example, Big Navi is rumored to be an 'up to' 80 CU part, but for the consumer market they may never offer the entire ~500 mm² part. The chess match with Nvidia is how performant the parts are going to be at every "cut down" level, and also how willing each side will be to give up the most expensive, most powerful, least profitable silicon at the top.
As an absurd example, if Nvidia can cut their die in half and be competitive with AMD's top end, they could just call that the 3080 Ti (so long as the raster performance is better than the previous gen). But if AMD is more than able to crush them at that level, they'd have to offer more silicon, and that is the back-and-forth battle that AMD and Nvidia are likely figuring out: how much to hold back, or whether they should hold back any at all.

If Nvidia is forced to give up their entire die on a 3080 Ti, then there can't be a 3090. You can't just "make a bigger die"; it doesn't work that way. Nvidia can choose, as their product stack, to ignore everything AMD is doing and just offer "this amount of die" at each level, but if there is a 3090 and not just a 3080 Ti, all that will do is create another SKU at the top. At the end of the day the entire die is the entire die; they can't make it any bigger. So what I'm saying is they'd essentially be renaming the previous uncut dies from *080 Ti to *090 Ti. Which is fine, but we as consumers wouldn't be "getting any more" than we have before. Just marketing and a renamed part.

The only other alternative is that the 3090 Ti will just have better binning and be clocked higher. That might appeal to overclockers and people willing to pay a premium for even better silicon, but without offering more die there is a limit to how valuable and how much more performant a part like that could be.

Also, Super models are just clocked-up models. Technically AMD can do the same thing as binning improves, but that has never been a part of their strategy.
 