AMD's next-gen RDNA 2 'major leap forward' up to 225% faster than RDNA

I've been a proponent of AMD drivers in the past, but I got bit by them this weekend on my 5700 XT.

Got a new monitor, swapped it out, and was stuck at 60Hz. Thought the monitor was broken at first (it's 165Hz capable), but an AMD driver update fixed it.

Then I was getting crashing after about 5 or 10 minutes in several games. Had to do a DDU safe mode uninstall and reinstall (of the same newest driver).

Now things are working fine, but it wasn't the best experience.
Man this sucks, and I hate to pile on! But the new AMD drivers came out and two games that were working just fine started failing to launch. This happens with every beta driver. I went back to last month's drivers and it is all fine. This same process repeats with every new driver. Why is that? I am serious, this is real-world experience. If you have an Adrenalin driver that works, do not update until at least two new versions are released.
 
This happens with every beta driver.
Think I found your problem. Beta drivers will have issues... that's why they're marked beta. Stable drivers should have no issues, but with how complex GPUs are, and how hacky game code is, it shouldn't be a surprise that some stable drivers have issues too. Report them during the beta phase, and they might get fixed for the stable release.
 
225%?!? I love AMD but that is a huge number.

I don't even want Big Navi as much as I want whatever that writer was smoking.
I think they are cherry picking their numbers, like big font “225% performance increase”, small font “in ray tracing performance”. So I am sure the numbers are accurate for the scenarios in which they were generated; the question is how useful those scenarios are.
 
I suspect AMD will show us something aimed not so much at RT but at what the 3DMark "PCI Express feature test" benchmark demonstrates: scenes with more objects and less RT, versus Nvidia's DLSS blurriness (AI) + RT.
That way they don't confront Nvidia directly, they give us more of what they already have, and in the end no one can really "blame" them for it.
 
The 225% number is wrong; I think the guy got his math wrong and assumed 200% means double, when double would actually be around 100% faster. Basically he took the 'Big Navi is 72 CU' rumour, extrapolated performance from the current 5700 cards, and then added a 7% IPC improvement. Seems somewhat possible if you think Big Navi will have 72 CUs and clock speeds similar to the current Navi.
 
The 225% number is wrong; I think the guy got his math wrong and assumed 200% means double, when double would actually be around 100% faster. Basically he took the 'Big Navi is 72 CU' rumour, extrapolated performance from the current 5700 cards, and then added a 7% IPC improvement. Seems somewhat possible if you think Big Navi will have 72 CUs and clock speeds similar to the current Navi.
Navi 1 was the 5700 XT. A 100% improvement would only be around 2080 Ti speed. It's supposed to be above 2080 Ti speed by around 50% (according to rumors). Based on that, 225% is "accurate" (200% faster being a 3x improvement), if a waste of time.
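Just to make the percentage convention explicit, here's a rough sketch of the arithmetic (my own illustration, not anything from the article):

Code:
# How an "X% faster" claim maps to a multiplier over the baseline card.
def speedup(percent_faster):
    return 1.0 + percent_faster / 100.0

print(speedup(100))  # 2.0  -> "100% faster" means double
print(speedup(200))  # 3.0  -> triple
print(speedup(225))  # 3.25 -> the headline number, taken literally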

Personally, I'm in wait-and-see mode. All this hype beforehand is a complete and utter waste of time, and not just for Big Navi and Ampere, but basically anything before launch.
 
I'm truly rooting for AMD to put out an amazing GPU that can do what this story claims, but I will believe it when I see the benchmarks.
 
I'm truly rooting for AMD to put out an amazing GPU that can do what this story claims, but I will believe it when I see the benchmarks.
I am more interested in their workstation offerings; their existing ones aren't remotely worth it, and I would like an alternative to the Quadros.
 
Navi 1 was the 5700 XT. A 100% improvement would only be around 2080 Ti speed. It's supposed to be above 2080 Ti speed by around 50% (according to rumors). Based on that, 225% is "accurate" (200% faster being a 3x improvement), if a waste of time.

Personally, I'm in wait-and-see mode. All this hype beforehand is a complete and utter waste of time, and not just for Big Navi and Ampere, but basically anything before launch.

The 2080 Ti is not double the speed of the 5700 XT. Come on now, that's just a silly thing to say. Have you actually seen the numbers?
 
You think 2x a 5700 XT is only around a 2080 Ti?
Grabbed the first overall GPU list, there are plenty more.
Shows the 2080 Ti @ 94.4 and a 5700 XT @ 70.3.

https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html

Or put another way, on Tom's graph.
Average of 9 real-world games:
1080p Ultra: 2080 Ti 151.5 FPS.... 5700 XT 117.7 FPS
1440p Ultra: 2080 Ti 120.5 FPS... 5700 XT 86.8 FPS
4K Ultra: 2080 Ti 72.7 FPS... 5700 XT 49.5 FPS

So 70-80% of the performance of a 2080 Ti depending on the resolution you like. Fast enough to push a 120 Hz 1080p monitor with everything cranked to max in almost every game... and in a very nice FreeSync range at ultrawide or 1440p... and even 4K (at ultra settings).
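Running the quoted Tom's averages through a quick ratio check (my own arithmetic on the FPS figures above):

Code:
# 5700 XT as a fraction of the 2080 Ti, per resolution (figures from the post).
toms_avg = {
    "1080p Ultra": (151.5, 117.7),  # (2080 Ti FPS, 5700 XT FPS)
    "1440p Ultra": (120.5, 86.8),
    "4K Ultra": (72.7, 49.5),
}
for res, (ti, xt) in toms_avg.items():
    print(f"{res}: 5700 XT = {xt / ti:.0%} of a 2080 Ti")
# Prints roughly 78%, 72%, and 68% -- in the ballpark of the 70-80% claim.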

And of course, where I am in Canada, a 2080 Ti is selling for $1200 Canadian... and I paid $450 for my 5700 XT. You can almost buy three AMD cards for the cost of one 2080 Ti... for 75% of the performance. (And more importantly, where I am it's also 50% cheaper than a 2070 Ti and within single digits performance-wise.) All I know is 20% more GPU performance was in no way close to worth $750 to me... that money is much better spent on a CPU bump, a better mobo, more RAM, or just a higher-end monitor; an extra $750 on a high-end monitor is a much better investment imo. The 2080 Ti and even the 2070s only made sense imo if you have money to burn and/or care more about what people think of your rig than actual performance. NV may have the fastest card this round... but by pricing them where they are, they're a waste of money. I am hoping AMD doesn't just match NV pricing if the RDNA 2 stuff is as good as the rumors.
 
2080 Ti is about 50% better than the 5700 XT.

[attached benchmark chart]


Meaning, if the rumors are true and Big Navi is 2x the performance, we would be looking at around 50% better than 2080 Ti perf.
 
Well, whatever it ends up being, AMD simply has to beat the 2080 Ti. Anything else would be... sad... since they still have to beat the fucking 1080 Ti.

Also, I have a feeling that pure raster muscle might not be the deciding factor as far as how sales will go for the next gen of GPUs.
 
Well, whatever it ends up being, AMD simply has to beat the 2080 Ti. Anything else would be... sad... since they still have to beat the fucking 1080 Ti.

Also, I have a feeling that pure raster muscle might not be the deciding factor as far as how sales will go for the next gen of GPUs.

The 5700 XT is a match for the 1080 Ti; there is very little difference between the two, except that I can get a new 5700 XT for less than a used 1080 Ti. Since I am still using a 1080, I don't think that's a bad place to be for AMD.
 
Well, whatever it ends up being, AMD simply has to beat the 2080 Ti.

AMD has been consistently saying that RDNA 2 is a 50% improvement over RDNA 1 in terms of performance per watt.

If we take it that the RX 5700 XT is 70% of an RTX 2080 Ti, then an RX 6700 XT using the same power budget should land close to, or slightly above, an RTX 2080 Ti.
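A back-of-envelope version of that argument (a sketch assuming the rumored +50% perf-per-watt figure and the ~70% ratio above; the "same power budget" card is hypothetical):

Code:
# Scale the 5700 XT's relative performance by the claimed efficiency gain,
# holding power constant, so the perf/watt gain becomes pure performance.
perf_5700xt_vs_2080ti = 0.70   # 5700 XT at ~70% of a 2080 Ti
rdna2_perf_per_watt = 1.50     # AMD's stated RDNA 1 -> RDNA 2 improvement

same_power_rdna2_card = perf_5700xt_vs_2080ti * rdna2_perf_per_watt
print(f"{same_power_rdna2_card:.2f}x a 2080 Ti")  # ~1.05x, i.e. slightly ahead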

So 70-80% of the performance of a 2080 Ti depending on the resolution you like.
 
sad... since they still have to beat the fucking 1080 Ti.
Well, the Radeon VII was pretty close to the 1080 Ti, about 3 fps less in the chart I posted above.

I would say, in practice, it was close enough to be comparable. But, yeah, technically they haven't beaten it.
 
AMD has been consistently saying that RDNA 2 is a 50% improvement over RDNA 1 in terms of performance per watt.

If we take it that the RX 5700 XT is 70% of an RTX 2080 Ti, then an RX 6700 XT using the same power budget should land close to, or slightly above, an RTX 2080 Ti.

Yep. I think people forget that the reports say they are really shooting for higher clock speeds, and on top of that you have almost double the CUs. With the clock speed improvement, it's not too hard to believe we can get double the speed of the 5700 XT.
 
Yep. I think people forget that the reports say they are really shooting for higher clock speeds, and on top of that you have almost double the CUs. With the clock speed improvement, it's not too hard to believe we can get double the speed of the 5700 XT.
The only thing that makes me doubt AMD's (and TSMC's) capability to produce a 2x Navi is AMD's own propensity to mess shit up. This feels like every other hyped release, just with twice the potential for a shitshow.

On the other hand, it would certainly be refreshing for AMD to toss something competitive out, even if it's only competitive for a few days to a few months before Nvidia responds. We're just hoping that AMD gives Nvidia a reason to respond in the first place!
 
Nvidia responded significantly to little Navi. Why do you think you even have the Super lineup?

This idea that AMD isn't causing the much more entrenched favorites (Nvidia, Intel) to respond, just because they don't put out hardware at twice the price point for a dick-measuring contest at a teeny tiny market share, is just ignoring reality.

Whether it beats Nvidia's best, most expensive hardware or not... it'll be plenty fast and, more importantly, it'll work correctly in more than just Windows. So I'll be buying it anyway. Some things are more important than getting to say you are the fastest... like working open drivers and excellent Vulkan support. But it is nice when you can do all that and rub more expensive hardware in the dirt like they have been doing to Intel for over a year.

Nvidia needs a good slapping around and AMD is the only company capable of doing it... It's in everyone's interest that they do... unless you think $1000 video cards using two-year-old tech are good.
 

WCCFTech Rumor said:
AMD Radeon RX 'Big Navi' Graphics Card To Feature 16 GB GDDR6 VRAM Capacity

The first bit covers the specifications, and in that regard the Big Navi graphics card is stated to get 16 GB of VRAM capacity. That's double the VRAM buffer of the existing Radeon RX 5700 XT graphics card, which is based on the Navi 10 GPU. It is not mentioned what type of memory architecture will be utilized, but the leaker does mention a 384-bit bus interface, which would point to a GDDR6 interface rather than HBM2(e).

Unless AMD is doing something funky with non-symmetric memory channels (like Nvidia got sued for a few years back), 16GB and a 384-bit bus don't agree with each other, which makes me inclined to put this rumor among the more dubious that WCCFtech has published recently.
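A quick sketch of why the numbers clash (assuming the usual one GDDR6 chip per 32-bit channel, with the common 1 GB and 2 GB densities):

Code:
# Symmetric capacities possible on a 384-bit bus with equal-density chips.
BUS_WIDTH_BITS = 384
CHANNELS = BUS_WIDTH_BITS // 32          # 12 independent 32-bit channels

for chip_gb in (1, 2):                   # common GDDR6 chip densities
    print(f"{CHANNELS} x {chip_gb} GB = {CHANNELS * chip_gb} GB")
# 12 GB or 24 GB -- getting exactly 16 GB means mixing densities (asymmetric
# channels) or dropping to a 256-bit bus (8 x 2 GB).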
 
Unless AMD is doing something funky with non-symmetric memory channels (like Nvidia got sued for a few years back), 16GB and a 384-bit bus don't agree with each other, which makes me inclined to put this rumor among the more dubious that WCCFtech has published recently.
While very unlikely (I agree), it is possible to use one channel per memory chip if the memory chips are not all equal capacity. Nvidia has used this configuration on a few cards (usually lower end), with some channels carrying 2GB chips and some 1GB chips. It's a sort of similar issue to the GTX 970, but not exactly the same. As long as the card is designed so that a single cluster of ROPs is attached to one memory channel, and each memory channel is attached to one chip, then it will be "fine"; it won't lead to a GTX 970 situation. Realistically, data stored on the larger portions of half the chips would have somewhat slower access, simply because some controllers would be handling more activity than others, but this shouldn't matter too much in the majority of situations.
The problem with the 970 was that one of the 512MB chips was attached to a memory channel that already had a regular chip on it, since the ROPs were cut down and no longer attached 1:1 like that architecture needed.
 
Also, I have a feeling that pure raster muscle might not be the deciding factor as far as how sales will go for the next gen of GPUs.

AMD could very well be betting that rasterization remains the deciding factor. It's not a bad gamble, as they know Nvidia will throw a lot of transistors at RT at the expense of raster.

There's no killer app for RT yet, and based on early footage, Cyberpunk doesn't seem to be it. If the rumors of great TSMC yields and less-than-great results from Samsung are true, AMD may have an opening to cause some real damage.

Either way, it will be great to see a heavyweight fight again. AMD needs to make a big splash, though. The 290X and Fury X were competitive with Nvidia's best at the time but still never really caught on.
 
Realistically, AMD will be "betting on" whatever hardware design was ordered by Microsoft and Sony. RDNA 2 was designed for them, so whatever "performance profile" the consoles have is what "Big Navi" will have. They can scale up to larger amounts of hardware and get more performance, sure, but the basic design of how many TMUs and RT processors per shader core cluster and ROP cluster will be dictated by the consoles' needs.
 
I suspect AMD will show us something aimed not so much at RT but at what the 3DMark "PCI Express feature test" benchmark demonstrates: scenes with more objects and less RT, versus Nvidia's DLSS blurriness (AI) + RT.
That way they don't confront Nvidia directly, they give us more of what they already have, and in the end no one can really "blame" them for it.

"DLSS blurryness"? DLSS 2.0 looks amazing with many people that have tried it saying it looks even better than native res (I've only used it in Death Stranding; to me it looks identical to native res but with better performance).


2080 Ti is about 50% better than the 5700 XT.

Meaning, if the rumors are true and Big Navi is 2x the performance, we would be looking at around 50% better than 2080 Ti perf.

2x 5700 XT performance in this chart would be:

92 fps average and 60 fps min, vs 70 fps average and 55 fps min on the 2080 Ti. That comes out to Big Navi being about 30% faster in average fps and 9% faster in minimum fps. That wouldn't surprise me too much and sounds like it would match the rumors that are out there for the 3080. Not bad at all if true.
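The arithmetic behind those percentages (the 92/60 figures are the hypothetical doubled 5700 XT numbers from the post, the 70/55 are the 2080 Ti figures from the chart):

Code:
# Relative gain of a hypothetical "2x 5700 XT" over the 2080 Ti chart numbers.
big_navi = {"avg": 92.0, "min": 60.0}     # doubled 5700 XT figures (hypothetical)
rtx_2080_ti = {"avg": 70.0, "min": 55.0}  # 2080 Ti figures from the chart
for key in big_navi:
    gain = big_navi[key] / rtx_2080_ti[key] - 1
    print(f"{key}: +{gain:.0%}")          # avg +31%, min +9%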
 
I think they are cherry picking their numbers, like big font “225% performance increase”, small font “in ray tracing performance”. So I am sure the numbers are accurate for the scenarios in which they were generated; the question is how useful those scenarios are.

Pretty much. Anything before RTX was horrible with ray tracing, even if they could run it in software. Now that they are packing ray tracing hardware it's bound to be a massive jump.
 
I suspect AMD will show us something aimed not so much at RT but at what the 3DMark "PCI Express feature test" benchmark demonstrates: scenes with more objects and less RT, versus Nvidia's DLSS blurriness (AI) + RT.
That way they don't confront Nvidia directly, they give us more of what they already have, and in the end no one can really "blame" them for it.

You really should catch up:




Unless you confused FFX with DLSS.
 
The 2080 Ti is not double the speed of the 5700 XT. Come on now, that's just a silly thing to say. Have you actually seen the numbers?

Everyone knows that an AMD product has to be at least 25% faster in everything and cheaper to be considered "equal" to a competing Nvidia product...
 
You forgot features, power consumption and the AMD user base 😏
The only things AMD currently doesn't have an equivalent for are DLSS and RTX. Power consumption isn't that different (yes, AMD is on a smaller node and "should" have an advantage).
RTX in its current form is meaningless to me anyway, because I would need a 2080 Ti to run games at 120 Hz @ 1080p with RTX on.

I don't buy GPUs for more than $300 (or at least haven't yet).

If you want to get technical, the RX 5700 has PCIe 4.0 and Turing doesn't. But you see, it's almost meaningless.
 
The only things AMD currently doesn't have an equivalent for are DLSS and RTX. Power consumption isn't that different (yes, AMD is on a smaller node and "should" have an advantage).
RTX in its current form is meaningless to me anyway, because I would need a 2080 Ti to run games at 120 Hz @ 1080p with RTX on.

I don't buy GPUs for more than $300 (or at least haven't yet).

If you want to get technical, the RX 5700 has PCIe 4.0 and Turing doesn't. But you see, it's almost meaningless.

Agreed that the 20xx cards' ray tracing is badly underpowered compared to their shading capacity. However, the 12nm to 7nm process shrink gives Nvidia enough additional transistors to play with at the same die size to plausibly deliver 300% faster ray tracing and 50% faster shader performance without any architectural-level gains. (In practice I expect they will have architectural-level gains too, but spend them to reduce die sizes; Turing was comparatively huge compared to previous generations.) That would be enough of an uplift in the former to do current levels of ray tracing at price-appropriate performance for each tier, i.e. the 3080/Ti will be able to do both at 4k60 or 1440p120, and so on down the product line.

That level of commitment would give RT a huge push toward becoming mainstream, and would make their gaming GPUs unattractive to ponzicoin miners. As a gamer I count both of those as huge wins. The only real danger Nvidia faces is that if they only deliver a proof-of-concept-level RT implementation similar to what the RTX 20xx series offers, they run a larger risk of being overtaken in the large number of games currently making little or no use of RT.
 
Agreed that the 20xx cards' ray tracing is badly underpowered compared to their shading capacity. However, the 12nm to 7nm process shrink gives Nvidia enough additional transistors to play with at the same die size to plausibly deliver 300% faster ray tracing and 50% faster shader performance without any architectural-level gains. (In practice I expect they will have architectural-level gains too, but spend them to reduce die sizes; Turing was comparatively huge compared to previous generations.) That would be enough of an uplift in the former to do current levels of ray tracing at price-appropriate performance for each tier, i.e. the 3080/Ti will be able to do both at 4k60 or 1440p120, and so on down the product line.

That level of commitment would give RT a huge push toward becoming mainstream, and would make their gaming GPUs unattractive to ponzicoin miners. As a gamer I count both of those as huge wins. The only real danger Nvidia faces is that if they only deliver a proof-of-concept-level RT implementation similar to what the RTX 20xx series offers, they run a larger risk of being overtaken in the large number of games currently making little or no use of RT.

So quite possibly AMD could corner the mining market?
It could potentially make them more $$$ than gaming if mining keeps building up again, and they might just be cool with that.
 
The only things AMD currently doesn't have an equivalent for are DLSS and RTX. Power consumption isn't that different (yes, AMD is on a smaller node and "should" have an advantage).
RTX in its current form is meaningless to me anyway, because I would need a 2080 Ti to run games at 120 Hz @ 1080p with RTX on.

I don't buy GPUs for more than $300 (or at least haven't yet).

If you want to get technical, the RX 5700 has PCIe 4.0 and Turing doesn't. But you see, it's almost meaningless.

The "only" things is quite misleading.
Not only is DXR and DLSS groundbreaking, but those are not the only things.

https://www.tomshardware.com/features/amd-vs-nvidia-gpus
 
The only things AMD currently doesn't have an equivalent for are DLSS and RTX. Power consumption isn't that different (yes, AMD is on a smaller node and "should" have an advantage).
RTX in its current form is meaningless to me anyway, because I would need a 2080 Ti to run games at 120 Hz @ 1080p with RTX on.

I don't buy GPUs for more than $300 (or at least haven't yet).

If you want to get technical, the RX 5700 has PCIe 4.0 and Turing doesn't. But you see, it's almost meaningless.

At $300, the RX 5700 has been a budget champ almost since launch. The power consumption difference between Navi and Turing at similar performance levels favors Nvidia by maybe 5W. I looked up the difference a few months ago for a different thread, and it's just not accurate to label Navi a power hog compared to Turing. If RDNA 2 comes anywhere near their claimed 50% efficiency improvement, they might actually be more power efficient than Ampere.

I don't have high hopes that either camp is going to release a sub-$300 card anytime soon. I'm guessing that the 3060 will probably be in the $350-400 range, as will the RDNA 2 based 5700 replacements. Hopefully, if they do release something cheaper, it isn't just rebranded last-gen cards.
 
I don't have high hopes that either camp is going to release a sub $300 card anytime soon.

Going by past release cadences, a $300 next-gen card with ray tracing could take another year.

AMD is rumored to be planning a refresh of the 5700 XT, which could bring the price of non-ray-traced cards to below $300.
 