HWUB: Radeon RX 5700 XT Overclocked, RTX 2080-Like Performance?

Gamers Nexus shows that there is a ceiling on performance vs. the MHz you enter for an overclock. In other words, a higher overclock number does not necessarily mean better performance. (Maybe there's a different happy medium for every card?)

 
Yeah, I haven't gotten that impression either. It seems they aren't leaving much on the table anymore, which is good because you don't need to void a warranty or tweak forever to get the most out of your product, but some people enjoy that aspect and will surely miss it.
 
Yeah, I haven't gotten that impression either. It seems they aren't leaving much on the table anymore, which is good because you don't need to void a warranty or tweak forever to get the most out of your product, but some people enjoy that aspect and will surely miss it.

This isn't necessarily true. Over time there can be small tweaks or revisions to the chips which can potentially increase performance. Tweaks and revisions to the process the chips are manufactured on can also give better yields. Yields are likely to increase up to a certain point over time, and part of that increased yield tends to be more "golden samples". Any of these can allow for higher clocks, better efficiency, or a myriad of other advantages.

There is always the possibility of seeing better results (up to a point, especially depending on what the bottleneck or limiter is) with better silicon or more mature processes. There is no guarantee that you'll see better clocks down the road. There's also no guarantee that better clocks will increase performance considerably if the bottleneck wasn't the frequency of the GPU but something else, such as RAM bandwidth.

I think the potential for overclocking is not going away. However, we're getting much closer to the physical limits of silicon, which makes getting a lot more out of any given part more and more difficult. That's not even taking into account how mature fabrication processes are compared to what they were even 10 years ago. I wouldn't be surprised if the variance between the good silicon parts is very small now, which wasn't the case 10 or 20 years ago. Binning obviously still happens, but I wouldn't be surprised if the difference between the upper and lower end is quite a bit smaller than it used to be, and the overclocking headroom we've been used to is simply gone because of this.
 
There appears to be an issue with the memory controller for the architecture more than the memory itself.

All RAM should have some form of overclocking headroom, but from what I have seen, there is almost none.

Some users are claiming something like a 7% memory overclock, but is it as 100% stable as the results shown by Gamers Nexus or Hardware Unboxed? Unlikely.

It's insane how quickly the positive news of Igor's mod spread, without people questioning the results or their authenticity, when there were some clear red flags.

Not only was just one game tested, but a 25% increase in frequency, added voltage, and a massive power limit increase resulting in only 37 more watts, or 17% more power, doesn't appear physically possible. Bumping the power limit up alone would increase power consumption by that much; add voltage and a 25% overclock and your power consumption would climb significantly. Optimistically speaking, perhaps Igor made a mistake and reported the GPU-only power consumption.
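To put rough numbers on that: dynamic power scales roughly with frequency times voltage squared, so even a small voltage bump on top of a 25% clock increase should push power up far more than 17%. Here's a back-of-the-envelope sketch; the ~218 W baseline is just what "37 W = 17% more" implies, and the 7% voltage bump is an assumption purely for illustration.

```python
# Back-of-the-envelope check on the claimed numbers. Dynamic power scales
# roughly as P ~ C * V^2 * f; static leakage is ignored here, which would
# only make the discrepancy larger.

base_power_w = 37 / 0.17   # ~218 W baseline implied by "37 W is 17% more"
freq_scale = 1.25          # the claimed 25% core clock increase
volt_scale = 1.07          # assumed ~7% voltage bump, purely illustrative

expected_scale = freq_scale * volt_scale ** 2
expected_power_w = base_power_w * expected_scale

print(f"Reported:  ~{base_power_w + 37:.0f} W (+17%)")
print(f"Expected:  ~{expected_power_w:.0f} W (+{(expected_scale - 1) * 100:.0f}%)")
# Expected comes out around 311 W (+43%), which is hard to square with +17%.
```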

Another odd thing is the timing, as Igor was able to get a full waterblock in only 3 days (he probably got the block before launch, since testing takes time), plus the close communication between him and the RTG group directly.

It shows how insane AMD's viral marketing department is, as the news of Igor's review spread like wildfire but the corrections much less so. What the reviews are actually showing is that the RTX 2070 Super has the same kind of overclocking headroom, and can do it with the stock cooler and a minimal increase in noise, unlike the 5700 XT, which needs high fan speeds, an aftermarket cooler, or a waterblock. Sure, it might only be a 10% difference when both cards are OCed, but add the game bundle, RTX, and the better noise and power characteristics when both are OCed, and AMD might be coming up a bit short of winning over the market.


Again, nobody cares about the $499 RTX 2070 Super... Turing's antiquated architecture doesn't do anything for gamers.

Sadly, your arguments are so focused on TOP performance that you forgot about mainstream performance. And you have to admit, for $349+ the 5700 series is punching way above its weight (hence the title). AIBs are incoming, and possibly faster GDDR6... again, punching above its weight.

AMD will probably stop selling their own blower design once the AIBs land.


Navi outperforms Vega in games and is cheaper. It is only going to get worse for Nvidia.
 
This isn't necessarily true. Over time there can be small tweaks or revisions to the chips which can potentially increase performance. Tweaks and revisions to the process the chips are manufactured on can also give better yields. Yields are likely to increase up to a certain point over time, and part of that increased yield tends to be more "golden samples". Any of these can allow for higher clocks, better efficiency, or a myriad of other advantages.

There is always the possibility of seeing better results (up to a point, especially depending on what the bottleneck or limiter is) with better silicon or more mature processes. There is no guarantee that you'll see better clocks down the road. There's also no guarantee that better clocks will increase performance considerably if the bottleneck wasn't the frequency of the GPU but something else, such as RAM bandwidth.

I think the potential for overclocking is not going away. However, we're getting much closer to the physical limits of silicon, which makes getting a lot more out of any given part more and more difficult. That's not even taking into account how mature fabrication processes are compared to what they were even 10 years ago. I wouldn't be surprised if the variance between the good silicon parts is very small now, which wasn't the case 10 or 20 years ago. Binning obviously still happens, but I wouldn't be surprised if the difference between the upper and lower end is quite a bit smaller than it used to be, and the overclocking headroom we've been used to is simply gone because of this.

I don't think manufacturers want to leave room on the table for overclocking. It isn't as obvious in GPUs but if you look at Ryzen, each generation has made overclocking more and more pointless. With third generation Ryzen, your best option is essentially to just leave them alone.

The GPUs here have a small amount more to tweak for, but you are getting very little for big power outlays, so it makes sense that AMD left that on the table.
 
I don't think manufacturers want to leave room on the table for overclocking. It isn't as obvious in GPUs but if you look at Ryzen, each generation has made overclocking more and more pointless. With third generation Ryzen, your best option is essentially to just leave them alone.

The GPUs here have a small amount more to tweak for, but you are getting very little for big power outlays, so it makes sense that AMD left that on the table.
I think they would love the headroom... for their own use. It's just not there, for us or them. At least not on current processes.
 
Again, nobody cares about the $499 RTX 2070 Super... Turing's antiquated architecture doesn't do anything for gamers.

Sadly, your arguments are so focused on TOP performance that you forgot about mainstream performance. And you have to admit, for $349+ the 5700 series is punching way above its weight (hence the title). AIBs are incoming, and possibly faster GDDR6... again, punching above its weight.

AMD will probably stop selling their own blower design once the AIBs land.


Navi outperforms Vega in games and is cheaper. It is only going to get worse for Nvidia.
Turing is antiquated now? It's 10 months old. New architectures take time, you know. If you're an AMD fan, you should definitely know about architectures taking time: 3 years between Fury and Vega, and 2 years from there to Navi.

It doesn't matter what good points you make if you preface them with a silly, biased opinion like calling Turing antiquated.
 
HWUB tests the PowerPlay table hack with a liquid cooler to see if it lives up to all the hype.

Short version: it doesn't.

While he gets nearly a 20% clock speed boost, he only gets about a 7% increase in game performance, and power usage increases by nearly 40%, going from 186 watts to 258 watts in Far Cry: New Dawn.
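Just to spell out what those numbers mean for efficiency, here's a quick calculation on the figures quoted above (my arithmetic, not something from the video):

```python
# Perf-per-watt math on the Far Cry: New Dawn numbers quoted above.

stock_power_w, oc_power_w = 186, 258
perf_gain = 1.07                           # ~7% higher average FPS

power_gain = oc_power_w / stock_power_w    # ~1.39, i.e. ~39% more power
relative_efficiency = perf_gain / power_gain

print(f"Power increase: +{(power_gain - 1) * 100:.0f}%")
print(f"Perf per watt:  {relative_efficiency:.2f}x stock "
      f"(~{(1 - relative_efficiency) * 100:.0f}% worse)")
# Roughly 7% more performance for ~39% more power: about 23% worse efficiency.
```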



Edit:

Text version, which was not up when this was originally posted:
https://www.techspot.com/review/1883-overclocking-radeon-rx-5700/


And as he explained, he said it was probably because he could not overclock the RAM, and he thinks the card is bandwidth starved.
 
And as he explained, he said it was probably because he could not overclock the RAM, and he thinks the card is bandwidth starved.
Here's hoping they can get some faster memory. The good news is that if it's memory restricted, then we don't need a big bump in core frequency to get faster speeds, as long as they can do something with the memory.
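For what it's worth, the bandwidth math is easy to sketch. The 5700 XT ships with 14 Gbps GDDR6 on a 256-bit bus; the 16 Gbps case below is just a hypothetical "faster memory" scenario, not an announced part:

```python
# Peak memory bandwidth for a 256-bit GDDR6 bus at various per-pin data rates.

BUS_WIDTH_BITS = 256

def bandwidth_gb_s(data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for the given effective per-pin data rate."""
    return data_rate_gbps * BUS_WIDTH_BITS / 8

print(f"Stock 14 Gbps:         {bandwidth_gb_s(14.0):.0f} GB/s")
print(f"~7% memory OC:         {bandwidth_gb_s(14.0 * 1.07):.0f} GB/s")
print(f"Hypothetical 16 Gbps:  {bandwidth_gb_s(16.0):.0f} GB/s (+14% over stock)")
```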
 
Turing is antiquated now? It's 10 months old. New architectures take time, you know. If you're an AMD fan, you should definitely know about architectures taking time: 3 years between Fury and Vega, and 2 years from there to Navi.

It doesn't matter what good points you make if you preface them with a silly, biased opinion like calling Turing antiquated.


I say that because it is not really ten months old. It is only ten months old in gaming...

I don't care about bit-mining, or Folding, or world records… as a gamer, all I care about is stable FPS and price/performance. Turing was not conceived, nor derived, as a "gaming GPU". They are cut-down versions of cast-offs from the business world, all scarred up and repackaged to be sellable at a profit as ad-hoc, jumbled GPUs for gamers. AMD did the same thing with broken MI50s turned into the Radeon VII.

Compared to RDNA, Turing is indeed antiquated. Nobody knew that AMD (Dr. Lisa Su) was releasing Navi with a brand new architecture. There are still so many unanswered questions, but I have read nearly everything and nobody can find fault with RDNA. It shows substantial progress if little ol' 251 mm^2 Navi 10 is beating up on the bigger 331 mm^2 Vega 20, both on the same 7nm node... and with the number of patents and industry giants all leaping on RDNA, not to mention already programming for/with RDNA, everyone is looking forward, not back.

Turing in the gaming industry has fizzled out. Ampere had better be on message.
 
Correct me if I am wrong, but it seems like even going back to the Radeon 5xxx generation, everyone and their horse was calling the Radeons "memory clock speed/bandwidth starved", yet I do not recall any reviewer actually showing that they are in fact starved, i.e. reduce the core clock by X, bump the memory clock by at least this much, and you should see an immediate uplift (it really doesn't show one... does it?)
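For what it's worth, that test is easy to formalize: lock the core clock, sweep the memory clock, and see whether FPS tracks the extra bandwidth. Here's a rough sketch of the check (the FPS numbers have to come from your own benchmark runs; nothing below is measured data):

```python
# Sketch of a "bandwidth starved?" check: at a fixed core clock, map effective
# memory speed (Gbps) -> measured average FPS, then see whether FPS gains keep
# up with the bandwidth gains.

def looks_bandwidth_starved(results: dict[float, float],
                            threshold: float = 0.5) -> bool:
    """Return True if FPS gains track at least `threshold` of the bandwidth
    gains across the sweep, suggesting the workload is bandwidth limited."""
    speeds = sorted(results)
    base_speed, base_fps = speeds[0], results[speeds[0]]
    for speed in speeds[1:]:
        bandwidth_gain = speed / base_speed - 1
        fps_gain = results[speed] / base_fps - 1
        if fps_gain < threshold * bandwidth_gain:
            return False
    return True

# Usage shape (fill in with your own measurements):
# looks_bandwidth_starved({12.0: ..., 14.0: ..., 15.0: ...})
```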

I don't know much about this "theory" stuff to be honest, but basically forever, ATi and then AMD have always been about "accurate/repeatable" rather than "fast for a cost" (not talking about power use here).


Radeons tend to do VERY well with performance increases via overclocking (in my experience), on core as well as memory speeds (they seemingly boost up from an increase to one and not the other as well). There must be some odd clock tables or something where certain speeds "unlock" things while other speeds put clamps on power, or some shit like that.


I would just find it incredibly odd for AMD to release something brand spanking new, where they took the time to go over the uarch with a fine-toothed comb (with many, many uarchs to use as inspiration), only for all the reviewers to come to the same conclusion.

AMD has been lambasted for many years over "memory bottleneck, needs more speed"... maybe folks are not testing the Radeons as Radeons (workhorses) and are instead testing them as if they were a GeForce (speed with "features").


I'm not 100000% sure, but they ARE different, just like AMD CPUs are different from Intel ones. So why does the industry test them directly apples to apples, yet when it comes to other "features", because AMD does not do things the way Nvidia does, "they suck"? Maybe if they were tested to their strengths, rather than with almost every test/game as-is, there would likely be different conclusions to extol.
 
Again, nobody cares about the $499 RTX 2070 Super... Turing's antiquated architecture doesn't do anything for gamers.

Sadly, your arguments are so focused on TOP performance that you forgot about mainstream performance. And you have to admit, for $349+ the 5700 series is punching way above its weight (hence the title). AIBs are incoming, and possibly faster GDDR6... again, punching above its weight.

AMD will probably stop selling their own blower design once the AIBs land.


Navi outperforms Vega in games and is cheaper. It is only going to get worse for Nvidia.

LMAO, I hope this is a troll account. If you had a lick of sense you would realize that just because a product was released more recently does not make it superior to an older product. Or else why is Navi selling for a mere $399, instead of the $1000+ that the 2080 Ti commands? I mean, it's newer so it must be better, right? Surely AMD can sell the newer, superior Navi for more money than the "old news" Turing competition, right?
 
LMAO, I hope this is a troll account. If you had a lick of sense you would realize that just because a product was released more recently does not make it superior to an older product. Or else why is Navi selling for a mere $399, instead of the $1000+ that the 2080 Ti commands? I mean, it's newer so it must be better, right? Surely AMD can sell the newer, superior Navi for more money than the "old news" Turing competition, right?
That is correct, and also the reason the 2080 super costs more than the 2080, right? Not that I disagree with you, but there's more to it than that.
 
A bunch of silliness going on in this thread. The 5700 is good for what it is; it competes with the 2070/2070 Super for less. It's a good card. So it can't reach a 2080; in most cases it is 300 dollars less. The 2070 Super is slightly better, and slightly more expensive. No one could be faulted for going either way on a purchase. I will be upgrading from a 980 Ti to a 5700 XT, not because I worship the CEO of a greedy company, but because it's more interesting to me, and finally AMD has a card that is not stupid to buy and has competitive performance at its price. Nvidia is still top dog, but they have priced themselves out of my build, and so they don't get my money. If the 2080 Ti was $700, Nvidia would get my money. The fanboy garbage is annoying; root for competition.
 
LMAO, I hope this is a troll account. If you had a lick of sense you would realize that just because a product was released more recently does not make it superior to an older product. Or else why is Navi selling for a mere $399, instead of the $1000+ that the 2080 Ti commands? I mean, it's newer so it must be better, right? Surely AMD can sell the newer, superior Navi for more money than the "old news" Turing competition, right?

Yes, he is definitely a troll and should be ignored.

A bunch of silliness going on in this thread. The 5700 is good for what it is; it competes with the 2070/2070 Super for less. It's a good card. So it can't reach a 2080; in most cases it is 300 dollars less. The 2070 Super is slightly better, and slightly more expensive. No one could be faulted for going either way on a purchase. I will be upgrading from a 980 Ti to a 5700 XT, not because I worship the CEO of a greedy company, but because it's more interesting to me, and finally AMD has a card that is not stupid to buy and has competitive performance at its price. Nvidia is still top dog, but they have priced themselves out of my build, and so they don't get my money. If the 2080 Ti was $700, Nvidia would get my money. The fanboy garbage is annoying; root for competition.

Well said - we knew that AMD was going to target the mid range and they settled in well. Hopefully they can bring some pricing competition to the top end in the future so I don't have to sell my remaining kidney for the next top end card.
 