AMD RX 5700 XT GPUs reaching 110°C “expected and within spec” for gaming

I just got my loop up and running after leak testing it over the weekend... I ran the Handbrake stability test using 16 threads and as much memory as it could use, along with the CPUID PowerMax GPU test, for 5 hours this morning...

My 3700X was running around 65C by the end of the test, while my 5700 XT was running at 2125 MHz (all stock settings in Wattman, including 0% extra power) with the GPU temp leveling out at 47C and a junction temp of 84C... This is about as stressful as you can get, and it will far exceed any kind of gaming load...

I realize this is only semi-relevant to the thread, but it at least highlights how great water cooling is, and that we should be seeing more cards come with water cooling as a stock option, a la the 295X2 and the various Nvidia partner hybrid cards... I do have to say the 50th Anniversary Edition backplate looks lovely with the EK block... I had to do some hunting for screws with enough length, but I found some that allowed me to place a 1mm-thick thermal pad on the back of the VRM area of the PCB, since AMD did not feel it was necessary to ship it that way from the factory... Ignore the FragHarder Disco Lights; I had forgotten to turn them off at first boot. And the fill tube on my res:

[image attachment]
 
You mean high heat of 110C to 130C damages the silicon?!?!! Not according to AMD...
Not sure where you got 130C, but 110C as a *junction* or *hot spot* temperature should not damage the silicon. This is what AMD is saying, and it has been confirmed by several other people, including Steve from GN.

You have to expect that the hot spot temp may be 20C or even 30C above the edge temperature. So when you see 110C (if it ever even got that high) this really is 80C or 90C of edge temp (which is what Nvidia reports, or AMD pre-Vega).
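To put the numbers above in concrete terms, here's a trivial sketch in Python (the 20-30C offset is just the range quoted in this thread; the actual junction-to-edge delta varies by die, cooler, and workload):

```python
# Rough illustration of the junction-vs-edge relationship described above.
# The 20-30 C offset is only the range quoted in this thread, not a spec.

def estimated_edge_temp(junction_c, offset_c):
    """Edge temperature implied by a junction reading and an assumed offset."""
    return junction_c - offset_c

# A 110 C junction reading implies roughly 80-90 C at the edge sensor,
# which lines up with what Nvidia (and pre-Vega AMD) cards report:
for offset_c in (20, 30):
    print(estimated_edge_temp(110, offset_c))  # prints 90, then 80
```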

Beyond all that, I ran FurMark for 5 minutes on the stress test and my 5700 XT didn't even break 100C (99C max, actually), which is fine. The fan wasn't even running that loud, either.
 
Not sure where you got 130C, but 110C as a *junction* or *hot spot* temperature should not damage the silicon. This is what AMD is saying, and it has been confirmed by several other people, including Steve from GN.

You have to expect that the hot spot temp may be 20C or even 30C above the edge temperature. So when you see 110C (if it ever even got that high) this really is 80C or 90C of edge temp (which is what Nvidia reports, or AMD pre-Vega).

Beyond all that, I ran FurMark for 5 minutes on the stress test and my 5700 XT didn't even break 100C (99C max, actually), which is fine. The fan wasn't even running that loud, either.

I know all that.

I was responding to a comment about AMD cards having triple the failure rate, to which the poster stated "yeah some miners are stupid and fried their cards"... an ironic statement as AMD and the Apologists are all saying "It's fine to run hot"...

The logical posters have been saying "the hotter you run it, the faster it will die", as evidenced by the triple failure rate observation.
 
Yeah, I agree. I think mining, in bad conditions, can certainly damage cards.

Maybe not the silicon, but other components, as Steve mentions; that I can believe.

In the case of mining, though, I think it had more to do with the situation they were in, e.g. overclocking, inadequate cooling, bad air flow, etc. and not just that the cards got hot under normal operating conditions.
 
Yeah, I agree. I think mining, in bad conditions, can certainly damage cards.

Maybe not the silicon, but other components, as Steve mentions; that I can believe.

In the case of mining, though, I think it had more to do with the situation they were in, e.g. overclocking, inadequate cooling, bad air flow, etc. and not just that the cards got hot under normal operating conditions.

Serious miners run their cards open-air, undervolted, and as cool as possible.

I'd trust a miner's card much more than a gamer's card that was stuffed into a dusty mini-ITX case with a zero-RPM fan profile, that's for sure.
 
I'd trust a miner's card much more than a gamer's card that was stuffed into a dusty mini-ITX case with a zero-RPM fan profile, that's for sure.
Well, gamers aren't running their GPUs at 100% load 24/7 for months at a time. Surely that has something to do with it.
 
Serious miners run their cards open-air, undervolted, and as cool as possible.

I'd trust a miner's card much more than a gamer's card that was stuffed into a dusty mini-ITX case with a zero-RPM fan profile, that's for sure.

Having been in the mining game for a while now (started when BTC was under $10), I think I can honestly say that there has been a HUGE focus on perf/W over the last ~2 years... Back when we were all mining LTC, most people either ran their 290s at stock, possibly with a small undervolt, or went all out and paid for The Stilt's mining BIOS that disabled half of the ROPs, which in turn reduced power usage and temperatures a fair amount.

Stuffing 4-6 290s with reference cooling into a single system produced an insane amount of heat. I had three 290s under water in my main gaming/mining rig, and even with the mining BIOS and other associated tweaks my rad would get hot enough that keeping your hand on it for more than 15 seconds was uncomfortable, and 30 seconds was mild-burn-level hot. We have come a long way since then. I loved my 290s but wish AMD had been able to launch them with the same respun die that found itself in the 390/390X. Those cards clocked very well, even on air, and that much faster memory really made a decent difference performance-wise.
 
I just got my loop up and running after leak testing it over the weekend... I ran the Handbrake stability test using 16 threads and as much memory as it could use, along with the CPUID PowerMax GPU test, for 5 hours this morning...

My 3700X was running around 65C by the end of the test, while my 5700 XT was running at 2125 MHz (all stock settings in Wattman, including 0% extra power) with the GPU temp leveling out at 47C and a junction temp of 84C... This is about as stressful as you can get, and it will far exceed any kind of gaming load...

I realize this is only semi-relevant to the thread, but it at least highlights how great water cooling is, and that we should be seeing more cards come with water cooling as a stock option, a la the 295X2 and the various Nvidia partner hybrid cards... I do have to say the 50th Anniversary Edition backplate looks lovely with the EK block... I had to do some hunting for screws with enough length, but I found some that allowed me to place a 1mm-thick thermal pad on the back of the VRM area of the PCB, since AMD did not feel it was necessary to ship it that way from the factory... Ignore the FragHarder Disco Lights; I had forgotten to turn them off at first boot. And the fill tube on my res:

View attachment 183085

Excellent. I see you also went with a Seasonic power supply; they are high quality. :)
 
Your post brings me back to the 290X, when people chanted the same thing about 95C on the die, with "AMD knows what they're doing" kinds of posts. Some of us were skeptical, noting physics and the other components on the card. Then the card failed at ~3x the rate of any other card at the time...

This is a slightly different situation, but I never go by the "AMD knows what they are doing" argument, since we've seen them completely maul launches over and over.

I addressed this before and you ignored my post. The failures weren't down to the heat. The 290 cards were perfectly capable of running at 95 degrees. If you want me to link to my previous response to this, I will.
 
I addressed this before and you ignored my post. The failures weren't down to the heat. The 290 cards were perfectly capable of running at 95 degrees. If you want me to link to my previous response to this, I will.

Ignoring and dismissing are two different things ;).

Just kidding, I just missed it.

I was looking earlier and it seems traditionally AMD has much higher failure rates, but I can't find anything to refute your claims. I am an EE, and I am very skeptical of 95C just from physics and what I know of components. Regardless, AMD has mauled enough other things that I am still surprised people take their claims blindly. That applies to any corporation, to be honest.
 
Excellent. I see you also went with a Seasonic power supply; they are high quality. :)

Yeah, it's an SS FP Gold 1kW unit... SS gave it to me after my third FP 850W Gold unit failed. It was a freak thing that was isolated to an old Asus GTX 970 (ripple issue) and Vega 56/64/VII (current spikes, according to SS)...

It was a maddening experience, but I have had so few issues with SS units that I just rolled with it.
 
Triple this for Nvidia.
Ask Apple, or more recently the initial 2000-series owners.
Not really.

You can name specific incidents to fuel your whataboutism, but in comparison, it's hard to name AMD products that did not have notable issues upon release. The difference is exceedingly stark.

BBBBBuuuutttt, we are not talking about Nvidia, we are talking about AMDDDDDDDDD. :D :) I have three AMD machines, zero serious issues, and the images look sharp and clean. Nothing is overheating or hitting 110C anywhere.
 
BBBBBuuuutttt, we are not talking about Nvidia, we are talking about AMDDDDDDDDD. :D :) I have three AMD machines, zero serious issues, and the images look sharp and clean. Nothing is overheating or hitting 110C anywhere.

I have... four Nvidia machines, one of which is also AMD, one of which is also Intel via Optimus, and two that are just Intel. They look the same, as they should.


As stated before, if there is a difference, there is a setup or hardware problem involved. We'd be glad to help you troubleshoot.
 
Not really.

You can name specific incidents to fuel your whataboutism, but in comparison, it's hard to name AMD products that did not have notable issues upon release. The difference is exceedingly stark.

No, you are digging up unsubstantiated rumors from an unconfirmed old source, while the newer problems were seen even by Kyle here, so suck it; this time it's actually apples to apples.

Edit: Heck, the 2000-series problem was big enough for Nvidia to make a public announcement, or did you forget that part intentionally? Tit for tat.

Edit2: https://www.extremetech.com/gaming/...tx-2080-ti-gpus-are-defective-promises-remedy
A confirmed issue from August 2018 is more relevant than a rumor about a card from November 2013 (a rumor that was never actually confirmed), since we are in 2019. (And prior to that we had the Apple lawsuit, so again, in proven history, team Green has been more factually problematic.)
 
No, you are digging up unsubstantiated rumors from an unconfirmed old source, while the newer problems were seen even by Kyle here, so suck it; this time it's actually apples to apples.

Edit: Heck, the 2000-series problem was big enough for Nvidia to make a public announcement, or did you forget that part intentionally? Tit for tat.

Edit2: https://www.extremetech.com/gaming/...tx-2080-ti-gpus-are-defective-promises-remedy
A confirmed issue from August 2018 is more relevant than a rumor about a card from November 2013 (a rumor that was never actually confirmed), since we are in 2019. (And prior to that we had the Apple lawsuit, so again, in proven history, team Green has been more factually problematic.)

Your first sentence is ironic, considering the rest of your post lacks any sort of statistically valid data at all.
 
When there is admission of fault no data is required.

And since the GPU market is basically a duopoly: if you have to take a critical look at the maker without a hard, proven admission of hardware failures, then you truly must triple that look at the one with a very recent admission (6 years ago vs. 1 year ago).

Edit:
Also, I let go of the 970 case since it was technically not faulty hardware, just dishonest hardware. Again, team Green.
Admission of guilt = proof is no longer required. Let me know when AMD video cards get multiple class-action lawsuits against them.
Edit2: https://www.google.co.ve/amp/s/www....ion-lawsuit-over-macbook-pro-gpus/amp/?espv=1
Again https://www.polygon.com/platform/amp/2016/7/28/12315238/nvidia-gtx-970-lawsuit-settlement

Team Green has the proven corner-cutting hardware track record; you can scream your head off, but it's known to be true. If the AMD claim were true, you would have read about it from more than one supposed, unproven vendor. And in team Green's case, the latest fault was with their newest architecture.
 
Also, not a rumor. I had to RMA my first 2080 Ti, as did many people on this site, including Kyle.
 
Their history of execution should lend caution to their customers, for sure, even when those products are knockouts.

He's talking about AMD.

Triple this for Nvidia.
Ask Apple, or more recently the initial 2000-series owners.

Not really.

You can name specific incidents to fuel your whataboutism, when in comparison, it's hard to name AMD products that did not have notable issues upon release. The difference is exceedingly stark.

Note: In this reply he wasn't ignoring or disputing the initial issues that the 20xx cards had.

No, you are digging up unsubstantiated rumors from an unconfirmed old source, while the newer problems were seen even by Kyle here, so suck it; this time it's actually apples to apples.

Edit: Heck, the 2000-series problem was big enough for Nvidia to make a public announcement, or did you forget that part intentionally? Tit for tat.

Edit2: https://www.extremetech.com/gaming/...tx-2080-ti-gpus-are-defective-promises-remedy
A confirmed issue from August 2018 is more relevant than a rumor about a card from November 2013 (a rumor that was never actually confirmed), since we are in 2019. (And prior to that we had the Apple lawsuit, so again, in proven history, team Green has been more factually problematic.)

Old issues with AMD, unsubstantiated and unconfirmed? Go back 10 years and read AMD-related news and forum posts.

Your edits are pointless; IdiotinCharge never said there were no issues with the release of the 20xx cards... for which we have never had official failure numbers, not any that I have seen, anyway.
 
He's talking about AMD.

Note: In this reply he wasn't ignoring or disputing the initial issues that the 20xx cards had.

Old issues with AMD, unsubstantiated and unconfirmed? Go back 10 years and read AMD-related news and forum posts.

Your edits are pointless; IdiotinCharge never said there were no issues with the release of the 20xx cards... for which we have never had official failure numbers, not any that I have seen, anyway.

So, how many dead cards or major issues resulting in across-the-board RMAs have you seen for the 5700 series yet? :rolleyes: Oh, and the person you quoted is correct: there was no substantiation for the claim of 3 times the failure rate, if that is what he is referring to.
 
So, how many dead cards or major issues resulting in across-the-board RMAs have you seen for the 5700 series yet? :rolleyes: Oh, and the person you quoted is correct: there was no substantiation for the claim of 3 times the failure rate, if that is what he is referring to.

The post that was being replied to said "history of execution"... that means past performance.

All it offers regarding the 5700s is a word of caution, and time will tell. A well-earned word of caution.
 
Serious miners run their cards open-air, undervolted, and as cool as possible.

I'd trust a miner's card much more than a gamer's card that was stuffed into a dusty mini-ITX case with a zero-RPM fan profile, that's for sure.

Well, gamers aren't running their GPUs at 100% load 24/7 for months at a time. Surely that has something to do with it.
Running the card 24/7 at a constant load is better than stressing it daily through thermal cycling. And miners (unlike many gamers) do care about perf/W rather than maximum performance, unless they are stealing electricity from somewhere.

The only things to beware of are that the fans running at 100% all the time may have taken a toll on the bearings, and that the thermal paste may have dried up. Some maintenance on the cooler may therefore be advisable. The mining card itself will most likely be in good condition.
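As a footnote on the perf/W point: efficiency is just hashrate per watt, which makes the undervolting trade-off easy to see. A quick sketch with made-up numbers (not measured 5700-series figures):

```python
# Why miners undervolt: a small clock/hashrate loss can buy a large power
# saving, improving hashrate-per-watt. All figures below are hypothetical.

def efficiency_mh_per_w(hashrate_mh, power_w):
    return hashrate_mh / power_w

stock = efficiency_mh_per_w(50, 180)        # stock clocks and voltage
undervolted = efficiency_mh_per_w(48, 130)  # slightly slower, much cooler

print(round(stock, 3))        # -> 0.278
print(round(undervolted, 3))  # -> 0.369
```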
 
The melting point of silicon is about 1414C.

I'm not worried about 100C.

I know. I just tried a higher overclock with my Sapphire RX 5700, ended up with a junction temp of 101C, and did not once think, oh no, I have killed my GPU! :D I did get some occasional screen flashing, though, so the GPU was not stable at those speeds.

Edit: Fixed it to say junction or hotspot temp.
 
I know. I just tried a higher overclock with my Sapphire RX 5700, ended up with an edge temp of 101C, and did not once think, oh no, I have killed my GPU! :D I did get some occasional screen flashing, though, so the GPU was not stable at those speeds.

Yeah, one time my Wattman profile defaulted to some awful flat non-curve setting and my XT hit 107C or something like that. If you can handle the noise, a custom fan profile will keep temps in the 70s.
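For anyone curious what a custom profile actually computes, a fan curve is just interpolation between temperature/speed points. A minimal sketch, with hypothetical points rather than Wattman's actual values:

```python
# Minimal fan-curve sketch: linearly interpolate fan speed between
# user-defined (temperature C, fan %) points. The points below are
# made-up examples, not Wattman defaults.

CURVE = [(40, 20), (60, 40), (75, 70), (90, 100)]

def fan_percent(temp_c, curve=CURVE):
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(67.5))  # -> 55.0, midway between the 60 C and 75 C points
```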
 
Yeah, one time my Wattman profile defaulted to some awful flat non-curve setting and my XT hit 107C or something like that. If you can handle the noise, a custom fan profile will keep temps in the 70s.

Oh, I meant hotspot or junction temp but, I agree with what you are saying. :)
 
I know all that.

I was responding to a comment about AMD cards having triple the failure rate, to which the poster stated "yeah some miners are stupid and fried their cards"... an ironic statement as AMD and the Apologists are all saying "It's fine to run hot"...

The logical posters have been saying "the hotter you run it, the faster it will die", as evidenced by the triple failure rate observation.


Silicon hotspot temps and blowing up VRMs through bad mining practices are two different things.

I am going to guess you haven't run a scaled-out mining farm, have you?
 
Silicon hotspot temps and blowing up VRMs through bad mining practices are two different things.

I am going to guess you haven't run a scaled-out mining farm, have you?

Not really sure why you quoted me, since I have no idea what you are saying.
 
So, how many dead cards or major issues that resulted in across the board rma's have you seen for the 5700 series yet? :rolleyes: Oh, and the person you quoted is correct, there was no substantiation to the claim of 3 times the failure rate, if that is what he is referring too.

This is exactly what I was referring to.

And even then, the issues were never widespread enough to be litigation-worthy; so again, if in a duopoly you want to claim that the least terrible maker deserves a doubly hard look, then the proven-worse one deserves a triply hard look.

Edit: again, it especially deserves triple the scrutiny since its issues have been proven, and in recent architectures/releases, so they aren't old history.
 
The melting point of silicon is about 1414C.

I'm not worried about 100C.

It may not 'melt', but the fine nanometer-scale transistors can still be damaged. Many overclockers have "fried" their CPUs, and it did not take 1400C to do it...

Why do you think AMD throttles it at 110C Tjunction?
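To illustrate what "throttles at 110C Tjunction" means mechanically: a thermal throttle is just a feedback loop that sheds clock speed at the limit and restores it when there's headroom. A simplified sketch, not AMD's actual algorithm (the step size and hysteresis band are invented):

```python
# Conceptual junction-temperature throttle: drop the clock at the limit,
# recover when the hotspot cools off. A simplification for illustration;
# the 25 MHz step and 5 C hysteresis band are invented numbers.

T_JUNCTION_MAX = 110  # C, the throttle point discussed in this thread
CLOCK_STEP = 25       # MHz per adjustment (hypothetical)

def adjust_clock(current_mhz, junction_c, min_mhz=800, max_mhz=2100):
    if junction_c >= T_JUNCTION_MAX:
        return max(min_mhz, current_mhz - CLOCK_STEP)  # throttle down
    if junction_c < T_JUNCTION_MAX - 5:
        return min(max_mhz, current_mhz + CLOCK_STEP)  # recover
    return current_mhz                                 # hold in the band

print(adjust_clock(2000, 111))  # -> 1975 (over the limit, throttling)
print(adjust_clock(2000, 90))   # -> 2025 (plenty of headroom)
```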
 
It may not 'melt', but the fine nanometer-scale transistors can still be damaged. Many overclockers have "fried" their CPUs, and it did not take 1400C to do it...

Why do you think AMD throttles it at 110C Tjunction?

Gotta read the replies since that comment was made, my man!
 
I just got my loop up and running after leak testing it over the weekend... I ran the Handbrake stability test using 16 threads and as much memory as it could use, along with the CPUID PowerMax GPU test, for 5 hours this morning...

My 3700X was running around 65C by the end of the test, while my 5700 XT was running at 2125 MHz (all stock settings in Wattman, including 0% extra power) with the GPU temp leveling out at 47C and a junction temp of 84C... This is about as stressful as you can get, and it will far exceed any kind of gaming load...

I realize this is only semi-relevant to the thread, but it at least highlights how great water cooling is, and that we should be seeing more cards come with water cooling as a stock option, a la the 295X2 and the various Nvidia partner hybrid cards... I do have to say the 50th Anniversary Edition backplate looks lovely with the EK block... I had to do some hunting for screws with enough length, but I found some that allowed me to place a 1mm-thick thermal pad on the back of the VRM area of the PCB, since AMD did not feel it was necessary to ship it that way from the factory... Ignore the FragHarder Disco Lights; I had forgotten to turn them off at first boot. And the fill tube on my res:

View attachment 183085

You're missing a clamp on the reservoir on the right side.
 
It may not 'melt', but the fine nanometer-scale transistors can still be damaged. Many overclockers have "fried" their CPUs, and it did not take 1400C to do it...

Why do you think AMD throttles it at 110C Tjunction?

Because AMD knows what their cards are capable of handling, the whole card, not just the GPU, unlike us here on the forums.
 