AMD RX 5700 XT GPUs reaching 110°C “expected and within spec” for gaming

Yeah, but not everyone who walks into Best Buy or Microcenter to buy a card is going to know any better, or buy one to watercool it. They listen to their buddies like travm who tell them "it's all fine, AMD says it's fine, those people on the forums can't prove anything, but I know what's best, so yeah, go buy that". Then it dies just outside the warranty period, or starts artifacting. High heat weakens solder joints over time (intense heat, then cooling, intense heat, cooling; this causes solder cracks), but AMD says it's fine.

I would never recommend that blower card to anyone, and would actively tell people to avoid it.

Anyone can feel free to disagree, or call me a hypocrite.

I'm just waiting on a Corsair block to be available for mine. Until then it's a hot potato, but with a custom fan curve it's fine. People want a card to run at 50C at 1 dB of noise lol
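For anyone wondering what "a custom fan curve" actually amounts to, here's a minimal sketch in Python: it just linearly interpolates fan duty between a few temperature breakpoints. The breakpoints are made-up examples, not a recommendation from AMD or anyone else; tools like the Radeon software or MSI Afterburner do the same thing with a GUI.

# Illustrative only: a piecewise-linear fan curve keyed off junction temperature.
# The breakpoints below are made-up examples, not a recommendation.
CURVE = [(50, 30), (75, 45), (90, 70), (105, 100)]  # (junction C, fan %)

def fan_percent(junction_c):
    """Linearly interpolate fan duty for a given junction temperature."""
    if junction_c <= CURVE[0][0]:
        return CURVE[0][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if junction_c <= t1:
            return f0 + (f1 - f0) * (junction_c - t0) / (t1 - t0)
    return CURVE[-1][1]

print(fan_percent(95))  # ~80% with these example points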
 
Yeah, but not everyone who walks into Best Buy or Microcenter to buy a card is going to know any better, or buy one to watercool it. They listen to their buddies like travm who tell them "it's all fine, AMD says it's fine, those people on the forums can't prove anything, but I know what's best, so yeah, go buy that". Then it dies just outside the warranty period, or starts artifacting. High heat weakens solder joints over time (intense heat, then cooling, intense heat, cooling; this causes solder cracks), but AMD says it's fine.

I would never recommend that blower card to anyone, and would actively tell people to avoid it.

Anyone can feel free to disagree, or call me a hypocrite.

Who recommends blower cards in a typical vented case anyway? The fact that you mention artifacting in your reply makes me think you also believe AMD's statement means 110C applies to the RAM? The rest doesn't even warrant a reply.
 
Yeah, but not everyone who walks into Best Buy or Microcenter to buy a card is going to know any better, or buy one to watercool it. They listen to their buddies like travm who tell them "it's all fine, AMD says it's fine, those people on the forums can't prove anything, but I know what's best, so yeah, go buy that". Then it dies just outside the warranty period, or starts artifacting. High heat weakens solder joints over time (intense heat, then cooling, intense heat, cooling; this causes solder cracks), but AMD says it's fine.

I would never recommend that blower card to anyone, and would actively tell people to avoid it.

Anyone can feel free to disagree, or call me a hypocrite.

Well, no one is going to walk into Best Buy and buy a 5700, since they do not sell them in store. As for those who walk into Microcenter? That is what the knowledgeable associates are for, and no, I am not being sarcastic; the ones in the component department seem to know what they are doing. These cards are not going to die just outside of the warranty period, just as past AMD cards have not done so. Oh, and as for solder cracks, everyone else does not use the solder that Nvidia did, the solder that caused HP laptops with Nvidia graphics in them to die.

If you do not recommend blower cards, that is fine and also pointless. Therefore, much ado about nothing. Besides, your comment appears to have nothing to do with what I said, and it appears to me that you would recommend a custom card to a water cooling enthusiast just so they can waste their money on a cooler they will not be using. Your opinion must have been pointed at someone else because it has nothing to do with my post.

Here is a quote of my post, just in case you do not get it:

And just how have you determined that those other folks are smarter and more experienced than he is? Do you know the poster you quoted personally? Dude, it is much ado about nothing, as evidenced by the aftermarket cards selling out and the reference cards now staying in stock. (Reference cards are far better for water cooling, however.) I see no one claiming that you should just buy a reference card because AMD says it is ok, no problem. Their reference cards are most likely going to sell to people with a water cooling loop now. After all, it is a waste of money to remove a custom cooler if you are going to water cool it anyways.
 
So AMD is now redefining what safe operating voltages and temperatures are. Should we assume they created some type of super silicon that prevents degradation over time?

No, we should definitely relegate what temps are safe to a random tech forum poster rather than the company who designed the architecture...
 
LOL. Oh, where to begin...

ManofGod, I would never tell someone to remove a custom cooler... no idea where you got that from anything I posted. Someone said "maybe someone wants to buy one (the AMD reference design) to watercool it", and if so, fine. That changes the price/performance equation, but that's another topic for another thread.

On the solder, the cracking happens to lead-free solder much more so than to older lead-based solder. Leaded solder is no longer used because discarded electronics in landfills leach lead into the nearby groundwater. This is bad. This isn't nVidia vs AMD...

Zion Halcyon, silicon melts at over 1000C. Now, this doesn't mean that high heat is acceptable or desirable. Heat causes electrons to travel randomly through the material in unintended directions, which is one of the reasons we have to cool all silicon-based, densely packed transistor circuits; they start malfunctioning otherwise. I would take AMD at their word that this chip will not malfunction at 110C, but it does start throttling there; that is the behavior AMD has programmed in. If heat is such a non-issue, or if "forum posters" are exaggerating, why does AMD bother with throttling? Could it be because at 111C (or 115C, some temperature close to 110C) the chip malfunctions? Let's assume this to be true. If you can run your chip at 87C junction temps instead of 110C, borderline malfunctioning, why wouldn't you?
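To make the "why bother throttling" point concrete, here's a toy sketch of what a junction-temperature limit amounts to. This is not AMD's actual firmware or algorithm, just the general idea of trading clocks for temperature once the limit is hit; the clock numbers are placeholders.

# Toy model of junction-limit throttling (not AMD's actual firmware).
JUNCTION_LIMIT_C = 110   # the throttle point AMD's blog post describes
STEP_MHZ = 25            # hypothetical clock step size

def next_clock(current_mhz, junction_c, max_mhz=1905):
    """Shed clocks while over the junction limit, recover when below it."""
    if junction_c >= JUNCTION_LIMIT_C:
        return max(current_mhz - STEP_MHZ, 300)   # lose performance to shed heat
    return min(current_mhz + STEP_MHZ, max_mhz)   # otherwise climb back toward boost

print(next_clock(1900, 111))  # 1875 -- the "runs slower" half of the trade-off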

Not to mention that the heat is then baking your card's other components on its way out.

What does high heat cause silicon chips/transistors to experience? Why does AMD throttle the card at 110C junction?
  1. Higher leakage currents: this can lead to more heating and can easily result in thermal runaway.
  2. Signal-to-noise ratio decreases as thermal noise increases: this can result in a higher bit error rate, which can cause a program to be misread and commands to be misinterpreted. This can cause "random" operation, i.e. malfunctions.
  3. Dopants become more mobile with heat. When you have a fully overheated chip, the transistors can cease being transistors. This is irreversible.
  4. Uneven heating can make the crystalline structure of Si break down. A normal person can observe this by putting glass through temperature shock: it will shatter. A bit extreme, but it illustrates the point. This is irreversible.
So the point the 'random tech posters' have been making is that the hotter your card runs, the sooner it will fail. This is fact. If your chip is throttling, you are going to lose performance. This is fact. If your junction temps hit 110C, the chip is still "ok", never said otherwise. But your shit will run slower and the card will die sooner.

AMD fans can give AMD a pass for anything, it seems. I wouldn't recommend AMD's blower cards to anyone, nor would I recommend any such card made by nVidia, or anyone else for that matter. <- Random Forum Guy Opinion, so your mileage may vary.
 
just a joke

[attached image]
 
Also, the 5700 XT doesn't even get to 110C, at least mine doesn't. So Steve is right, it is not normal.

Mine can reach about 90C in a normal game, or 99C after 5 minutes of FurMark. Maybe if I ran FurMark all night it would get hotter, but it seemed to level out at 99C.

The main point of the blog post was to educate people about junction/hot spot temperature so they weren't comparing it to edge temp on older cards.
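For anyone who wants to watch both numbers side by side, here's a rough sketch of how you can read them on Linux, where the amdgpu driver exposes the sensors through hwmon with labels like "edge", "junction", and "mem" (paths and labels depend on your kernel and card; on Windows, HWiNFO or the Radeon overlay shows the same values).

# Rough sketch: read the GPU's edge vs junction temps from the amdgpu hwmon files.
import glob

def amdgpu_temps():
    temps = {}
    for label_path in glob.glob("/sys/class/hwmon/hwmon*/temp*_label"):
        with open(label_path) as f:
            label = f.read().strip()              # e.g. "edge", "junction", "mem"
        if label in ("edge", "junction", "mem"):
            with open(label_path.replace("_label", "_input")) as f:
                temps[label] = int(f.read()) / 1000.0   # millidegrees C -> degrees C
    return temps

print(amdgpu_temps())  # e.g. {'edge': 84.0, 'junction': 104.0, 'mem': 86.0}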

I know a lot about computers, and I was still shocked and afraid when I saw 90C during a normal game session. After reading the blog and watching Steve's video, it all makes sense.
 
So the point the 'random tech posters' have been making is that the hotter your card runs, the sooner it will fail. This is fact.
For a post filled with facts, it's ironic that the point you specifically call out as fact is not.

Cards die because their fans quit, memory chips become unreliable, and power supply components reach EOL. Just because the theoretical life expectancy of the silicon might be reduced (or might not; it's not nearly as simple as you're making it out to be) does not mean the useful life of the card is reduced.
If I get higher performance by reducing the expected lifespan of one component from 75 years to 35 years, on an assembly that can't reasonably be expected to last 25 years, sign me up.
 
I'm something of an AMD shill and even I think 110C is a tad much...

Yeah, the thing is, this news is really much ado about nothing. Those who own this card, the reference version, do not even hit above 100C on the hotspot. Even if someone were to hit it, it would probably be brief and not damaging to the GPU and the components around it. After all, this is just the hotspot temp and not the temperature of the entire GPU, or edge temp.
 
This is supposed to be a straight talk forum, not a shill forum. A novice should be able to come here and say "my 5700 XT seems to run too hot; it's a reference card" and expect answers like "yeah, it's a good card but the blower design is subpar; you could water cool it or try an aftermarket solution." However, I suspect that any question about AMD products from here on out will result in a firestorm of bullshit and defensiveness. This is a sad situation.
 
This is supposed to be a straight talk forum, not a shill forum. A novice should be able to come here and say "my 5700 XT seems to run too hot; it's a reference card" and expect answers like "yeah, it's a good card but the blower design is subpar; you could water cool it or try an aftermarket solution." However, I suspect that any question about AMD products from here on out will result in a firestorm of bullshit and defensiveness. This is a sad situation.

If you say so, if you say so... I suggest doing a closer inspection of what you are saying, but hey... This is much ado about nothing and has zero effect on AMD Navi sales. From what I am seeing, reference cards are not really selling out anymore, while aftermarket cards are selling out left and right, which appears to be the exact opposite of what some claim is or will occur.
 
Yeah, the thing is, this news is really much ado about nothing. Those who own this card, the reference version, do not even hit above 100C on the hotspot. Even if someone were to hit it, it would probably be brief and not damaging to the GPU and the components around it. After all, this is just the hotspot temp and not the temperature of the entire GPU, or edge temp.

Totally get it, and maybe it is much ado about nothing, but IIRC don't temperature deltas cause wear on GPUs (or any component)? Meaning if you have a long gaming session and then stop, the drop from 110C (or even 100C) to idle temps, or to zero, is a major temperature difference which may cause problems in the long run... I think?

Maybe I'm wrong; I just recall hearing that was an issue... not necessarily the temperature itself but the deltas.

Edit: One anecdotal bit of evidence for this is that I have a ton of DDR3 RAM... the failure rate among the RAM that hasn't lived under a heat spreader all of its life is WAAAAAAAAAY higher than among the RAM that has.
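For what it's worth, the usual engineering shorthand for the "deltas, not absolute temperature" idea is the Coffin-Manson relation for thermal-cycling fatigue: cycles to failure scale roughly as (delta T)^-n, with n often quoted around 2 for solder joints. A back-of-the-envelope sketch, with made-up deltas just to show the scaling:

# Illustrative Coffin-Manson scaling for thermal-cycling fatigue:
#   N_f is proportional to (delta T)^-n; n ~ 2 is a commonly quoted ballpark for solder.
# The deltas below are made-up examples, not measurements of any particular card.
def relative_life(delta_t_a, delta_t_b, n=2.0):
    """How many times more cycles to failure at swing B compared to swing A."""
    return (delta_t_a / delta_t_b) ** n

# e.g. a ~85C load-to-idle swing vs a cooler card swinging ~60C:
print(relative_life(85, 60))  # ~2.0x more cycles to failure for the smaller swing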
 
Totally get it, and maybe it is much ado about nothing, but IIRC don't temperature deltas cause wear on GPUs (or any component)? Meaning if you have a long gaming session and then stop, the drop from 110C (or even 100C) to idle temps, or to zero, is a major temperature difference which may cause problems in the long run... I think?

Maybe I'm wrong; I just recall hearing that was an issue... not necessarily the temperature itself but the deltas.

Edit: One anecdotal bit of evidence for this is that I have a ton of DDR3 RAM... the failure rate among the RAM that hasn't lived under a heat spreader all of its life is WAAAAAAAAAY higher than among the RAM that has.

That is quite anecdotal, considering that we do not know the environment the RAM was used in, the systems it was used in, the voltages it was run at, what brand of RAM, what the system temps were overall, and so on. Once again though, this is a hotspot temperature and not the edge temp, which means it is only one very minute, specific spot and not the whole GPU. :) As for deltas, my guess is that they would affect all components, especially CPUs, and yet they tend not to fail.
 
That is quite anecdotal, considering that we do not know the environment the RAM was used in, the systems it was used in, the voltages it was run at, what brand of RAM, what the system temps were overall, and so on. Once again though, this is a hotspot temperature and not the edge temp, which means it is only one very minute, specific spot and not the whole GPU. :) As for deltas, my guess is that they would affect all components, especially CPUs, and yet they tend not to fail.

Right, totally anecdotal, and correlation is not necessarily causation, etc., but I thought it was worth mentioning.

Plus maybe (probably) AMD built their GPUs with this hotspot condition in mind and it won't be a problem.

IMO most folks here would actively take measures to cool the thing down anyway, e.g. better case ventilation or aftermarket coolers, so I doubt we're going to see much issue come of it.
 
I bet people would be surprised, even shocked, to find out that Nvidia cards run the same temperatures; they're just metered differently through software.

I would not be surprised or shocked at all. However, the chance that we will get official confirmation of that is between slim and none.
 
I bet people would be surprised, even shocked, to find out that Nvidia cards run the same temperatures; they're just metered differently through software.

It's all but assured that Intel chips and nVidia GPUs can also reach junction temps of 110C when the factors are right: a stock or otherwise poor-performing heatsink on an overclocked CPU (AMD/Intel), or an overclocked and/or overvolted GPU (AMD/nVidia) without adequate cooling, such as a poor-performing blower cooler. The measured edge temp will be 20C to 25C less, based on readings observed on the Navi GPUs, so edge temps of 85C to 90C.

So, what to do? Use good heatsinks on any CPU, even if you don't overclock, or watercool. Buy GPUs outfitted with well-performing cooling solutions.
 
It's all but assured that Intel chips and nVidia GPUs can also reach junction temps of 110C when the factors are right: a stock or otherwise poor-performing heatsink on an overclocked CPU (AMD/Intel), or an overclocked and/or overvolted GPU (AMD/nVidia) without adequate cooling, such as a poor-performing blower cooler. The measured edge temp will be 20C to 25C less, based on readings observed on the Navi GPUs, so edge temps of 85C to 90C.

So, what to do? Use good heatsinks on any CPU, even if you don't overclock, or watercool. Buy GPUs outfitted with well-performing cooling solutions.

Which has absolutely nothing to do with what the person said. Fact is, we have not and probably will never see an official measurement from Nvidia of the hotspot temp; the chances are between slim and none. Also, the Navi cards, even the reference XT version, do not reach 110C in normal gaming loads, and it would not hurt the GPU or the card if they did; they would just not run at full tilt.
 
Which has absolutely nothing to do with what the person said.

Other than directly addressing it.

Fact is, we have not and probably will never see an official measurement from Nvidia of the hotspot temp; the chances are between slim and none.

And you know this how? Are you an engineer for nVidia? It's a new sensor AMD decided to build in. Why, after 20+ years of making graphics chips, did AMD decide to add this now? They probably need the additional temperature information to keep it from burning itself up... they haven't been getting the high performance that they want/need to be competitive, so they have to run the silicon at its absolute limit... to do that they needed more detailed temperature resolution, hence new sensors placed throughout the die.

Your post implies that somehow nVidia is being dishonest for not having the same sensors. It's just never been needed, as their GPUs are better performing and are not pushed so close to the edge when running at official spec. This is also why they OC so well, as there's plenty of headroom with those designs.

With more densely packed transistors, I suspect the added sensors will eventually be needed on all high-powered chips on 7nm or smaller nodes, including CPUs.
 
I'm sure this has been brought up before, but more aftermarket cooling options that are "easy" would be nice for graphics cards. It might not be practical, but what if it were more like the CPU and you had to make a choice about how to cool it?

(probably a pipe (pun) dream)
 
Other than directly addressing it.

And you know this how? Are you an engineer for nVidia? It's a new sensor AMD decided to build in. Why, after 20+ years of making graphics chips, did AMD decide to add this now? They probably need the additional temperature information to keep it from burning itself up... they haven't been getting the high performance that they want/need to be competitive, so they have to run the silicon at its absolute limit... to do that they needed more detailed temperature resolution, hence new sensors placed throughout the die.

Your post implies that somehow nVidia is being dishonest for not having the same sensors. It's just never been needed, as their GPUs are better performing and are not pushed so close to the edge when running at official spec. This is also why they OC so well, as there's plenty of headroom with those designs.

With more densely packed transistors, I suspect the added sensors will eventually be needed on all high-powered chips on 7nm or smaller nodes, including CPUs.

Just to add, Intel's CPU temp sensors are actually Tjunction-based for each core... it has been that way since the C2Q era and can be checked on every product page.

Also, people in this forum have to learn not to follow ManofGod's replies, as he is well known for being an AMD apologist: everything made by AMD is perfect, everything done by AMD is perfect, and no competitor has products anywhere close to the high quality AMD offers in either the CPU or GPU market. He is just a blind fan of AMD and Microsoft; no matter the topic, he will be defending his masters.
 
Just to add, Intel's CPU temp sensors are actually Tjunction-based for each core... it has been that way since the C2Q era and can be checked on every product page.

Also, people in this forum have to learn not to follow ManofGod's replies, as he is well known for being an AMD apologist: everything made by AMD is perfect, everything done by AMD is perfect, and no competitor has products anywhere close to the high quality AMD offers in either the CPU or GPU market. He is just a blind fan of AMD and Microsoft; no matter the topic, he will be defending his masters.

That's nice; at least you notice me, and it is nice to be noticed. On a different note, I have never implied what you are implying and have said as much, but hey, what do I know, I am just a peasant, after all. :D Follow or do not follow, I give my opinion and what I think, and I see no reason to change it just because the greenies do not agree with me. :) I will say it again: this is much ado about nothing, and the sold-out aftermarket cards are evidence of this. (Notice that the reference cards are not sold out now, which is the exact opposite of what some think should be happening.)
 
I'm sure this has been brought up before, but more aftermarket cooling options that are "easy" would be nice for graphics cards. It might not be practical, but what if it were more like the CPU and you had to make a choice about how to cool it?

(probably a pipe (pun) dream)

Well, for GPUs it's the whole card that has to be cooled, and with every different PCB design the cooler would have to change. With CPUs, it's just the socket, and with simple adapters a heatsink can fit multiple sockets.
With all the surfaces you have to mate with, it would be more precarious... people would be damaging their expensive video cards.

So it's just not as feasible.
 
So it's just not as feasible.

I figured it wasn't practical. Maybe we need better motherboard and card designs. It's just a shame that the blazing hot thing isn't the CPU, which has been adequately addressed (generally speaking).
 
No, we should definitely relegate what temps are safe to a random tech forum poster rather than the company who designed the architecture...

Your post brings me back to the 290X, when people chanted the same thing about 95C on the die: "AMD knows what they're doing" kind of posts. Some of us were skeptical, noting physics and the other components on the card. Then the card failed at ~3x the rate of any other card at the time...

This is a little different situation, but I don't ever go by the "AMD knows what they are doing" argument, since we've seen them completely maul launches over and over.
 
Your post brings me back to the 290X, when people chanted the same thing about 95C on the die: "AMD knows what they're doing" kind of posts. Some of us were skeptical, noting physics and the other components on the card. Then the card failed at ~3x the rate of any other card at the time...

This is a little different situation, but I don't ever go by the "AMD knows what they are doing" argument, since we've seen them completely maul launches over and over.

I have not, nor have I ever, seen anything about the 290X failing at 3x the rate of any other card. Have any proof of that? Oh, and without them being installed in a mining farm? I had a 290X and it never had any issues, and it was a reference design. (Proof of what you are saying, please, not the standard "oh, he had one that worked and claimed everyone else had no issues because his card worked" response, thanks.)
 
I have not, nor have I ever, seen anything about the 290X failing at 3x the rate of any other card. Have any proof of that? Oh, and without them being installed in a mining farm? I had a 290X and it never had any issues, and it was a reference design. (Proof of what you are saying, please, not the standard "oh, he had one that worked and claimed everyone else had no issues because his card worked" response, thanks.)

Previous installment: AMD Failure rates

We're back again, this time April 2015 to October 2015.
Cartes graphiques - Les taux de retour des composants (14) - HardWare.fr

The numbers:

AMD: 3.947% avg
Nvidia: 1.825% avg

Once again, AMD's failure rates top double Nvidia's. Notably, the numbers are down for both brands.

A poor showing from the 290 and 290X, which is unsurprising.
You can see MASSIVE improvement on Grenada thanks to improved board/cooler designs from the AIBs.

[attached chart]

Most interesting is the 390 (which had a much better cooler) being in line with the 980 Ti, versus the 290/290X.

Now, was the 95C the 290X ran at really a 130C junction temp? Maybe. In all reality, the 5700 is probably perfectly fine. My gripe was with trusting a corporation.
 
Your post brings me back to the 290X, when people chanted the same thing about 95C on the die: "AMD knows what they're doing" kind of posts. Some of us were skeptical, noting physics and the other components on the card. Then the card failed at ~3x the rate of any other card at the time...


I have killed video cards, but I have not managed to kill any 290/390s, even in my mining rigs, in a hot garage, on air cooling, for years. I even took over some cards from an incompetent mining friend who had cooked the thermal paste on his Hawaii cards. I cleaned them up and the cards worked fine.

Where are those failure rates from? I think I have killed more Nvidia cards with abusive mining than AMD...
 
One further note: you posted stats from shortly after the first big mining boom, when only AMD cards were used for mining and Nvidia cards were useless for it.
 
I have killed video cards, but I have not managed to kill any 290/390s, even in my mining rigs, in a hot garage, on air cooling, for years. I even took over some cards from an incompetent mining friend who had cooked the thermal paste on his Hawaii cards. I cleaned them up and the cards worked fine.

Where are those failure rates from? I think I have killed more Nvidia cards with abusive mining than AMD...

Post 113 has a link. 390s don’t count.
 
Post 113 has a link. 390s don’t count.

Shall we go into the Nvidia "Space Invaders edition" 2000 series? Or shall we cover the cracking solder from Nvidia that was an actual thermal issue? That article also gives zero sources for where they got their info, as no one reports their failure rates to the media.
 
Shall we go into the Nvidia "Space Invaders edition" 2000 series? Or shall we cover the cracking solder from Nvidia that was an actual thermal issue? That article also gives zero sources for where they got their info, as no one reports their failure rates to the media.

Why would we go into nVidia issues? This is about AMD and temperature.

Discredit the data if you want; it still doesn't change physics and what temperature does to the die and the components around it.
 
You are still ignoring data from the first mining boom that was massively profitable and ONLY INCLUDED AMD cards.

Many miners are stupid and fried their cards.
 