AMD RX 5700 XT GPUs reaching 110°C “expected and within spec” for gaming

Hotspot temps are whatever.

Worry about the wattage flowing through the GPU (power in, heat out) and whether that is causing issues.
 
You are correct that many had faulty Elpida memory, and then tons of cards were bricked during the flashing days...


AMD allowed a custom BIOS to be signed that disabled half of the ROPs on the cards and changed memory straps/timings. The reduction in ROPs let the card draw about 45% less power with virtually no mining performance hit...

AMD didn't even enforce BIOS signing until Polaris. I had a 290 and did much BIOS modding myself -- never had to mess with any signing. In fact I used to mod the BIOS on all of my older AMD cards. I miss having that ability :/
 
Hotspot temps are whatever.

Worry about the wattage flowing through the GPU (power in, heat out) and whether that is causing issues.
Agree, but I know for the 5700 (the ones I'm interested in) it was only a $9 increase to get the Sapphire with better cooling (as long as your case is well ventilated). If you can find one with good fans for not much more than the blower, it'd be a good investment I think; otherwise it's subjective how annoying the blower really is. Some people say it doesn't bother them and some find it annoying. The actual hotspot temperature isn't much of a concern, as Nvidia cards are most likely similar; they just won't tell you what the hotspot temps are and only report edge temps. So, compare edge temps to other cards' edge temps, not hotspot temps vs edge temps...
 
The actual hotspot temperature isn't much of a concern as Nvidia cards are most likely similar, they just won't tell you what they are and only report edge temps. So, compare edge temps to other edge temps in cards, not hot spot temps vs edge temps...

Temperature is really meaningless here, except when comparing like for like- the same sensor in the same spot in the same part with the same load, etc. It is not at all indicative of the amount of energy flowing through the part on a comparative basis otherwise.
 
Temperature is really meaningless here, except when comparing like for like- the same sensor in the same spot in the same part with the same load, etc. It is not at all indicative of the amount of energy flowing through the part on a comparative basis otherwise.
Unfortunately, yeah. It can be useful for determining whether you have a good seat and TIM coverage, but otherwise not much more useful than edge temp.
 
AMD didn't even enforce BIOS signing until Polaris. I had a 290 and did much BIOS modding myself -- never had to mess with any signing. In fact I used to mod the BIOS on all of my older AMD cards. I miss having that ability :/


If you wanted to disable ROPs, at least at the beginning, you had to buy a copy of TheStilt's mining BIOS for BTC/LTC.

He had a contact at AMD that helped him get in the door, and AMD realized the windfall they could get from mining performance, so it was approved.

You could adjust memory straps and such without his BIOS, but I have never seen anyone able to disable ROPs without it.

AMD backtracked on this with the open source Polaris Bios Editor.
 
If you wanted to disable ROPs, at least at the beginning, you had to buy a copy of TheStilt's mining BIOS for BTC/LTC.

He had a contact at AMD that helped him get in the door, and AMD realized the windfall they could get from mining performance, so it was approved.

You could adjust memory straps and such without his BIOS, but I have never seen anyone able to disable ROPs without it.

AMD backtracked on this with the open source Polaris Bios Editor.

Interesting, I never heard about that before.
 
Interesting, I never heard about that before.


Yeah, that BIOS was what put him on the map, so to speak. He is extremely talented, as his work on Ryzen's cache and memory systems has shown.

I am sure he made a fortune selling those BIOS files... Everyone that was mining back then, during the LTC and early ETH days, used AMD 290/390s.
 
Yeah, that BIOS was what put him on the map, so to speak. He is extremely talented, as his work on Ryzen's cache and memory systems has shown.

I am sure he made a fortune selling those BIOS files... Everyone that was mining back then, during the LTC and early ETH days, used AMD 290/390s.

Yeah, in fact my custom 290 BIOS had his memory timings in it, amongst other stuff like customized PowerPlay limits, a custom fan curve, custom clock speeds, etc.
 
I owned a reference 290 and that thing was hot, but Bitcoin saved the day, and about $300 in profit later I bought an R9 280 while I waited; then I picked up my Tri-X 290X for $269. I still own both today, and it's nice to drop one in to see how the Ryzen 5 3600 acts with older cards. Say what you want, but Hawaii has its own look in rendered images; you can tell it was high end for its time, without color compression. But how do you tame the beast? It's a 375 W card, and mine has never been taken apart.


But comparing a reference 290X to a reference RX 5700: they're nothing alike in the heat produced, since the die surface area at 28 nm is much greater than at 7 nm under the lid.

 
 
Thanks for the video. It makes a lot more sense now.

To be honest, I think the blog was helpful in getting people to understand junction temperature (which I didn't fully understand before) but I agree some quotes from the article were misleading or false.
 


Not sure what he is really going on about, since I have not heard anyone, including AMD, excusing poor cooler designs nor minimizing the impact of OEM aftermarket cards and their coolers. The junction temp information was really good but, the rest was much ado about nothing.
 
AMD didn't even enforce BIOS signing until Polaris. I had a 290 and did much BIOS modding myself -- never had to mess with any signing. In fact I used to mod the BIOS on all of my older AMD cards. I miss having that ability :/

And that stinks, because it means I cannot undervolt my PowerColor RX 580 Red Devil via the BIOS and therefore have to use software to undervolt it and get those fans under control. (No, the reference Vega 56 and RX 5700 fans are not noisy, but those fans on my RX 580? Those are extremely noisy at full load with no custom fan curve.)
 
Not sure what he is really going on about, since I have not heard anyone, including AMD, excusing poor cooler designs nor minimizing the impact of OEM aftermarket cards and their coolers. The junction temp information was really good but, the rest was much ado about nothing.

Did you watch the whole video? Because he did an excellent job of explaining it. They are already seeing AMD defenders in other forums (Reddit I think) defending AMD's shit blower cooler and saying 110C is fine. He explains why it is not fine, how the general public has taken it out of context (easy to do thanks to poor wording or outright misleading statement by AMD).

Every added 10C a piece of electronics runs at reduces its expected lifetime by half (rule of thumb). Point being that the Sapphire card with 87C junction temps is far better than the reference card with 110C junction temps, and the Sapphire achieved it at 40 dB fan noise. For one, it will throttle once it hits 110C; do you really want to be bumping up against that? The edge where the device is threatening to go up in actual flames? (Not the silicon itself, but the surrounding materials.)
High Temperatures: In general, in chemistry and electronics, operating at high temperatures results in effects that can significantly reduce the lifetime and reliability of a component. ... The usual rule of thumb, based on the Arrhenius equation, is that the lifetime of a device halves for every 10C rise in temperature.

It's also common sense...
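The rule of thumb quoted above is easy to sanity-check with a quick calculation. This is only a sketch of the halve-per-10C heuristic; the 87C and 110C junction temps are the figures from this thread, and real degradation depends on voltage, workload, and the actual activation energy, not just temperature:

```python
# Rule-of-thumb lifetime scaling: expected lifetime halves for every
# 10C rise in operating temperature (rough Arrhenius-style heuristic,
# not a datasheet figure).

def relative_lifetime(temp_c: float, baseline_c: float) -> float:
    """Expected lifetime at temp_c relative to lifetime at baseline_c."""
    return 2.0 ** ((baseline_c - temp_c) / 10.0)

# Junction temps mentioned in the thread:
sapphire_tj = 87.0    # Sapphire card, C
reference_tj = 110.0  # reference blower, C

ratio = relative_lifetime(reference_tj, sapphire_tj)
print(f"Reference card's relative expected lifetime: {ratio:.2f}x")
# 2^((87-110)/10) = 2^-2.3, roughly 0.20x, i.e. about 1/5 the expected life
```

By this back-of-the-envelope math, the 23C difference between the two coolers works out to roughly a 5x difference in expected lifetime, which is the point being argued, even if the real-world factor is smaller.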
 
Not sure what he is really going on about, since I have not heard anyone, including AMD, excusing poor cooler designs nor minimizing the impact of OEM aftermarket cards and their coolers. The junction temp information was really good but, the rest was much ado about nothing.

 

Did you watch the whole video? Because he did an excellent job of explaining it. They are already seeing AMD defenders in other forums (Reddit I think) defending AMD's shit blower cooler and saying 110C is fine. He explains why it is not fine, how the general public has taken it out of context (easy to do thanks to poor wording or outright misleading statement by AMD).

Every added 10C a piece of electronics runs at reduces its expected lifetime by half (rule of thumb). Point being that the Sapphire card with 87C junction temps is far better than the reference card with 110C junction temps, and the Sapphire achieved it at 40 dB fan noise. For one, it will throttle once it hits 110C; do you really want to be bumping up against that? The edge where the device is threatening to go up in actual flames? (Not the silicon itself, but the surrounding materials.)


It's also common sense...

Goes to reddit, AMD section, switches to hot posting and.........? nope, not there and scroll, scroll, scroll, much ado about nothing. If it was a big deal, it would be the hottest thread in the AMD section, oh well.
 
Goes to reddit, AMD section, switches to hot posting and.........? nope, not there and scroll, scroll, scroll, much ado about nothing. If it was a big deal, it would be the hottest thread in the AMD section, oh well.

Do you have the XT?
 
Do you have the XT?

I have the Sapphire RX 5700 only that I got for $284. I do not run furmark, since it can kill cards and my only point was to the topic itself, not whether an XT can run hot or not ;)

People are buying up the aftermarket cards and mostly not the reference now, so nothing is diminishing the partners' cards.
 
The aftermarket cards don't seem to be available yet. $284 is a fantastic price for an RX 5700. I was really torn between getting the cheapest reference 5700, since it's not supposed to be that hot, but ultimately decided to wait for something with better cooling and noise levels at all times.
 
110C hotspot temp. Now you actually have insight into what the hottest temp is (well, Vega has it as well), whereas before you didn't... Shit was running hotter than it displayed in every other design, you just didn't know. It's not like AMD is redefining what is safe for silicon; 110C is just fine. It's hot, sure, but it's safe. Can't believe everyone is freaking out over this; it's not even news.
Definition of news
1a : a report of recent events
b : previously unknown information
c : something having a specified influence or effect
Can't believe you... no wait, EVERYONE :eek: is freaking out about news posted in the news section.
 
Can't believe you... no wait, EVERYONE :eek: is freaking out about news posted in the news section.

Hotspot temp has been a thing since Vega, for 2 years. Also the thermal limits of silicon haven't changed in the last 2 years either. Nope, not news.
 
Hotspot temp has been a thing since Vega, for 2 years. Also the thermal limits of silicon haven't changed in the last 2 years either. Nope, not news.
Just because you know about it, or don't want to hear about it, doesn't mean everybody else does or doesn't.
 
Not sure what he is really going on about, since I have not heard anyone, including AMD, excusing poor cooler designs nor minimizing the impact of OEM aftermarket cards and their coolers. The junction temp information was really good but, the rest was much ado about nothing.
I saw the whole thing too. I think digital jesus (as some refer to him here) is nice and everything, but that video was a bit rambling... 110C is not optimal, but acceptable. That's all.
 
And that stinks, because it means I cannot under volt my Power Color RX580 Red Devil via the bios and therefore, have to use software to down volt it and get those fans under control. (No, the Reference Vega 56 and RX 5700 fans are not noisy but, those fans on my RX 580? Those are extremely noisy at full load and no custom fan curve.)

If you are 100% sure of some settings that your card can do 100% stable (undervolt, core/mem OC, memory timings (wouldn't mess with this for gaming)), then you can pull the BIOS, edit it with Polaris Bios Editor, and then flash the edited BIOS to the card so it works with your custom timings out of the gate.


If you have a dual-BIOS card, there is zero risk. Back up the stock BIOS, edit a copy and flash that, and then enjoy not having to ever adjust anything via software.

If you run into any issue with a new game etc., then you can switch to your 2nd BIOS, boot, and then flash your stock BIOS back.

If you can't find a copy of PBE, I can send you a copy at the end of the week when I have access to my gaming rig again.
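For reference, the backup-first workflow described above usually looks something like this with ATIFlash (called amdvbflash in newer releases). The flags and the adapter index `0` here are assumptions based on common versions of the tool; check your version's help output before flashing anything:

```shell
# Sketch of the dual-BIOS workflow described above, using ATIFlash.
# Assumes the card is adapter 0; run with the BIOS switch on the
# position you intend to overwrite, and keep the other BIOS stock.

# 1. Back up the stock BIOS before touching anything.
atiflash -s 0 stock.rom

# 2. Edit a COPY of stock.rom in Polaris Bios Editor and save it
#    as modded.rom (never edit your only backup).

# 3. Flash the edited BIOS to adapter 0.
atiflash -p 0 modded.rom

# 4. If the card won't boot on the modded BIOS, flip the dual-BIOS
#    switch, boot on the good BIOS, then flash stock.rom back the
#    same way with -p.
```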
 
If you are 100% sure of some settings that your card can do 100% stable (undervolt, core/mem OC, memory timings (wouldn't mess with this for gaming)), then you can pull the BIOS, edit it with Polaris Bios Editor, and then flash the edited BIOS to the card so it works with your custom timings out of the gate.


If you have a dual-BIOS card, there is zero risk. Back up the stock BIOS, edit a copy and flash that, and then enjoy not having to ever adjust anything via software.

If you run into any issue with a new game etc., then you can switch to your 2nd BIOS, boot, and then flash your stock BIOS back.

If you can't find a copy of PBE, I can send you a copy at the end of the week when I have access to my gaming rig again.

Yeah, I tried that, but I was not able to figure out how to make sure the zero-fan control was permanently disabled, nor how to adjust the GPU vcore in the BIOS. (It comes up with a bunch of numbers, like 65282, and I could not track down a table to convert the number to a millivolt value.) Any suggestions? Thanks.

Oh, and the PowerColor Red Devil RX 580 does have a dual-BIOS switch. :)
 
Not sure what he is really going on about, since I have not heard anyone, including AMD, excusing poor cooler designs nor minimizing the impact of OEM aftermarket cards and their coolers. The junction temp information was really good but, the rest was much ado about nothing.

No it's not. Skylake ran hot, had bad TIM, and needed to be delidded to get good cooling results. That was not nothing; it was news. Navi runs too hot and needs watercooling or an aftermarket cooler to run at normalized temps. AMD makes a blog post implying that 110C is normal and expected; it is neither. I have a 5700 XT, but this blind fanboy garbage needs to stop. Root for competition and enthusiasts, not red or green.
 
No it's not. Skylake ran hot, had bad TIM, and needed to be delidded to get good cooling results. That was not nothing; it was news. Navi runs too hot and needs watercooling or an aftermarket cooler to run at normalized temps. AMD makes a blog post implying that 110C is normal and expected; it is neither. I have a 5700 XT, but this blind fanboy garbage needs to stop. Root for competition and enthusiasts, not red or green.
AMD can tell us this is normal and expected. You cannot tell us it isn't.
When piles of 5700s start dying, it may be said; but as of right now, AMD is the manufacturer of this chip and shall be trusted.
For me, Navi is too expensive for a mid-range chip, so I'll not likely get to test this. But there is no reason not to believe these are acceptable limits.
 
AMD can tell us this is normal and expected. You cannot tell us it isn't.
When piles of 5700s start dying, it may be said; but as of right now, AMD is the manufacturer of this chip and shall be trusted.
For me, Navi is too expensive for a mid-range chip, so I'll not likely get to test this. But there is no reason not to believe these are acceptable limits.

It's not going to just up and die with 110C junction temps.. we never said it would. It will "wear out" at a faster pace @110C, than the same card kept cooler.
If there had been any OC headroom left in the chip, you need it to be cool(er than 110C) to be able to tap into that (sadly there's not).
Better cooling means it can run "quieter" and not give up anything in performance.

And what Tech Jesus was saying, was that AMD blind fanbois will just parrot "AMD says 110C is OK!" and it isn't really the case, and can be easily misconstrued. 110C is ok as Junction temp and ONLY Junction temp. And by "ok", it just means the silicon can handle it. You really would not want this to be your operating temp, as all the heat has to go somewhere, and it goes into your case and heats the surrounding components on the card. This affects the lifespan of everything.

So of course AMD is going to say "It's ok" because they gotta sell out of the shit blower cards they made or lose money. And the cards will likely last thru the warranty period. In 5 years, compare how many of these cards have died vs the better cooled brothers, and you will see the point we are trying to make.

Believe what you want though, it's a free country. Ignore the advice and recommendations of people smarter and more experienced than you, that's all the circle of life stuff. :)
 
It's not going to just up and die with 110C junction temps.. we never said it would. It will "wear out" at a faster pace @110C, than the same card kept cooler.
If there had been any OC headroom left in the chip, you need it to be cool(er than 110C) to be able to tap into that (sadly there's not).
Better cooling means it can run "quieter" and not give up anything in performance.

And what Tech Jesus was saying, was that AMD blind fanbois will just parrot "AMD says 110C is OK!" and it isn't really the case, and can be easily misconstrued. 110C is ok as Junction temp and ONLY Junction temp. And by "ok", it just means the silicon can handle it. You really would not want this to be your operating temp, as all the heat has to go somewhere, and it goes into your case and heats the surrounding components on the card. This affects the lifespan of everything.

So of course AMD is going to say "It's ok" because they gotta sell out of the shit blower cards they made or lose money. And the cards will likely last thru the warranty period. In 5 years, compare how many of these cards have died vs the better cooled brothers, and you will see the point we are trying to make.

Believe what you want though, it's a free country. Ignore the advice and recommendations of people smarter and more experienced than you, that's all the circle of life stuff. :)

And just how have you determined that those other folks are smarter and more experienced than he is? Do you know the poster you quoted personally? Dude, it is much ado about nothing, as evidenced by the aftermarket cards selling out and the reference cards now staying in stock. (Reference cards are far better for water cooling, however.) I see no one claiming that you should just buy a reference card because AMD says it is OK, no problem. Their reference cards are most likely going to sell to people with a water-cooling loop now. After all, it is a waste of money to remove a custom cooler if you are going to water-cool the card anyway.
 
It's not going to just up and die with 110C junction temps.. we never said it would. It will "wear out" at a faster pace @110C, than the same card kept cooler.
If there had been any OC headroom left in the chip, you need it to be cool(er than 110C) to be able to tap into that (sadly there's not).
Better cooling means it can run "quieter" and not give up anything in performance.

And what Tech Jesus was saying, was that AMD blind fanbois will just parrot "AMD says 110C is OK!" and it isn't really the case, and can be easily misconstrued. 110C is ok as Junction temp and ONLY Junction temp. And by "ok", it just means the silicon can handle it. You really would not want this to be your operating temp, as all the heat has to go somewhere, and it goes into your case and heats the surrounding components on the card. This affects the lifespan of everything.

So of course AMD is going to say "It's ok" because they gotta sell out of the shit blower cards they made or lose money. And the cards will likely last thru the warranty period. In 5 years, compare how many of these cards have died vs the better cooled brothers, and you will see the point we are trying to make.

Believe what you want though, it's a free country. Ignore the advice and recommendations of people smarter and more experienced than you, that's all the circle of life stuff. :)

Well, to be fair, a lot of cards include a 3-year warranty, and to be honest, digital jesus (as you put it) also showed that none ran at 110C, so if yours do run that high, well, you have another problem.
Also, from my limited experience, cards died from memory artefacts or fans wayyy before the actual GPU IC.
I'm not defending AMD here; their choice of words was poor at best, but I do understand that they need to make one bold statement that every average Joe can understand. The PR dept will do some OT to fix this.
 
you really would not want this to be your operating temp, as all the heat has to go somewhere, and it goes into your case and heats the surrounding components on the card.

Believe what you want though, it's a free country. Ignore the advice and recommendations of people smarter and more experienced than you, that's all the circle of life stuff. :)

You're talking like heat and temperature are the same thing. They are not.
All of this, and then your closing statement? C'mon.
This is on a cutting-edge process, and AMD has years of experience with hot chips. Ignore the advice of people smarter than you!? Hypocrite much?
 
You're talking like heat and temperature are the same thing. They are not.
All of this, and then your closing statement? C'mon.
This is on a cutting-edge process, and AMD has years of experience with hot chips. Ignore the advice of people smarter than you!? Hypocrite much?

One thing I didn't articulate well (I was in a rush to get out of the office) was that there is a higher heat density if you allow the chip to get that hot, and surrounding parts on the card will suffer. If the heat exits out the back (not 100% sure, as I do not own one), then the card itself is solely subjected to the higher temps. This is harder on your card no matter how you or AMD spins it. Now I will read the rest of the replies.
 
And just how have you determined that those other folks are smarter and more experienced than he is? You know the poster you quoted personally? Dude, it is much ado about nothing, as evidenced by the Aftermarket cards selling out and the reference cards now staying in stock. (Reference cards are far better for water cooling, however.) I see no one claiming that you should just buy a reference card, because AMD says it is ok, no problem. Their reference cards are most likely going to sell to people with a water cooling loop now. After all, it is a waste of money to remove a custom cooler if you are going to water cool it anyways.

Yeah, but not everyone who walks into Best Buy or Microcenter to buy a card is going to know any better, or buy one to watercool it. They listen to their buddies like travm who tell them "it's all fine, AMD says it's fine, those people on the forums can't prove anything, but I know what's best, so yeah, go buy that." Then it dies just outside the warranty period, or starts artifacting. High heat weakens solder joints over time (intense heat, then cooling, intense heat, cooling; this causes solder cracks), but AMD says it's fine.

I would never recommend that blower card to anyone, and would actively point out to avoid it.

Anyone can feel free to disagree, or call me a hypocrite.
 