Krenum
Fully [H]
Joined: Apr 29, 2005
Messages: 19,193
So, should I wait for a good blower, or pull the trigger on the cheapest sucker I can find?
rx 5700 without the xt
I would buy the Sapphire model.
You are correct that many had faulty Elpida memory, and then tons of cards were bricked during the flashing days...
AMD allowed a custom BIOS to be signed that disabled 1/2 of the ROPs on the cards and changed memory straps/timings. The reduction in ROPs allowed the card to run at about 45% less power with virtually no mining performance hit...
Agree, but I know for the 5700 (the ones I'm interested in) it was only a $9 increase to get the Sapphire with better cooling (as long as your case is well ventilated). If you can find one with good fans for not much more than a blower, it'd be a good investment I think; otherwise it's subjective how annoying the blower really is. Some people say it doesn't bother them and some find it annoying. The actual hotspot temperature isn't much of a concern, as Nvidia cards are most likely similar; they just won't tell you what theirs are and only report edge temps. So compare edge temps to edge temps on other cards, not hotspot temps vs. edge temps... Hotspot temps are whatever.
Worry about the wattage flowing through the GPU (power in, heat out) and whether that is causing issues.
Unfortunately, yeah. It can be useful for determining whether you have a good seat and TIM coverage, but otherwise it's not much more useful than edge temp. Temperature is really meaningless here except when comparing like for like: the same sensor in the same spot in the same part with the same load, etc. It is not otherwise indicative of the amount of energy flowing through the part on a comparative basis.
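For anyone who wants to see their own edge vs. junction readings rather than argue about them: on Linux, the amdgpu driver exposes its sensors through the hwmon sysfs interface, where `temp1` is typically labeled "edge" and `temp2` "junction" (the hotspot this thread is about). A minimal sketch, with the hwmon path and labels being assumptions you should verify against your own `/sys` tree:

```python
from pathlib import Path

def read_temps(hwmon_dir):
    """Return {label: degrees C} for every tempN sensor under hwmon_dir."""
    temps = {}
    hwmon = Path(hwmon_dir)
    for label_file in sorted(hwmon.glob("temp*_label")):
        input_file = hwmon / label_file.name.replace("_label", "_input")
        if input_file.exists():
            label = label_file.read_text().strip()
            # hwmon reports temperatures in millidegrees Celsius
            temps[label] = int(input_file.read_text().strip()) / 1000.0
    return temps

# Example (hypothetical path): read_temps("/sys/class/hwmon/hwmon3")
# might return something like {"edge": 62.0, "junction": 84.0, "mem": 80.0}
# on an RX 5700 -- which is exactly the edge-vs-hotspot gap being discussed.
```

Comparing the "edge" and "junction" values on one card shows the delta; comparing junction on an AMD card to edge on an Nvidia card is the apples-to-oranges mistake the post above warns about.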
AMD didn't even enforce BIOS signing until Polaris. I had a 290 and did much BIOS modding myself -- never had to mess with any signing. In fact I used to mod the BIOS on all of my older AMD cards. I miss having that ability :/
If you wanted to disable ROPs, at least at the beginning, you had to buy a copy of TheStilt's mining BIOS for BTC/LTC.
He had a contact at AMD that helped him get in the door, and AMD realized the windfall they could get from mining performance, so it was approved.
You could adjust memory straps and such without his BIOS, but I have never seen someone able to disable ROPs without it.
AMD backtracked on this with the open-source Polaris Bios Editor.
Interesting, I never heard about that before.
Yeah, that BIOS was what put him on the map, so to speak. He is extremely talented, as his work on Ryzen's cache and memory systems has shown.
I am sure he made a fortune selling those BIOS files... Everyone that was mining back then, during the LTC and early ETH days, used AMD 290/390s.
The compression is lossless. Stop spreading FUD.
Say what you want, but Hawaii has its own look in rendered images; you can tell it was high end for its time even without color compression. The trouble was taming the beast: a 375-watt card.
Not sure what he is really going on about, since I have not heard anyone, including AMD, excusing poor cooler designs or minimizing the impact of OEM aftermarket cards and their coolers. The junction temp information was really good, but the rest was much ado about nothing.
High Temperatures: In general, in chemistry and electronics, operating at high temperatures results in effects that can significantly reduce the lifetime and reliability of a component. ... The usual rule of thumb based on the Arrhenius equation is that the lifetime of a device halves for every 10C rise in temperature.
Did you watch the whole video? Because he did an excellent job of explaining it. They are already seeing AMD defenders in other forums (Reddit, I think) defending AMD's shit blower cooler and saying 110C is fine. He explains why it is not fine, and how the general public has taken it out of context (easy to do thanks to poor wording or outright misleading statements by AMD).
Every added 10C a piece of electronics runs at reduces its expected lifetime by half (rule of thumb). Point being that the Sapphire card with 87C junction temps is far better than the reference card with 110C junction temps, and the Sapphire achieved it at 40 dB fan noise. For one, it will throttle once it hits 110C; do you really want to be bumping up against that? The edge where the device is threatening to go up in actual flames? (Not the silicon itself, but the surrounding materials.)
It's also common sense...
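The 10C rule of thumb quoted above can be put into numbers. A purely illustrative sketch (real degradation depends on voltage, workload, and the specific failure mechanism, so treat this as back-of-envelope only):

```python
def relative_wear_rate(temp_c, reference_c):
    """How much faster a part is expected to wear at temp_c vs. reference_c,
    using the Arrhenius-style rule of thumb: lifetime halves per 10C rise."""
    return 2 ** ((temp_c - reference_c) / 10.0)

# Reference blower at 110C junction vs. the Sapphire card at 87C:
print(round(relative_wear_rate(110, 87), 2))  # ~4.92x faster wear
```

So under this rule of thumb, the 23C gap between the two coolers works out to roughly a 5x difference in expected wear rate, which is why the argument is not just about noise.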
Goes to Reddit, AMD section, switches to hot posting and.........? Nope, not there. Scroll, scroll, scroll... much ado about nothing. If it was a big deal, it would be the hottest thread in the AMD section. Oh well.
Do you have the XT?
110C hotspot temp. Now you actually have insight into what the hottest temp is (well, Vega has it as well), whereas before you didn't... Stuff was running hotter than it displayed in every other design; you just didn't know. It's not like AMD is redefining what is safe for silicon; 110C is hot, sure, but it's safe. Can't believe everyone is freaking out over this, it's not even news.
Can't believe you... no wait, EVERYONE is freaking out about news posted in the news section.
Definition of news:
1a : a report of recent events
b : previously unknown information
c : something having a specified influence or effect
Just because you know about it, or don't want to hear about it, doesn't mean everybody else does or doesn't.
Hotspot temp has been a thing since Vega, for 2 years. Also, the thermal limits of silicon haven't changed in the last 2 years either. Nope, not news.
I saw the whole thing too. I think Digital Jesus (as some refer to him here) is nice and everything, but that video was a bit rambling... 110C is not optimal, but acceptable. That's all.
And that stinks, because it means I cannot undervolt my PowerColor RX 580 Red Devil via the BIOS and therefore have to use software to undervolt it and get those fans under control. (No, the reference Vega 56 and RX 5700 fans are not noisy, but those fans on my RX 580? Those are extremely noisy at full load with no custom fan curve.)
If you are 100% sure of some settings that your card can do 100% stable (undervolt, core/mem OC, memory timings (wouldn't mess with those for gaming)), then you can pull the BIOS, edit it with Polaris Bios Editor, and then flash the edited BIOS to the card so it works with your custom settings right out of the gate.
If you have a dual-BIOS card, there is zero risk. Back up the stock BIOS, edit a copy and flash that, and then enjoy never having to adjust anything via software.
If you run into any issue with a new game etc., you can switch to your 2nd BIOS, boot, and then flash your stock BIOS back.
If you can't find a copy of PBE, I can send you a copy at the end of the week when I have access to my gaming rig again.
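For reference, the pull/edit/flash workflow described above is usually done with amdvbflash (the modern ATIFlash) on the command line. A hedged sketch, assuming adapter index 0 and a single-GPU system; exact flags can vary between amdvbflash builds, so check `amdvbflash -h` on yours first:

```shell
# 1. Save the stock BIOS of adapter 0 -- keep this file safe, it is your
#    recovery path if the edited BIOS misbehaves.
amdvbflash -s 0 stock.rom

# 2. Open a COPY of stock.rom in Polaris Bios Editor, apply your undervolt /
#    memory-strap changes, and save it as modded.rom. Sanity-check that you
#    are not about to reflash the unmodified file by mistake:
cmp -s stock.rom modded.rom && { echo "modded.rom is identical to stock, edit it first"; exit 1; }

# 3. Flash the edited copy to adapter 0 (reboot afterwards).
amdvbflash -p 0 modded.rom
```

On a dual-BIOS card, do all of this on the secondary position so the untouched primary BIOS remains bootable if the flash goes wrong, as the post above suggests.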
Amd can tell us this is normal and expected. You cannot tell us it isn't.
No, it's not. Skylake ran hot, had bad TIM, and needed to be delidded to get good cooling results. That was not nothing; it was news. Navi runs too hot and needs watercooling or an aftermarket cooler to run at normalized temps. AMD makes a blog post implying that 110 is normal and expected; it is neither. I have a 5700 XT, but this blind fanboy garbage needs to stop. Root for competition and enthusiasts, not red or green.
Amd can tell us this is normal and expected. You cannot tell us it isn't.
When piles of 5700s start dying, that may be said, but as of right now AMD is the manufacturer of this chip and shall be trusted.
For me Navi is too expensive for a mid-range chip, so I'll not likely get to test this. But there is no reason not to believe these are acceptable limits.
It's not going to just up and die with 110C junction temps.. we never said it would. It will "wear out" at a faster pace @110C, than the same card kept cooler.
If there had been any OC headroom left in the chip, you need it to be cool(er than 110C) to be able to tap into that (sadly there's not).
Better cooling means it can run "quieter" and not give up anything in performance.
And what Tech Jesus was saying, was that AMD blind fanbois will just parrot "AMD says 110C is OK!" and it isn't really the case, and can be easily misconstrued. 110C is ok as Junction temp and ONLY Junction temp. And by "ok", it just means the silicon can handle it. You really would not want this to be your operating temp, as all the heat has to go somewhere, and it goes into your case and heats the surrounding components on the card. This affects the lifespan of everything.
So of course AMD is going to say "It's ok" because they gotta sell out of the shit blower cards they made or lose money. And the cards will likely last thru the warranty period. In 5 years, compare how many of these cards have died vs the better cooled brothers, and you will see the point we are trying to make.
Believe what you want though, it's a free country. Ignore the advice and recommendations of people smarter and more experienced than you, that's all the circle of life stuff.
you really would not want this to be your operating temp, as all the heat has to go somewhere, and it goes into your case and heats the surrounding components on the card.
Believe what you want though, it's a free country. Ignore the advice and recommendations of people smarter and more experienced than you, that's all the circle of life stuff.
You're talking like heat and temperature are the same thing. They are not.
All of this and your closing statement, c'mon.
This is on a cutting-edge process, and AMD has years of experience with hot chips. Ignore the advice of people smarter than you!? Hypocrite much?
And just how have you determined that those other folks are smarter and more experienced than he is? Do you know the poster you quoted personally? Dude, it is much ado about nothing, as evidenced by the aftermarket cards selling out and the reference cards now staying in stock. (Reference cards are far better for water cooling, however.) I see no one claiming that you should just buy a reference card because AMD says it is OK, no problem. Their reference cards are most likely going to sell to people with a water cooling loop now. After all, it is a waste of money to remove a custom cooler if you are going to water cool it anyway.