This is why GTX590s blow up. Infrared thermography: GTX590 vs HD6990

Greg b has been informed.


Both ATI & NV dual cards are not really up to scratch this time around for various reasons.

They rarely are. Too many design compromises need to be made in order to create them. Many of those compromises are either unacceptable or simply short-sighted. They tend to take what should be a great card and make a mediocre card at best. Sure, they tend to rip up the benchmarks, but the enthusiast isn't that stupid. We know what the numbers mean and can look beyond the marketing slides. When you do, you quickly discover that these dual GPU cards are generally a bad choice in most circumstances. Their high-end single-GPU offerings in SLI or Crossfire tend to be far better performers.

You can wait for companies like ASUS to take a no-holds-barred approach to dual GPU cards, but unfortunately you end up with cards like the ASUS MARS which are too late to the party to have any significant relevance in the enthusiast market. These solutions tend to be overpriced, so that argument against them factors in for some. Typically the price alone isn't their main problem, but combined with the late release it certainly becomes an issue. What we've really seen is a shift in strategy by AMD and NVIDIA. Rather than milking a GPU architecture for several generations, they try to concentrate on getting a new one out every two years at most, with the upgrades in between being incremental at best. They tend to use some kind of dual GPU solution to pad that six-month refresh cycle just so they've got a new product to pimp. This is why I tend to avoid the product refresh cycles and await entirely new architectures. However, a refreshed product can sometimes be compelling enough for me to upgrade to it, though this is an extremely rare occurrence. The GTX 580 is one example of a refresh that was worthwhile and the 8800 Ultra is an example of one that was not.

That being said, not all dual GPU cards have been trash, but they've not really been on par with what I've thought they should be. The Radeon HD 5970 was probably the last dual GPU card I liked, but I was disappointed by its stock clocks. Easily remedied, but we have no such remedy for the GTX 590. With its current cooling solution I'd dare say that GTX 580 clocks are probably impossible to achieve in most if not all circumstances, at least in realistic scenarios where the computer is used daily inside a full case at normal ambient temperatures.
 
Now this is the reason why GTX 590s blow up:

GTX 590 3dMark03 run from the folks over at lab501.ro.

Same guys who tried 1.3V, and whose card blew at "stock".

Must be weak VRM :confused:
 
They rarely are. Too many design compromises need to be made in order to create them. Many of those compromises are either unacceptable or simply short-sighted. They tend to take what should be a great card and make a mediocre card at best. Sure, they tend to rip up the benchmarks, but the enthusiast isn't that stupid. We know what the numbers mean and can look beyond the marketing slides. When you do, you quickly discover that these dual GPU cards are generally a bad choice in most circumstances. Their high-end single-GPU offerings in SLI or Crossfire tend to be far better performers.

You can wait for companies like ASUS to take a no-holds-barred approach to dual GPU cards, but unfortunately you end up with cards like the ASUS MARS which are too late to the party to have any significant relevance in the enthusiast market. These solutions tend to be overpriced, so that argument against them factors in for some. Typically the price alone isn't their main problem, but combined with the late release it certainly becomes an issue. What we've really seen is a shift in strategy by AMD and NVIDIA. Rather than milking a GPU architecture for several generations, they try to concentrate on getting a new one out every two years at most, with the upgrades in between being incremental at best. They tend to use some kind of dual GPU solution to pad that six-month refresh cycle just so they've got a new product to pimp. This is why I tend to avoid the product refresh cycles and await entirely new architectures. However, a refreshed product can sometimes be compelling enough for me to upgrade to it, though this is an extremely rare occurrence. The GTX 580 is one example of a refresh that was worthwhile and the 8800 Ultra is an example of one that was not.

That being said, not all dual GPU cards have been trash, but they've not really been on par with what I've thought they should be. The Radeon HD 5970 was probably the last dual GPU card I liked, but I was disappointed by its stock clocks. Easily remedied, but we have no such remedy for the GTX 590. With its current cooling solution I'd dare say that GTX 580 clocks are probably impossible to achieve in most if not all circumstances, at least in realistic scenarios where the computer is used daily inside a full case at normal ambient temperatures.

I do own 2 Sapphire 5970 Toxic 4GB cards.

Dual GPU cards should only be used when going Trifire/Quadfire/Quad-SLI, or when you want dual GPUs but space for two cards or the number of PCIe slots is a real issue.
 
http://techreport.com/discussions.x/20677

The rumor mill wasn't finished, though. Another assertion of problems reached us yesterday afternoon via different channels, based in part on these sweet thermal camera readings, which clearly show temperatures as high as 112° C at the center of the GTX 590. This info, we were told, proves the GPU's on-die thermal sensors are being programmed to under-report temperatures. The solution? We should try an infrared thermometer aimed at the back of the card, instead.
So we did.

Probably didn't need to, though. If you look carefully at those camera readings, you'll see that the highest temperatures reached in their measurements are for the power regulation circuitry at the center of the card, not the two GPUs on the sides. Our own readings with an IR thermometer showed that the metal plates behind the GPUs were cooler than our prior temperature sensor readings had been. In other words, the sensors were probably not reporting artificially low results. Yes, the power circuitry gets hotter—up to 106° C, in our measurements—but we have no sense that such temperatures constitute a problem. Hot VRMs aren't exactly uncommon.

All of which leads us back to exactly where we started, with no evidence of basic problems in the GTX 590's operation beyond, you know, the initial exploding drivers. Heh. We do have some evidence of additional, sloppily made insinuations of problems, which I suppose shouldn't be too surprising.
Trouble is, a lot of these forum rumors tend to be given tremendous credence by a lot of folks. Heck, every once in a blue moon, one of those rumors blows up into something big because there's a real problem underlying it. That could yet be the case with the GTX 590—or the Radeon HD 6990 or, well, I guess Sandy Bridge already went there. Such rumors are also an intriguing source of information because so many of 'em seem to be planted by a major, engineering-focused organization—you know, a competing firm.
 
Hrm, besides your video card melting, I'm a bit more concerned about the other things attached to it or sitting beside it that warranty won't cover... It will be interesting to see what the long-term effects on them would be from the elevated temps that are unavoidable with this card.
 
... sure... relying on a driver so you don't blow up your card is a way to end all the rumors ... :x

We shouldn't blame Nvidia for the bad power circuit design that they put on the card, hm hm ... no sir, what we should do is thank Nvidia for launching such good drivers ...

bahh :x
 
VRMs will definitely degrade when they get hot... especially at those temps...
 
I do own 2 Sapphire 5970 Toxic 4GB cards.

Dual GPU cards should only be used when going Trifire/Quadfire/Quad-SLI, or when you want dual GPUs but space for two cards or the number of PCIe slots is a real issue.

Can you please post your 5970 VRM temps during Furmark?
If it's not asking too much...

Thanks.
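If it helps, one low-effort way to grab that number: GPU-Z can log its sensors to a text file while Furmark runs, and the peak reading can be pulled out of the log afterwards. A rough sketch, assuming a comma-separated log with a header row and a column whose name mentions "VRM" (the exact column name and the default log file name vary by card and GPU-Z version):

```python
# Rough sketch: pull the peak VRM temperature out of a GPU-Z sensor log.
# Assumes a comma-separated log with a header row; the column holding the
# VRM temperature varies by card, so we simply match any header with "VRM".
import csv

def max_vrm_temp(log_path):
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        reader = csv.reader(f)
        header = [h.strip() for h in next(reader)]
        vrm_cols = [i for i, h in enumerate(header) if "VRM" in h.upper()]
        peak = None
        for row in reader:
            for i in vrm_cols:
                try:
                    value = float(row[i].strip())
                except (ValueError, IndexError):
                    continue
                peak = value if peak is None else max(peak, value)
        return peak

print(max_vrm_temp("GPU-Z Sensor Log.txt"))  # file name is an assumption
```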



@YeuEmMaiMai

Do you by any chance know which card this is :rolleyes:

[thermal imaging of a Radeon HD 4870]
 
It's not the temps that are the problem here, children. It's the piss-poor, WEAK VRMs.

Are people even old enough to remember the VRMs reaching 125°C on the HD 4870s in Furmark, and crashing? And yes, people were volt-modding those cards, too. But those cards didn't blow up. They just crashed. Yes, I think I remember cards dying, but that was usually from repeated abuse. They didn't just go "poof" within minutes like we're seeing with these 590s or 570s.

The GPUs on the 590s can easily handle 1.2V. You just have to keep them cool. The MOSFETs can't handle the current.
 
A Shameful Display this card has been; mine just died on me!! I was playing Shogun 2 and all of a sudden the screen went blank and I heard a loud pop. Going to test all my other hardware out, but so far I am back up with a GTX 465 just fine. Going to put my XFX 6970s back in.
 
Check the topic name - this thread is about the temperatures.
Weak VRM? Would you even try to substantiate this?

Weak because they can't take 1.2V?
1.2V is not safe for a 570, nor for a 580. Why should it be possible on a 0.91V dual GF110?

But...
Actually it is a matter of temps.
And the VRMs can even go to 1.2V with good cooling.
Which should be obvious soon enough with blocks from EK, Danger Den and co.

And when they do get overvolted to 1.2V, by your standards that would mean the 6990 is weak if it can't overvolt to 1.47V. Good luck with that. The highest I've seen is 1.30V.
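For a rough sense of why 1.2V on a dual GF110 is a different animal, here's a back-of-the-envelope sketch. It assumes dynamic power scales roughly with V² x f, uses the 590's 365W board power and 607MHz stock clock as the baseline, and guesses that about 150W per GPU of that budget goes to the GPU cores themselves (the real split between GPUs, memory and VRM losses isn't public):

```python
# Back-of-the-envelope: how per-GPU power scales with voltage and clock.
# Assumes dynamic power ~ V^2 * f; the 150W-per-GPU baseline is a guess.

def scaled_power(p_base_w, v_base, v_new, f_base_mhz, f_new_mhz):
    """Scale a baseline power figure with voltage squared and clock."""
    return p_base_w * (v_new / v_base) ** 2 * (f_new_mhz / f_base_mhz)

p_gpu_stock = 150.0  # assumed per-GPU share of the 365W board power
p_gpu_overvolted = scaled_power(p_gpu_stock, v_base=0.93, v_new=1.2,
                                f_base_mhz=607.0, f_new_mhz=772.0)
print(f"per-GPU power at 1.2V / 772MHz: ~{p_gpu_overvolted:.0f}W")
print(f"both GPUs together: ~{2 * p_gpu_overvolted:.0f}W before memory and VRM losses")
```

Even with generous assumptions, 580-class clocks and voltage land somewhere around double the per-GPU power the stock board was sized for, which is really what both sides of this thread are arguing about.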
 
Check the topic name - this thread is about the temperatures.
Weak VRM? Would you even try to substantiate this?

Weak because they can't take 1.2V?
1.2V is not safe for a 570, nor for a 580. Why should it be possible on a 0.91V dual GF110?

But...
Actually it is a matter of temps.
And the VRMs can even go to 1.2V with good cooling.
Which should be obvious soon enough with blocks from EK, Danger Den and co.

And when they do get overvolted to 1.2V, by your standards that would mean the 6990 is weak if it can't overvolt to 1.47V. Good luck with that. The highest I've seen is 1.30V.

I would say weak because they cannot handle the sustained heat......
 
A Shameful Display this card has been; mine just died on me!! I was playing Shogun 2 and all of a sudden the screen went blank and I heard a loud pop. Going to test all my other hardware out, but so far I am back up with a GTX 465 just fine. Going to put my XFX 6970s back in.

Could you help us out and take a picture of the backside of the card, to see if it is the same component that has been documented in other failures?

Thanks
 
They should've done a process shrink first before releasing these two cards. They are so hung up on the "fastest card in the world" tag that they skimped on proper testing.
 
A Shameful Display this card has been; mine just died on me!! I was playing Shogun 2 and all of a sudden the screen went blank and I heard a loud pop. Going to test all my other hardware out, but so far I am back up with a GTX 465 just fine. Going to put my XFX 6970s back in.

Ouch! Were you overclocking/volting at all?
 
Check the topic name - this thread is about the temperatures.
Weak VRM? Would you even try to substantiate this?

Weak because they can't take 1.2V?
1.2V is not safe for a 570, nor for a 580. Why should it be possible on a 0.91V dual GF110?

But...
Actually it is a matter of temps.
And the VRMs can even go to 1.2V with good cooling.
Which should be obvious soon enough with blocks from EK, Danger Den and co.

And when they do get overvolted to 1.2V, by your standards that would mean the 6990 is weak if it can't overvolt to 1.47V. Good luck with that. The highest I've seen is 1.30V.

Weak because it's the VRMs EXPLODING.
Not the chips degrading.

I suggest you study overclocking and the effects of electromigration, and also the effects of cooling ICs. Yes, the MOSFETs can be cooled, but they should be able to take the load of supplying 1.2V to a power-hungry GPU. Otherwise, you split the load across more phases (or whatever it's called), the same way that adding more wires to a connector lowers the amount of heat on each wire. I'm not a tech wizard, but it's clear that someone skimped on the MOSFET quality, because they didn't WANT the cards to be running at these voltages. I mean, do you REALLY think a 590 can't be designed with MOSFETs that can handle a 500W power draw? If you actually believe that, I'll sell you some land in Florida.
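To put some purely illustrative numbers on that load-splitting point (the load, core voltage and Rds(on) figures below are hypothetical, not the 590's actual parts), the current through each phase and the conduction loss in each MOSFET both fall quickly as phases are added:

```python
# Illustrative only: per-phase current and MOSFET conduction loss.
# The load, core voltage and Rds(on) figures are hypothetical examples.

def per_phase(gpu_power_w, vcore, phases, rds_on_ohms):
    """Return (current per phase in A, conduction loss per MOSFET in W)."""
    total_current = gpu_power_w / vcore   # I = P / V on the core rail
    i_phase = total_current / phases      # load shared across the phases
    loss_w = i_phase ** 2 * rds_on_ohms   # conduction loss = I^2 * R
    return i_phase, loss_w

for phases in (4, 5, 8, 10):
    amps, loss = per_phase(gpu_power_w=250, vcore=1.2,
                           phases=phases, rds_on_ohms=0.004)
    print(f"{phases} phases: {amps:.0f}A per phase, ~{loss:.1f}W lost in each MOSFET")
```

Halving the per-phase current roughly quarters the I²R loss in each MOSFET, which is the whole argument for a beefier power stage on a card in this class.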

If the GPU can't handle 1.2V, then the GPU ITSELF would die or degrade, or start giving flickering polys or other weird stuff. Not the freaking MOSFET. The GPU and the PWM circuitry are TWO DIFFERENT things.
If you overclock a Sandy Bridge too far, the chip will degrade at its maximum clocks. The MOSFETs won't explode, unless you're using a board that wasn't meant to handle that type of power draw. A few people made their MSI boards explode by setting the phases (a setting that should NEVER have been in the software) to 1 phase. MSI promptly pulled the software and released a version with that functionality removed.
But more to the point, why do you think high-end mainboards like the UD7 and M4E have a robust phase system?
 
Weak because it's the VRMs EXPLODING.
Not the chips degrading.

I suggest you study overclocking and the effects of electromigration, and also the effects of cooling ICs. Yes, the MOSFETs can be cooled, but they should be able to take the load of supplying 1.2V to a power-hungry GPU. Otherwise, you split the load across more phases (or whatever it's called), the same way that adding more wires to a connector lowers the amount of heat on each wire. I'm not a tech wizard, but it's clear that someone skimped on the MOSFET quality, because they didn't WANT the cards to be running at these voltages. I mean, do you REALLY think a 590 can't be designed with MOSFETs that can handle a 500W power draw? If you actually believe that, I'll sell you some land in Florida.

If the GPU can't handle 1.2V, then the GPU ITSELF would die or degrade, or start giving flickering polys or other weird stuff. Not the freaking MOSFET. The GPU and the PWM circuitry are TWO DIFFERENT things.
If you overclock a Sandy Bridge too far, the chip will degrade at its maximum clocks. The MOSFETs won't explode, unless you're using a board that wasn't meant to handle that type of power draw. A few people made their MSI boards explode by setting the phases (a setting that should NEVER have been in the software) to 1 phase. MSI promptly pulled the software and released a version with that functionality removed.
But more to the point, why do you think high-end mainboards like the UD7 and M4E have a robust phase system?

Anything past 8+2 phases for SB is overkill... but back on topic...

The 570 and 580 should be able to withstand the measly 1.2V.

The first generation of Fermi was able to take the voltages... Why? Because of the phases that delivered the power.

The 570's main weakness in overclockability would be its weak phases... which Falkentyne explained pretty well.

Also... a thread about temps? Really? GPUs are able to withstand near boiling point. Why make a thread about its temperatures?

Some VRMs are able to handle up to 110°C on AMD cards. After that, I would worry about lowering it.

And really, that's not the correct way to measure temps.
 
It's not only about "handling the 110°C", KicktheCan ...

It's about handling 110°C without losing delivery efficiency!
Some VRMs can deliver 40W at 60°C, but at 110°C they can only handle 10W of heat dissipation ... :x
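That derating point can be sketched numerically. Taking the two figures quoted above (40W of dissipation capability at 60°C, 10W at 110°C) at face value and assuming, purely for illustration, a linear derating curve between them:

```python
# Simple linear derating sketch built from the two figures quoted above.
# Real VRM derating curves come from the part's datasheet; this is illustrative.

def derated_capability_w(temp_c, t_low=60.0, cap_low_w=40.0,
                         t_high=110.0, cap_high_w=10.0):
    """Linearly interpolate allowable dissipation between two data points."""
    if temp_c <= t_low:
        return cap_low_w
    if temp_c >= t_high:
        return cap_high_w
    frac = (temp_c - t_low) / (t_high - t_low)
    return cap_low_w + frac * (cap_high_w - cap_low_w)

for t in (60, 80, 100, 110):
    print(f"{t}°C -> ~{derated_capability_w(t):.0f}W allowable dissipation")
```

So a regulator that looks comfortably over-specced at 60°C can be right at its limit once the card is soaking at 100°C+.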
 
Ouch! Were you overclocking/volting at all?

Heck no, and the place where it has the smoking crap is where the backplate is, unfortunately... I called EVGA and they are going to get me a replacement. Going to sell that mofo then...
 
Weak because it's the VRMs EXPLODING.
Not the chips degrading.

I suggest you study overclocking and the effects of electromigration, and also the effects of cooling ICs. Yes, the MOSFETs can be cooled, but they should be able to take the load of supplying 1.2V to a power-hungry GPU. Otherwise, you split the load across more phases (or whatever it's called), the same way that adding more wires to a connector lowers the amount of heat on each wire. I'm not a tech wizard, but it's clear that someone skimped on the MOSFET quality, because they didn't WANT the cards to be running at these voltages. I mean, do you REALLY think a 590 can't be designed with MOSFETs that can handle a 500W power draw? If you actually believe that, I'll sell you some land in Florida.

If the GPU can't handle 1.2V, then the GPU ITSELF would die or degrade, or start giving flickering polys or other weird stuff. Not the freaking MOSFET. The GPU and the PWM circuitry are TWO DIFFERENT things.
If you overclock a Sandy Bridge too far, the chip will degrade at its maximum clocks. The MOSFETs won't explode, unless you're using a board that wasn't meant to handle that type of power draw. A few people made their MSI boards explode by setting the phases (a setting that should NEVER have been in the software) to 1 phase. MSI promptly pulled the software and released a version with that functionality removed.
But more to the point, why do you think high-end mainboards like the UD7 and M4E have a robust phase system?

So they can deliver greater amounts of power with greater efficiency and survive?

But back to the point. I have Jen-Hsun here with me, and I was wondering if you guys have any other wishes. Besides 2 full GF110 GPUs with 512 CUDA cores each, the quietest dual GPU card of the last decade, about the same price and performance as the competition, 3D, yada yada...

Would 1.2V really be OK? I mean, there were 580s at 1.6V and more than 1600MHz, and this card is even more expensive than a 580.

In what language should those final voltages/clocks be communicated to you, since English can't reach some of you nor those 5 reviewers?

Would the final recommended voltages/clocks really be final?
If so, and since exploding VRMs are unacceptable, in what particular way do you want the card to croak?
 
Yes, it is asking too much, as I don't use any GPU-stressing apps; I'm more interested in their game performance.

I run a 5970 with liquid @ 1000/1250 and IF I run Furmark, I could melt the polar ice caps... I've seen my primary VT1165 reach 115°C+. Scary... I can keep those clocks and run any games or 3DMark and not see more than 60-70°C on the vregs. I don't know if it's just the DangerDen block I have that does JACK for cooling the vregs, or if it's all of them.
 
I run a 5970 with liquid @ 1000/1250 and IF I run Furmark, I could melt the polar ice caps... I've seen my primary VT1165 reach 115°C+. Scary... I can keep those clocks and run any games or 3DMark and not see more than 60-70°C on the vregs. I don't know if it's just the DangerDen block I have that does JACK for cooling the vregs, or if it's all of them.

I'm going to try to put these on water in the new build, and I will monitor the temps with MSI Afterburner while gaming.
 
Mine burned in the same place, no overclock or overvolt


Stock, or tbreak.com, SWE, Brazilian MASTER BLASTER, W1zzard, Lab501.ro "stock"?

I run a 5970 with liquid @ 1000/1250 and IF I run Furmark, I could melt the polar ice caps... I've seen my primary VT1165 reach 115°C+. Scary... I can keep those clocks and run any games or 3DMark and not see more than 60-70°C on the vregs. I don't know if it's just the DangerDen block I have that does JACK for cooling the vregs, or if it's all of them.

I've never seen them at 130°C+ though :D
 
First of all, that's Fudzilla, and secondly, I'm not blaming anyone.
But it does turn out that quite a few allegedly stock cards were really only "stock".

@Loafdogg420 Same here. I'd never run it on anything that I don't consider ROCK stable. Dunno exactly, but I've heard some of those VRMs hit 160°C+
 
First of all, that's Fudzilla, and secondly, I'm not blaming anyone.
But it does turn out that quite a few allegedly stock cards were really only "stock".

@Loafdogg420 Same here. I'd never run it on anything that I don't consider ROCK stable. Dunno exactly, but I've heard some of those VRMs hit 160°C+

oooo, that makes me feel a tiny bit better. I'm the type that will take the whole box apart for 1°C, so... :D
 
I run a 5970 with liquid @ 1000/1250 and IF I run... I don't know if it's just the DangerDen block I have that does JACK for cooling the vregs, or if it's all of them.

It's a well-known issue with your DD waterblock. I don't have high temps like that with Furmark on an EK 5970 block at 1000/1200. VRM temps never go over 80°C on mine. Your block is not cooling the VRMs enough.
 
Now this is the reason why GTX 590s blow up:

GTX 590 3dMark03 run from the folks over at lab501.ro.

Same guys who tried 1.3V, and whose card blew at "stock".

Must be weak VRM :confused:
That card is still alive and kicking. He doesn't say that the card died. He analyzes the build quality, extreme thermals and OC behaviour.

The OC tests were done with 2 TRUE coolers and a max of 1.1V (all written in the test). He didn't try 1.3V because of the poor construction. He didn't do an endurance test for the same reason. He says in the article, "I did everything I could not to destroy the board, a concession the 590 does not deserve".

Anyway, poparamiro is a cooling freak. That's his passion in testing. Also build quality. He doesn't have any opinions there about performance, about Nvidia this or ATI that. His tests there are "coolers", "thermal compounds" and "extreme OC on video cards".
He loves to see well-built, quality hardware and testing it to the limit.
See, for example, the Gigabyte GTX SOC series, Nvidia boards well praised by him for their construction.

Also, they do have a stock card that died, the one used in their 590 vs 6990 review.

There's also another Romanian site that reviewed the cards... they seemed fantastic to them... after they published the review both their cards died, but still, in their opinion the card remained fantastic and the bad part was only added in a footnote.
 
lab501.ro is quoted around as dead @ stock.
Are you saying that, in the middle of all those overclocks, that particular card missed all of them and died at stock?

You also think that blasting the build quality of a card which ships at 0.91-0.95V because it can't do 1.2V,
and at the same time praising a VRM which does 1.125-1.30V in that OC test, is normal?
 
2 more dead @ guru3d



One manned up about it and claims 1.05V at 772MHz;
the other, you know, the usual crap, mild OC/stock... 710MHz
 
@YeuEmMaiMai

Do you by any chance know which card this is :rolleyes:

[thermal imaging of a Radeon HD 4870]

Obviously, since my 4870 is still going strong since launch way back in 2008, it can be safely assumed that ATi does indeed use quality VRMs in their design, and one does not have to worry about said video card exploding...
 
We're talking component temperatures, right?

Saying

VRMs will definitely degrade when they get hot... especially at those temps...

and then in the next sentence claiming that doesn't apply to your card is a bit weak.

And you guys have yet to make the case that non-overvolted / non-OCed cards explode.
 
We're talking component temperatures, right?

Saying



and then in the next sentence claiming that doesn't apply to your card is a bit weak.

And you guys have yet to make the case that non-overvolted / non-OCed cards explode.

And how exactly are they meant to prove it to you?
Give us an example of how someone's card blows up and then, after the fact, they prove to you with their dead card that they did not OC or overvolt.
 