90c too hot for gtx 570 on furmark?

munkle

I checked some reviews and all the load temps on their GTX 570s were in the low 80s or 70s, but they weren't running FurMark.

This is the card I have. At idle it is around 37-38c.
 
I'm really surprised at how much heat this thing puts out compared to my 6850. :p I have it in a test case right now and the front hard drive bays get super hot.
 
I don't get the FurMark thing; seems like a good way to burn up a video card.

I do it just to test stability on stuff I buy. I always test stuff right when I receive it; it's a lot easier to deal with Newegg than to ship it to the manufacturer.
 
but doesn't furmark stress way beyond what would ever be real world?

seems like redlining the shit out of a brand new engine.
 
I do it too. In fact it's good for testing purposes, depending on what you want to test.

When I received my additional 460 I was worried about heat inside my case. Five minutes of FurMark shut down my top card at 95C. Now I know I need a new case. Playing with the side open now :)


It's good to test stuff out, plus these cards have failsafes on them...
 
but doesn't furmark stress way beyond what would ever be real world?

seems like redlining the shit out of a brand new engine.

By engineering design, CPUs can run at 100% full-tilt 24/7 with no problems. So I don't understand why GPUs get a free pass on this "100% gpu load is bad" idea.

Chips are not vehicle engines. They are designed to meet a minimum cycle time, within a thermal budget, etc.
 
By engineering design, CPUs can run at 100% full-tilt 24/7 with no problems. So I don't understand why GPUs get a free pass on this "100% gpu load is bad" idea.

Chips are not vehicle engines. They are designed to meet a minimum cycle time, within a thermal budget, etc.

Yet stressing them beyond their normal thermal budget with specialized tools is not "100% load", it's "something no actual game/app ever causes to happen" and isn't really comparable.
 
Yet stressing them beyond their normal thermal budget with specialized tools is not "100% load", it's "something no actual game/app ever causes to happen" and isn't really comparable.

.... Then they should put a better cooler on it. I'm not paying for a thermal budget, I'm paying for processing power, which is exactly how they are sold: by the number of "cores" and the speed of them. If a CPU/GPU cannot handle the heat generated by its own processing power then that's a design flaw.
 
.... Then they should put a better cooler on it. I'm not paying for a thermal budget, I'm paying for processing power, which is exactly how they are sold: by the number of "cores" and the speed of them. If a CPU/GPU cannot handle the heat generated by its own processing power then that's a design flaw.

No CPU/GPU in years has been able to handle the heat from its own processing power on its own. Take a 5W Atom, pull off the heatsink, run it at 100%, and it will probably fry. When things overheat, it's a failure of the cooling. But the thermal environment the device is placed in is up to the user, and therefore its failure is to some significant degree user error. A card should function in a reasonable environment up to 100% load, but it can't be expected to have unlimited tolerances, and if people want to stress those tolerances it's up to them to upgrade the cooler beyond the factory spec. Or something.
 
No CPU/GPU in years has been able to handle the heat from its own processing power on its own. Take a 5W Atom, pull off the heatsink, run it at 100%, and it will probably fry. When things overheat, it's a failure of the cooling.

Obviously I was talking about a CPU/GPU with a cooler on it, since that was in my first sentence; nobody is going to run one without a heatsink.

But the thermal environment the device is placed in is up to the user, and therefore its failure is to some significant degree user error.
If everything were designed decently then yes, this would be true, but sometimes manufacturers just don't provide an adequate cooler for their products, or they design something else badly. The stock cooler for the 2600K is pretty crap: sure, it works fine under light use, but do anything that loads all 4 cores and that thing heats up more than I'm comfortable with.

A card should function in a reasonable environment up to 100% load, but it can't be expected to have unlimited tolerances, and if people want to stress those tolerances it's up to them to upgrade the cooler beyond the factory spec. Or something.
Understandable that if you go past the tolerances the part could die, but if everything is left stock at factory settings I wouldn't expect it to go past its tolerances (as long as everything else in your rig is also reasonable: cooling, PSU, etc.).
 
Yet stressing them beyond their normal thermal budget with specialized tools is not "100% load", it's "something no actual game/app ever causes to happen" and isn't really comparable.

This is where the rub is: the supplied GPU HSF is unable to handle the maximum thermal load generated, if not the VRMs also.

Who is to say a game won't come along that uses FurMark-like GPU resources? What about GPU computing?

(Don't even try that "sans HSF" argument, cripes that is part of the design.)
 
But it's like this. A car manufacturer designs brakes to stop a car going 100mph or something in a certain distance. But that distance will vary if it's raining, on ice, or downhill.

Same with a stock cooler: it can be expected to cool the unit at 100% load in a reasonable environment, but put it in a confined case or with a load of other hot components and it will probably not be sufficient. I was under the impression that the vapor chamber coolers on the recent Fermi cards were pretty decent?

Most GPU computing I've used (mainly CUDA) uses the VRAM more than the main cores (in fact it barely uses the cores at all).
 
.... Then they should put a better cooler on it. I'm not paying for a thermal budget, I'm paying for processing power, which is exactly how they are sold: by the number of "cores" and the speed of them. If a CPU/GPU cannot handle the heat generated by its own processing power then that's a design flaw.

Microprocessors are tested using commercially valuable tools, not specialized tools that create massive hotspots (FurMark) or tools intended solely to produce extreme temperatures (thermal viruses). Intel, AMD, and NVIDIA all get their TDP rating from running the most intensive and USEFUL application their chip is ever expected to run. On Intel/AMD CPUs that would be HPC stuff; on NVIDIA/AMD GPUs that would be GPGPU, games, CAD.

FurMark is a specialized tool that causes hotspots in a small part of the GPU, and it's not the overall GPU temperature that's dangerous but the temperature in THAT specific spot, which can get pretty darned toasty. If you want to test how a GPU cooler performs, run a really intensive game or do some GPGPU stuff on it.

For example, you can easily push Intel CPUs well over their TDP if you write a program that executes specific instructions at precise timings so as to get the most out of each clock and stresses both the floating-point and integer units like crazy, but that's outside the scope of what the CPU is designed for and thus not a design flaw.
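To make that concrete, here's a minimal sketch (illustration only, plain Python, nothing like a real thermal virus, which would be hand-tuned assembly scheduling specific instructions per clock) that just pegs every core with mixed floating-point and integer work:

    import math
    import multiprocessing as mp
    import time

    # Crude all-core load: each worker hammers the FPU and the integer ALU
    # in a tight loop. Interpreter overhead means this is nowhere near a
    # hand-tuned power virus; it only shows the idea.
    def burn(seconds=60):
        end = time.time() + seconds
        x, n = 1.0001, 1
        while time.time() < end:
            x = math.sqrt(x * 1.0001) + math.sin(x)      # floating-point work
            n = (n * 2654435761 + 12345) & 0xFFFFFFFF    # integer work
        return x, n

    if __name__ == "__main__":
        workers = [mp.Process(target=burn, args=(60,)) for _ in range(mp.cpu_count())]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

Watch your CPU temps while it runs and you'll likely see the package sit well above what typical desktop apps produce, even though nothing here is outside normal instruction use.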
 
Microprocessors are tested using commercially valuable tools, not specialized tools that create massive hotspots (FurMark) or tools intended solely to produce extreme temperatures (thermal viruses). Intel, AMD, and NVIDIA all get their TDP rating from running the most intensive and USEFUL application their chip is ever expected to run. On Intel/AMD CPUs that would be HPC stuff; on NVIDIA/AMD GPUs that would be GPGPU, games, CAD.

FurMark is a specialized tool that causes hotspots in a small part of the GPU, and it's not the overall GPU temperature that's dangerous but the temperature in THAT specific spot, which can get pretty darned toasty. If you want to test how a GPU cooler performs, run a really intensive game or do some GPGPU stuff on it.

For example, you can easily push Intel CPUs well over their TDP if you write a program that executes specific instructions at precise timings so as to get the most out of each clock and stresses both the floating-point and integer units like crazy, but that's outside the scope of what the CPU is designed for and thus not a design flaw.


Are you sure about that? I thought Linpack was used for Intel processor testing.
 
I don't get the FurMark thing; seems like a good way to burn up a video card.

I feel the same way. I don't think FurMark represents real-world use in any way, shape, or form. I think it's just asking to shorten the life of your video card.

Stability testing for me is letting Unigine run for about an hour or so. It takes my GPU to 100% the whole time without generating those insane temps, though it will still heat the card up a few degrees more than a regular game. Then I'll just play a little Crysis; if I don't see any artifacts or have any driver crashes, the card is stable enough for gaming.

I don't use my cards for folding or anything like that though. Just gaming.
 
For my testing I switched to the Heaven benchmark. It still works the card hard, but not nearly as bad as FurMark. Plus, it tends to show a problem quicker. When I OC'd my card past 960MHz, FurMark ran fine, but Heaven shut down in 15 seconds. After that I quit using FurMark: unrealistic stress, and it doesn't catch an instability.
 
I found that the NVIDIA demo with the alien walking on the planet was a pretty good stress test, especially while shooting the alien. That and an hour of Crysis.
 
but doesn't furmark stress way beyond what would ever be real world?

seems like redlining the shit out of a brand new engine.

I mean, coming from doing a lot of work with cars and newly built engines, the rings seal a hell of a lot better than when the engine is babied for the first 100 or so miles.

Back to the GPU: I dunno about you, but if I buy a product I want to use it and test it to make sure it is up to spec.
 
FurMark doesn't even work properly with the GTX 570 due to power limiting. I recently received a bum card that could run FurMark or Kombustor with no issue but would artifact in games. Crysis made it happen almost right away.
 
I checked some reviews and all the load temps on their GTX 570s were in the low 80s or 70s, but they weren't running FurMark.

This is the card I have. At idle it is around 37-38c.

I don't know the thermal throttling point for the GTX 570, but for my 5970 it's 96C before throttling kicks in. FurMark is a program to stress the card to its maximum specifications. Will this happen in a video game? I haven't seen any game that has stressed my GPUs to 96C; on average my temps in a graphically intense game like The Witcher 2 were 88C max.

The reason I use FurMark is to make sure that when I re-apply my TIM it's applied correctly and I don't reach my GPU's maximum threshold.

Just keep a log of your temps to see how you do, and re-apply TIM if you need to lower your temps. Every card is different; some of the cards I've had came with very poorly applied TIM that was just oozing from the corners of the heatsink.
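If you want an easy way to keep that log, something like this does the trick (just a sketch, assuming nvidia-smi is installed and your driver supports its temperature query): it polls the GPU core temperature once a second and appends it to a CSV file.

    import subprocess
    import time

    # Poll the GPU core temperature once a second and append it to temps.csv.
    # Assumes nvidia-smi is on the PATH and the driver supports --query-gpu.
    with open("temps.csv", "a") as log:
        while True:
            temp = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=temperature.gpu",
                 "--format=csv,noheader,nounits"]
            ).decode().strip()
            log.write("%s,%s\n" % (time.strftime("%H:%M:%S"), temp))
            log.flush()
            time.sleep(1)

Let it run in the background while FurMark or a game is going, then compare the logged temps before and after re-applying the TIM.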
 
No CPU/GPU in years has been able to handle the heat from its own processing power on its own. Take a 5W Atom, pull off the heatsink, run it at 100%, and it will probably fry. When things overheat, it's a failure of the cooling. But the thermal environment the device is placed in is up to the user, and therefore its failure is to some significant degree user error. A card should function in a reasonable environment up to 100% load, but it can't be expected to have unlimited tolerances, and if people want to stress those tolerances it's up to them to upgrade the cooler beyond the factory spec. Or something.

Intel had a thermal throttling solution in 2001 on the Pentium 4: http://www.youtube.com/watch?v=06MYYB9bl70

You don't think other chip designers have built in similar techniques over the past 10 years?
 
Intel had a thermal throttling solution in 2001 on the Pentium 4: http://www.youtube.com/watch?v=06MYYB9bl70

You don't think other chip designers have built in similar techniques over the past 10 years?

Intel expects its users to supply and/or install their own thermal solution, meaning they plan for improperly installed heatsinks.

Why would AMD or NVIDIA expect that? I suspect their graphics drivers crash long before the card is actually damaged if the OEM heatsink is installed.
 