I checked some reviews, and the load temps on their GTX 570s were all in the 70s or low 80s, but they weren't running Furmark.
This is the card I have. At idle it is around 37-38c.
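If anyone wants to log those numbers themselves, here's a minimal sketch in C for Linux. It assumes NVIDIA's nvidia-smi utility is installed and that your driver exposes the temperature query (support varies by driver and card); it just polls once a second.

/* Minimal GPU temperature logger via nvidia-smi (Linux).
 * Compile: cc -o gputemp gputemp.c && ./gputemp */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    for (;;) {
        /* nvidia-smi prints the core temperature in degrees C */
        FILE *p = popen("nvidia-smi --query-gpu=temperature.gpu "
                        "--format=csv,noheader", "r");
        if (!p) { perror("popen"); return 1; }
        char line[64];
        if (fgets(line, sizeof line, p))
            printf("GPU temp: %s", line);
        pclose(p);
        sleep(1);
    }
}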
It's fine for Furmark.
I'm really surprised; the amount of heat this thing puts out is enormous compared to my 6850.
I don't get the Furmark thing; it seems like a good way to burn up a video card.
But doesn't Furmark stress way beyond what would ever be real world?
seems like redlining the shit out of a brand new engine.
By engineering design, CPUs can run at 100% full-tilt 24/7 with no problems. So I don't understand why GPUs get a free pass on this "100% GPU load is bad" idea.
Chips are not vehicle engines. They are designed to meet a minimum cycle time, within a thermal budget, etc.
Yet stressing them beyond their normal thermal budget with specialized tools is not "100% load", it's "something no actual game/app ever causes to happen" and isn't really comparable.
Then they should put a better cooler on it. I'm not paying for a thermal budget, I'm paying for processing power, which is exactly how they are sold: by the number of "cores" and their speed. If a CPU/GPU cannot handle the heat generated by its own processing power, then that's a design flaw.
No CPU or GPU in years has been able to handle the heat from its own processing power unaided. Take a 5 W Atom, pull off the heatsink, and run it at 100%; it will probably fry. When things overheat, that's a failure of the cooler.
If it was designed decently, then yes, this would be true, but sometimes manufacturers just don't provide an adequate cooler for their products, or they design something else badly. The stock cooler for the 2600K is pretty crap; sure, it works fine when you're using it lightly, but do anything that loads all 4 cores and that thing heats up to more than I am comfortable with.

But the problem is that the thermal environment the device is placed in is up to the user, and therefore its failure is to some significant degree user error.

Understandable: if you go past the tolerances, the part could die. But if everything is left stock at factory settings, I wouldn't expect something to go past its tolerances (as long as everything else in your rig is also reasonable: cooling, PSU, etc.).

A card should function in a reasonable environment up to 100% load. But it cannot be expected to have unlimited tolerances, and if people want to stress those tolerances it's up to them to upgrade the cooler beyond the factory spec. Or something.
Microprocessors are tested using commercially valuable tools, not specialized tools that create massive hotspots (Furmark) or other tools intended solely to produce extreme temperatures (thermal viruses). Intel, AMD, and NVIDIA all get their TDP ratings from running the most intensive and USEFUL applications their chips are expected to ever run. On Intel/AMD CPUs that would be HPC stuff; on NVIDIA/AMD GPUs that would be GPGPU, games, and CAD.
Furmark is a specialized tool that causes hotspots in a small part of the GPU, and it's not the overall GPU temperature that's dangerous but the temperature in THAT specific spot, which can get pretty darned toasty. If you want to test how a GPU cooler performs, run a really intensive game or do some GPGPU work on it.
For example, you can easily push Intel CPUs well over their TDP if you write a program that executes specific instructions with precise timing to get the most out of each clock and stresses both the floating-point and integer units like crazy, but that's outside the CPU's intended scope and thus not a design flaw.
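To make that concrete, here's a crude sketch of the idea in C: keep the floating-point and integer units busy in the same loop. A real power virus is hand-tuned assembly with exact instruction scheduling and one pinned thread per core; this is only an illustration, and the constants are arbitrary.

/* Crude FP + integer stress sketch, not a real power virus.
 * Compile: cc -O2 -o burn burn.c && ./burn  (Ctrl-C to stop) */
#include <stdio.h>

int main(void) {
    volatile double f = 1.000001;    /* volatile keeps the loop from being optimized away */
    volatile unsigned long i = 12345;
    for (unsigned long n = 0; ; n++) {
        f = f * 1.000001 + 1e-6;     /* floating-point multiply-add */
        if (f > 2.0) f = 1.000001;   /* keep the value bounded */
        i = i * 2654435761u + n;     /* integer multiply-add (arbitrary hash constant) */
        if ((n & 0xFFFFFFFul) == 0)  /* occasional print so the work is observable */
            printf("f=%f i=%lu\n", f, i);
    }
}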
Intel had a thermal throttling solution in 2001 on the Pentium 4: http://www.youtube.com/watch?v=06MYYB9bl70
You don't think other chip designers have built in similar techniques over the past 10 years?
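You can watch it happen, too. Here's a small sketch for Linux that samples the package temperature and cpu0's current clock once a second; run it next to a stress load and you'll see the clock drop as the temperature climbs. Note that thermal_zone0 being the CPU sensor is an assumption, so check which zone is yours.

/* Throttling watcher: sample temperature and cpu0 clock once a second (Linux).
 * Compile: cc -o throttlewatch throttlewatch.c && ./throttlewatch */
#include <stdio.h>
#include <unistd.h>

static long read_long(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    long v = -1;
    if (fscanf(f, "%ld", &v) != 1) v = -1;
    fclose(f);
    return v;
}

int main(void) {
    for (;;) {
        /* sysfs reports millidegrees C and kHz respectively */
        long mdeg = read_long("/sys/class/thermal/thermal_zone0/temp");
        long khz  = read_long("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
        if (mdeg < 0 || khz < 0) { fprintf(stderr, "sysfs read failed\n"); return 1; }
        printf("temp: %ld.%03ld C  clock: %ld MHz\n",
               mdeg / 1000, mdeg % 1000, khz / 1000);
        sleep(1);
    }
}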