LOL@Evga response to 8800GT fan issue

Get it through your thick skull, they are designed to run like this.

That's what warranties are for.

Your responses are OTT and unhelpful.

You are the one with a thick skull. If you knew anything about silicon, you'd know it is prone to problems with heat: for every 10C increase in temperature, the life of a silicon circuit approximately halves.

Your second response is equally unhelpful, as the poster you replied to was also concerned that the manufacturers might have dropped the ball and might be getting a lot of returns from premature silicon aging.

They posed good questions; if you need to answer like a child, keep it to yourself.
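For what it's worth, the "10C halves the life" rule of thumb being argued over here can be sketched in a few lines. The baseline lifetime and temperature below are made-up placeholders for illustration; as others point out later in the thread, nobody here actually knows the real figures for this silicon.

```python
def lifetime_years(temp_c, baseline_years=10.0, baseline_temp_c=60.0):
    """Rule of thumb: lifetime halves for every 10C rise above a baseline.

    baseline_years and baseline_temp_c are hypothetical numbers for
    illustration only -- the real values depend on the process, which
    nobody in this thread knows.
    """
    return baseline_years * 0.5 ** ((temp_c - baseline_temp_c) / 10.0)

# With a (hypothetical) 10-year life at 60C:
#   lifetime_years(70)  -> 5.0 years
#   lifetime_years(100) -> 0.625 years
```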
 
It looks like someone failed to read the rest of the thread.

Thanks, your response is the best of all.
 
I think I'm done here, it's like talking to a brick wall.

Good we're tired of hearing you talk.

How many people put the stock cooler on their CPU? It's the same difference. If you can do something to get cooler temps then why wouldn't you?

Warranty or not, the point is I'd rather manage my fan speed than have to send back a melted card and be without a computer. Get that through your thick skull :)
 
And I quote:

No, you're clearly trying to stir shit up.

Everything he said is spot on. The thermal profile on the 8800GT doesn't have the fan ramping up until the GPU temp goes past 100C, and that's completely normal for this card.

If you've got a card that is artifacting with the stock fan configuration then it's DEFECTIVE and you should seek out an RMA. Regardless of whether the artifacting stops when you increase the fan speed, or if it's artifacting all the time - it's DEFECTIVE. It's not a design flaw as a few shit-stirrers would like to make you believe... it's just a bad card, and you had the same chance of getting a bad 8800GTS or 8800GTX or any card before it.

If you won't listen to an EVGA rep of all people, I don't know what to tell you.
 
And I quote:

If you won't listen to an EVGA rep of all people, I don't know what to tell you.

Don't tell us anything, because what you spew isn't worth much.
To repeat a point that has been made many times, we don't want your point of view, as it sucks.
Find a forum where people are more the age you act.
 
Don't tell us anything, because what you spew isn't worth much.
To repeat a point that has been made many times, we don't want your point of view, as it sucks.
Find a forum where people are more the age you act.

Read the rest of the thread, and just *maybe* you'll get my point.
 
Read the rest of the thread, and just *maybe* you'll get my point.

So your point was to dive into the thread with your first post calling someone thick for asking a legitimate question. You gave an OTT answer that was not helpful in any way.

Hmm, it seems that you aren't a lot of use here, so why post at all?
 
How about you guys take this to PMs and quit thread crapping all over the place?
 
So your point was to dive into the thread with your first post calling someone thick for asking a legitimate question. You gave an OTT answer that was not helpful in any way.

Hmm, it seems that you aren't a lot of use here, so why post at all?

I don't see what basis this has whatsoever, not to mention there is no credible backing behind the theory:

Digital Viper -X- said:
well I remember that for every 10 degrees C a chip goes up, it halves its life, so at 0C it's 1000 years, at 10C 500 years, 20C 250, 30C 125, 40C 67.5, 50C 38.25, 60C 19.125, 70C 9.5125 years, 80C 4.xx, 90C 2.xx, 100C 1.xx!

Not sure what the initial life of the silicon is >_> but you get the idea: 100 deg is bad, not to mention what that's going to do to your RAM and CPU sitting directly above the card, where heat loves to rise.

I, on the other hand, would love one. It would warm up my room nicely

You can accuse me of calling someone thick, that's fine and dandy, go ahead, but you can't possibly see credibility in *that*.

And if you have something else to say, PM me.
 
And I quote:

If you won't listen to an EVGA rep of all people, I don't know what to tell you.



ROFL, I am most certainly NOT an EVGA representative, and given the amount of uninformed kiddies playing on daddy's PC these days, I definitely never want to be one either. :p


I find myself returning to this thread, popcorn and soda in hand... it's especially entertaining to see people re-use the 'for every nth degree the component lifespan decreases by half' argument and then go on to admit in the VERY SAME GODDAMNED BREATH that they actually have no baseline to make any sort of comparison. (But that's not going to stop them from desperately trying anyway.)

Welcome to the intarweb, folks! And there is NO NEED for name-calling... conduct yourselves like adults, please!!!


Anyway like I said. If you bought an 8800GT and it's artifacting at stock speeds with the stock thermal profile driving the fan then it's a DEFECTIVE CARD and you should seek out an RMA from the board partner who branded your particular card. It makes no difference if the artifacts vanish as you increase the fan speed - that's not indicative of a thermal defect in the design of the card - it simply means YOUR card is defective and said defect is revealed through thermal changes. It means you and one or two of your intarweb buddies are UNLUCKY. Trying to stir up conspiracy theories because you're pissed off that you got a bad card isn't going to accomplish anything.
 
Sorry if that was confusing, Blue Falcon, but I was referring to the EVGA rep in the initial post.
 
If I'm understanding correctly, Emission and Blue Falcon are very much on the same page here.

Which, incidentally, happens to be the page I'm on.
 
...8800GT would have been much better off with dual slot cooler, why Nvidia did not implement it is a mystery to me...

I don't think it's that mysterious.

My opinion is that Nvidia felt that a single-slot cooler looks better (which it does), can perform at least some level of cooling, and that the overall impression left on potential customers and reviewers is positive. Let's face it, the 8800GT looks slick with the stock cooler. Who can argue with a good-performing card that takes up very little space?

All this in turn leads to more sales (which has been borne out, considering their availability). Do Nvidia and the manufacturers care about more sales, or do they worry about long-term quality when the shelf life of these cards is only a short time? Hmmm...
 
You are the one with a thick skull. If you knew anything about silicon, you'd know it is prone to problems with heat: for every 10C increase in temperature, the life of a silicon circuit approximately halves.
The question is whether or not that's actually a problem. I'd guess that NVIDIA figures chip lifetime based on the percentage of time per hour that it's actually loaded. Maybe it's 10%, 20%, maybe 30%, so it's very likely that they're assuming the actual core is running at full tilt less than half the time, and that they have a fairly firm idea of the chip's lifetime in typical usage at typical temperatures. Even at 100C 24/7, how long would the chip last? What component would inevitably fail first? Do you know?

We don't know. I can't make any estimations as to what the lifetime is of the 8800 GT under any conditions, and nor can you. You're acting as if the shortened card lifetime is an actual problem, when it is, at this time, nothing more than a potential problem.

If EVGA says 100C is fine, then it's fine at 100C. If you don't want it at 100C, then turn the fan up. This costs nothing more than a few minutes of time and a bit of extra noise. Not a big deal. I've done the same damn thing with every card I've had in the past four years or so.
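The duty-cycle point above can be made concrete with a toy calculation. Every number here (load fraction, temperatures, reference point) is hypothetical; the only takeaway is that hitting 100C for a fraction of the day ages a part far less than 100C around the clock, under the same halving rule being debated in this thread.

```python
def relative_aging_rate(load_fraction, load_temp_c, idle_temp_c, ref_temp_c=60.0):
    """Aging rate relative to running 24/7 at ref_temp_c, assuming the
    '10C rise doubles the aging rate' rule of thumb. Purely illustrative;
    all inputs are made-up example values."""
    rate = lambda t: 2.0 ** ((t - ref_temp_c) / 10.0)
    return load_fraction * rate(load_temp_c) + (1.0 - load_fraction) * rate(idle_temp_c)

# Gaming 20% of the time at 100C, idling at 60C the rest:
#   0.2 * 16 + 0.8 * 1 = 4x the reference aging rate,
# versus 16x for a card pegged at 100C around the clock.
```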
 
If the card is running at 100c and that is "normal" for this card, then that's fine for the card. If your system is having problems at that temp, it's probably from all the 100c air that's blowing around inside your case, heating up EVERYTHING else. If you're gonna use an 8800GT, you need decent airflow in your case (if not water cooled, of course).

I'm going to get an 8800GT here in a week or so, and I plan to check my temps and RivaTuner the fans to higher speeds if necessary. I know most of my lockups and whatnot have been caused by overheating CPU issues in the past; my Zalman took care of most of that. I may have to look into additional cooling for the 8800GT if it gets too hot and the 100 percent fan setting is too bothersome.

Personally, the only time it's going to be getting hot at all is when I'm gaming, and then I'm usually wearing headphones so the fans don't bother me.
 
In defense of the guy trying to stir stuff up here: wasn't EVGA the one that said the overheating issues with the 7 series were the users' fault for not having cases with enough airflow? Bet you they made a lot of fans back then (pun)!
 
Your responses are OTT and unhelpful.

You are the one with a thick skull. If you knew anything about silicon, you'd know it is prone to problems with heat: for every 10C increase in temperature, the life of a silicon circuit approximately halves.

Your second response is equally unhelpful, as the poster you replied to was also concerned that the manufacturers might have dropped the ball and might be getting a lot of returns from premature silicon aging.

They posed good questions; if you need to answer like a child, keep it to yourself.

Hi Nenu... I'm a BSEE working as a product test engineer with Analog Devices, Inc. You may or may not have heard of it; that is irrelevant.

I do, however, know quite a bit about silicon manufacturing processes and the HTOL testing we do here on our various SiGe, BiCMOS and CMOS material. Could you please provide me with a link to where you found this tidbit about a 10C rise in temp halving the life of silicon today?

Also, can you tell me what process the nVidia part is built on at TSMC (I assume), since you seem to know so much and have taken such a strong position on this topic?

I can tell you for a fact that there are MANY silicon processes that will last 50+ years at temps up to 150C.

So please, enlighten me.
 
I'll put in my experience. As soon as I got the 8800GT SC I fired up the Crysis demo and played for an hour at 92C load constant (stock speed). No problems at all.

Then I stuck on my Accelero S1 and got 52C load, and the card runs the same with same overclocks.

imo the stock cooler is fine and all you have to do is turn up the speed if you want to.

The eVGA rep is just saying to chill, but 80+C worries me :p
 
Hi Nenu... I'm a BSEE working as a product test engineer with Analog Devices, Inc. You may or may not have heard of it; that is irrelevant.

I do, however, know quite a bit about silicon manufacturing processes and the HTOL testing we do here on our various SiGe, BiCMOS and CMOS material. Could you please provide me with a link to where you found this tidbit about a 10C rise in temp halving the life of silicon today?

Also, can you tell me what process the nVidia part is built on at TSMC (I assume), since you seem to know so much and have taken such a strong position on this topic?

I can tell you for a fact that there are MANY silicon processes that will last 50+ years at temps up to 150C.

So please, enlighten me.

Thanks for your input. :) Maybe everyone will stop running around like headless chickens with that in mind.
 
So your point was to dive into the thread with your first post calling someone thick for asking a legitimate question. You gave an OTT answer that was not helpful in any way.

Hmm, it seems that you aren't a lot of use here, so why post at all?

I rarely break the rules here at the [H]ardForum, but Nenu, you surely are one dumb fucking idiot.

Seems to be that Emission is the only guy spreading common sense and truth, the rest that follow nenu nanu nene, what the fuck ever, can stick to their opinion but with their hands OFF THE KEYBOARD!

I apologize to Kyle and forum moderators.
 
Hi Nenu... I'm a BSEE working as a product test engineer with Analog Devices, Inc. You may or may not have heard of it; that is irrelevant.

I do, however, know quite a bit about silicon manufacturing processes and the HTOL testing we do here on our various SiGe, BiCMOS and CMOS material. Could you please provide me with a link to where you found this tidbit about a 10C rise in temp halving the life of silicon today?

Also, can you tell me what process the nVidia part is built on at TSMC (I assume), since you seem to know so much and have taken such a strong position on this topic?

I can tell you for a fact that there are MANY silicon processes that will last 50+ years at temps up to 150C.

So please, enlighten me.

Oh great, you've just killed the thread! I was looking forward to more of this; I had popcorn and all ;)
 
My 8800GT loads at 87ºC in Crysis on all high at 1680x1050. I can't make the thing get hotter. The fan automatically increases up to about 45% speed on its own. I haven't had to tweak it at all. :)
 
Oh great, you've just killed the thread! I was looking forward to more of this; I had popcorn and all ;)

Sorry mate... lol. I guess I only have so much tolerance for people spewing bullshit. It just grates on me when people talk about things, like temp effects on modern silicon processes, in a way that makes them appear as if they know a great deal, and think they can pull it off because 98% of the general populace knows next to nothing about how wafers are actually made.

So here we get an example of Joe Jackass spouting off numbers and "facts" that appear believable to the unknowing, but to me it was just a giant red flag screaming "I don't know what the fuck I'm talking about - but I can fake it".

I wouldn't sweat hot temps on these dies, folks; we don't even know the silicon process, the wire bonds, or the package type and material (where the actual die sits), all of which factor into any temperature limits. 100C is hot, sure, but nVidia didn't manufacture these things to run with your finger on them :rolleyes:

Some people might be (overly) concerned with how much heat is dumped into their case b/c of the hot die, but unless you're into OC'ing, don't worry about the cards burning out.
 
I agree. My 8800GT ran for quite a bit of time, loaded down hard, at 107c-108c... never whimpered. I had forgotten that I had the fan speed accidentally locked at 29% via RivaTuner :D




No, you're clearly trying to stir shit up.

Everything he said is spot on. The thermal profile on the 8800GT doesn't have the fan ramping up until the GPU temp goes past 100C, and that's completely normal for this card.

If you've got a card that is artifacting with the stock fan configuration then it's DEFECTIVE and you should seek out an RMA. Regardless of whether the artifacting stops when you increase the fan speed, or if it's artifacting all the time - it's DEFECTIVE. It's not a design flaw as a few shit-stirrers would like to make you believe... it's just a bad card, and you had the same chance of getting a bad 8800GTS or 8800GTX or any card before it.
 
Keep in mind, even though it's at 100C, that is no measure of the amount of heat that goes into your case. Since it only consumes a maximum of 110W at 100% 3D load, it cannot dump more than 110W of heat into your case - actually several tens of watts less, because #1, not all of that energy is passed on as heat (don't quote me on it, I'm not a microprocessor engineer, just going by info I've come across), and #2, the cooler doesn't dissipate all of that heat (which is why your GPU is running at 100C).

You shouldn't have to worry either way.
 
Note the response mentions only stability; life is not mentioned. Frankly, as an EE and former designer, I am horrified. The "value-added" overclocked cards with stock cooling are a particular concern. I am very pleased that the majority of the thread comments indicate good common sense.

My issue would be that the GPU is not the only component on the board. So if the GPU is at 95C and the fan is at 29% speed, what temp are the MOSFETs running at? Yes, MOSFETs are typically rated for 120C, but we have no way of learning their temp without direct measurements (and they still live longer if cooler). At what temp are the caps cooking? What is the temp rating of the caps on the board? 80C is standard, 105C for very high quality. What is the lifetime/operating-temp derating curve for the components on the board? With the low fan speed, at what temp does the heatsink start conducting/radiating heat INTO the other components due to thermal saturation?

Unless I had a lifetime warranty on the card (and even then, who wants to go through the hassle of an RMA), I would be running the fan to keep it under 60C environmental/ambient under that metal shroud (not GPU core temp, though that would be lowered as well) to ensure long life of ALL the components. Heat kills electronics, end of story. I did not realize it was so difficult to ramp fan speed up based on a temp curve... oh wait, it's not.

sluzbenik +1

So run it "stock" and return it if it artifacts if you want. I would rather keep my stuff as cool as is reasonable so I do not have to deal with failures caused by design or cost compromises and avoid the hassle.

Just my opinion and it is worth what you paid for it.
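For anyone who wants to do what's described above, the RivaTuner-style approach boils down to a piecewise-linear temperature-to-fan-speed curve. The breakpoints below are arbitrary examples, not the card's stock profile or any recommended setting:

```python
def fan_duty(temp_c, curve=((50, 29), (70, 45), (85, 60), (95, 100))):
    """Map GPU temp (C) to fan duty (%) by linear interpolation over
    (temp, duty) breakpoints. The default curve is a made-up example."""
    if temp_c <= curve[0][0]:
        return curve[0][1]  # below the first breakpoint, hold minimum duty
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]  # clamp at max duty above the last breakpoint
```

With that example curve, a 60C reading lands halfway up the first segment (37% duty), and anything past 95C pins the fan at 100%.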

Exactly. Glad to see someone write something intelligent about this topic for once. Not all of us swap our GPUs every three months - at that rate, who cares how hot it runs; you'll be on your next card before you kill the current one. We'd rather ensure it lasts the year or two (or more) we plan to keep ours.
I don't really care if a processor core in an isolated, idealized state has been tested to keep functioning at 100C; I'm more concerned with the entire product as a whole dealing with extreme temperatures and stresses. If all that mattered was that the core could stand 100C all day for years on end, they wouldn't have bothered putting any active cooling on it to begin with.
But the fact that so many of these factory-overclocked models that still use the reference cooling are experiencing heat-related artifacting and lockups really just proves the point that it's the product as a whole that isn't designed to take such heat levels.
So telling everyone there's nothing to worry about when their cards are flaking out is just plain dumb. The fact that the reference cooling design can't even handle a modest overclock from the factory speaks volumes about the importance of keeping these devices running cool, and highlights the fallacy of discussing only core temp tolerances.
The core could be good for 1,000C and it wouldn't matter one bit if other components on the board are failing by 100C - and obviously, on a progressive curve, before failure comes a range of decreased life expectancy.
 
Keep in mind, even though it's at 100C, that is no measure of the amount of heat that goes into your case. Since it only consumes a maximum of 110W at 100% 3D load, it cannot dump more than 110W of heat into your case - actually several tens of watts less, because #1, not all of that energy is passed on as heat (don't quote me on it, I'm not a microprocessor engineer, just going by info I've come across), and #2, the cooler doesn't dissipate all of that heat (which is why your GPU is running at 100C).

You shouldn't have to worry either way.

#1 is right, #2 is not.

Conservation of energy shows that not all 110W of energy can be converted to heat. *Some* of it has to be used in doing the actual computational work. How much? I certainly don't know :p

As for #2... all 110W, or 100W, or however much heat is actually generated each second, is dissipated each second (at least once your temperature has stabilized). If it weren't, the temperature would continue to increase forever until your card caught fire or melted :D

The temperature delta is directly related to how much heat energy is transferred in some unit of time. So if you generate more heat yet keep the airflow the same (and the initial temperature of the cooling air the same), the chip will heat up until the difference between its temperature and the air's temperature allows it to transfer all the heat generated to the air.
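That steady-state relationship can be written down directly: the die settles at the temperature where its delta over ambient pushes all the generated heat through the cooler. The thermal-resistance figure below is invented purely to show the shape of the math, not measured from any real 8800GT:

```python
def steady_state_temp(power_w, ambient_c, r_th_c_per_w):
    """Steady-state die temp under a lumped thermal model: at equilibrium,
    all dissipated power flows out as heat, so delta-T = P * R_th.
    r_th_c_per_w is a made-up illustrative value, not a measured one."""
    return ambient_c + power_w * r_th_c_per_w

# e.g. ~110W through a hypothetical 0.6 C/W path from 34C case air:
#   34 + 110 * 0.6 = 100C.
```

Cranking the fan effectively lowers R_th, which lowers the die temperature; the total watts dumped into the case stay the same.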
 
While I don't think 100c will hurt the GPU in the short term, or even the long term, it's not the GPU that I worry about. Like some others have shown, the rest of the card - ICs, regulators, caps, etc. - may NOT be able to deal with those temps long term.

That being said, I am not anal about temps, but I don't WANT to run the thing baking hot. I use rivatuner to ramp fan speed up with temps, and it never breaks 85c and at that temp the fan speed is only 52%.

And no matter what gpu cooler you use, unless you vent the hot air out immediately, the case temps will be affected the same with the stock cooler or anything else....

Let's keep the discussion intelligent and mature, there are some good voices here if we will just LISTEN....
 
With my Antec 900 case, I have the 2 front fans in the two middle slots, each blowing over a hard drive that sits in the center slot of each bay. The top bay has only a DVD drive in the top slot. What I'm thinking of doing is swapping the bottom and the top bays and keeping the dvd drive with the hard drive in the same bay and using the top fan as a blow through fan. I'll have my intake fan pulling air straight into the Zalman cpu cooler which is directed to the fan at the back of the case. This flow through combined with the top fan, which pulls air out of the case at the top, and the intake fan in the middle bay should provide sufficient cooling to the CPU and the video card. I also have a window fan that I normally only use when at LANs since it gets hotter in a room full of PCs. I could plug that back in and provide even better cooling. I might turn that around and use it for pulling heat instead of pushing air though. I dunno. Any thoughts?
 
#1 is right, #2 is not.

Conservation of energy shows that not all 110W of energy can be converted to heat. *Some* of it has to be used in doing the actual computational work. How much? I certainly don't know :p

As for #2... all 110W, or 100W, or however much heat is actually generated each second, is dissipated each second (at least once your temperature has stabilized). If it weren't, the temperature would continue to increase forever until your card caught fire or melted :D

The temperature delta is directly related to how much heat energy is transferred in some unit of time. So if you generate more heat yet keep the airflow the same (and the initial temperature of the cooling air the same), the chip will heat up until the difference between its temperature and the air's temperature allows it to transfer all the heat generated to the air.

That's what I meant to say :eek: - of course the heat is dissipated, just not as much per unit time as would ideally be dissipated.

Exactly. Glad to see someone write something intelligent about this topic for once. Not all of us swap our GPUs every three months - at that rate, who cares how hot it runs; you'll be on your next card before you kill the current one. We'd rather ensure it lasts the year or two (or more) we plan to keep ours.
I don't really care if a processor core in an isolated, idealized state has been tested to keep functioning at 100C; I'm more concerned with the entire product as a whole dealing with extreme temperatures and stresses. If all that mattered was that the core could stand 100C all day for years on end, they wouldn't have bothered putting any active cooling on it to begin with.
But the fact that so many of these factory-overclocked models that still use the reference cooling are experiencing heat-related artifacting and lockups really just proves the point that it's the product as a whole that isn't designed to take such heat levels.
So telling everyone there's nothing to worry about when their cards are flaking out is just plain dumb. The fact that the reference cooling design can't even handle a modest overclock from the factory speaks volumes about the importance of keeping these devices running cool, and highlights the fallacy of discussing only core temp tolerances.
The core could be good for 1,000C and it wouldn't matter one bit if other components on the board are failing by 100C - and obviously, on a progressive curve, before failure comes a range of decreased life expectancy.

I find it hard to believe that the manufacturers don't take this into account.
 
another option:

http://www.xpcgear.com/be880gtt3.html

Reminds me of the way Galaxy did their coolers on a few cards. Threw on an aftermarket cooler like a Zalman and called it a day...

Palit also has a "Sonic" version, which appears to have the same cooling solution, but the card runs at 650MHz compared to 600 in the regular version. Which is, of course, something you can do yourself if price turns out to be an issue.
 
What I want to know is: if I have the GPU fan cranked up to control the card's temp, does that help keep my computer room cooler?

I have a Gateway 24" LCD, that card, a Mac Mini running through a 19" CRT, and a 6" green-phosphor Compaq Portable II. You can tell the difference between WoW and Hellgate by the temp in my room - but will the room temp stay lower if I keep the card temp lower? Or would the heat be the same?
 
#1 is right, #2 is not.

Conservation of energy shows that not all 110W of energy can be converted to heat. *Some* of it has to be used in doing the actual computational work. How much? I certainly don't know :p

I am a BSEE as well in aerospace electronics and almost finished with my MSEE.

Try again... all of that 110W is dissipated as heat. That is conservation of energy. Otherwise, if we could find a way to store power in the form of computations... you would be one rich bastard by now - trillions of dollars, I tell you. Whether or not the power does something of value is a non-factor.
 
I am a BSEE as well in aerospace electronics and almost finished with my MSEE.

Try again... all of that 110W is dissipated as heat. That is conservation of energy. Otherwise, if we could find a way to store power in the form of computations... you would be one rich bastard by now - trillions of dollars, I tell you. Whether or not the power does something of value is a non-factor.

Not all, but damn near all.

Some (small) portion of that energy is going into moving molecules around.
 
Not all, but damn near all.

Some (small) portion of that energy is going into moving molecules around.

I'd listen to the engineer on this one: all of that energy is released as heat, as a result of "moving molecules around".
 