Fermi to be the hottest single chip ever

200W on a 40nm die is not the same as 190W on a bigger die; this is common sense.
 
I would guess that if NV could hit 200W, or even close to it, under load, they would be very happy. The TDP is going to be way higher than 200W.

Here is a hint....
 
There has been a lot of complaining about heat lately and I just don't understand it. There are lots of cards out there if you want to build an HTPC or a laptop. But for an SLI setup? I mean, we are way beyond the linear portion of the price/performance curve. At 190W vs. 240W (guessing?) you've got only a 100W difference between the two-card setups. Even at 190W vs. 300W you are looking at a 220W difference, and going from a quality 750-850W PSU to a quality 1kW PSU would be under $100. And if Fermi is in the 30% faster range, you're looking at an over-50%-faster system.
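Just to sanity-check those deltas, here's a quick back-of-the-envelope sketch (the 190W/240W/300W per-card figures are my guesses from above, not confirmed specs):

```python
# Back-of-the-envelope check of the SLI power deltas discussed above.
# The per-card wattages are guesses, not confirmed specs.
def sli_delta(card_a_watts, card_b_watts, cards=2):
    """Total load-power difference between two dual-card setups."""
    return (card_b_watts - card_a_watts) * cards

print(sli_delta(190, 240))  # 100W more for the hotter pair
print(sli_delta(190, 300))  # 220W more for the hotter pair
```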

Show me a 500W video card that is 5 times as fast, uses 6 times the power, and costs 6 times as much as the competitor's 100W video card, and I'll show you the card enthusiasts are going to buy. I'd venture that while it's a good "look at me" stat, most people interested in enthusiast-class systems are much more interested in more FPS. Really, there is nothing efficient about SLI or CrossFire systems. People do it because they want more graphics power.

However, if you really want to look at power, because some people really are concerned about that, then let's have a good long look at it. And this is something you might seriously think about adding into your review(s): joules per frame (or frames per joule, of course). If Fermi takes 240W and delivers 60 FPS while an ATI card is delivering 45 FPS at 190W, then the Fermi is actually more energy efficient!
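For anyone who wants to check that math, here's a tiny sketch using those purely hypothetical numbers (a watt is a joule per second, so watts divided by FPS gives joules per frame):

```python
# Energy-per-frame comparison using the hypothetical numbers from the post:
# 240W / 60 FPS for Fermi vs. 190W / 45 FPS for the ATI card.
def joules_per_frame(watts, fps):
    # 1 W = 1 J/s, so W / FPS = joules spent rendering each frame.
    return watts / fps

print(joules_per_frame(240, 60))  # 4.00 J/frame (hypothetical Fermi)
print(joules_per_frame(190, 45))  # ~4.22 J/frame (hypothetical ATI card)
```

Lower is better, so in that made-up scenario the hotter card would actually be the more efficient one.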

I don't have a crystal ball, nor am I under an NDA (I wish, but if I were I'd be forced to review it instead of play with it, although some say that is one and the same :p), so I don't know who wins in that department. But I do know that this is how we should be comparing energy numbers.
There are review sites out there that do performance-per-watt comparisons (techPowerUp is one). Personally, I'd rather have [H] stick to its normal testing strategies and only entertain other metrics if they have time left. Anyway, considering power consumption simply as a cost or environmental factor is only scratching the surface, I think. Really, power consumption and performance-per-watt comparisons show the quality of a part.

For instance, and I'll admit right here I'm a big fan of the 58xx series, I think the 5870 is a fantastic part. Here you have a single-GPU part with the same performance as the previous generation's fastest dual-GPU card (GTX 295) that consumes only 60-65% of its load power and ~30% of its idle power. Quite amazing in my eyes. Now fast forward to your comparison of Fermi vs. the 5870, with the same (very generous) numbers. What happens if you give the 5870 the extra 50W of power that the Fermi is using? I know my 5870 @ 1.0GHz doesn't consume 240W. I'm not at home right now so I can't get numbers, but say you have a 1GHz 5870 (a ~17.6% increase in clock speed, let's say it nets a 15% performance boost) that consumes 220W. Now you've closed that gap and given the consumer a choice. How? Through efficiency. Efficiency yields flexibility. The reason Fermi is projected to run so hot is because NVIDIA had to crank its clocks and power consumption to yield an acceptable performance delta. Well, any end user can do the same with Afterburner thanks to the recent addition of software voltage adjustments on mid-range and high-end cards. Now if you can at that time buy a 5870 for $300 while Fermi costs $400, guess what's going to be bought?
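To make the hand-waving explicit, here's that scenario as rough numbers (every figure is a guess: stock 5870 at 190W, Fermi at 240W and +30%, a 1GHz 5870 at 220W and +15%):

```python
# Rough performance and performance-per-watt comparison for the scenario
# above. Every figure here is a guess for illustration, not a measurement.
cards = {
    "5870 stock (guess)":  (1.00, 190),  # (relative performance, watts)
    "Fermi (guess)":       (1.30, 240),
    "5870 @ 1GHz (guess)": (1.15, 220),
}
for name, (perf, watts) in cards.items():
    print(f"{name}: perf {perf:.2f}, {perf / watts * 100:.2f} perf per 100W")
```

Under those guesses the overclocked 5870 narrows the raw-performance gap while staying under Fermi's guessed power draw, which is the flexibility I'm talking about.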

Just put all-copper heatsinks on every GPU, problem solved.
Cost + Weight = You still have a problem
 
I used to have the G80 8800GTS 320MB and that card ran so hot. It would idle at 80C and get over 100C under load. Although it never crashed due to the heat, it raised the temperature of everything in my case, such as the CPU, HDD, and overall case temps. I will never buy a card that runs this hot again.
 
There are review sites out there that do performance-per-watt comparisons (techPowerUp is one). Personally, I'd rather have [H] stick to its normal testing strategies and only entertain other metrics if they have time left. Anyway, considering power consumption simply as a cost or environmental factor is only scratching the surface, I think. Really, power consumption and performance-per-watt comparisons show the quality of a part.

For instance, and I'll admit right here I'm a big fan of the 58xx series, I think the 5870 is a fantastic part. Here you have a single-GPU part with the same performance as the previous generation's fastest dual-GPU card (GTX 295) that consumes only 60-65% of its load power and ~30% of its idle power. Quite amazing in my eyes. Now fast forward to your comparison of Fermi vs. the 5870, with the same (very generous) numbers. What happens if you give the 5870 the extra 50W of power that the Fermi is using? I know my 5870 @ 1.0GHz doesn't consume 240W. I'm not at home right now so I can't get numbers, but say you have a 1GHz 5870 (a ~17.6% increase in clock speed, let's say it nets a 15% performance boost) that consumes 220W. Now you've closed that gap and given the consumer a choice. How? Through efficiency. Efficiency yields flexibility. The reason Fermi is projected to run so hot is because NVIDIA had to crank its clocks and power consumption to yield an acceptable performance delta. Well, any end user can do the same with Afterburner thanks to the recent addition of software voltage adjustments on mid-range and high-end cards. Now if you can at that time buy a 5870 for $300 while Fermi costs $400, guess what's going to be bought?

The reason Fermi uses more power is because of the difference in number of transistors/die size.
 
The reason Fermi uses more power is because of the difference in number of transistors/die size.

:rolleyes:

Fermi running hot has more to do with the necessary voltage required to run at the desired clock speed.
 
:rolleyes:

Fermi running hot has more to do with the necessary voltage required to run at the desired clock speed.

Which is driven by the total resistance in the circuit, which is in turn driven by the number of transistors, which at a given process size is determined by the die size. Do continue, though.
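For what it's worth, both effects show up in the standard first-order CMOS dynamic power relation, P ≈ α·C·V²·f: total switched capacitance scales with transistor count/die size, but voltage enters squared. A toy sketch (all inputs made up, and leakage, which is a big deal on TSMC's 40nm, is ignored):

```python
# First-order CMOS dynamic power model: P ~ alpha * C * V^2 * f.
# Capacitance C grows with transistor count / die size; voltage is squared,
# so clocks that need extra voltage are disproportionately expensive.
# All values are illustrative, and leakage power is ignored entirely.
def dynamic_power(alpha, cap_farads, volts, freq_hz):
    return alpha * cap_farads * volts ** 2 * freq_hz

base = dynamic_power(0.2, 1e-9, 1.00, 700e6)
hot  = dynamic_power(0.2, 1e-9, 1.10, 770e6)  # +10% voltage for +10% clock
print(hot / base)  # ~1.33x the dynamic power for a 10% clock bump
```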
 
Which is driven by the total resistance in the circuit, which is in turn driven by the number of transistors, which at a given process size is determined by the die size. Do continue, though.
Because the design is inefficient, and hence they need more voltage and power. You could continue this though.
 
Which is driven by the total resistance in the circuit, which is in turn driven by the number of transistors, which at a given process size is determined by the die size. Do continue, though.

Nvidia could make their standard-bearer's TDP equal to that of Cypress, but it would undoubtedly have an undesirable level of performance (for Nvidia, since it would be much closer to the 5870's) and it would still cost a great deal more to make than the 5870. They would have to sell Fermi-based GPUs and boards for much less than they would want.

The Tesla C2070 has a TDP of 225W with only 448 cores and likely a lower clock speed than the GeForce equivalent. It's likely that the GeForce Fermi's TDP (and actual consumption) will be much higher than that.
 
Because the design is inefficient, and hence they need more voltage and power. You could continue this though.
It's inefficient? You have magic performance numbers and a real TDP then? If it is 30% faster than a 5870 with a 190W TDP, then a 247W TDP is exactly the same efficiency. Please continue on how you have performance numbers and an actual TDP to make your claims.

Nvidia could make their standard-bearer's TDP equal to that of Cypress, but it would undoubtedly have an undesirable level of performance (for Nvidia, since it would be much closer to the 5870's) and it would still cost a great deal more to make than the 5870. They would have to sell Fermi-based GPUs and boards for much less than they would want.

The Tesla C2070 has a TDP of 225W with only 448 cores and likely a lower clock speed than the GeForce equivalent. It's likely that the GeForce Fermi's TDP (and actual consumption) will be much higher than that.
Based on the core-count difference, and assuming ECC memory accounts for the clock differences, you'll end up with a TDP of 257W. That is of course assuming the "safety factor" on TDP between a Tesla card and a GeForce is the same.
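The arithmetic behind that 257W, spelled out (it leans on a pile of assumptions: linear scaling with core count, ECC and clock differences cancelling out, and the same TDP safety margin on Tesla and GeForce parts):

```python
# Scaling the Tesla C2070's 225W TDP from 448 cores to a presumed 512-core
# GeForce part. Assumes power scales linearly with core count, that ECC and
# clock differences roughly cancel, and that the TDP "safety factor" is the
# same for Tesla and GeForce boards -- none of which is confirmed.
tesla_tdp_watts = 225
geforce_tdp_estimate = tesla_tdp_watts * 512 / 448
print(round(geforce_tdp_estimate))  # ~257W
```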
 
It's inefficient? You have magic performance numbers and a real TDP then? If it is 30% faster than a 5870 with a 190W TDP, then a 247W TDP is exactly the same efficiency. Please continue on how you have performance numbers and an actual TDP to make your claims.
If you want to measure the technology against something that's over four months older, you go right ahead. I thought companies were supposed to be progressive. Wotan explained it best, but it seems like they will be "overclocking" the cards to get a desirable performance delta. With every piece of silicon, you have a point where you get diminishing returns in performance for the power added, and they may be going past this point, given the ~250W TDP being thrown around.
 
If you want to measure the technology against something that's over four months older, you go right ahead. I thought companies were supposed to be progressive. Wotan explained it best, but it seems like they will be "overclocking" the cards to get a desirable performance delta. With every piece of silicon, you have a point where you get diminishing returns in performance for the power added, and they may be going past this point, given the ~250W TDP being thrown around.

You're right, we should magically expect more out of a card when it is late to market. In those four months they are going to redesign TSMC's 40nm process in less time than it took TSMC to design it in the first place, so it is not a leaky POS! :rolleyes:

We'll ignore the fact that it is late but appears to be significantly faster, and will instead focus on energy efficiency as the primary metric for judging a video card's performance. There's good news! The Nvidia ION is now the best high-end graphics card because of its amazing power efficiency!
 
As long as my case can dissipate the heat and my PSU can handle the load, I don't really care about power/heat. They make interesting conversation pieces and might, just maybe, sway me as a tiebreaker all else being equal, but that is about it really.

Not that it matters to me how good or bad NV cards are at this point. Until they stop their block-and-hinder practices, the 9800GTX+ will be the last product of theirs I ever own anyway.
 
If and when I get this card, I'll probably end up water cooling it. Unfortunately, it will probably take a couple of months for a GPU block to come out. :eek: We'll just have to see where the prices fall, if I could spend $50-75 more for another card...
 
It's quite simple. As long as it isn't turning my PC into a flamethrower and it outperforms the competition, if I can cough up the Benjamins I will...

If AMD/ATI can put out a revision around the same time that outperforms the Fermi, I don't care if I can cook an egg on it; as long as it works, I will once again cough up the Benjamins for it...

It's damn well past my upgrade time (I put my car audio first), and whoever has the BEST-performing product in the majority of games tested is going to receive my Benjamins... I didn't grab a well-rated 1000W power supply 2 years ago to be concerned over such petty issues...

And I am far from a top-end enthusiast (if only I could find a way to make Benjamins multiply faster). >:)
 
You're right, we should magically expect more out of a card when it is late to market. In those four months they are going to redesign TSMC's 40nm process in less time than it took TSMC to design it in the first place, so it is not a leaky POS! :rolleyes:

We'll ignore the fact that it is late but appears to be significantly faster, and will instead focus on energy efficiency as the primary metric for judging a video card's performance. There's good news! The Nvidia ION is now the best high-end graphics card because of its amazing power efficiency!
Hey, you wanted to compare power efficiency; now that I did, don't get your panties in a knot.
 
You're right, we should magically expect more out of a card when it is late to market...

Of course we expect more out of cards that arrive on the market later. Generally, consumers don't care whether or not the GPU has been under development for 6 months or 6 years. Consumers don't want excuses, even if those excuses are valid. Consumers only desire products that are worth buying, and if GF100 comes to market after RV870, then it has to perform better to be worth buying.

However, this is ignoring brand loyalty and marketing - I'm assuming consumers are well informed about the card's performance and impartial, which is not always the case.
 
Hey, you wanted to compare power efficiency; now that I did, don't get your panties in a knot.

I didn't prooving someone wrong and effective use of sarcasim constitued me getting my panties in a knot. Guess I learned something new today. :rolleyes:
 
Hmm, well, for me it's not a matter of 250W at x performance vs. 200W at y performance vs. 150W at z performance. I will not do more than 200W on a single card (and I will only have a single card), and it better run games pretty well with 2 to 4x AA. My whole computer shall never require more than a 500W power supply. Ever.

We are at a point where everything is actually becoming more and more efficient from incandescent bulbs drawing 60-100W to the energy-efficient 8-20W kind, etc. The same should occur for computer hardware, but given that computer hardware's performance is increasing at a good pace, I'm willing to settle for a 200W limit (even though before, that would be unthinkable).

So, bring it on nV: if you have a 200W card, I will be comparing it to my current 8800 GTX and to ATI's HD5870. If you don't, then I'll be looking as many tiers lower as it takes to find a worthy upgrade to my current card.
 
Of course we expect more out of cards that arrive on the market later. Generally, consumers don't care whether or not the GPU has been under development for 6 months or 6 years. Consumers don't want excuses, even if those excuses are valid. Consumers only desire products that are worth buying, and if GF100 comes to market after RV870, then it has to perform better to be worth buying.

However, this is ignoring brand loyalty and marketing - I'm assuming consumers are well informed about the card's performance and impartial, which is not always the case.
Most people buying graphics cards are not well informed. Most consumers don't know that the GTX 280/285 has been out for ~18 months now and that the 5870 was "just" released.

Of the informed masses, most still don't know when it was released; they'll simply read a few benchmarks to see which card is faster. It's the reason the GTS 250 is still selling well against the 5750. Yes, the 5750 has better power consumption and is DX11 capable, but the GTS 250 is head to head in performance and can often be found for less money. A quick check of Newegg shows the GTS 250 for $109 w/o AR, a GTS 250 OC for $105 AR, the 5750 for $130 w/o AR, and the 5750 for $120 AR. That's a $20 difference w/o AR, or $15 w/ AR, on a ~$100 purchase.
 
Hmm, well, for me it's not a matter of 250W at x performance vs. 200W at y performance vs. 150W at z performance. I will not do more than 200W on a single card (and I will only have a single card), and it better run games pretty well with 2 to 4x AA. My whole computer shall never require more than a 500W power supply. Ever.

We are at a point where everything is actually becoming more and more efficient from incandescent bulbs drawing 60-100W to the energy-efficient 8-20W kind, etc. The same should occur for computer hardware, but given that computer hardware's performance is increasing at a good pace, I'm willing to settle for a 200W limit (even though before, that would be unthinkable).

So, bring it on nV: if you have a 200W card, I will be comparing it to my current 8800 GTX and to ATI's HD5870. If you don't, then I'll be looking as many tiers lower as it takes to find a worthy upgrade to my current card.
Not trying to be a dick, so please don't take this that way. But if your comp is never going to require more than a 500W PSU, why does your sig say a 620W PSU?
 
I didn't prooving someone wrong and effective use of sarcasim constitued me getting my panties in a knot. Guess I learned something new today. :rolleyes:
"I didn't prooving [sic] someone wrong?" Yah, you really proved that one.
It's the idle power consumption that I worry more about since my PC is on 24/7 but I'm not gaming 24/7.
That will be interesting to see as well. AMD's 5xxx series set the bar quite high.
 
The only thing you can come up with is that I double-tapped a letter? That's not funny, that's just plain sad.
You also forgot proper punctuation after "didn't" and you misspelled sarcasm as well (which is sad, considering most browsers have built-in spell checkers). Considering your name is "vengence," I shouldn't be surprised. Either way, the forums aren't your place to have a temper tantrum. Go do another activity to blow off some steam, then come back to a discussion about video cards, as I won't be responding to any more of your childish banter.
 
So Fermi haters here think that a 40nm GPU at 200W will run as hot as a 90nm one at 200W?

lol.
 
You also forgot proper punctuation after "didn't" and you misspelled sarcasm as well (which is sad, considering most browsers have built-in spell checkers). Considering your name is "vengence," I shouldn't be surprised. Either way, the forums aren't your place to have a temper tantrum. Go do another activity to blow off some steam, then come back to a discussion about video cards, as I won't be responding to any more of your childish banter.

You've managed to prove I don't care enough about your comments to bother proofreading or spell checking. Congrats! You've proved I stopped caring about anything you had to say a long time ago.

This sub-forum is the place to discuss graphics cards, something I've been doing despite your presence in the thread. You came into this thread and jumped into the middle of a conversation. You attempted to make a point, it was shown to be invalid, and you've spent the last several posts making ad hominem attacks while accusing me of "childish banter" and telling me to "blow off some steam." It's a shame you spoiled a decent thread with your trolling.
 
Do toilets really flush the opposite way in the southern hemisphere? Would that mean, unlike threads in the northern hemisphere, threads in the southern hemisphere would start out terrible, then get better?

And what does this have to do with the price of tea in China?
 
So Fermi haters here think that a 40nm GPU at 200W will run as hot as a 90nm one at 200W?

lol.
What's "lol" about this is that they'll both produce the exact same amount of heat. Basic thermodynamics.
 
Yo Fermi, I'm really happy for you and I'mma let you get released, but the Geforce FX 5800 Ultra was the hottest gpu of all time, of all time!
 
If you're worried about a leak, there are non-corrosive, non-conductive fluids you can buy to run in your system. I think even pure H2O would work if I remember correctly, but check first.
 
If you're worried about a leak, there are non-corrosive, non-conductive fluids you can buy to run in your system. I think even pure H2O would work if I remember correctly, but check first.
Only if you can keep it pure during a spill, which you realistically can't, unless the inside of your case is a cleanroom. ;)

-Will
 
Not trying to be a dick, so please don't take this that way. But if your comp is never going to require more than a 500W PSU, why does your sig say a 620W PSU?

They're essentially the same. I was actually aiming for a 520W, and then NCIX had a 620W on sale for cheap, so I got this one. That doesn't mean that I'm going to push it to its limit. I think my system is 300W max. In fact, I don't think it will even get to 300W (honestly).
 
They're essentially the same. I was actually aiming for a 520W, and then NCIX had a 620W on sale for cheap, so I got this one. That doesn't mean that I'm going to push it to its limit. I think my system is 300W max. In fact, I don't think it will even get to 300W (honestly).

Fair enough. I've got a Corsair in my system and have never had a problem with it. I think I'm only pulling 350W at full load. Do you have an OC on the 920 or is it stock?
 