Dual GF100 boards? (warning, it's from Fudzilla)

Hmmm. So just how much power does Fermi draw? Obviously they'd be drawing more than 300 watts total, since that's the maximum for a PCIe card: 75W from the bus, 75W from a 6-pin connector, and 150W from an 8-pin connector. Unless they go with two 8-pin connectors, then that's 375W. I can't see them using more than a 6-pin and an 8-pin. Two 8-pins would draw a LOT of attention.
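Just to lay the arithmetic out, here's a quick Python sketch (purely illustrative; the 75/75/150 numbers are the connector limits mentioned above):

```python
# Back-of-the-envelope PCIe power budget: 75W from the slot,
# 75W per 6-pin connector, 150W per 8-pin connector.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def max_board_power(six_pins=0, eight_pins=0):
    """Maximum in-spec draw for a card with the given aux connectors."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(max_board_power(six_pins=1, eight_pins=1))  # 300 (the usual 6+8 combo)
print(max_board_power(eight_pins=2))              # 375 (2x 8-pin)
```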
 
your title says it all:
"it's from fudzilla"

wait until an actual product launches and see what reality dictates; fud is full of it 99% of the time

and heatless, what would be the issue with 2x8? anyone buying gear at that level should be using a PSU of equally high quality, it's really no biggie
 
A dual GPU config would be crazy. I doubt it could be done with anything less than 2x 8-pin connectors. Cooling would be another issue: a dual-slot cooler would have to be LOUD to cool the thing. Maybe they'll go some non-traditional route? I could see them doing a special edition water-cooled card. That would give them plenty of cooling, and since the single GPU version is rumored to be $500+, putting a watercooling setup on the card and selling it for $1000 doesn't seem too outrageous.
 
your title says it all:
"it's from fudzilla"

wait until an actual product launches and see what reality dictates; fud is full of it 99% of the time

and heatless, what would be the issue with 2x8? anyone buying gear at that level should be using a PSU of equally high quality, it's really no biggie

That's just a lot of power. If the performance is there then it isn't a huge issue, but that's four 8-pin power connectors you'd need for 2 cards, a bit on the high side even with some higher end PSUs.
 
That's just a lot of power. If the performance is there then it isn't a huge issue, but that's four 8-pin power connectors you'd need for 2 cards, a bit on the high side even with some higher end PSUs.

I'm not saying it isn't a lot of power, but seriously, what would be the target audience for a dual GF100 board? "extreme power users", Alienware types; if you can afford the dual boards, the $300-$400 PSU is peanuts

the only people who would likely care are people with systems that can't support it, who need another reason to whine about why it's the devil
 
Why even post something from Fudzilla? They're totally full of shit with their reviews and sources.

But yeah, I think the card would eat PSUs for breakfast.
 
I'm not saying it isn't a lot of power, but seriously, what would be the target audience for a dual GF100 board? "extreme power users", Alienware types; if you can afford the dual boards, the $300-$400 PSU is peanuts
The thing is you wouldn't even need a $300-$400 PSU. People are running the 5970 off of 650W power supplies. Even if the power budget is twice what the GF100's is, something like the $140 Corsair TX950 would run it just fine. The bigger issue is cooling the thing, since even dual-slot coolers have their limits. I don't see how they could do it in a two-slot config with air cooling without it sounding like a jet engine.
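Rough headroom math behind that claim (the GPU, CPU, and "rest of system" wattages here are ballpark guesses, not measured numbers):

```python
# Illustrative PSU headroom estimate; GPU figure assumes a hypothetical
# dual-GF100 board at ~250W per GPU, CPU/other numbers are rough guesses.
gpu_w   = 2 * 250
cpu_w   = 125      # high-end quad-core ballpark
other_w = 100      # motherboard, drives, fans, etc.

total = gpu_w + cpu_w + other_w
psu_w = 950        # e.g. a Corsair TX950
print(f"estimated draw: {total}W, headroom: {psu_w - total}W")  # 725W, 225W spare
```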
 
Maybe it's time for them to move to watercooling as standard? :p

Imagine, a $400 dual-GPU Fermi card with a complete watercooling kit. (Drool)
 
I'm not saying it isn't a lot of power, but seriously, what would be the target audience for a dual GF100 board? "extreme power users", Alienware types; if you can afford the dual boards, the $300-$400 PSU is peanuts

the only people who would likely care are people with systems that can't support it, who need another reason to whine about why it's the devil

Sometimes the price debates around here do turn into class warfare. I'll always recommend a product on the grounds of price versus quality, but sometimes you just want the best you can afford. Some people can always afford the best; unfortunately that's not me, but I do spend a lot on my computers and gadgets. It's my hobby and my job and it's what I like. Fortunately I'm not into cars, houses or other expensive things, so it's my choice.

And anyone that buys three GF100s at launch, like I probably will, certainly knows how fast this stuff depreciates, but getting in first is fun!
 
The thing is you wouldn't even need a $300-$400 PSU. People are running the 5970 off of 650W power supplies. Even if the power budget is twice what the GF100's is, something like the $140 Corsair TX950 would run it just fine. The bigger issue is cooling the thing, since even dual-slot coolers have their limits. I don't see how they could do it in a two-slot config with air cooling without it sounding like a jet engine.

shows how much I've been looking at PSU pricing in that range recently :eek:

as far as "not sounding like a jet engine" goes, my 4870X2 does that at full RPM; pretty sure the 5970 does too, the FX 5900 did as well, etc

sorta goes with the territory imho, would be nice to see it change, but I'm really doubting it

was thinking a dual card might not be so bad, spread the load out a bit more, but who knows (who knows if it's even legitimate, considering it is fud)

Sometimes the price debates around here do turn into class warfare. I'll always recommend a product on the grounds of price versus quality, but sometimes you just want the best you can afford. Some people can always afford the best; unfortunately that's not me, but I do spend a lot on my computers and gadgets. It's my hobby and my job and it's what I like. Fortunately I'm not into cars, houses or other expensive things, so it's my choice.

And anyone that buys three GF100s at launch, like I probably will, certainly knows how fast this stuff depreciates, but getting in first is fun!

yes, a statement that this product targets power users is obviously "class warfare" :rolleyes:

the high end GF100 products do not target mom and pop Smith with a Pentium 4 based Dell; they target users like you, who want the best and are willing to pay for it. where exactly is this "class warfare", and where exactly does my logic fall apart?
 
Hmmm. So just how much power does Fermi draw? Obviously they'd be drawing more than 300 watts total, since that's the maximum for a PCIe card: 75W from the bus, 75W from a 6-pin connector, and 150W from an 8-pin connector. Unless they go with two 8-pin connectors, then that's 375W. I can't see them using more than a 6-pin and an 8-pin. Two 8-pins would draw a LOT of attention.

well, the PCIe spec puts the limit at 300 watts. not that limits can't be broken, but they would lose the ability to call it a PCIe card (officially, that is). in this case it's not a question of money; rather, if this is true, it's going past common sense, even at the enthusiast level. like said above, it would have to be water cooled
 
your title says it all:
"it's from fudzilla"

wait until an actual product launches and see what reality dictates; fud is full of it 99% of the time

and heatless, what would be the issue with 2x8? anyone buying gear at that level should be using a PSU of equally high quality, it's really no biggie

this isn't just fud (though that is why I put the disclaimer in there); others have been talking about this for a while, I had just assumed that it had been nixed.
 
Not sure what they plan to do about the PCI-E 2.0 spec's 300W limit per card... presumably just ignore it totally?
 
Not sure what they plan to do about the PCI-E 2.0 spec's 300W limit per card... presumably just ignore it totally?

Downclock ridiculously. As long as it's possible to overclock (without nuking your face) the target audience shouldn't care.
 
Sounds like you're denying fud with more fud. You don't know how much power a GF100 takes? As I understand it, the 280W figure comes from seeing the power connectors on an engineering sample GF100 at CES.

some of it is also from the Tesla specification.

the Fermi-based Tesla with only 448 cores is a 224W card; it's obvious that the full 512-core card will be > 224W, and it's not even too far of a stretch to put it at 250W+, especially if Nvidia is trying to push it to really be faster than the 5970.

the 224W comes from Nvidia's product page for the Fermi Teslas
 
some of it is also from the Tesla specification.

the Fermi-based Tesla with only 448 cores is a 224W card; it's obvious that the full 512-core card will be > 224W, and it's not even too far of a stretch to put it at 250W+, especially if Nvidia is trying to push it to really be faster than the 5970.

the 224W comes from Nvidia's product page for the Fermi Teslas

Exactly. Assuming equal clocks, the 512SP version will be 256 watts (the new architecture scales down across the board equally), but chances are it will have higher clocks (and probably a small voltage bump), so a 280W TDP is not an unreasonable assumption to make.
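Spelling out that scaling math (linear power-per-core is an assumption, and the ~10% clock/voltage bump is a guess):

```python
# Scale the 448-core Tesla's 224W linearly to 512 cores, then add a
# guessed ~10% for higher clocks/voltage on the GeForce part.
tesla_w, tesla_cores = 224, 448
geforce_cores = 512

equal_clocks = tesla_w * geforce_cores / tesla_cores
print(equal_clocks)               # 256.0 W at equal clocks
print(round(equal_clocks * 1.1))  # ~282 W, right around the rumored 280W TDP
```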
 
Don't forget that it will be a few silicon revisions later than Tesla, so at equal speeds it should be more optimized.
 
Sounds like you're denying fud with more fud. You don't know how much power a GF100 takes? As I understand it, the 280W figure comes from seeing the power connectors on an engineering sample GF100 at CES.

the power figure has been out for a while now, with others reporting it. and as pointed out, the Tesla cards are already close to it. in other words, it's not really fud anymore (though I would not put money on it being the final number)
 
Somehow I don't think we'll see a dual Fermi-based-GPU product until PCI-SIG releases PCI-E 3.0. I can't seem to find any solid details regarding maximum power consumption for PCI-E 3.0 compliant cards, but it appears the spec allows for triple-slot form factors which might make air-cooling a 400-500W card a lot more feasible. While initially scheduled for Q2 2010, the release date has been pushed back. With luck, Sandy Bridge will support it.
 
280W comes from the guy who seems to have his own personal anti-Nvidia crusade.

it's not just from charlie, but to be fair (and that is admittedly something charlie is not), he has been pretty spot on about this deal for a while, though I am sure he has put the worst spin possible on it.
 
They could just not have it comply with the PCIe spec. That might not make some OEMs happy, but are they really the ones this sort of product is targeted toward?
 
Not sure what they plan to do about the PCI-E 2.0 spec's 300W limit per card... presumably just ignore it totally?
Is there a 300W PCIe hard limit? I thought it was just based off of the power connectors provided, with a 75W limit for the slot, 75W for each 6-pin connector, and 150W for an 8-pin connector. So it's 300W with 8-pin + 6-pin, but if the board has 2x 8-pin it could do 375W, or with 3x 8-pin it would still be in spec drawing 525W. :eek:
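If it really is just connector-based, the combos work out like this (same 75/75/150 limits as above):

```python
# In-spec maximum draw per aux-connector combo, assuming the limit really
# is just slot (75W) + 75W per 6-pin + 150W per 8-pin.
for six, eight in [(1, 1), (0, 2), (0, 3)]:
    print(f"{six}x 6-pin + {eight}x 8-pin: {75 + six * 75 + eight * 150}W")
# 1x 6-pin + 1x 8-pin: 300W
# 0x 6-pin + 2x 8-pin: 375W
# 0x 6-pin + 3x 8-pin: 525W
```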
 
They could just not have it comply with the PCIe spec. That might not make some OEMs happy, but are they really the ones this sort of product is targeted toward?

i think it would upset some AIBs too.

the issue is that once you get out of spec, all sorts of warranty and other support issues probably pop up.

my guess is it'll probably be similar to ATI's solution:

a downclock, or more likely a dual card using low-binned/cut-down 448-core chips.
 
Going over 300W:

No PCI-SIG sticker + liability insurance issues

High-end cooler, something like Arctic Cooling uses, maybe even watercooled

Another stepping

Let's cross our fingers, my fellow Nvidia fans, close our eyes and hope this wet dream comes true.
 
I don't know where everyone is getting this 224W number. On Nvidia's site they say that the Tesla C2050/C2070 use approx. 190W. Thus, with scaling + a downclock, <375W is easily possible.

Proof? A GTX275 pulls 219W based on Nvidia's website. The "X2" version, the GTX295, pulls 289W.
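Applying that same single-to-X2 ratio to Fermi, as a very rough sanity check (the 250W single-card figure is the rumored one; everything else follows from the numbers above):

```python
# GTX275 (219W) -> GTX295 (289W): the dual card draws ~66% of two singles,
# thanks to binning/downclocking. Apply the same ratio to a rumored 250W Fermi.
gtx275_w, gtx295_w = 219, 289
x2_ratio = gtx295_w / (2 * gtx275_w)  # ~0.66

fermi_single_w = 250                  # rumored single-GPU figure (assumption)
print(round(2 * fermi_single_w * x2_ratio))  # ~330W -> inside a 2x 8-pin 375W budget
```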
 
Is there a 300W PCIe hard limit? I thought it was just based off of the power connectors provided, with a 75W limit for the slot, 75W for each 6-pin connector, and 150W for an 8-pin connector. So it's 300W with 8-pin + 6-pin, but if the board has 2x 8-pin it could do 375W, or with 3x 8-pin it would still be in spec drawing 525W. :eek:

don't hold me to this, but I think it's a hard limit per device slot. after that you would have to call it something else, like kadozer said.

it is certainly a practical limit though; dissipating 300 watts in that small an area is bad enough without trying to top it. even if they do add another PCIe power connector, you're still going to have to cool the thing (I am thinking it's going to be a hair dryer). I just can't see this until the next die shrink.
 
This is my million dollar question. If they release a dual GPU Fermi solution capable of supporting multi-monitor on its own, I'd be stoked. I could stick with an Intel 775 board and not have to sacrifice my quad OC by making the move to nForce.
 
The initial GTX295 had some major crippling heat/power issues, and it wasn't until the second revision that things were cleared up a bit. I don't see this happening with current tech, and definitely not a month after the single GPU Fermi is released, unless there is some factor we're not aware of.
 