NVIDIA's Fermi GF100 Facts & Opinions @ [H]

Status
Not open for further replies.
I was never a great ATI fan; I destroyed a couple of X-series cards back in the day with the Overdrive utility, bought Nvidia, and never went back. I was close to getting an additional couple of 285s if Fermi ended up being shit... But you have to admit it looks impressive atm. We'll have to wait for the review, I suppose...
 
Oh, yeah, that was another minor issue that hasn't been mentioned. Is anyone bothered by this?

As for being bothered by the need for multiple GPUs for multi-monitor gaming: yes and no. Yes, in the fact that it is needed. No, in that overall performance should be awesome, I would think.

I've pretty much accepted that if I want great gaming performance overall, multi-GPU is kind of a must if you want all the settings on max; even at 1920x1200, you need some serious power to drive some games. 3 x 1920x1200 needs more power than a single 5870 can deliver at max settings in most of the games I saw in [H]'s Eyefinity review, IIRC.
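For rough context on that claim, here's the pixel math (a back-of-the-envelope sketch only; real GPU load depends on the game, settings, and much more than raw pixel count):

```python
# Raw pixel counts: triple 1920x1200 vs. a single screen.
# This is only a first-order proxy for GPU load, not a benchmark.
single = 1920 * 1200   # pixels on one monitor
triple = 3 * single    # pixels across three monitors

print(f"single: {single:,}")              # 2,304,000
print(f"triple: {triple:,}")              # 6,912,000
print(f"ratio:  {triple / single:.0f}x")  # 3x the fill-rate demand
```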
 
Problem trading as in spending $30 for an adapter vs. $600 for another card?

Active adapters are more like $100 instead of $30. Monitors with DisplayPort are all IPS or PVA and thus run about $300 or more per monitor.

Nvidia Surround is going to work on both 200-series and 300-series cards. You can get brand new 260s for about $195 with shipping.

And eventually the 300 series will drop in price from their initial high to something more affordable. The only reason they won't is if they perform so much better than the ATI cards; if so, the ATI cards will drop in price (eventually) to offer the "better value". The die size of the 5870 is only 27% bigger than the 4870's (the miracle of die shrinks), and the type of memory is the same as the 4870's. Thus, once they fix the 40nm yield problems, there is no reason ATI can't lower the price to $299, the original MSRP of the 4870 (a year and a half ago). They will lower the price of the 5870 if they have to, but they will keep the card at current prices if they are competitive at that level and the card is selling well enough to warrant the high price tag.
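A quick sketch of the cost argument above, using the 27% die-area figure from the post (and naively assuming silicon cost scales linearly with area; in reality yield worsens with die size, so this understates the gap):

```python
# Naive relative-cost model for the 5870 vs. 4870 dies.
# Assumes cost ~ die area; real costs also depend on yield,
# wafer pricing, and the memory/board BOM (same memory type here, per the post).
area_4870 = 1.00   # normalized die area
area_5870 = 1.27   # 27% larger, per the figure quoted above

relative_cost = area_5870 / area_4870
print(f"5870 die costs roughly {relative_cost:.2f}x a 4870 die")  # ~1.27x
```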
 
As Kyle, spigzone, and many others have stated, AMD is working to get a reasonably priced adapter out.
 
Well, my 260 is still alive. Maybe it can hang on until March. However, the problem with this release of information is what's not mentioned, namely power, heat, and price/performance. If this video card generates its own atmospheric jet stream I may have to pass, especially if I have to upgrade my power supply just to use it.

I'm not that hardcore of a gamer but I do want high performance for 1080 resolutions contained within a package that's reasonable. I've since started a new build and would love to include this card in it just as long as this thing doesn't protrude out of the front of my case and sound like a 787 taking off.
 
As Anand says:

"In short, here’s what we still don’t know and will not be able to cover today:

1. Die size
2. What cards will be made from the GF100
3. Clock speeds
4. Power usage (we only know that it’s more than GT200)
5. Pricing
6. Performance"
..........

That's only everything a buyer might find worth knowing about Fermi.

How can you read it as anything but a substance-free Nvidia spin play to try and keep people from bolting to AMD?

Bottom line remains the same, no substance = no substance. If they had it they would be using it.
 
Nice read, and yes, it's about what we expected from all the rumors of the past months. I can't wait for a more real launch with real facts and figures from Kyle and company. Good stuff, guys.
 
So in other words.... $100

I've seen posts stating that people are able to find them for around $50, but $100 still seems to be the norm. AMD has promised, and has been promising for months, that they will work with manufacturers to bring the price down to the $20-30 range. But we're still waiting. Even if they stay in the $50-100 price range, it's still far more accessible than buying two cards for the majority of gamers, even here on [H].
 
Bigger chip, so its power use and heat output should be higher. As for fan noise, we can't know until reviewers actually have cards to test.
 
Even if they stay in the $50-100 price range, it's still far more accessible than buying two cards for the majority of gamers, even here on [H].

Hell buying 3 monitors is more than I can afford. Although I do like that it works with the card I already have, so I could just add an extra one for cheap off of eBay. But I can't stand the bezels in my view.
 
Doesn't look that great to me. It's the same pattern every time: Nvidia has a faster single GPU, and ATI always gets a dual-GPU card that beats the single. If the price is $600 for Fermi, which is expected, ATI's dual card will beat it and still cost about the same.
 
Hell buying 3 monitors is more than I can afford. Although I do like that it works with the card I already have, so I could just add an extra one for cheap off of eBay. But I can't stand the bezels in my view.

I'm not a big fan of triple-screen gaming, but I can tell you that when you're actually gaming on 3 monitors, the bezels don't bother you. Since your eyes are focused on the action on the screens, your brain kind of ignores them. While working in 2D with a largely static screen, it can and probably will be very annoying.
 
On page 1:

"This is all theoretical of course until we actual test the GF100’s performance in games. "

actual ---> actually

 
Very interesting read...
Disappointed that they aren't releasing more information (speeds, power, temps, product lineup).
After all the time that has passed, I'm getting the feeling that nvidia is STILL trying to buy itself more time...

I'm with you on this one. I was a bit disappointed by this article's information, the fault of course going to nVidia. Kyle gave us the information he had, but it's really not as much as I wanted. Really, all the hype and waiting to really 'see' what Fermi can do, and we still don't know what Fermi can do. I mean, there are PowerPoint slides saying Fermi can do 1.4x a GT200, etc., but does a GT200 refer to a GTX 285 or a GTX 240?

It seems like it'll be fast, but there's no word on power consumption, temperature, etc. I mean, will it be 30% faster than the 5870 but 30% more expensive?*

*not including the cost of an additional 200 watts of PSU capacity and two extra cooling fans :D
 
This really bothers me. I have recently purchased a Dell P2310H monitor (display port) and am planning on ordering another 2 with the intention of running an eyefinity setup. Should I still swing for the HD5870 or wait for the new GF100? I don't think I can afford two cards at this time...not unless I could reuse my existing GTX260.
 
We have enough information to know that Fermi is fast, considerably faster than the current competition, and that it supports the coolest new feature (Eyefinity) that their competitor has. That was really the purpose of the press release: just enough to whet our appetite and perhaps slow down AMD's momentum a bit.

But now it's time for some more details; however, this is only the beginning. Cards are either just entering production or will soon need to be, in order to get these puppies out by late March / early April.

I want to see what the future holds for these cards. This part looks EXTREMELY interesting now and I just have to wait and see before I make my next major GPU upgrade. It will be interesting to say the least.
 
This really bothers me. I have recently purchased a Dell P2310H monitor (display port) and am planning on ordering another 2 with the intention of running an eyefinity setup. Should I still swing for the HD5870 or wait for the new GF100? I don't think I can afford two cards at this time...not unless I could reuse my existing GTX260.

If you want multi-monitor gaming on a single card, it looks like we do have enough information to know that the 5000 series is still your only option for that: http://hardocp.com/image.html?image=MTI2MzYwODIxNHh4VHN0ekRuc2RfMl8yOV9sLmdpZg==
 
We have enough information to know that Fermi is fast, considerably faster than the current competition, and that it supports the coolest new feature (Eyefinity) that their competitor has. That was really the purpose of the press release: just enough to whet our appetite and perhaps slow down AMD's momentum a bit.

But now it's time for some more details; however, this is only the beginning. Cards are either just entering production or will soon need to be, in order to get these puppies out by late March / early April.

I want to see what the future holds for these cards. This part looks EXTREMELY interesting now and I just have to wait and see before I make my next major GPU upgrade. It will be interesting to say the least.

Actually, we don't know that Fermi will be "considerably" faster than AMD's current offering. We only have information that suggests it could be faster. There are still too many unknown variables to make any kind of assumption about actual gaming performance. I do agree that this was only done in order to slow 5800 adoption. It will certainly do that for diehard Nvidia fans, but since there are no concrete performance numbers, and given that most people are very impatient, it might not make much of a difference to other consumers. I'm almost certain it'll beat the 5870 by around 10-30% (if not, then fail is the right term), but the 5970 will still be the top performer.
 
Actually, we don't know that Fermi will be "considerably" faster than AMD's current offering. We only have information that suggests it could be faster. There are still too many unknown variables to make any kind of assumption about actual gaming performance.

I have to disagree a little with this. nVidia did release a couple of benchmarks beating the 5870 by a third. I'm not saying that's what GF100 will actually do, but I am saying that nVidia has now kind of set a bar: GF100 had better come in closer to beating the 5870 by 30% than by 10%, for nVidia's sake. I think what they said in this release is that GF100 is simply going to rock. The details about the rest are yet to come.
 
Nice info. So based on the Dark Void benchmark, my next card will be 80% faster than my current GTX 285 :D. NVIDIA has said numerous times that this will be cost-competitive with similarly-performing ATI solutions, so I hope it'll offer good value.

GF100 will have 512 CUDA cores, more than double the GeForce GTX 285 GPU's 240 cores. There are 64 texture units, compared to the GTX 285's 80, but the texture units have been moved inside the third-generation Streaming Multiprocessors (SMs) for improved efficiency and clock speed. In fact, the texture units will run at a higher clock speed than the core GPU clock. There are 48 ROP units, up from 32 on the GTX 285. The GF100 will use 384-bit GDDR5, so depending on the clock speeds it actually operates at, there is potential for high memory bandwidth. These changes seem logical and encouraging, but without knowing clock speeds, actual shader performance is anyone's guess.
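Since bandwidth hinges on those unannounced clocks, here's the standard peak-bandwidth arithmetic. The GTX 285 figures are its published specs; the 4.0 GT/s GDDR5 rate for GF100 is purely an illustrative placeholder, not a leak:

```python
def peak_bandwidth_gbs(bus_width_bits: int, effective_rate_gts: float) -> float:
    """Peak memory bandwidth in GB/s: (bytes per transfer) x (billions of transfers/s)."""
    return bus_width_bits / 8 * effective_rate_gts

# GTX 285 reference point: 512-bit GDDR3 at ~2.484 GT/s effective.
print(peak_bandwidth_gbs(512, 2.484))  # ~159 GB/s

# GF100: 384-bit GDDR5 at a hypothetical 4.0 GT/s effective rate.
print(peak_bandwidth_gbs(384, 4.0))    # 192 GB/s
```

The narrower 384-bit bus can still out-run the GTX 285's 512-bit bus because GDDR5's effective data rate is much higher than GDDR3's, which is why the post calls the bandwidth "potential" until clocks are known.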

makes this (leaked here first) look true?

2gtoy6q.jpg


"Do I wait for GF100 or do I purchase a Radeon 5000 series card now?" In my opinion, the answer is quite simple right now. With all these unknown variables I would buy a Radeon 5000 series video card right now and enjoy gaming with the fastest current GPU for gaming, and enjoy an Eyefinity experience. If when GF100 is released, it turns out to offer more than the Radeon HD 5000 series for the factors that matter most to me, then I would sell my Radeon HD 5000 series video card and upgrade to the GF100. If however, it turns out it doesn’t offer what I need, then I would rest happy that I made a good buying decision.

"Quite simple" for a rich Eyefinity user who can buy an expensive video card, use it for 3 months, then sell it. Just look at the view counts and comments in this topic and similar topics at other tech sites; it is not a "quite simple" answer, in my opinion.
 
I have to disagree a little with this. nVidia did release a couple of benchmarks beating the 5870 by a third. I'm not saying that's what GF100 will actually do, but I am saying that nVidia has now kind of set a bar: GF100 had better come in closer to beating the 5870 by 30% than by 10%, for nVidia's sake. I think what they said in this release is that GF100 is simply going to rock. The details about the rest are yet to come.

The problem is that you can't set anything other than expectations without cards being in the hands of reviewers. We don't know if those slides represent theoretical performance gains or true performance across the board. A demo is great here and there. Just about any information regarding this card is definitely welcome, we just need more.
 
I have to disagree a little with this. nVidia did release a couple of benchmarks beating the 5870 by a third. I'm not saying that's what GF100 will actually do, but I am saying that nVidia has now kind of set a bar: GF100 had better come in closer to beating the 5870 by 30% than by 10%, for nVidia's sake. I think what they said in this release is that GF100 is simply going to rock. The details about the rest are yet to come.


Putting that much faith in benchmark scores released by the IHV is a little naive. As I've said in another thread, Nvidia is in the market to sell more cards than the competition, and as history has shown, both AMD and Nvidia are more than willing to "fudge" numbers to accomplish this. Right now, all Nvidia cares about is stopping AMD from gaining a larger share of the market due to Nvidia's delays. If that means overstating their performance, then that's exactly what they'll do.
But, as many others have said, this is all speculation till someone trustworthy (hello Kyle/Brent) actually gets production-level hardware in their hands and answers everyone's questions once and for all. I will say this: I truly hope their tessellation implementation is really that big of an advance. It would prove that tessellation can make a large difference, and game developers would take advantage of it. I'm sure AMD's next release will fix any shortcomings in their current design, and it should be out by the end of this year / very early next year.
 
Putting that much faith in benchmark scores released by the IHV is a little naive. As I've said in another thread, Nvidia is in the market to sell more cards than the competition, and as history has shown, both AMD and Nvidia are more than willing to "fudge" numbers to accomplish this. Right now, all Nvidia cares about is stopping AMD from gaining a larger share of the market due to Nvidia's delays. If that means overstating their performance, then that's exactly what they'll do.
But, as many others have said, this is all speculation till someone trustworthy (hello Kyle/Brent) actually gets production-level hardware in their hands and answers everyone's questions once and for all. I will say this: I truly hope their tessellation implementation is really that big of an advance. It would prove that tessellation can make a large difference, and game developers would take advantage of it. I'm sure AMD's next release will fix any shortcomings in their current design, and it should be out by the end of this year / very early next year.


Exactly. FX5800 rings a lot of bad bells.

Wait till Kyle gets it in his hands to see if it's worth it.

Paper Launch FTL
 
One thing you have to consider is that by the time Fermi hits retailers in ample supply, I'd wager ATI's next refresh won't be too far off. While Nvidia might have a higher-performing product, it is going to live and die by its price.

From looking at the numbers, performance-wise Fermi seems to sit between a 5870 and a 5970. Good but not great, considering the card is going to be 6 months late to market. Pricing this card at $600 might be a tough sell, and I'm sure Nvidia can't afford to take much of a loss, which doesn't bode well given that Fermi is apparently quite costly to produce.
 
Nice info. So based on the Dark Void benchmark, my next card will be 80% faster than my current GTX 285 :D. NVIDIA has said numerous times that this will be cost-competitive with similarly-performing ATI solutions, so I hope it'll offer good value.

makes this (leaked here first) look true?

2gtoy6q.jpg

"Quite simple" for a rich Eyefinity user who can buy an expensive video card, use it for 3 months, then sell it. Just look at the view counts and comments in this topic and similar topics at other tech sites; it is not a "quite simple" answer, in my opinion.
Old and fake pic. If you read today's article, you would know there will be 64 TMUs, not 96 or 128.
 
As Anand says:

"In short, here’s what we still don’t know and will not be able to cover today:

1. Die size
2. What cards will be made from the GF100
3. Clock speeds
4. Power usage (we only know that it’s more than GT200)
5. Pricing
6. Performance"
..........

That's only everything a buyer might find worth knowing about Fermi.
+1 here.

Though PCPerspective said the die size will be over 500mm² and they use a 384-bit memory bus... the traditional nVidia monolithic-beast GPU design.

No doubt these are probably going to be faster than the 5870, but they ain't going to be cheap to produce, especially when we take TSMC 40nm yields into consideration.

Honestly, this looks like another GTX 280 vs 4870 déjà vu (except nVidia is 6 months late):
nVidia for absolute best performance, ATI for price/performance affordability.
 
+1 here.

Though PCPerspective said the die size will be over 500mm² and they use a 384-bit memory bus... the traditional nVidia monolithic-beast GPU design.

No doubt these are probably going to be faster than the 5870, but they ain't going to be cheap to produce, especially when we take TSMC 40nm yields into consideration.

Honestly, this looks like another GTX 280 vs 4870 déjà vu (except nVidia is 6 months late):
nVidia for absolute best performance, ATI for price/performance affordability.

It does look like another 280 vs 4870 scenario. However, the biggest change this time around is that, aside from Crysis, no games are truly stressing current hardware. Nothing looks to be particularly demanding in the future either, with the majority of devs focusing on multi-platform-friendly experiences. The only real difference between PC and console titles these days is sharper, higher-res textures. It's going to be hard to justify paying the extra premium unless you're gaming with multiple monitors.

When the next Crysis in terms of requirements *does* come out, we'll probably already have entered the next GPU generation. So I think Fermi is in a tight spot if it cannot compete with ATI on price.
 
This really bothers me. I have recently purchased a Dell P2310H monitor (display port) and am planning on ordering another 2 with the intention of running an eyefinity setup. Should I still swing for the HD5870 or wait for the new GF100? I don't think I can afford two cards at this time...not unless I could reuse my existing GTX260.
Buy a used GTX 260 and SLI it instead. Nvidia Surround can only support two displays per card (including Fermi), but support should be added to GT200 chips later; after all, GT200-based Quadros have SLI Mosaic already.

Come to think of it, isn't Nvidia Surround a 'rebranded' SLI Mosaic?
 
Well, it seems like nvidia made a faster video card, but still nothing breathtaking.

What happened to the good old times when a next-gen video card got at least double the FPS with everything set to max?
 
Awesome news, I'm glad I have a 5870. Late March at the earliest means I'll have had my card for a good while :D
 