NVIDIA's Fermi GF100 Facts & Opinions @ [H]


FrgMstr

Just Plain Mean
Staff member
NVIDIA's Fermi GF100 Facts & Opinions - NVIDIA's "Fermi" next-generation GF100 GPU isn't here yet. Nope, we don't have hardware. But NVIDIA has given us an in-depth look at the specifics behind the architecture as it relates to gaming. NVIDIA certainly remembered us gamers and the fact that we like lots and lots of polygons.

We have come away more excited about GF100 (Fermi) than we have ever been. The design of the Fermi architecture is very innovative and NVIDIA engineers get big kudos for thinking outside of the box, or in this case, the traditional graphics pipeline. This thing has got several magazine covers in its future and an engineering award or two for sure.
 
Pretty exciting. It's nice to see NVIDIA moving in new directions rather than just beefing up an existing architecture. Hopefully it pays off in performance.

Fixed, thanks Kyle
 
Wow, it looks like Fermi is going to be freaking awesome. nVidia has now set the bar high; they had better execute accordingly.
 
I agree that NV Surround is a terrible name. As names go, I think they struck out all around (though Tesla's not bad). In my opinion, they definitely should have gone with Kyle's name—nFinity—for their three monitor technology. It's actually a better name than Eyefinity, and under the circumstances at least we could applaud them for having a sense of humor about an obvious situation.
 
Very interesting read...
Disappointed that they aren't releasing more information (speeds, power, temps, product lineup).
After all the time that has passed, I'm getting the feeling that Nvidia is STILL trying to buy itself more time...
 
Very interesting read...
Disappointed that they aren't releasing more information (speeds, power, temps, product lineup).
After all the time that has passed, I'm getting the feeling that Nvidia is STILL trying to buy itself more time...

Don't think you are alone in that feeling.
 
Interesting... still not the least bit interested in multi-screen gaming, though. I'd rather have one exceptionally pretty image.
 
WANT!!!

In all seriousness, depending on the price, I think I will probably get one. I decided to get a 5870 since I'm a high-end hardware junkie and it was the fastest single card around. Well, the thing is fast, no question; it crunches through anything I throw at it with ease. However, the quality is just not up to what my 280 had. I get the gray-screen crashes, the drivers lack all the features I liked with nVidia, and so on.

So if this card is as good as they claim, and isn't too expensive, I'll probably have to get one.
 
Given the lack of clock speed and power numbers, I sense this architecture might go down the same path the GTX2xx did. It is going to make a great high-end card for those who want to put up with it, but the technology will not scale down easily, production costs will be high, and this tech will go unused by many. Nvidia was (rumored to be) selling GTX260 cards at a loss for a while just to keep up with AMD. Heck, GTX260 cards are selling for more right now than they were 8 and 12 months ago. The large die and poor yields of the GTX2xx series were holding it back, plus I think a lot of people would agree many early GTX260 and GTX280 cards were unreliable and unable to handle the temps they ran at.

So now Nvidia comes back with another expensive, large-die, high-end architecture packed with features and absolutely no mention of power improvements or improved production processes? It's like they took a look at what they did with the GTX series, worked out why the approach was bad, then did the exact same thing with the GF100. Color me disappointed.

Sure I still want one of these, but I think only playing in the higher end of the market is going to kill Nvidia. They need a scalable architecture instead of refreshing the high end and then re-badging G92 cards for the low- and mid-range markets.
 
The thought of games shipping with higher res details for the GF100 has my interest.
 
Still need to hear more. I canceled 2x 5970 to wait and see more about Fermi, or to wait for the 5870 six edition; I'll probably pick up whichever of the two releases first.
 
The question I have, and this seems to be the biggest issue: is Nvidia's only advantage being able to render tons of polygons, and did they focus so much on tessellation that it hurts them in other ways? They only seem to be showing things about polygons, and probably all the benchmarks they are going to show are scenes with tons of polys. That's not a bad thing, since I love extra polygons and that's why I loved the idea of tessellation, but if they are lagging on other things, maybe all that poly power won't mean as much.
 
WANT!!!

In all seriousness, depending on the price, I think I will probably get one. I decided to get a 5870 since I'm a high-end hardware junkie and it was the fastest single card around. Well, the thing is fast, no question; it crunches through anything I throw at it with ease. However, the quality is just not up to what my 280 had. I get the gray-screen crashes, the drivers lack all the features I liked with nVidia, and so on.

So if this card is as good as they claim, and isn't too expensive, I'll probably have to get one.

Ditto... looking forward to going back to an nV card myself after going from GTX 280 to Radeon 5870. I've encountered zillions of quirks/issues that simply didn't happen on the nV setups, including a few examples: http://www.hardforum.com/showpost.php?p=1035198065&postcount=59

*IF* Fermi ends up as good as it currently looks to be shaping up, this looks like another G80 to me.
 
Sure I still want one of these, but I think only playing in the higher end of the market is going to kill Nvidia.
NVIDIA's well-supplemented with its other product lines, however. They're apparently investing quite a bit in Tegra, and according to Anand (who seems to get overly excited about this kind of stuff), Tegra 2's going to be a home run for 'em. They also have Ion, which seems to be doing acceptably in the netbook/nettop world so far.

I think that so long as NVIDIA has cards to cover all the price points, they'll be "alright", even if they aren't particularly great cards. Their re-branding efforts keep consumers guessing and confused, which is probably enough to keep them going at those price points with adequate sales.

The question I have, and this seems to be the biggest issue: is Nvidia's only advantage being able to render tons of polygons, and did they focus so much on tessellation that it hurts them in other ways?
As Brent suggested, it's so incredibly dependent on clock speeds that it's hard to speculate.
 
NVIDIA's well-supplemented with its other product lines, however. They're apparently investing quite a bit in Tegra, and according to Anand (who seems to get overly excited about this kind of stuff), Tegra 2's going to be a home run for 'em. They also have Ion, which seems to be doing acceptably in the netbook/nettop world so far.

I think that so long as NVIDIA has cards to cover all the price points, they'll be "alright", even if they aren't particularly great cards. Their re-branding efforts keep consumers guessing and confused, which is probably enough to keep them going at those price points with adequate sales.

From reading the articles around the net, the GF100 architecture sounds like it was designed to scale down very well. It is divisible in many ways ;), so I think we'll later see plenty of scaled-down Fermis, versus GT200 where we saw no low/mid-end models.
 
I'm interested in seeing all the things this chip does beyond the ancient 2D gaming we've been using all these years. It's going to make current cards look antique.

:cool:
 
The question I have, and this seems to be the biggest issue: is Nvidia's only advantage being able to render tons of polygons, and did they focus so much on tessellation that it hurts them in other ways? They only seem to be showing things about polygons, and probably all the benchmarks they are going to show are scenes with tons of polys. That's not a bad thing, since I love extra polygons and that's why I loved the idea of tessellation, but if they are lagging on other things, maybe all that poly power won't mean as much.

Hence my comment about the unknown variable of pixel shader performance. Now, it has more than double the shader units compared to the GTX285, but the clock domains and speeds are currently unknown. I tend to lean toward thinking they know what they are doing and wouldn't starve the engine of pixels, but I guess until the facts are there, we just don't know.
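
For what it's worth, here is the back-of-the-envelope math on why the clocks matter so much. A rough sketch only, assuming the commonly quoted 2 FLOPs per core per clock (one fused multiply-add) and ignoring GT200's extra MUL issue; the GF100 clocks below are placeholders, not announced numbers:

```c
#include <stdio.h>

/* Peak single-precision throughput in GFLOPS, assuming 2 FLOPs per core
 * per clock (one fused multiply-add). GT200 was usually quoted with an
 * extra MUL issue on top of this, which is ignored here. */
static double peak_gflops(int cores, double shader_clock_ghz)
{
    return cores * 2.0 * shader_clock_ghz;
}

int main(void)
{
    /* GTX 285: 240 shader cores at 1.476 GHz (known). */
    printf("GTX 285: ~%4.0f GFLOPS\n", peak_gflops(240, 1.476));

    /* GF100: 512 cores; shader clock unknown, so two guesses. */
    printf("GF100 @ 1.2 GHz: ~%4.0f GFLOPS\n", peak_gflops(512, 1.2));
    printf("GF100 @ 1.5 GHz: ~%4.0f GFLOPS\n", peak_gflops(512, 1.5));
    return 0;
}
```

At a low enough shader clock, the raw advantage over the GTX285 shrinks fast, which is exactly the unknown being pointed at.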
 
I won't link the other site, but it stated that games are made with much higher detail than they are released with. If nVidia can convince the devs to release those assets as a patch, or include them and let a hardware check decide the level offered in-game, we should see some nice and very visible differences in screenshots.
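
Just to make concrete what I mean by a hardware check: something along these lines, where the game picks an asset tier from what it can detect about the card. Every name and threshold below is made up for illustration, not anything a real engine exposes:

```c
#include <stdio.h>

/* Hypothetical sketch of the "hardware check" idea: the game picks an
 * asset tier from whatever it can query about the card. All names and
 * thresholds here are invented for illustration only. */
enum detail_tier { DETAIL_STANDARD, DETAIL_HIGH, DETAIL_ULTRA };

static enum detail_tier pick_detail_tier(unsigned vram_mb, int has_dx11_tessellation)
{
    if (vram_mb >= 1024 && has_dx11_tessellation)
        return DETAIL_ULTRA;     /* the "patched-in" high-detail assets */
    if (vram_mb >= 512)
        return DETAIL_HIGH;
    return DETAIL_STANDARD;      /* the assets the game shipped with */
}

int main(void)
{
    static const char *names[] = { "standard", "high", "ultra" };
    printf("1536 MB + DX11 tessellation -> %s\n", names[pick_detail_tier(1536, 1)]);
    printf(" 512 MB, no tessellation    -> %s\n", names[pick_detail_tier(512, 0)]);
    return 0;
}
```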
 
Ditto... looking forward to going back to an nV card myself after going from GTX 280 to Radeon 5870. I've encountered zillions of quirks/issues that simply didn't happen on the nV setups, including a few examples: http://www.hardforum.com/showpost.php?p=1035198065&postcount=59

*IF* Fermi ends up as good as it currently looks to be shaping up, this looks like another G80 to me.

New generation, new bugs; GF100 will most likely go through its own trial period of bugs.
 
I agree that NV Surround is a terrible name. As names go, I think they struck out all around (though Tesla's not bad). In my opinion, they definitely should have gone with Kyle's name—nFinity—for their three monitor technology. It's actually a better name than Eyefinity, and under the circumstances at least we could applaud them for having a sense of humor about an obvious situation.


It should be nFinity SLI, since it only works in SLI and not on a single card...
 
I won't link the other site, but it stated that games are made with much higher detail than they are released with. If nVidia can convince the devs to release those assets as a patch, or include them and let a hardware check decide the level offered in-game, we should see some nice and very visible differences in screenshots.

I have no idea what you are talking about.
 
I won't link the other site, but it stated that games are made with much higher detail than they are released with. If nVidia can convince the devs to release those assets as a patch, or include them and let a hardware check decide the level offered in-game, we should see some nice and very visible differences in screenshots.

Games are indeed made with much higher-quality sources that are then scaled down to run at decent/good performance on modern hardware. It would be great if nV could get devs to release higher-quality options, for sure... it's a risky move PR-wise, though, as your game can easily be slammed as "poorly coded" or "crappily optimized" like Crysis was, simply because it allowed for higher settings :(. I haven't seen the article you're mentioning, but if that part's correct, it has me more excited for PC gaming than "eyefinity" or "3d vision shutter-glasses" ever have.
 
I won't link the other site, but it stated that games are made with much higher detail than they are released with.
They build high-polygon models for normal map generation, yes. Typically, in-game, low-polygon models are built by hand, and there'd be absolutely no reason to make them in several different detail levels.
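
For anyone curious what that detail-into-a-texture step looks like, here's a toy sketch. Real tools bake the normal map by casting rays from the low-poly mesh to the high-poly source model; this simplified stand-in just turns a height field into tangent-space normals with central differences, but the idea of packing fine surface detail into a texture instead of triangles is the same:

```c
#include <math.h>
#include <stdio.h>

/* Simplified stand-in for normal map generation: convert a height field
 * into tangent-space normals with central differences. Real bakers work
 * from the high-poly source model instead of a height field. */
static void height_to_normal(const float *h, int w, int x, int y,
                             float scale, float out[3])
{
    float dx = (h[y * w + (x + 1)] - h[y * w + (x - 1)]) * scale;
    float dy = (h[(y + 1) * w + x] - h[(y - 1) * w + x]) * scale;
    float len = sqrtf(dx * dx + dy * dy + 1.0f);
    out[0] = -dx / len;   /* tangent-space X */
    out[1] = -dy / len;   /* tangent-space Y */
    out[2] = 1.0f / len;  /* tangent-space Z, points out of the surface */
}

int main(void)
{
    /* 3x3 patch of a height field: a ramp rising left to right. */
    float h[9] = { 0.0f, 0.5f, 1.0f,
                   0.0f, 0.5f, 1.0f,
                   0.0f, 0.5f, 1.0f };
    float n[3];
    height_to_normal(h, 3, 1, 1, 1.0f, n);
    printf("normal at centre: (%.2f, %.2f, %.2f)\n", n[0], n[1], n[2]);
    return 0;
}
```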
 
Games are indeed made with much higher-quality sources that are then scaled down to run at decent/good performance on modern hardware. It would be great if nV could get devs to release higher-quality options, for sure... it's a risky move PR-wise, though, as your game can easily be slammed as "poorly coded" or "crappily optimized" like Crysis was, simply because it allowed for higher settings :(. I haven't seen the article you're mentioning, but if that part's correct, it has me more excited for PC gaming than "eyefinity" or "3d vision shutter-glasses" ever have.
Yes that's the danger. "Hey no fair we want them too!" and then when they do... "this runs like crap!"
 
They build high-polygon models for normal map generation, yes. Typically, in-game, low-polygon models are built by hand, and there'd be absolutely no reason to make them in several different detail levels.

Textures are also generated initially at far higher quality.
 
Oh, yeah, that was another minor issue that hasn't been mentioned. Is anyone bothered by this?

Yes and no. Yes, you have to buy two cards, but at the same time it appears they don't have to be DisplayPort monitors or use an active DisplayPort adapter. So it's trading one problem for another.

Then again, maybe when ATI activates CrossFire for Eyefinity we'll have a similar situation, where you can attach monitors 1 and 2 to card A and monitor 3 to card B.
 
elaborate

I believe he's talking about the part of the presentation that stated that Nvidia's tessellation engine can perform the same amount of work much faster than a 5870, or work at the same speed as a 5870 while giving better image quality. It's certainly an interesting piece of information.
Anandtech said:
So why does NVIDIA want so much geometry performance? Because with tessellation, it allows them to take the same assets from the same games as AMD and generate something that will look better. With more geometry power, NVIDIA can use tessellation and displacement mapping to generate more complex characters, objects, and scenery than AMD can at the same level of performance. And this is why NVIDIA has 16 PolyMorph Engines and 4 Raster Engines, because they need a lot of hardware to generate and process that much geometry.
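
Roughly, the tessellation-plus-displacement idea works like the toy CPU-side sketch below (not how the PolyMorph hardware actually does it): split each triangle by inserting edge midpoints, then push every vertex out along the normal by a height sampled from a displacement map. The displacement function here is made up, standing in for a texture fetch:

```c
#include <math.h>
#include <stdio.h>

/* Toy sketch of tessellation + displacement mapping: split one triangle
 * into four by inserting edge midpoints, then push every vertex out
 * along the patch normal by a sampled height. Shows why the same
 * low-poly asset can end up with far more geometric detail. */
typedef struct { float x, y, z; } vec3;

static float sample_displacement(float u, float v)
{
    /* made-up function standing in for a displacement texture fetch */
    return 0.1f * sinf(6.2832f * u) * cosf(6.2832f * v);
}

static vec3 midpoint(vec3 a, vec3 b)
{
    vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
    return m;
}

static vec3 displace(vec3 p, vec3 n)
{
    float d = sample_displacement(p.x, p.y);  /* x,y reused as UVs here */
    vec3 out = { p.x + n.x * d, p.y + n.y * d, p.z + n.z * d };
    return out;
}

int main(void)
{
    vec3 a = {0, 0, 0}, b = {1, 0, 0}, c = {0, 1, 0};
    vec3 n = {0, 0, 1};  /* flat patch normal */

    /* one tessellation step: 1 triangle -> 4 triangles, 6 unique vertices */
    vec3 verts[6] = { a, b, c, midpoint(a, b), midpoint(b, c), midpoint(c, a) };
    for (int i = 0; i < 6; ++i) {
        vec3 d = displace(verts[i], n);
        printf("v%d: (%.3f, %.3f, %.3f)\n", i, d.x, d.y, d.z);
    }
    return 0;
}
```

Each extra tessellation level multiplies the triangle count, which is why the quote above stresses having enough raster and geometry hardware to keep up.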
 
Yes and no. Yes, you have to buy two cards, but at the same time it appears they don't have to be DisplayPort monitors or use an active DisplayPort adapter. So it's trading one problem for another.

Problem trading as in spending $30 for an adapter vs. $600 for another card?
 
Well it's nice that you can now actually see them in 3D.

It's still not true 3D. It's just an illusion. I'll actually get excited when we get true 3D gaming, probably in the form of some kind of holographic screen/display/whatever. No stupid glasses, and there won't be any real kind of limitations, such as having to have a certain framerate.
 