NVIDIA's Fermi GF100 Facts & Opinions @ [H]

Are nVidia complete idiots?

http://www.techreport.com/articles.x/18332/5

             GT200          GF100          RV870
SP FMA rate  0.708 Tflops   1.49 Tflops    2.72 Tflops
DP FMA rate  88.5 Gflops    186 Gflops*    544 Gflops
I should pause to explain the asterisk next to the unexpectedly low estimate for the GF100's double-precision performance. By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.
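
Just to show where those numbers come from, here is a quick back-of-the-envelope calc (plain C, host-side only). The 512 CUDA cores are nVidia's published Fermi figure; the ~1.45 GHz shader clock is merely inferred from the table's 1.49 Tflops SP entry, so treat it as an assumption rather than an announced spec.

Code:
/* Peak FMA throughput sketch for GF100; assumptions noted above. */
#include <stdio.h>

int main(void) {
    const double shader_clock_ghz = 1.45; /* inferred from 1.49 Tflops SP, not official */
    const int cuda_cores = 512;           /* published Fermi core count */

    /* Single precision: every core does one FMA (2 flops) per clock. */
    double sp_gflops = cuda_cores * 2.0 * shader_clock_ghz;              /* ~1485 */

    /* Double precision, uncapped: half rate, i.e. 256 FMA per clock. */
    double dp_full_gflops = (cuda_cores / 2) * 2.0 * shader_clock_ghz;   /* ~742 */

    /* Double precision on GeForce GF100: capped at 64 FMA per clock. */
    double dp_geforce_gflops = 64 * 2.0 * shader_clock_ghz;              /* ~186 */

    printf("SP %.0f Gflops, DP uncapped %.0f Gflops, DP GeForce cap %.0f Gflops\n",
           sp_gflops, dp_full_gflops, dp_geforce_gflops);
    return 0;
}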

nVidia are coming to market six months late with a huge, hot chip that will be difficult to sell at a profit, as it is at best only 20% faster than the 5870 whilst having a 50% larger die and a bigger memory system to cover the cost of, and yet they have the utter gall to start artificially disabling features on a GPU that will cost north of £320!

I realize DP makes zero difference in games, but the kind of person who is willing to drop £350 on a GPU is more than just a gamer; he's a tech enthusiast who is interested to the point of geekiness in new technology. That person wants to know they are buying the most awesome, capable and flexible tech available, because otherwise they wouldn't be buying it in the first place.

Even if that person doesn't fold, they are buying that processing magnificence on the idea that in future their multitude of GPGPU apps will crunch numbers at a truly awesome rate.

You tell him, on a product that is already a marginal gaming proposition on a price/performance metric, that you're also going to cripple the added functionality whose inclusion made its price/performance so marginal in the first place, and he'll reply: "Oh, so I'm buying bog-standard consumer junk then? Well, sure I'm interested; I'll pay £175 and stick one in my HTPC."

I also realise that nVidia want to prop up their high-margin Tesla sales, but frankly they should rely on ECC memory to make that premium worthwhile, because right now they need every advantage they can get!

So explain this idiocy to me please, because it makes zero sense.

I'm planning a new PC in late April precisely because Fermi will be here along with any AMD refresh, and I am more interested in Fermi because it sounds like a more advanced and forward-looking architecture that should in theory be more flexible, which would make up for its reduced efficiency as a pure gaming product. But now I hear they are going to cripple the product, and my first instinct is: "Well, /@ck you then!"

Given their precarious position in the computing market as a non-provider of full platforms, I am amazed at nVidia's ability to repeatedly shoot itself in the feet. With this Fermi launch I see nVidia with both barrels pointed firmly at its own wedding-tackle and an itchy trigger finger to boot.

They beggar belief sometimes...

[/rant]
 
Nope, I have spent that much on a gfx card just for gaming; £350 was normal a few years ago.
 
Nope, I have spent that much on a gfx card just for gaming; £350 was normal a few years ago.

Guess what, me too: I got a 9800GX2 for £346, but here are two points:
1) We aren't normal; even most PC gamers I know don't spend more than £225, i.e. on cut-down high-end cards like the GTX260-216 and its modern equivalents.
2) I'm pretty sure that most people willing to drop £350 on a high-end card will get pretty peeved when a vendor artificially cripples the product for sheer marketing reasons.

so; nope.
 
If you are that serious about wanting the extra power, I'm sure you could just hack the BIOS on the cards and enable the extra DP power.

No, it's just about not wanting to be treated like a berk by the very vendor to whom you are paying top dollar.
 
No, it's just about not wanting to be treated like a berk by the very vendor to whom you are paying top dollar.

I'm not familiar with the term "berk"; I'm guessing it is British. But in any case, if you are buying a card to do CUDA work, aren't you screwing the vendor by buying a graphics card instead of the card they sell to do CUDA work?
 
I'm not familiar with the term "berk"; I'm guessing it is British. But in any case, if you are buying a card to do CUDA work, aren't you screwing the vendor by buying a graphics card instead of the card they sell to do CUDA work?

Um, no.
 
I'm a little confused with this... Obviously PhysX, an in-game feature, works via CUDA; does it use single- or double-precision calculations? If it uses double, then surely they will also be crippling their own gaming performance?
They have also been pushing CUDA as a feature to sell their graphics cards for things like video encoding... does that use single or double precision? Again, if it is double, then they are crippling a feature they themselves have used to promote their graphics card sales?
 
I'm a little confused with this... Obviously PhysX, an in-game feature, works via CUDA; does it use single- or double-precision calculations? If it uses double, then surely they will also be crippling their own gaming performance?
They have also been pushing CUDA as a feature to sell their graphics cards for things like video encoding... does that use single or double precision? Again, if it is double, then they are crippling a feature they themselves have used to promote their graphics card sales?

Let's invent problems :rolleyes:
Choose the precision appropriate to your target app.
Not hard to research if you really want to know:
http://en.wikipedia.org/wiki/CUDA
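
To illustrate what "choose the precision" means in practice, here is a minimal CUDA sketch; the kernel and its names are made up for illustration, not taken from PhysX or any real encoder. The point is that precision is something the developer picks when writing the kernel, and game-physics or video-encode style work is written against the float path, which the GeForce DP cap does not touch.

Code:
/* Sketch: the same kernel compiled for float or double; the app picks the type. */
#include <cstdio>
#include <cuda_runtime.h>

template <typename Real>                               /* Real = float or double */
__global__ void scale(Real* data, Real factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;                      /* same maths, only the type differs */
}

int main() {
    const int n = 1024;
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    /* A PhysX/encoder-style workload would instantiate the float version,
       which is unaffected by the GeForce double-precision cap. */
    scale<float><<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}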
 
Let's invent problems :rolleyes:
Choose the precision appropriate to your target app.
Not hard to research if you really want to know:
http://en.wikipedia.org/wiki/CUDA

Thanks... I read the wiki but, as a relatively non-tech guy, it didn't answer my question. What I can understand is your sarcasm, so I take it these apps all utilise single precision and so would not be affected by this limitation.

"Choose the precision appropriate to your target app.".... I suppose that is the question with Opencl and gpgpu only just coming into the commercial market we non of us know at the moment know what apps we will be running in the near future or what their requirements will be....shame Nvidia have choosen to criple the hardware you buy off them regardless of that fact.
 
Your target app's functions will depend on your app design as well as the hardware it is intended to run on.
As it says in the wiki, double precision is available on the GTX 260 onwards, so you cannot use it on cards below that.
So you get a choice of two programming paths, or you use single precision for everything.
How fast or slow double precision is, is irrelevant to us.
You can be sure that developers will use the best compromise.
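
In case it helps the earlier question, the "two programming paths" look roughly like this. A sketch only: launch_kernels is a hypothetical app function, and the compute-capability 1.3 cut-off for double support is the documented CUDA threshold (GTX 260/280 class and up).

Code:
/* Sketch: choose single or double precision at runtime based on the device. */
#include <cstdio>
#include <cuda_runtime.h>

static int device_supports_double(int device) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);
    /* Double precision arrived with compute capability 1.3. */
    return (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
}

int main() {
    if (device_supports_double(0)) {
        printf("Taking the double-precision path.\n");
        /* launch_kernels<double>(...);   hypothetical app code path */
    } else {
        printf("Falling back to single precision.\n");
        /* launch_kernels<float>(...);    hypothetical app code path */
    }
    return 0;
}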

Is there any particular reason you are looking at one single aspect of coding that is highly unlikely to be an issue and that you cannot possibly have any control over?
 
This is why you aren't running a multi-billion-dollar company. You think you should tell your competitor all your secrets while selling Ferraris at cost.

It's a bit late for any 'secrets' about Fermi providing AMD with any conceivable advantage they don't already have.

Learning the details of Fermi six months BEFORE they released Cypress MIGHT have provided AMD some advantage ... kicking the MSRP up by $50 on the 5800 series, for example.
 
It's a bit late for any 'secrets' about Fermi providing AMD with any conceivable advantage they don't already have.

Learning the details of Fermi six months BEFORE they released Cypress MIGHT have provided AMD some advantage ... kicking the MSRP up by $50 on the 5800 series, for example.

Nvidia is worried about what AMD is doing with the 6xxx series now, not the 5xxx cards already out. It's like people forget that AMD is still working on new cards; they did not release the 5 series and then just give up...

AMD is in a really good position right now to do a price drop + card refresh when Fermi hits, then launch a 6xxx series a few months after Fermi-based cards are readily available.
 
It's a bit late for any 'secrets' about Fermi providing AMD with any conceivable advantage they don't already have.

Learning the details of Fermi six months BEFORE they released Cypress MIGHT have provided AMD some advantage ... kicking the MSRP up by $50 on the 5800 series, for example.

If I know exactly how fast Fermi is and its price point at release, then as AMD I can drop my price to a still-profitable level with a better price/performance ratio. Don't believe me? This is EXACTLY what happened with the 4800 vs GTX 200 series: AMD launched late and cheap and forced Nvidia to drop prices to levels AMD set, not Nvidia.
 
Nvidia is worried about what AMD is doing with the 6xxx series now, not the 5xxx cards already out. It's like people forget that AMD is still working on new cards; they did not release the 5 series and then just give up...

AMD is in a really good position right now to do a price drop + card refresh when Fermi hits, then launch a 6xxx series a few months after Fermi-based cards are readily available.

AMD should already have a team working on the 7xxx series. I do agree a lot of people seem to think these things are put out in 2 weeks instead of 18 months.
 
Nvidia is worried about what AMD is doing with the 6xxx series now, not the 5xxx cards already out. It's like people forget that AMD is still working on new cards; they did not release the 5 series and then just give up...

AMD is in a really good position right now to do a price drop + card refresh when Fermi hits, then launch a 6xxx series a few months after Fermi-based cards are readily available.

I would think VERY worried, as AMD has publicly stated the 6000 series is a brand new architecture, and that is almost certain to mean a GPGPU designed to dovetail with Bulldozer and so provide a compelling set of 'Fusion' solutions neither Nvidia nor Intel can match for a while.

AMD's timing is working far better than Nvidia's: AMD will have a very fat cash flow and the luxury of time to work out the bugs and perfect the 6000 series before it is released, whereas Nvidia's cash flow is in freefall as they desperately attempt to bring their GPGPU part into financial viability, and their time frame is critically crunched and will CONTINUE to be crunched across the GPU segments for the foreseeable future.
 
Is there any particular reason you are looking at one single aspect of coding that is highly unlikely to be an issue and that you cannot possibly have any control over?

Like I said, I'm not from a tech background and will be upgrading my graphics for gaming in the near future.

I don't upgrade that frequently, and I'm just trying to make sense of the latest developments in the hope that I make the correct choice... I'm not trying to invent problems but rather voicing the concerns that this new info has raised in my relatively non-tech understanding.

What I'm looking for in my next purchase is a gaming graphics card for now, but one that, once outdated, I can use as a secondary/auxiliary card for OpenCL tasks alongside a newer graphics card... you know, a bit like you can use a separate Nvidia card now for PhysX alongside your main graphics card.

I was thinking that Fermi, with its new and much more compute-orientated architecture, may have fitted that requirement.
But now that it appears it will be a lesser performer in terms of both single and double precision than ATi's offering, I'm not so sure.

Also, even if I did go with Fermi, I'm concerned that Nvidia may prevent their cards from being used in such a manner, as a secondary OpenCL card, when cards from other vendors are also present in the same system.
At present they prevent you from using an Nvidia card for PhysX when an ATi card is used for primary graphics in the same system.

I think this latest move of crippling their card has been the final deciding factor.
 
Is there any particular reason you are looking at one single aspect of coding that is highly unlikely to be an issue and that you cannot possibly have any control over?
Assuming this might be addressed to me... because:
"By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision"

I don't pay £350 for an artificially bollixed product, not when that product is designed the way it is to achieve the very things that are being artificially bollixed.
 
AMD will do to nVidia what nVidia did to 3DFX back in the day if they don't watch themselves.

I really do want to believe that Fermi is going to be a roaring success; it's the competition between the two companies that has driven down prices and pushed innovation forward.
 
Assuming this might be addressed to me... because:
"By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision"

I don't pay £350 for an artificially bollixed product, not when that product is designed the way it is to achieve the very things that are being artificially bollixed.

This is not quite a fair statement, at least for the GF100 and the Tesla version of it. For now, Tesla sales will be subsidizing consumer GPUs. So bollixed, yes, but it's also being subsidized.

That's a fair business model, I believe.
 
If they are struggling to make a competitive product in the gaming market, where they will sell millions of these GPUs instead of tens of thousands of Tesla products, then they need to make that product more competitive... by offering more of something else if the raw price/performance isn't so compelling.

They have ECC to differentiate their Tesla products, because no multinational is going to be seriously interested in petaflops of DP without ECC, and that is a real product difference rather than an example of artificial software buggery.

nVidia are stupid to do this.
 
AMD will do to nVidia what nVidia did to 3DFX back in the day if they don't watch themselves.

I really do want to believe that Fermi is going to be a roaring success; it's the competition between the two companies that has driven down prices and pushed innovation forward.

Not gonna happen. See my sig. He begs to differ. He's a master economic analyst and truly one of the brilliant minds on this forum.
 
If they are struggling to make a competitive product in the gaming market, where they will sell millions of these GPUs instead of tens of thousands of Tesla products, then they need to make that product more competitive... by offering more of something else if the raw price/performance isn't so compelling.

They have ECC to differentiate their Tesla products, because no multinational is going to be seriously interested in petaflops of DP without ECC, and that is a real product difference rather than an example of artificial software buggery.

nVidia are stupid to do this.

So nVidia is stupid to do this, but if one were serious about using this power they'd get the Tesla anyway?

A bit of a contradiction. Not saying that I don't get your point about the limitation, but without ECC it sounds like you're saying that no one would use it for DP anyway.
 
AMD will do to nVidia what nVidia did to 3DFX back in the day if they don't watch themselves.

I really do want to believe that Fermi is going to be a roaring success; it's the competition between the two companies that has driven down prices and pushed innovation forward.
AMD is going to buy nVidia? :p
 
Some people probably wish. I'm quite sure that won't be happening any time soon since AMD isn't making much money at all.
 
I don't think AMD would be allowed to buy Nvidia, as it would create a monopoly... but on the other hand, Intel may?
 
I don't think AMD would be allowed to buy Nvidia, as it would create a monopoly... but on the other hand, Intel may?

It certainly wouldn't be a monopoly considering that there are other discrete card vendors (admittedly few) and [especially] that Intel controls half the market. It would actually be more fair than what Nvidia had after its acquisition of 3DFX, since Intel was not as large a player then. But that doesn't matter, since the chance of ATI purchasing Nvidia is 0 at this point. Intel sounds possible.
 
AMD will do to nVidia what nVidia did to 3DFX back in the day if they don't watch themselves.

No No NO NO NO. You're living in the [H] world. nVidia is still very much on top. Not that I like Nvidia or AMD. nVidia is HUGE compared to AMD.
 
I would think VERY worried, as AMD has publicly stated the 6000 series is a brand new architecture, and that is almost certain to mean a GPGPU designed to dovetail with Bulldozer and so provide a compelling set of 'Fusion' solutions neither Nvidia nor Intel can match for a while.

AMD's timing is working far better than Nvidia's: AMD will have a very fat cash flow and the luxury of time to work out the bugs and perfect the 6000 series before it is released, whereas Nvidia's cash flow is in freefall as they desperately attempt to bring their GPGPU part into financial viability, and their time frame is critically crunched and will CONTINUE to be crunched across the GPU segments for the foreseeable future.

I agree. NVIDIA, as HUGE as they are, has bitten the bone recently. Fermi = vaporware as far as I'm concerned; flame away.
 
I would think VERY worried, as AMD has publicly stated the 6000 series is a brand new architecture, and that is almost certain to mean a GPGPU designed to dovetail with Bulldozer and so provide a compelling set of 'Fusion' solutions neither Nvidia nor Intel can match for a while.

AMD's timing is working far better than Nvidia's: AMD will have a very fat cash flow and the luxury of time to work out the bugs and perfect the 6000 series before it is released, whereas Nvidia's cash flow is in freefall as they desperately attempt to bring their GPGPU part into financial viability, and their time frame is critically crunched and will CONTINUE to be crunched across the GPU segments for the foreseeable future.

But don't forget that Fermi is already one generation ahead of the 5000 series. So in reality it's Nvidia that is ahead in terms of better architecture.
 
/Facepalm.

No facepalm, pal. The 5000 series is not much different from the 4000 series except the slap of DX11 support and extra shaders.

Fermi is more than just the extra shaders; it has a more advanced architecture for DX11 support.
 
But don't forget that Fermi is already one generation ahead of the 5000 series. So in reality it's Nvidia that is ahead here in terms of better architecture.

This is like claiming AMD is currently two generations ahead of Intel because they released some information about the Bulldozer architecture.
 
No, you are making an ignorant comment.

By your "logic", AMD have been generations ahead, because apparently it hardly took any additional work to make a card DX11-capable while nVidia had to do a hell of a lot of work.

However, dealing with facts: Fermi was designed to be released at the same time as the 5 series. It is late, but that delay does not make it another generation; the 5xxx and the GF100 are the same generation for each company.

Do you see now why I might suggest you are a troll when you put such a slant on an nVidia screw-up?
 
No facepalm, pal. The 5000 series is not much different from the 4000 series except the slap of DX11 support and extra shaders.

Fermi is more than just the extra shaders; it has a more advanced architecture for DX11 support.

Double the shader processors, double the texture units, double the ROPs, 40nm vs. 55nm, a much higher data rate on the GDDR5, uses less power, runs cooler, new hardware instructions on the SPU. Other than that, it is pretty much the same architecture.

A good rundown can be found here: http://www.anandtech.com/video/showdoc.aspx?i=3643&p=5
 