NVIDIA's Fermi GF100 Facts & Opinions @ [H]

No, you are making an ignorant comment.

By your "logic", AMD would be generations ahead, because apparently it hardly took any additional work to make a DX11 card while Nvidia had to do a hell of a lot of work.

However, dealing with facts: Fermi was designed to be released at the same time as the 5 series. It is late, but that delay does not make it another generation; the 5xxx and the GF100 are the same generation for each company.

Do you see now why I might suggest you are a troll when you put such a slant on an Nvidia screw-up?

Wrong again. Fermi was delayed because its architecture is much more advanced than the 5000 series'. Nvidia could have released a new GPU with just more shaders on 40nm before the 5000 series launched. They took a few extra steps this time; wait until more complex tessellation games or benchmarks come out and you'll see what I'm talking about.
 
Double the shader processors, double the texture units, double the ROPs, 40nm vs. 55nm, a much higher data rate on the GDDR5, uses less power, runs cooler, new hardware instructions in the shader units. Other than that, it is pretty much the same architecture.

A good rundown can be found here: http://www.anandtech.com/video/showdoc.aspx?i=3643&p=5

It's the same architecture as the 4000 series, just with more things in the same architecture.
 
Is there an echo echo echo echo in here?

Did you know that ATI could have slapped their eDRAM tech on their GPUs a long time ago and given pretty much no AA performance hit? I guess you didn't, and I guess you don't even know why.
 
What are you on about? Extended Data Out RAM? I don't see how that is even relevant to anything since 1996.
 
Did you know that ATI could have slapped their eDRAM tech on their GPUs a long time ago and given pretty much no AA performance hit? I guess you didn't, and I guess you don't even know why.

Is that why all 360 games have AA? Oh, wait, they don't? *GASP*, could it be that it doesn't have free AA? omg omg omg! no waaaiiiiii!!!!

Do you know why? Of course you don't; why would I even ask. The answer is simple: AA requires more than just fast memory. Also, the eDRAM in Xenos is a whopping 10MB, good enough for 720p and lower, not worth a damn beyond that.

Oh, and the 360's GPU architecture just wouldn't work in a PC.
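A quick back-of-the-envelope check on that 10MB figure (my arithmetic, not from the post above), assuming the common layout of 32-bit color plus 32-bit depth/stencil per sample. Anything over 10MB forces Xenos to split the frame into tiles and re-run geometry per tile, which is the overhead that keeps "free" AA from being free:

```python
# Rough framebuffer-size math versus the Xenos GPU's 10MB of eDRAM.
# Assumes 32-bit color + 32-bit depth/stencil per sample (a common setup,
# not the only possible one).

EDRAM_BYTES = 10 * 1024 * 1024  # the 360's 10MB eDRAM

def framebuffer_bytes(width, height, msaa_samples):
    """Color + depth/stencil storage for a multisampled render target."""
    bytes_per_sample = 4 + 4  # 32-bit color + 32-bit depth/stencil
    return width * height * msaa_samples * bytes_per_sample

for w, h, label in [(1280, 720, "720p"), (1920, 1080, "1080p")]:
    for samples in (1, 2, 4):
        needed = framebuffer_bytes(w, h, samples)
        verdict = "fits" if needed <= EDRAM_BYTES else "needs tiling"
        print(f"{label} {samples}x MSAA: {needed / 2**20:5.1f}MB -> {verdict}")
```

By this math, 720p only fits with no MSAA at all (about 7MB); even 2x has to be tiled, which is consistent with plenty of 360 titles shipping without AA.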

It's the same architecture as the 4000 series, just with more things in the same architecture.

No it isn't. The 5xxx series architecture has been tweaked and improved since the 4xxx series, just like Fermi's architecture is a tweaked and improved GTX 2xx architecture. Nvidia didn't start from scratch with Fermi: it's using the same shader layout and design as the 8xxx series, just on its 3rd generation (which is what Nvidia itself says).

http://www.hardocp.com/image.html?image=MTI1NDM1MDgzM3RHa3EwSHBHQWhfMV83X2wuanBn

Look at that! 3rd generation SM architecture, it's the exact same as the GTX 280, just with more things in the same architecture.
 
Transistors speak for themselves.
I don't equate transistor count with the degree of architectural advancement, especially considering that a large number of transistors are being dedicated to cache stores in the Fermi architecture (which isn't "advanced", merely different and interesting in a GPU).

When you can make transistors do something remarkable, that's advanced.
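For scale, here is a rough, hedged estimate of how many of Fermi's transistors go to on-chip storage, using the cache and register-file sizes from Nvidia's Fermi whitepaper and the standard 6-transistor SRAM cell (tags, ECC bits, and peripheral logic are ignored, so treat it as a floor):

```python
# Back-of-the-envelope transistor budget for GF100's on-chip storage,
# assuming 6-transistor SRAM cells throughout.

SRAM_T_PER_BIT = 6
SMS = 16                  # GF100 has 16 streaming multiprocessors

storage_kib = (
    768                   # shared L2 cache
    + SMS * 64            # per-SM configurable L1 / shared memory
    + SMS * 128           # per-SM register file (32,768 x 32-bit registers)
)

transistors = storage_kib * 1024 * 8 * SRAM_T_PER_BIT
TOTAL = 3.0e9             # GF100 is roughly a 3-billion-transistor chip

print(f"~{storage_kib} KiB of on-chip storage")
print(f"~{transistors / 1e6:.0f}M transistors, ~{transistors / TOTAL:.1%} of the chip")
```

That lands around 190 million transistors for storage alone, yet still only roughly 6% of GF100's total, so the caches by themselves don't settle the "transistors speak for themselves" argument either way.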
 
"Look at that! 3rd generation SM architecture, its the exact same as the GTX 280, just with more things in the same architecture."

I'm not taking sides in this fight, but that statement is completely misleading. Read through the whole Fermi layout.

As to the e-peen fight between ATI and Nvidia going on in this thread, I'll throw in my 2 cents. ATI has indeed gone with an easier upscaling of the 4000 series, for the most part, to push a strong product out fast that works well with all present games. Nvidia is pushing out a totally rebuilt GPU made specifically with DX11 in mind. Now, considering the slow, almost glacial, incorporation of DX10 by programmers, I really think ATI made the right choice this round. By the time there is more than a handful of games that can take advantage of the newer DX11 features, like heavy tessellation, we will probably be on the HD7000 series and GTX 500 series.

Don't get me wrong, I love how Nvidia is pushing the hardware to new levels, but I really think a lot of these features are going to take 3 or more years for software to catch up on. ATI, on the other hand, will get almost the same efficiency in DX9/10 games as Nvidia, with smaller chips that cost less money.

I should mention I'm planning on getting a pair of Fermi cards, unless Nvidia goes crazy on price; then I'm gonna say screw it, stick with the 5870 in CrossFire or tri-fire, and use the money I save to replace my AM2+ Phenom II 940 at ~3.5GHz with an i7 920.
 
"Look at that! 3rd generation SM architecture, its the exact same as the GTX 280, just with more things in the same architecture."

I'm not taking parts in this fight, but this statement is completely misleading. Read the whole layout of Fermi.

As to epeen fight between ATI and Nvidia going on in this thread, I'll lay in my 2 cents. ATI has indeed gone with an easier upscaling of the 4000 series, for the most part, to push a strong product out fast that will work well with all present games. Nvidia is pushing out a totally rebuilt GPU made specifically with DX11 in mind. Now the thing is, considering the slow, almost glacial, incorporation of DX10 by programmers, I really think ATI made the right choice this round. By the time there is more than a handful of games that can take advantage of the newer DX11 features, like heavy tessellation, we will probably be on the HD7000 series and GTX 500 series.

Don't get me wrong, I love how Nvidia is pushing the hardware to new levels, but I really think a lot of the features are going to be 3 or more years before software can catch up. While ATI on the other hand, will be getting almost the same efficiency in the DX9/10 games as Nvidia with smaller chips that cost less money.

I should mention, I'm planning on getting a pair of Fermi cards, unless Nvidia goes crazy on price, then I'm gonna say screw it, and replace my AM2+ p2 940 ~3.5 and upgrade to an i7 920 with the money I save by using the 5870 in crossfire or trifire.

Agree with everything except that we will see heavy DX11 games much sooner than that. Look up Battlefield 3 and what its devs are saying about it. The PolyMorph geometry stuff probably won't be utilized in games for years, if at all; it may go the way of voxels.
 
We will see some DX11 games before then, but very few will go far with DX11 for the next few years. BF3 might, considering they have the money, and I'm pretty sure the new Frostbite engine was built with this upgrade in mind. For the majority of games with DX11 features, I imagine it will be like DiRT 2, where they just do a few things like the crowd and some water so they can slap a DX11 sticker on the box. It's going to take a while before more than a handful of games take advantage of DX11 features game-wide.

I have to say, though, I'm looking forward to Fermi. I just hope they don't kill themselves on price again. I've got an AMD system I've been upgrading for a few years now, and it's been on two Asus SLI boards. If Nvidia screws up the prices on this card too, I might have to go with an X58 chipset, which will hopefully support those 8-core 22nm i7s that are supposed to be coming next year, as well as both SLI and CrossFire. Being limited to a single ATI card on my current mobo sucks, especially since I'm going with 3 monitors after doing a couple-week test run of a TripleHead2Go system.
 
The DX11 effects in DiRT 2 are laughable at best. We are going to need a complex tessellation benchmark to test Fermi vs. the 5870.
 
You're not really meant to notice DX11 effects as such, just improved performance and things looking nicer, clearer, more natural and realistic. Save the polygon budget and leave tessellation as an option for those types who can't consider touching a game unless they can max out AA/AF in it.
 
Double the shader processors, double the texture units, double the ROPs, 40nm vs. 55nm, a much higher data rate on the GDDR5, uses less power, runs cooler, new hardware instructions in the shader units. Other than that, it is pretty much the same architecture.

A good rundown can be found here: http://www.anandtech.com/video/showdoc.aspx?i=3643&p=5

I'll have to agree with DualOwn on this one. The HD5000 series is more of the same, as you have already listed, with only two differences: Eyefinity and DX11. There is nothing special about the HD5000 series other than its price/performance ratio. It only looks good because it has no competition. We'll see how good it looks once it does have competition later in the year. I see the HD5000 series as an advancement of the HD4000 series with some major improvements, while Fermi is a completely different beast from what the GT200 series was for Nvidia.
 
Where do you keep getting the idea that Eyefinity and DX11 are the only new things in the 5xxx series? Seriously, guys, what the fuck? At least bother to read a review of the card:

http://hardocp.com/article/2009/09/22/amds_ati_radeon_hd_5870_video_card_review/5
The Radeon HD 5800 series is more than just doubling all the good parts; there have been additions, improvements and refining.

The 5xxx series also added EDC on the GDDR5 memory bus, improved AF, supersampling AA, and improved HDMI audio with Dolby TrueHD and DTS-HD bitstreaming. And, of course, the individual shaders were tweaked and improved.

And the 58xx series looks good because it *IS* good.
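A side note on that EDC point, since it's easy to mix up with ECC: GDDR5's EDC is a per-burst checksum on the memory bus, and the controller retransmits a burst when the check fails; it protects transfers over the link, not the DRAM cells themselves. A minimal sketch of the idea; the CRC-8 polynomial used here is my assumption for illustration, not a claim about the exact GDDR5 spec:

```python
# Conceptual sketch of GDDR5-style EDC: the receiver recomputes a CRC over
# each data burst and requests a retry on mismatch, so bandwidth is only
# lost when an error actually occurs.

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8, MSB first (polynomial chosen for illustration)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

burst = bytes(range(32))                          # one 32-byte data burst
checksum = crc8(burst)                            # sent alongside the burst

corrupted = bytes([burst[0] ^ 0x01]) + burst[1:]  # flip one bit in transit
assert crc8(burst) == checksum                    # clean transfer passes
assert crc8(corrupted) != checksum                # error caught -> retry
```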
 
No facepalm, pal. The 5000 series is not much different from the 4000 series except for the addition of DX11 support and extra shaders.

Fermi is more than just extra shaders; it has a more advanced architecture built around DX11 support.

Yeah, I'd have to agree that Fermi brings more to the table in terms of technology change than the 5870. That said, that's not necessarily a good thing, because it might indicate that GT200 had a lot to fix. But from a design standpoint it is more exciting than the 5870. I'm gradually changing my tune on Fermi, although I still think both companies need to drop their GPGPU bits into a separate product. I'm not particularly convinced that GPU physics is going to take off, but it is something that needs to be implemented for competitive reasons. Bits like DP, though, unless they're very cheap (and I've read that on the 5870 at least it is), should be dropped from the consumer line. I don't want to pay for it, and would rather have that space taken up by more SP processing power.
 
kneel before fermi


:D

http://en.wikipedia.org/wiki/Fermi_paradox

The Fermi Paradox shall boggle your mind, kneel before it!?

The Fermi paradox is the apparent contradiction between high estimates of the probability of the existence of advanced graphics card technology and the lack of evidence for, or contact with, such things in our universe. The age of the universe and its vast number of stars suggest that if the Earth is typical, advanced graphics card technology should be common throughout space.

Saw that link posted elsewhere and thought it was pretty funny :D.
 
Oops... maybe I shouldn't have started a new thread.

From what I gather from all my reading, Fermi is simply gonna "look" better than AMD, period. No matter how much RAM they throw at the current design, AMD is not gonna be able to process tessellation or AA as fast as Nvidia. Period, no question.

Now turn AA and tessellation off, and they'll probably both run exactly the same numbers.

Oh, by the way: Fermi is capable of 32x(!!!) AA, with only a 10% dip in performance from 8x AA. How's that grab ya? There's also something built in that fixes diagonals like chain-link fences, etc., but I can't remember the term right off.

One more thing: there is now out-of-order injection, and each PolyMorph engine has a dedicated communication line so that things don't start happening out of order if you use that OoO. That's why you get the amazing double-precision numbers. Don't forget to mention the addition of ECC...

I'm not a fanboy, but these are facts, my friend. Personally, I hope AMD comes out with something that blows it away and raises the bar again; that's what the game is all about. Fermi raises the bar.
 
Don't forget to mention the addition of ECC...

You're thinking of the Tesla version of Fermi. If you read the GF100 whitepaper, it specifically states that they disabled ECC in the GeForce version for performance reasons (ECC reduces memory bandwidth by 5-20% depending on the circumstances). And no offense, but I don't think you fully understand how tessellation is implemented. You're saying that you won't get better framerate performance from GF100, just better quality; in reality it is the reverse. Nothing stops the 5870 from doing the same tessellation GF100 can do, except performance. You are suggesting that tessellation will be disabled in game engines for ATI cards; it won't, it will just have less of a performance hit on GF100.
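To make the ECC bandwidth cost concrete: ECC stores extra check bits with every word (the classic SECDED layout adds 8 check bits per 64 data bits, a 12.5% overhead), and since GDDR5 has no spare lines for them, a Fermi-style controller has to carve them out of normal data traffic. Below is a toy single-error-correct / double-error-detect code over just 4 data bits, purely to show the mechanism; it is not Nvidia's implementation:

```python
# Toy extended-Hamming SECDED code: Hamming(7,4) plus an overall parity bit.
# Corrects any single-bit error, detects (but cannot fix) double-bit errors.
from functools import reduce
from operator import xor

def encode(d):
    """d: four data bits -> 8-bit codeword (Hamming positions 1..7 + parity)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    cw = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    cw.append(reduce(xor, cw))      # overall parity for double-error detection
    return cw

def decode(cw):
    """Returns (data_bits, status), fixing a single flipped bit if present."""
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]   # parity over positions 1,3,5,7
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]   # parity over positions 2,3,6,7
    s3 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3      # 0 = clean, else 1-based error position
    if syndrome and reduce(xor, cw) == 0:
        return None, "double-bit error detected"
    if syndrome:
        cw[syndrome - 1] ^= 1            # flip the bad bit back
    return [cw[2], cw[4], cw[5], cw[6]], "ok"

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                               # simulate a single bit flip in storage
assert decode(cw) == (word, "ok")        # corrected transparently
```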
 
I'll have to go back and read, but I'm pretty sure ECC is built into the architecture... I could be wrong, I'll go back and check.

Your statement that there won't be as big a performance hit with Fermi is exactly my point: Nvidia's hardware is gonna be able to do more of it, and faster, at the same fps, in the end giving you a better picture.

Not saying tessellation will be "cut off" for ATI, but I am saying there will be different levels of tessellation maps included with games, and GF100 will be on the higher end of that. I'm betting there will be a setting similar to AA. Now they just need to make it so you don't get the same level of tessellation at a distance as you do up close, but that's on the developers. Fermi seems to be built around geometry; the PolyMorph engine is the shiznit.

Here, have a read, and you'll see what I'm talking about:

http://www.anandtech.com/video/showdoc.aspx?i=3721&p=3

Edit: you're correct on ECC... but I'd bet it will be enabled somehow for those who want to use their non-Tesla card for Folding@home and other GPU apps.
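On the "less tessellation at a distance" wish in the post above: DX11 does allow exactly that, because the hull shader computes tessellation factors per patch every frame. Here's a minimal sketch of such distance-based scaling; the function name, constants, and linear falloff are made up for illustration, not taken from any engine:

```python
# Sketch of distance-scaled tessellation: patches near the camera get
# subdivided heavily, distant ones barely at all. In DX11 this logic would
# live in the hull shader, which outputs per-patch tessellation factors
# (capped at 64 by the API).

def tess_factor(patch_distance, near=5.0, far=100.0, max_factor=64.0):
    """Fade the tessellation factor linearly from max at `near` to 1 at `far`."""
    if patch_distance <= near:
        return max_factor
    if patch_distance >= far:
        return 1.0
    t = (patch_distance - near) / (far - near)   # 0 at near, 1 at far
    return max_factor + t * (1.0 - max_factor)   # lerp down toward 1

for d in (2, 10, 50, 200):
    print(f"distance {d:>3}m -> tessellation factor {tess_factor(d):.1f}")
```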
 
So even though Nvidia says ECC will be disabled, you think they are lying and will enable it?...
 
I'm still looking for the answer to this, but I don't see the point in "disabling" something that is built into the chip. I can see not accessing it while in a game because it's not needed, but as far as changing the chip, I don't think that would happen. So if it's "built in", then it's simply a matter of utilizing what's available to you via CUDA code...

Give me a minute to find out some more... I'm still reading.
 
Yeah, I'm not finding anything definitive, but Nvidia's own site says Fermi has "full ECC support", so unless we're talking about a change at the chip level...

I'm assuming it's gonna be a matter of not including ECC tools in the game developers' bag of tricks while including them for GPGPU app developers... but hell if I know.
 
Good question... I don't really know. I've never really read into ECC that much because it isn't used in gaming. Maybe someone else knows more about it.
 
What I find hilarious is that before all this new info, ATI fanboys were trolling for DX11 tessellation and how absolutely vital it is. Now suddenly tessellation doesn't matter very much. :rolleyes:

I think this older post I made was amazingly prescient:

ATI's "DirectX lead" amounts to getting to experience flag tessellation in DiRT 2 a few months early.

I'm not too sure Nvidia's advantage is going to count for much either, though. By the time there are enough games for me to care about it, we will have moved on from Fermi.
 
I don't think so. A lot of games coming out are DX11, and in order to be DX11 they HAVE to have tessellation. I'm betting you'll see at least 5 DX11 titles before the end of Q2, with more to come.

I'm just hoping they ship multiple quality levels of displacement maps with the games so you can have a better picture if your video card can run it. They could even name them "ATI" and "Nvidia" instead of low and high... just makes it sound better. Haha.
 

He is prolly questioning your belief that tessellation will be a requirement for DX11 games. Tessellation is a requirement for the cards, but prolly not for the games themselves.

What we actually know is pretty much nothing. Just guesses, speculation, and Nvidia's claims.

History has taught me that when Nvidia or ATI make claims like "XX% faster", they actually mean "XX% faster in a best-case situation you will never see in real life."
 