"Fermi" GF100 performance numbers are out

He does add to the mudslinging back and forth on the net.
I think he's a flake, but he does keep his cheerleaders happy.
 
The man has a history of being wrong about anything Nvidia, and now because he called 2, maybe 3 things right he is to be given massive credit? That'd be akin to giving Fudo credit even though he is wrong most of the time, just because he gets it right a few times.

Again, he's not really "wrong" most of the time. He usually says, "we heard this and that," and while he isn't particularly careful with what information he releases, that doesn't make him incorrect. He isn't claiming that he knows this for a fact, just that this is what a relatively trustworthy source is saying. We've yet to see him be wrong about Fermi, at least with the info he's been talking about the last two months. Although I suspect he might be wrong about tessellation. That's not obvious, though. Fermi could easily have an ALU-based tessellation pipeline, which is what he said it has ("software tessellation," as he calls it). And he's been right about enough things to make his posts worth reading, especially the AMD release stuff and the Fermi delays.

That being said, no one is talking about giving him "massive credit". Just about not being a dbag towards him because you read someone else being a dbag towards him. :)
 
QFT.

Console ports are getting lamer and more ubiquitous by the day. So these big circle-jerks by ATI and Nvidia about the "potential" of their latest GPUs are nauseating.

I'm not plopping down major $$$ for a GPU upgrade just to run some arbitrary benchmarks trying to impress people. We need games designed to use this stuff from the ground up.

This. It's been alright for the past few years since we could just keep cranking the resolution up, but now it's gotten to the point where a single card at 1080p can push 99% of games at 100+ fps and Crysis in the 50s and 60s.

Come the next gen of video cards, I believe NV and ATI will be in for a big surprise when the sales are less than stellar. Though I don't know how much of their business comes from consumer cards, so they might be alright.
 
This. It's been alright for the past few years since we could just keep cranking the resolution up, but now it's gotten to the point where a single card at 1080p can push 99% of games at 100+ fps and Crysis in the 50s and 60s.

Come the next gen of video cards, I believe NV and ATI will be in for a big surprise when the sales are less than stellar. Though I don't know how much of their business comes from consumer cards, so they might be alright.

Yeah, it's pretty ridiculous, but starting around winter 2011-2012 we'll probably have games that push hardware back to ~2004 performance levels, when games like HL2, Doom 3, and Far Cry gave hardware lovers something to cry over. We'll just be seeing graphics fidelity tied much more closely to console life cycles. Hopefully that changes at some point.

And I don't think nVidia is betting on huge consumer GPU sales. They've clearly prepared themselves for the consumer GPU market becoming less significant. And why wait for the "next gen" of graphics cards? That's the thing with people making these sweeping predictions: it's always the "next generation" they talk about. This has been happening for about 10 years, and it accelerated in 2006. Nothing new about it. Hardware sales have probably been driven by the 1-2 million people who've owned Crysis for 3 years now. Again, nothing new.
 
I'm not sure I get the Charlie hate. He's been pretty accurate these last several months. I've seen posts of his from May '09 talking about Fermi blatantly missing the holiday cycle. He's probably wrong about Fermi not having a dedicated tessellator, but considering that he's given us something to talk about these last 6 months, and that he's been on the money about a few things, that's good enough for me. Or do you have some personal grudge? Did Charlie steal your girlfriend?

He's wrong 95% of the time, 4% of the time he steals info then spins it into outright rumors while pretending he came up with it himself, and 1% of the time he has something real but it's full of bias and brutally poorly written, making it annoying to read. Even a broken watch is right twice a day ;). He's been around for many years and has always had the same patterns ;).
 
Nvidia invests in gaming and will continue to do so. They know it's what sells the bulk of their graphics cards.
 
He's wrong 95% of the time, 4% of the time he steals info then spins it into outright rumors while pretending he came up with it himself, and 1% of the time he has something real but it's full of bias and brutally poorly written, making it annoying to read. Even a broken watch is right twice a day ;). He's been around for many years and has always had the same patterns ;).

I see this all the time. Show me this 5% accuracy you're talking about. Because I've read most of his posts, and besides an obvious anti-nvidia slant, I don't see a lot of misinformation. I do see a lot of sensationalized stuff, but little that is outright wrong. And of course he "steals" all of it. The information isn't originating with him, nor does he claim it is. But I follow B3D threads very closely, and his information is usually a few steps ahead of what is posted on those boards. And yes, of course some of it is off. But it isn't coming from nowhere. Sources aren't always going to be correct, especially with highly secretive projects like this. His stuff is light years ahead of old Inquirer finger-paint journalism, and much better than Fudo's stuff, imo. But both have gotten better since they started on their own.

I'd like you to find me 5 of his posts from September to today, and show me that all of that information is incorrect. Because that's what you're claiming.
 
Nvidia invests in gaming and will continue to do so. They know it's what sells the bulk of their graphics cards.

I very, very much doubt that their high-end consumer GPU market constitutes the majority of their GPU sales. I don't think you understand how small that market is. Or is it an accident that most of their GPU design focuses on increasing GPGPU performance, and that all of their initial announcements have to do with GPGPU? Must be a mistake... :D
 
Oh, it's only the high end that is capable of gaming now?

Well, you're writing this in a thread on their new high-end GPU, so that's what we're going to assume. Fermi is not going to be a low-end part, heck, not even a $150 part this generation, at least not at MSRP. It's an enormous chip, and clearly over-designed for the consumer space. So yes, when you write that
Nvidia invests in gaming and will continue to do so. They know it's what sells the bulk of their graphics cards.
it's completely wrong. Even the 448SP product will not be the majority of their GPU sales. They're probably hoping that Fermi will open massive new markets. They'll keep their market share in this one, and that will be good enough. I don't think GTX 260 and 280 constituted even a major fraction of their consumer GPU sales last generation.

The bulk of their sales are the rebranded (and slightly redesigned) G80-era products, and they'll continue to be. The high-margin stuff is going to workstations. Halo products like the 512SP Fermi are going to give them massive press, not sales. And PhysX is obviously that other chip. But again, if they wanted to make a consumer GPU, it wouldn't be 3.2B transistors and 1.5 TFLOPS. This is probably going to be a low-margin product for the consumer space, and a great product for every other market.
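For what it's worth, that ~1.5 TFLOPS figure falls out of simple peak-rate arithmetic. A minimal sketch, assuming 512 CUDA cores each doing one FMA per shader clock; the ~1.45 GHz hot clock is an assumption, since final clocks haven't been announced:

```python
# Sanity check of the ~1.5 TFLOPS peak figure quoted above.
# Assumptions: 512 CUDA cores, one FMA (2 FLOPs) per core per clock,
# and a ~1.45 GHz shader ("hot") clock, which is a guess, not a confirmed spec.
cores = 512
flops_per_core_per_clock = 2
shader_clock_ghz = 1.45

peak_tflops = cores * flops_per_core_per_clock * shader_clock_ghz / 1000
print(f"Peak single-precision: ~{peak_tflops:.2f} TFLOPS")  # ~1.48 TFLOPS
```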
 
I see this all the time. Show me this 5% accuracy you're talking about. Because I've read most of his posts, and besides an obvious anti-nvidia slant, I don't see a lot of misinformation. I do see a lot of sensationalized stuff, but little that is outright wrong. And of course he "steals" all of it. The information isn't originating with him, nor does he claim it is. But I follow B3D threads very closely, and his information is usually a few steps ahead of what is posted on those boards. And yes, of course some of it is off. But it isn't coming from nowhere. Sources aren't always going to be correct, especially with highly secretive projects like this.

I'd like you to find me 5 of his posts from September to today, and show me that all of that information is incorrect. Because that's what you're claiming.

I think in general you have him pegged. But he's SO biased that it's hard to take him seriously sometimes, even when he may have a point.

That said, I think most of this shit is garbage to the average person, simply meant to generate web traffic. My last main rig with an AMD GPU was an old Athlon 64 3400+ with an AGP 9800 Pro. Sweet card, better than nVidia's FX 5000 series at the time; this was the end of 2003.

But since then, from summer of 2004 with the GeForce 6800 (that card's still in service in an in-law's machine today), I've been all nVidia. Not so much a fanboy thing, just the timing of need and/or willingness to buy.

Right now 3 GTX 280s are STILL formidable. With multi-monitor gaming potentially on the way, who's to say these puppies don't still have a LITTLE life left in them; if not for the lack of DX11, potentially a LOT of life left in them.

If Fermi has good multi-monitor gaming support and a nice 20%+ lead over the 5870, and is around the $500 to $530 range, count me in. The higher the performance lead over the 5870 above 20%, the more I'm willing to pay.

Still just guessing; I doubt we'll have all the answers even when the info is released today.
 
You wrongly assumed I was only referring to their top end.

They have not reduced their investment in TWIMTBP. They know the halo effect still sells cards with their name on it.
 
Looks like that "GF100 = 5970 on a single core" turned out to be complete and utter BS.

Nothing extremely impressive about the card. Maybe after 28nm
 
This was all kind of underwhelming. I want to see some real performance numbers.
 
Zzzzzzzzzzzzzzz.....

Please wake me up when Nvidia makes a really good Voxel accelerator card....
 
Looks like that "GF100 = 5970 on a single core" turned out to be complete and utter BS.

Nothing extremely impressive about the card. Maybe after 28nm

+1. The hype will be gone soon and reality will settle in. Looks like Fermi will be faster than a 5870 and a tad slower than a 5970. Price will probably be near $600.

Bring on the 28nm battle.
 
GTX 280 < HD 5850 < GTX 360 < HD 5870 < GTX 380 < HD 5970

Kinda like

9800 GTX < HD 4850 < GTX 260 < HD 4870 < GTX 280 < HD 4870 X2
 
The big issue is what developer is going to support this new hardware, or will we simply be able to play old games like Crysis and Far Cry faster?

NV and ATI better start sinking $$ into game studios to push PC gaming, or this is all for nothing...

QFT !!!
 
So all in all, the "GF100" or "3 Series" is going to be the same old story as usual: their mid range is better than ATi's high range, and their high range is just under ATi's uber range. They just took their sweet time.
Looks like I'm going to wait for the GTX 360.
 
GTX 280 < HD 5850 < GTX 360 < HD 5870 < GTX 380 < HD 5970

Kinda like

9800 GTX < HD 4850 < GTX 260 < HD 4870 < GTX 280 < HD 4870 X2

Not quite. Someone needs to read the leaked info and the articles about GF100's graphics bits.

Given what we know now, it's more like:

HD 5850 < HD 5870 < GTX 360 < GTX 380 <= HD 5970

Geometry performance in Fermi-based GeForces is off the chart, which is funny since tessellation was supposed to be AMD's strong point :D
 
Not quite. Someone needs to read the leaked info and the articles about GF100's graphics bits.

Given what we know now, it's more like:

HD 5850 < HD 5870 < GTX 360 < GTX 380 <= HD 5970

Geometry performance in Fermi-based GeForces is off the chart, which is funny since tessellation was supposed to be AMD's strong point :D
You know the GTX 360 specs? I didn't see any real in-game benchmarks. How about showing us their awesome tessellation power in real games like DiRT and STALKER?
 
Impressive demos indeed, except that's all they are, demos. The raytracing is nowhere near the point where it can be used in a game, and the hair looked great, but it alone dropped the performance to 26 fps. What happened to the days when nVidia would show off tech demos for new GPUs that actually ran smoothly?

When I see actual DX11 games on PC that run great on one of these, then I'll be intrigued.
 
You know the GTX 360 specs? I didn't see any real in-game benchmarks. How about showing us their awesome tessellation power in real games like DiRT and STALKER?

448 SPs, a 320-bit memory interface, and 1280 MB of GDDR5.

And you can see the tessellation power in the hair demo.
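For what it's worth, the 1280 MB figure lines up naturally with a 320-bit bus. A minimal sketch, assuming ten 32-bit GDDR5 channels with one 1 Gbit (128 MB) chip per channel (the chip density is an assumption):

```python
# Why a 320-bit memory interface pairs naturally with 1280 MB.
# Assumption: one 1 Gbit (128 MB) GDDR5 chip per 32-bit channel.
bus_width_bits = 320
bits_per_channel = 32
chip_size_mb = 128

channels = bus_width_bits // bits_per_channel   # 10 channels
total_memory_mb = channels * chip_size_mb       # 1280 MB
print(channels, "channels ->", total_memory_mb, "MB")
```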
 
Impressive demos indeed, except that's all they are, demos. The raytracing is nowhere near the point where it can be used in a game, and the hair looked great, but it alone dropped the performance to 26 fps. What happened to the days when nVidia would show off tech demos for new GPUs that actually ran smoothly?

When I see actual DX11 games on PC that run great on one of these, then I'll be intrigued.

Demos have always been like that... they're usually showing off new GPU features. Every GPU I've owned has barely been able to run the demos it shipped with (with maybe the exception of the G80).

I do agree with you about the usable performance of raytracing and the hair, but you have to remember that almost all current tech went through the same hiccup. When AA and AF were first added to GPUs, it wasn't practical to even try to use them because of the major performance hit... now, with GPUs getting more powerful, they're common features. We're seeing the same thing with ambient occlusion now... you need one hell of a rig to make use of it without suffering major performance drops. Same with Eyefinity... just because you can run 3 monitors on a single card doesn't mean you should :)

As the application gets refined and the hardware becomes more powerful, they will eventually become standard features :)
 
Impressive demos indeed, except that's all they are, demos. The raytracing is nowhere near the point where it can be used in a game, and the hair looked great, but it alone dropped the performance to 26 fps. What happened to the days when nVidia would show off tech demos for new GPUs that actually ran smoothly?

When I see actual DX11 games on PC that run great on one of these, then I'll be intrigued.

I don't think you realize how taxing it is to simulate physics in hair...Nothing could do that, with that amount of hair, in real time before.

Plus, the hair demo is using tessellation.
 
I don't think you realize how taxing it is to simulate physics in hair...Nothing could do that, with that amount of hair, in real time before.

Plus, the hair demo is using tessellation.

I agree it's really amazing to see how far we've come. But how long until we see something like that in a game? I would estimate 6-8 years (if ever).
 
You wrongly assumed I was only referring to their top end.

They have not reduced their investment in TWIMTBP. They know the halo effect still sells cards with their name on it.

Ok Kowan, I'll break it down. Fermi WILL NEVER be a huge seller this generation in the consumer space. It's 3.2B transistors and 23-24mm on a side on what, 40nm? There might be a version coming this fall that will be 1/4 to 1/8 of Fermi and will be a large-volume part, but that's not what we're talking about. Nvidia has not made a great consumer GPU with Fermi. They made what is (hopefully) the fastest GPU on the market, at enormous size and cost. Their major investment this gen was in large coherent caches helping out raytracing, fast context switching, and 8x faster DP. You have a lot there that won't give great performance once you start dumping boatloads of SMs and those geometry units. And from what we know about Fermi, it would make no sense for it to be primarily targeting consumer graphics.

Btw, according to Dave Baumann, the geometry bottleneck accounts for a single-digit % performance increase. I'm assuming he's talking about current DX11 titles as well. So all in all the tech looks fascinating, but Fermi doesn't strike me as a huge seller. I'm going to go out on a limb and say that the majority of their GPU sales will be GT2xx and G9x.
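To put the size-and-cost point in rough numbers, here's a back-of-the-envelope dies-per-wafer sketch. The ~23 mm square die and 300 mm wafer are assumptions based on the figures above, it uses the standard gross-die approximation, and yield is ignored entirely (so the real economics are worse):

```python
import math

# Back-of-the-envelope gross dies per wafer for a large die like GF100.
# Assumptions: ~23 mm x 23 mm die (~529 mm^2), 300 mm wafer, yield ignored.
die_area_mm2 = 23.0 * 23.0
wafer_diameter_mm = 300.0

# Standard gross-die approximation: wafer area / die area minus an edge-loss term.
gross_dies = (math.pi * (wafer_diameter_mm / 2) ** 2) / die_area_mm2 \
             - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)
print(f"Gross die candidates per 300 mm wafer: ~{gross_dies:.0f}")  # ~105
```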
 
Fermi is designed for Tesla (breaking into new markets) and folding, and to provide features for future games.
As of today, most people already have enough power in their machines.

Interesting card, let us see one in da shop now ;)
 
I agree that Fermi is going to be a folding monster, but that is about it. It has zero to offer to the under-$300 crowd. Also, far worse, they are cramming transistors onto a huge die in order to beat the 5870. The result is going to be a card that is built for 28nm sitting on 40nm, AKA HOT.

ATI is about to let loose the 5830 at $230. That is insanely high, but they are sending Nvidia a message that they simply don't care and can drop the price at the slightest sign of competition.

Nvidia needs 28nm more than ever, as that's the ONLY way they can get Fermi into smaller versions without killing performance.
 
Any idea about a Fermi GPU which will be priced close to $180? GTS 350?

Not with the way Fermi is currently built. It's made to suck power and money; if they make even a $200 version without going to 28nm first, it will be utterly dominated by ATI.
 
I agree that Fermi is going to be a folding monster, but that is about it. It has zero to offer to the under-$300 crowd. Also, far worse, they are cramming transistors onto a huge die in order to beat the 5870. The result is going to be a card that is built for 28nm sitting on 40nm, AKA HOT.

ATI is about to let loose the 5830 at $230. That is insanely high, but they are sending Nvidia a message that they simply don't care and can drop the price at the slightest sign of competition.

Nvidia needs 28nm more than ever, as that's the ONLY way they can get Fermi into smaller versions without killing performance.

I disagree. You assert that the Fermi architecture has nothing to offer the sub-$300 market. It's true that the 4-GPC high-end part (GF100) will be expensive, but the Fermi architecture looks remarkably well suited to size reductions. I don't think it will take much engineering effort at all to produce a less expensive 2-GPC part. It won't beat AMD at performance/watt, but it will exist and will be at least price-competitive with AMD.
 
Not so simple. What NVIDIA has done is separate the Raster Engine from the pipeline and move it down into the GPCs in four parts, and they have created a new engine they are calling the "PolyMorph Engine," which is integrated into the SMs. First, a little breakdown of the hierarchy: GF100 is made up of 4 GPCs (Graphics Processing Clusters), each of which breaks down into 4 SMs (Streaming Multiprocessors), which in turn break down into 32 CUDA cores, 4 texture units, and some other stuff. So 32 CUDA cores plus 4 texture units plus the PolyMorph Engine make up an SM, and 4 SMs make up a GPC. With this kind of parallelism you can see how the GPU can be sliced and diced to create less expensive parts.

HardOCP did mention that GF100 should be highly scalable thanks to the GPC stuff. If GF100 is as powerful as they say, a 2-GPC version could be the basis of a GTS 350. That's what I would extrapolate, but yeah, it's pure speculation.
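To make the slicing concrete, here's a minimal sketch of the hierarchy described above (4 GPCs x 4 SMs x 32 CUDA cores for the full GF100). The 2-GPC configuration is purely hypothetical, just to illustrate how a "GTS 350"-class part could fall out of the same design:

```python
# Sketch of the GF100 hierarchy described above; the 2-GPC part is hypothetical.
def fermi_config(gpcs, sms_per_gpc=4, cores_per_sm=32, tex_units_per_sm=4):
    sms = gpcs * sms_per_gpc
    return {
        "GPCs": gpcs,
        "Raster engines": gpcs,            # one raster engine per GPC
        "SMs": sms,                        # each SM carries its own PolyMorph Engine
        "CUDA cores": sms * cores_per_sm,
        "Texture units": sms * tex_units_per_sm,
    }

print(fermi_config(4))  # full GF100: 16 SMs, 512 cores, 64 texture units
print(fermi_config(2))  # hypothetical 2-GPC cut-down: 8 SMs, 256 cores
```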
 
There is ZERO chance a 40nm Fermi part will be competitive with AMD's lower end. Right now the 5670 and the 5770 are selling like hotcakes. And they are way overpriced. Any competition from Nvidia = instant price drops.
 