some fermi performance numbers

argor · n00b · Joined: Dec 20, 2009 · Messages: 35
[attached benchmark screenshot: d276985e.jpg]



The card in the Chinese post is most likely a GTX 470, but I'm not sure.

Another rumor from Chinese forum:
http://we.pcinlife.com/thread-1366899-1-1.html

The gap between the 5870 and the 470 suggests that Charlie's 5% claim might actually be true.
 
Well, only two games, but still disappointing if these are accurate. I suppose some driver updates might put it nearly on par with the 5870, but the 480 had better be a lot faster, or the 5870 refresh will destroy Nvidia this round.

Is the 5870 running out of video memory in Crysis at 1920x1200 with 8x MSAA?
 
Guess it is worth noting the thing did pull ahead in Crysis with 8x AA, but Crysis has usually been an Nvidia-tuned game... so nothing too surprising. We'll see, though, once the drivers get better tweaked.
 
Numbers could make sense if that is a GTX 470. Typically, mid-range and high-enthusiast cards are separated by around 10-20% in performance. A 20% increase over the 470 numbers seen here (if real) would put the 480 at about the 5870's level. Of course, which game is being played makes a large difference as well.

My money is on Fermi and Cypress trading blows.
 
From the rumored specs, the 470 has what, 10% fewer shaders than the 480? If so, it would be hard to get a 20% increase out of slightly fewer shaders and less memory bandwidth.
 
I agree with you there; 20% was only a guess. Based on what you've pointed out, shader-intensive apps like those tested above may scale a bit beyond the 10%, but not much more. My estimate was based loosely on the 5850/5870, where the 5850 has 10% fewer shaders than the 5870 and loses about 15% in performance.
 
The performance delta is also highly dependent on the clock speed difference. The 5870's clock alone is over 17% faster. I'd bet the same comparison will hold for the 470 and 480, where the clock speed has a greater effect than the number of shaders.

Then what is the point of waiting for Fermi, other than folding?

Not much, if these numbers are close to accurate. Ouch, just ouch. The 5890 refresh should be able to take on or overcome the 480 if AMD really wants to push it.
 
I agree with you there, 20% was only a guess. Based on what you've pointed out shader intensive apps like those tested above may scale a bit beyond the 10% but not much more. My estimation was based loosely on the 5850/5870 where the 5850 has 10% less shaders than 5870 and suffers about 15% in performance.

Yes, that's true, but the 5870 is also clocked higher (725->850 core and 1000->1200 memory), so that 15% gap is smaller than the hardware differences would suggest, imo =)
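A rough sanity check on that 5850/5870 comparison, sketched in Python (assuming the commonly reported specs: 1440 shaders at 725 MHz for the 5850, 1600 at 850 MHz for the 5870, so treat the numbers as approximate):

```python
# Theoretical shader throughput scales with (shader count x core clock).
cards = {
    "HD 5850": {"shaders": 1440, "core_mhz": 725},
    "HD 5870": {"shaders": 1600, "core_mhz": 850},
}

def shader_throughput(card):
    """Relative ALU throughput: shader count times core clock."""
    return card["shaders"] * card["core_mhz"]

ratio = shader_throughput(cards["HD 5870"]) / shader_throughput(cards["HD 5850"])
print(f"Theoretical 5870 advantage: {ratio - 1:.0%}")  # prints "Theoretical 5870 advantage: 30%"
```

On paper the 5870 is ~30% ahead, yet games show closer to 15%, which is exactly the point being made: shader math and clocks alone don't dictate frame rates.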
 
Are you Mindfury @ B3D?
Same exact post over there.
No, I am not him.
Not hardly; Charlie's 5% numbers were for the GTX 480, not the 470.
Heh, you were right: http://www.semiaccurate.com/2010/02/20/semiaccurate-gets-some-gtx480-scores/

The GTX 480 with 512 shaders running at full speed, 600 MHz or 625 MHz depending on which source, ran on average 5 percent faster than a Cypress HD 5870, plus or minus a little bit. The sources were not allowed to test the GTX 470, which is likely an admission that it will be slower than the Cypress HD 5870.
 
Ah, simple answer to that: old drivers! New drivers boost Crysis Warhead performance by 30% :D :D :D
That post was put up on the 3rd of March, and the new drivers had been available for a few days before that. It's possible--but not likely--that they were using old drivers if the post itself is factual.
 
How can Fermi, running a 384-bit bus vs. ATI's 256-bit, be slower?

The 384-bit 8800 GTX is slower than the 128-bit 5770. The 256-bit 8800 GTS 512MB is faster than the 320-bit 8800 GTS 640MB.

One spec does not a complete picture paint.
 
But both fermi and cypress are running ddr5. While the gtx runs ddr3 and the 5770 runs ddr5.
 
Well, GDDR3 and GDDR5 (GDDR is not the same as DDR), but you completely missed the point. Regardless, bus width is only half of the equation for memory bandwidth. Given two cards, both with GDDR5, one with a 128-bit bus and the other with a 256-bit bus, it is completely impossible for you to tell me which will be faster without knowing the memory clocks as well. And even if you do know the memory clocks, all you know then is which card has more memory bandwidth, which tells you shit-all about its game performance.

Also, the 4850 (256-bit GDDR3) > 8800 GTS 640MB (320-bit GDDR3). Heck, the 4850 > 2900 XT (512-bit GDDR3), and the 8800 GTX (384-bit GDDR3) > 2900 XT (512-bit GDDR3).
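To make the "bus width is only half the equation" point concrete, here's a quick sketch: peak bandwidth is bus width times effective data rate, so a narrow bus with fast memory can beat a wide bus with slow memory (the effective rates below are the commonly cited ones for these cards, so treat them as approximate):

```python
def bandwidth_gbs(bus_bits, effective_mtps):
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (effective MT/s) / 1000."""
    return bus_bits / 8 * effective_mtps / 1000

# 8800 GTX: 384-bit GDDR3 at ~1800 MT/s effective -> ~86.4 GB/s
# HD 5770:  128-bit GDDR5 at ~4800 MT/s effective -> ~76.8 GB/s
print(bandwidth_gbs(384, 1800), bandwidth_gbs(128, 4800))
```

The 8800 GTX actually has more raw bandwidth than the 5770, yet the 5770 is the faster card in games, which is exactly why neither bus width nor bandwidth alone predicts performance.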
 
But both fermi and cypress are running ddr5. While the gtx runs ddr3 and the 5770 runs ddr5.
So? The number of shader cores will not tell you how efficient they are at a specific task, the number of transistors will not tell you how many of them are useless for gaming purposes, etc., etc.
It is also relevant to point out that this is the first time Nvidia has used GDDR5 on its high-end chips, while AMD developed the spec for it and introduced it to the market more than a year ago.

Edit: GDDR5 was introduced in June '08, so AMD has at least two years (including development time) of experience in designing, building, and optimizing their chips for that memory interface.
 
Last edited:
But both fermi and cypress are running ddr5. While the gtx runs ddr3 and the 5770 runs ddr5.

The purpose of memory bandwidth is to keep shader cores and texture units fed, and provide a destination for pixels written from the ROPs.

Think of the memory bus and speed like a tire: wider tires rated for faster speeds mean you can really improve your grip and performance. But the best tires in the world mean nothing when your car's engine has the power of a lawnmower.

If you don't have enough memory bandwidth, it will hold back the performance of the execution units on the card. On the other hand, if you have "enough" memory bandwidth, then anything more is just overkill.

The HD 5870 has just enough memory bandwidth to keep its execution units fed. The GTX 480 has more bandwidth than the 5870 because Nvidia was aiming for a performance level about 25-50% faster than the 5870. Unfortunately, Nvidia was unable to push their execution units to the predicted levels of performance, so the extra bandwidth is largely unnecessary.

The card has the extra bandwidth to drive higher performance, but the execution engine has failed to live up to expectations.
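That tire-vs-engine idea is basically a roofline-style model: delivered throughput is the minimum of what the execution units can do and what the memory bus can feed them. A toy sketch (all numbers made up purely for illustration):

```python
def delivered_throughput(compute_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline-style model: performance is capped by whichever runs out
    first, raw compute or the bandwidth needed to feed it."""
    bandwidth_limited = bandwidth_gbs * flops_per_byte
    return min(compute_gflops, bandwidth_limited)

# Hypothetical card with 1000 GFLOPS of compute at 10 FLOPs per byte fetched:
print(delivered_throughput(1000, 150, 10))  # 1000 -> compute-bound; extra bandwidth is wasted
print(delivered_throughput(1000, 80, 10))   # 800  -> bandwidth-bound; the bus holds it back
```

This is why "enough" bandwidth is all that matters: past the crossover point, adding more bandwidth moves nothing, while a faster execution engine would.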
 
Last edited:
Wow, if these numbers are accurate, then who, or rather why, would anyone buy a 470/480? I mean, unless they give them away at fire-sale prices, which they can't afford to do, so umm, yeah. They might be fucked. I hope this isn't true, since I wanted two 480s, but not at 5870 speeds or slightly above, and not this late in the game. /sigh
 
The purpose of memory bandwidth is to keep shader cores and texture units fed, and provide a destination for pixels written from the ROPs.

Think of the memory bus and speed like a tire: wider tires rated for faster speeds mean you can really improve your grip and performance. But the best tires in the world mean nothing when your car's engine has the power of a lawnmower.

If you don't have enough memory bandwidth, it will hold back the performance of the execution units on the card. On the other hand, if you have "enough" memory bandwidth, then anything more is just overkill.

The HD 5870 has just enough memory bandwidth to keep its execution units fed. The GTX 480 has more bandwidth than the 5870 because Nvidia was aiming for a performance level about 25-50% faster than the 5870. Unfortunately, Nvidia was unable to push their execution units to the predicted levels of performance, so the extra bandwidth is largely unnecessary.

The card has the extra bandwidth to drive higher performance, but the execution engine has failed to live up to expectations.

Since you have the actual Fermi on you right now and are doing benchmarks, can you send it to me after you finish?
 
Wow, if these numbers are accurate, then who, or rather why, would anyone buy a 470/480? I mean, unless they give them away at fire-sale prices, which they can't afford to do, so umm, yeah. They might be fucked. I hope this isn't true, since I wanted two 480s, but not at 5870 speeds or slightly above, and not this late in the game. /sigh

Nobody has an actual Fermi, and even if you did, what could you do with it without drivers?

All the stuff you're hearing right now is nothing but BS.
 
Since you have the actual Fermi on you right now and are doing benchmarks, can you send it to me after you finish?

I believe the rumors, since they seem to be pretty consistent. It's your business if you choose not to believe them. I'm not here to argue anything.

But my point stands regardless of what the true performance of Fermi is: you cannot judge the performance of a video card purely by memory bandwidth. A lack of bandwidth can hold a card back, but in the end it's the execution pipeline that gets things done.
 
Guess it is worth noting the thing did pull ahead in Crysis with 8x AA, but Crysis has usually been an Nvidia-tuned game... so nothing too surprising. We'll see, though, once the drivers get better tweaked.

Uh? Both show a clear win for the 5870...

And with 8x AA, the 5870 is running out of memory; it did even at 1680x1050 when I tested it...
That is why running the beach level in Warhead on a 5970 is a horrible idea: it stutters like crazy because of memory...
 
From the rumored specs, the 470 has what, 10% fewer shaders than the 480? If so, it would be hard to get a 20% increase out of slightly fewer shaders and less memory bandwidth.

The 480 will have higher clock speeds than the 470 (memory and core).
 
I believe the rumors, since they seem to be pretty consistent. It's your business if you choose not to believe them. I'm not here to argue anything.

But my point stands regardless of what the true performance of Fermi is: you cannot judge the performance of a video card purely by memory bandwidth. A lack of bandwidth can hold a card back, but in the end it's the execution pipeline that gets things done.

You're mistaking me for someone else; I never said that.

Fermi has everything on paper from what the specs show, from shading to memory bandwidth.

Idk where you're getting your info from.
 
That's a hilarious thread title, intentional or not. I've been wanting some "ferm" numbers for "fermi" for a long time.
 
You're mistaking me for someone else; I never said that.

Fermi has everything on paper from what the specs show, from shading to memory bandwidth.

Idk where you're getting your info from.

Read the thread. My original post was in response to rocketr2, not you. The point I made IN RESPONSE TO his comment still stands in that context.

You were never involved in this conversation, so stop trying to get into it. Do us all a favor and stop trying to start a flamewar, troll.
 
I'm not trying to start anything. BTW, you quoted me when you said that, so don't go there.
 
It's not really their fault, ya know; after a while, all you complete fanboys start to look the same. If you posted something other than OMG NVIDIA AWESOMO-O sometime, it might not happen. Alas.
 
I'm not trying to start anything. BTW, you quoted me when you said that, so don't go there.

And that was only because you quoted my original post and took it out of context. I was only saying that:

1. Assuming the performance estimates for Fermi are true...

2. The 384-bit memory bus is superfluous and should not be used as a basis for overly generalized performance comparisons versus other cards...

3. The analogy was there to make it clear to the less technically minded that you can put all the memory bandwidth you want on a card, but it means nothing if the execution unit can't process data as fast as you intended. This was in response to the conjecture that Fermi automatically beats the 5870 because 384-bit GDDR5 > 256-bit GDDR5. The same could be said for ATI's boneheaded decision to strap a 512-bit bus to the HD 2900 XT - it did nothing for real-world performance.

I'm an equal-opportunity hardass :D
 