ATI 6xxx GPU-Z and 3DMark: real or fake?

Got any way to back that up?

By the same logic, I could say that since AMD was building Bulldozer back in '03, they started multi-core and threading first.

Things can change quite significantly. Fermi is also an example of that, as for the first year of development, it was simply a doubled-up version of GT200b.

I suppose I could have made my initial post more clear, but what I was trying to say is that ATI is going to be going the same route as Nvidia. Chalk Fermi up as a fail all you want, but it obviously works.


Got any way to back up AMD copying NV's architecture? Anyhow, I never chalked Fermi up as a fail. I actually think it is a damn good product; it is definitely the fastest single-GPU solution in existence right now. It did come with a few caveats: heat, power draw, noise, being horribly late, and being produced by a company I will not personally do business with until certain business practices are fixed. Only the last two make any difference to me. Heat? Meh, my case has enough fans to achieve liftoff. Power? I have a 1200 W PSU from a good manufacturer. Noise? I wear headphones when I game.

I do thank you for finally clarifying your post regarding the copying.
 
Got any way to back that up?

Really!!!?!??!!!!

Z, if you don't know that a single GPU architecture takes two years to go from concept to factory sampling and production, then you are ignorant. ANY changes made to a chip AFTER the conceptual phase are difficult, to say the least. So yes, the 6xxx series was in development before the 5xxx series started shipping. It probably used a lesson or two learned during the sampling and early production run of the 5xxx series that started last year, but that's about it.

There was a huge blog post somewhere by one of the lead 5xxx series architects that described the Eyefinity setup and how it was a surprise to everyone but the 4-5 people in the company who were developing the hardware for it. I'm too lazy to go look it up for you; maybe one of the other posters will be more accommodating. But go find it and read it. Then come back with a slightly more informed understanding.

There are also a few comments by the Fermi engineers on how much "fun" they had developing that core. They also comment on the time span it takes to work out a core architecture.

And yes, 64-bit architecture and the on-die memory controller were copied from AMD, because it was so not intuitive that that was the way to go (sarcasm). Next thing you will say is that quantum computers are copied from the research that CERN is doing...
 
Everyone takes the other side's chip, opens it up, and reverse-engineers it.
Common practice across all of tech.

The 6 series was designed for 32 nm, which was canceled. If it keeps the same features on 40 nm it will be bigger and not as profitable as a 32 nm or 28 nm part would have been, but it might still rule the graphics market.
It'll sell like hotcakes.

Eyefinity was a tightly held secret, a huge surprise for a lot of people.
 
With the 6000, ATI is going to be copying Nvidia's architecture again, as Fermi, no matter how you slice it, is simply better at tessellation. It'll be refined, however, and should run cooler.

It will be on 40 nm, as TSMC/GF gave up on anything smaller. I expect it to also have the same number of stream processors, as ATI won't want to lose their 'cool running GPU' title. :p
-----
They will, just watch. They copied it with unified shaders too. Just like Intel copied x64 from AMD.

Companies do this stuff. Mark my words, the next ATI overhaul is going to have shader-based tessellation. And don't you dare call me a fanboy. I have MORE than my fair share of ATI/AMD systems. My main rig just happens to run the big chips.

Hopefully you were just playing the heel here, but I doubt it. Try to educate yourself a little bit before dramatically passing off opinion as real information.

Read this: http://www.anandtech.com/show/2679/3 (it takes YEARS to design/fab/roll out a product)
 
I suppose I could have made my initial post more clear, but what I was trying to say is that ATI is going to be going the same route as Nvidia. Chalk Fermi up as a fail all you want, but it obviously works.
Sorry, Zurginator, but that just sounds like you're trying to save face after almost every point you've made thus far has been wrong.

- Fermi's tessellation unit(s) is still fixed-function hardware, just like AMD's. Fermi's tessellation units are placed close to the shader modules, whereas AMD's single tessellation unit is more remote. It's not entirely clear how much of an advantage this is because, as I mentioned earlier, Cypress's tessellator is starved by other parts of the architecture. It's not impossible that AMD will similarly divide its tessellation hardware amongst its shader cores, but such a change would take a year or two to develop and test, so there's no chance it would be "copying" Fermi. Not to mention AMD has other things to fix first.
- As already mentioned by others, AMD was the first to release a unified-shader architecture. When NV released G80 it was actually a surprise that it had unified shaders because nobody expected them to have been developing them.
- Neither AMD nor Intel "copied" multicore from each other as the first commercial multicore processor was made by IBM back in 2001. Again, lead times for these products are a lot longer than you are implying.
- Intel didn't "copy" x64 as much as they implemented it. x64 is an extension to the x86 instruction set. Intel had their own brand new 64-bit instruction set (IA-64) which didn't do too well. AMD's approach was to simply extend the already ancient x86 ISA so that developers would have a much easier time migrating, not to mention it preserves backwards compatibility with older x86 code. Intel did not copy parts of the AMD CPU architecture in order to implement those new instructions. They found their own way to implement them.

Mark my words, the next ATI overhaul is going to have shader-based tessellation.
Mark my words, ATI's next architecture will certainly NOT implement "shader-based" tessellation (or at least not to any extent more shader-based than it already is). Not unless DirectX 12 decides to make tessellation hardware programmable. This isn't a terrible idea for sometime in the future, but right now fixed-function hardware is probably the faster/better solution. Of course, if this ends up being the case both AMD and NV will develop such stuff independently.

As for Fermi being a "fail", all I can say is that ATI has managed to pack very similar performance into a lot less silicon. Cypress is faster than GF104 and is smaller too. Is that a "design win" for ATI? I think so. Does that then imply that the Fermi architecture's perf/mm2 is a "design loss"? The argument has never been that Fermi doesn't work, rather that Fermi doesn't work as well as one should expect given the die size and power draw of those chips.
 
As for Fermi being a "fail", all I can say is that ATI has managed to pack very similar performance into a lot less silicon. Cypress is faster than GF104 and is smaller too. Is that a "design win" for ATI? I think so. Does that then imply that the Fermi architecture's perf/mm2 is a "design loss"? The argument has never been that Fermi doesn't work, rather that Fermi doesn't work as well as one should expect given the die size and power draw of those chips.

Maybe we should replace "design fail" with "not economically feasible" here. The Fermi architecture does work, and sometimes works well; it's just not going to turn Nvidia a profit at this point. Much in the same way, AMD CPUs are not bad at all; they are in fact good chips. The problem AMD has been having is not that its chips are bad in any way, it's that Intel is designing excellent ones. That puts them in second place. Nvidia doesn't quite have that issue, as they can still take first, but at a high cost. So this argument might just be one of terms.
 
I suppose I could have made my initial post more clear, but what I was trying to say is that ATI is going to be going the same route as Nvidia. Chalk Fermi up as a fail all you want, but it obviously works.

Yeah, you've already been burned by many replies, so I am just going to say this.

You cannot copy an architecture like you'd copy your buddy's answers during a test.

It just doesn't work like that. Even if it were possible, there are things set up in the real world to prevent that from happening. Think about it. You've spent millions of man-hours and hundreds of millions of dollars to set up, research, produce, and maintain one kind of "technology." Would you be happy if another company came along, looked at your invention, and decided to mass-produce it at a much lower cost than you can, the day after?
 
Just like Intel copied x64 from AMD.

Intel and AMD have a cross-licensing agreement. Intel didn't copy x64 from AMD any more than AMD copied x86 from Intel. They have an agreement to implement the instruction sets that the other develops for the sake of compatibility. They can *not* copy each other's work, but they *can* implement the same instructions and call it compatible. As in, Intel can advertise x64 support while AMD can advertise SSE support, but they have to implement the support for that themselves from the ground up.

If AMD ever copied Nvidia's design or vice versa, they would sue the snot out of each other. That is illegal.
 
This thread got derailed quickly by people jumping on Z.

But to get back to the OP's post, I think that enough evidence has surfaced thus far on the various sites to suggest that these could be the real deal. I will keep my eye out for more leaks or reviews in the next month. If they are launching the first 6xxx series in October, would it be fair to say that we can expect a sneak peek/preview in September?
(I wonder if Steve or Kyle could comment?)

What do you think....
 
I would love a sneak peek; I'm really hoping for some pricing info so I can budget for one of these things.

I'm running a 4670 1GB and it's killing me as I have a backlog and would like to play some games without stuttering.
 
Here's the link to the Anandtech article that was mentioned before:

http://www.anandtech.com/show/2937

As for people saying it takes years of R&D: that's right, and for that reason alone NVIDIA won't be able to swing back against AMD until Q3 2011 or later.

This tech will be all they have, and I very much doubt they could squeeze another 50% performance out of it without going over 300 W - it would require ABSOLUTE magic to do so.

A full 512-shader model uses 100 W+ more under load - so that answers any questions you may have.

http://www.geeks3d.com/20100810/geforce-gtx-480-512sp-power-consumption-with-furmark/

Proof there for the lazy.

So, keeping that in mind, even a 512-shader part won't beat the 6870 - yet the 6870 is under 300 W and the 512-shader 480 hasn't even been announced at all...
 
Now I will call Eyefinity Sunspot, the proper name.

Have to hand it to the ATI crew, honestly. These folks pulled off a major upset, as large as the revolution ushered in by the 9700 back in the day.
 
Now I will call Eyefinity Sunspot, the proper name.

Have to hand it to the ATI crew, honestly. These folks pulled off a major upset, as large as the revolution ushered in by the 9700 back in the day.

I like that comparison... but also remember how NVIDIA came back after the 9700 - it took the 5, 6, and then 7 series to bounce back to their former glory.

I truly believe we'll see that again.

I don't think NVIDIA will be able to be on top (I don't count 300 W+ power draw, a dual card required for Surround, etc. as keeping up) until the end of 2011.

Bring it.
 
Best case is it's real and it's actually scoring lower than the final product will, due to the drivers being the first ones with support (I assume they are the first, and actually do support the cards).

I have to keep laughing that most of this thread has nothing to do with the topic :p

I'm just getting overly excited over all the new releases planned for the next year. I want to replace my gaming rig and will probably start with a new GPU, so that I can wait for both Intel's and AMD's next-gen processors to come out and see who can give me the most power for what I decide is an acceptable budget. Really, the 6xxx cards are getting me so excited because of the cards themselves, but also because of their effect on Nvidia's 4xx and ATI's 5xxx cards.
 
Anandtech forums say 20% faster than the GTX 480 in Unigine. ATI worked hard to improve its tessellator.

So what's been going around is a Photoshopped GTX 480 score on the left and the 671x on the right, merged... Unigine 2.1.

So I can't trust that, and no one seems to have their 480 score handy when everyone else announces "25% better than"... so I hit Google like any normal person would do.

Here's X27163 for the GTX 480.
http://service.futuremark.com/resultComparison.action?compareResultId=2291877&compareResultType=19


Then a Unigine 2.1 result that came up:
GTX 480
Direct3D 11
Res: 1920×1080 fullscreen
Tessellation mode: Extreme
- Score: 970

The leaked 671x scored 926...


I guess just claiming something is faster and repeating it suffices, above and beyond actual benchmarks already out there, or a very certain one photoshopped in alongside the leak in a neat little anonymous pic share... good gawd...

http://www.geeks3d.com/20100525/qui...-4-0-and-direct3d-11-in-extreme-tessellation/

Yes, the 480 is slightly overclocked, but that's the first link that came up in my search, so I'm not spending forever - I guess the overclock gained it far more than the "+25%" the new Northern Islands Radeon card "beat it by"... yeah, the GTX 480 achieves far over a 25% increase at a slightly higher core... (another great rumor).
 
Here's X27163 for the GTX 480.
http://service.futuremark.com/resultComparison.action?compareResultId=2291877&compareResultType=19

I guess just claiming something is faster and then repeating it suffices, above and beyond actual benchmarks already archived that cannot be changed in Photoshop.

Two things about that score (and the OCB site, too).

The i7 980X = huge CPU score boost, and overall boost.

You cannot easily tell on that site if a result is from a CF/SLI setup.

Which can result in a less glamorous:

http://service.futuremark.com/resultComparison.action?compareResultId=2435223&compareResultType=19

X18138

with "Linked display adapters: Yes "
 
Two things about that score (and the OCB site, too).

The i7 980X = huge CPU score boost, and overall boost.

You cannot easily tell on that site if a result is from a CF/SLI setup.

Which can result in a less glamorous:

http://service.futuremark.com/resultComparison.action?compareResultId=2435223&compareResultType=19

X18138

with "Linked display adapters: Yes "

OK, very good point on the 2x 480 ORB.
I just find it very frustrating that someone doesn't just post a score. How about some proof? I guess I'll dig through Hard here for whatever is already posted in one of his reviews.

OK, so here's an EVGA web link with loads of scores; scroll down to what matches. Doubtful that got modified after the leak.

http://www.evga.com/forums/tm.aspx?m=353061

----

In any case my guess is, since the GTX 480 is 25% faster than the 5870, not 10%, that this new SI card will be 10% faster than the GTX 480. Basically the opposite of what I read someone here say. (They had the 10% and 25% reversed.)
 
OK, very good point on the 2x 480 ORB.
I just find it very frustrating that someone doesn't just post a score. How about some proof? I guess I'll dig through Hard here for whatever is already posted in one of his reviews.

OK, so here's an EVGA web link with loads of scores; scroll down to what matches. Doubtful that got modified after the leak.

http://www.evga.com/forums/tm.aspx?m=353061

----

In any case my guess is, since the GTX 480 is 25% faster than the 5870, not 10%, that this new SI card will be 10% faster than the GTX 480. Basically the opposite of what I read someone here say. (They had the 10% and 25% reversed.)

Yeah, I cannot run Extreme mode ("not 1200p", since I only have a 1080p monitor...)

Otherwise, I can only contribute to the talk, and not to the data :(

EDIT: EVGA ran with PhysX on, at "Performance" mode :( I don't think that data is comparable in any way. :mad: Why does ORB have to make this so hard, lol.
 
OK, very good point on the 2x 480 ORB.
I just find it very frustrating that someone doesn't just post a score. How about some proof? I guess I'll dig through Hard here for whatever is already posted in one of his reviews.

OK, so here's an EVGA web link with loads of scores; scroll down to what matches. Doubtful that got modified after the leak.

http://www.evga.com/forums/tm.aspx?m=353061

----

In any case my guess is, since the GTX 480 is 25% faster than the 5870, not 10%, that this new SI card will be 10% faster than the GTX 480. Basically the opposite of what I read someone here say. (They had the 10% and 25% reversed.)

The 480 is never 25% faster than the 5870; more likely 10-15%.

The 3DMark Vantage score will not favor ATI, since it counts the GPU PhysX score... which makes the result incorrect.
 
This tech will be all they have, and I very much doubt they could squeeze another 50% performance out of it without going over 300 W - it would require ABSOLUTE magic to do so.

Nvidia already goes over 300 W. The GTX 480 will hit 320 W under FurMark. Which is probably why we won't see the 512 SP version or a refresh until a die shrink - either that, or the refresh will cut out a ton of the GPGPU stuff like the GTX 460 did.
 
OK, very good point on the 2x 480 ORB.
I just find it very frustrating that someone doesn't just post a score. How about some proof? I guess I'll dig through Hard here for whatever is already posted in one of his reviews.

OK, so here's an EVGA web link with loads of scores; scroll down to what matches. Doubtful that got modified after the leak.

http://www.evga.com/forums/tm.aspx?m=353061
GTX 480 Vantage scores:

http://www.overclockersclub.com/reviews/nvidia_gtx480/13.htm
http://www.overclock.net/nvidia/710893-gtx480-vantage-extreme-score.html

Seems to score about 9200 at stock, 10500 at 800 MHz+, variability obviously dependent on the rest of the system, etc. How are these in conflict with what has been posted?

In any case my guess is, since the GTX 480 is 25% faster than the 5870, not 10%, that this new SI card will be 10% faster than the GTX 480. Basically the opposite of what I read someone here say. (They had the 10% and 25% reversed.)
Where are your benchmarks and screenshots to back up that statement?

Here's a review with GTX 480 performance in Crysis: http://www.tomshardware.com/reviews/geforce-gtx-480,2585-10.html . Even though the GTX 480 was run at a lower 1920x1080 resolution, the 6870 is still at least 40% faster than the GTX 480.
 
GTX 480 Vantage scores:

http://www.overclockersclub.com/reviews/nvidia_gtx480/13.htm
http://www.overclock.net/nvidia/710893-gtx480-vantage-extreme-score.html

Seems to score about 9200 at stock, 10500 at 800 MHz+, variability obviously dependent on the rest of the system, etc. How are these in conflict with what has been posted?

Where are your benchmarks and screenshots to back up that statement?

Here's a review with GTX 480 performance in Crysis: http://www.tomshardware.com/reviews/geforce-gtx-480,2585-10.html . Even though the GTX 480 was run at a lower 1920x1080 resolution, the 6870 is still at least 40% faster than the GTX 480.

My only comment on this is that that's from March 26, meaning it's before the latest driver revisions, which per [H] garnered significantly higher FPS out of the Fermi series. I don't agree or disagree; I'll have to pull up some more recent reviews.

Edit: Actually, comparing the newest [H] review of a vanilla GTX 480 vs. an HD 5870, the GTX 480 is generally at "most" 10% faster than an HD 5870, and that may be due to larger RAM. I'm sorta leaning toward the 6870 trouncing a GTX 480 just from the Chinese benchmarks, although they could mean nothing.

http://www.hardocp.com/article/2010/08/30/asus_eah5870_v2_stalker_edition_video_card_review/5
 
The 480 is never 25% faster than the 5870; more likely 10-15%.

The 3DMark Vantage score will not favor ATI, since it counts the GPU PhysX score... which makes the result incorrect.

Insert, for mrk6's complaint "Where are your benchmarks and screenshots to back up that statement?": Yeah, looksie below.

http://www.hexus.net/content/item.php?item=24000&page=14

Dirt 2: the 480 is 31.2% faster than the 5870
HAWX: 18.2% faster than the 5870
Far Cry 2: 39.9% faster than the 5870
---- I guess "never" isn't so "never" and 10% is so 25%... ;)

Oh, and - PhysX "isn't allowed" in the Vantage scores when comparing with cards that can't do PhysX, so the advantage is skewed for the crippled ATI. Is CUDA allowed? Is the bokeh filter counted? 32x MSAA? Tessellated water? How many things can't we "count"?
I'll stick with 25% and I'm being generous, as always, to the disadvantaged.
 
GTX 480 Vantage scores:


http://www.overclock.net/nvidia/710893-gtx480-vantage-extreme-score.html

10500 at 800 MHz+, variability obviously dependent on the rest of the system, etc. How are these in conflict with what has been posted?

Yes, variability - so we have the 25%-better variability quacked about - as in, the 67xx is 25% (some are saying 35%, but we'll forget that variability)... and right away a 10k score vs. an 11k score is 10%.



Let's take yer link up there - and variability with it: the CPU 1 score for the NI secret bench is 3682.88, while on yer link the 480's CPU 1 score is 2415.47.

That's over 33% variability in the test processor, in you-know-who's direction... now that's what I call variability, and what I call "a problem", and proof for you and your "where's your links"... heck, you provided proof I'm correct, thanks for that.

Like I said, 10%.
 
Hey SiliconDoc, how about this review of the 480?
http://www.hardocp.com/article/2010/03/25/nvidia_fermi_gtx_470_480_sli_review/6
I'm just comparing the Hexus review to this one; they have the same resolution but slightly different settings for Dirt 2. I mean, we could dissect each game listed, but I only wanted to do one of them.

In Hexus, they are using 8xAA (ultra); in [H]ardOCP's they set the 480 to 4xAA, but it gets only a 42 FPS average versus Hexus's 77 FPS. Anyone seeing this kind of disparity would be very concerned.

In the Hexus review you get 77 FPS for the 480 and 59 FPS for the 5870. In the [H]ard review you get 42 FPS for the 480 and 48 FPS for the 5870.

So after seeing this, I naturally wanted to reach out for a third source, so I went to Anand. He puts the 480 at 56 FPS versus 52 FPS for the 5870 at the same resolution that Hexus quoted.

Do these numbers seem odd?

and Anand says: "To wrap things up, let's start with the obvious: NVIDIA has reclaimed their crown – they have the fastest single-GPU card. The GTX 480 is between 10 and 15% faster than the Radeon 5870 depending on the resolution, giving it a comfortable lead over AMD's best single-GPU card."

[H]ard says: "The GeForce GTX 480 is more relevant in the market but it hasn't exactly come out of the gate wowing us with performance either. There are some games where it is faster than the Radeon HD 5870, and there are some games where it is even with the Radeon HD 5870. Factor in the cost and power, and include the ability to run Eyefinity on a single GPU, the Radeon HD 5870, to us, seems like the better value for the gamer right now."

Hexus says: "Conjecturing somewhat, GeForce GTX 480 is probably 75 per cent of the high-end GPU that was imagined by NVIDIA early last year. Our numbers show that NVIDIA's finest single-GPU card is, on average, 10-20 per cent faster than AMD's Radeon HD 5870 1,024MB at a 2,560x1,600 resolution. GeForce GTX 480 is due to cost some 40 per cent more, so whilst the trade-off between extra expense and performance isn't ideal, it's not completely disastrous for NVIDIA."


In all three reviews, the systems are fairly similar: all CPUs run above 3 GHz and the systems have 6 GB of RAM.
 
My only comment on this is that that's from March 26, meaning it's before the latest driver revisions, which per [H] garnered significantly higher FPS out of the Fermi series. I don't agree or disagree; I'll have to pull up some more recent reviews.

Edit: Actually, comparing the newest [H] review of a vanilla GTX 480 vs. an HD 5870, the GTX 480 is generally at "most" 10% faster than an HD 5870, and that may be due to larger RAM. I'm sorta leaning toward the 6870 trouncing a GTX 480 just from the Chinese benchmarks, although they could mean nothing.

http://www.hardocp.com/article/2010/08/30/asus_eah5870_v2_stalker_edition_video_card_review/5
Yeah, I just quickly Googled; thanks for putting that up. The GTX 480 actually loses pretty badly in Arma II, that's surprising. Anyway, I think your ballpark of ~10% is a good assessment :cool:.
Insert, for mrk6's complaint "Where are your benchmarks and screenshots to back up that statement?": Yeah, looksie below.

http://www.hexus.net/content/item.php?item=24000&page=14

Dirt 2: the 480 is 31.2% faster than the 5870
HAWX: 18.2% faster than the 5870
Far Cry 2: 39.9% faster than the 5870
---- I guess "never" isn't so "never" and 10% is so 25%... ;)
It's funny how you neglected to mention the Crysis test from that review, where the HD 5870 was faster than the GTX 480, or the BF:BC2 benches, where the GTX 480 was only ~3% faster. Let's take a look at the more recent [H] review referenced by piscian18 (so, real-world testing). The HD 5870 is 15% slower in AvP, 28% faster in Arma II, 4% slower in BC2, 3% faster in STALKER: CoP, and 20% slower in Metro 2033. That only works out to the GTX 480 being ~4% faster overall, so maybe ~10% is generous. In either case, I've already compared the 6870 scores to the GTX 480, so regardless of how you think the GTX 480 performs against the HD 5870, those leaked benchmarks are showing the 6870 to be at least 40% faster than the GTX 480 in Crysis.
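For anyone who wants to check the aggregation, here's a rough sketch of the math (the deltas are the per-game numbers above; averaging the GTX 480's per-game edge and taking a geometric mean of the ratios both land in the same ~3-4% range):

```python
# Rough aggregation of the per-game [H] deltas quoted above.
# Positive = HD 5870 faster, negative = GTX 480 faster.
from math import prod

deltas = {"AvP": -0.15, "Arma II": 0.28, "BC2": -0.04,
          "STALKER: CoP": 0.03, "Metro 2033": -0.20}

ratios = [1 + d for d in deltas.values()]   # HD 5870 / GTX 480, per game
geomean = prod(ratios) ** (1 / len(ratios))
print(f"HD 5870 vs GTX 480, geometric mean: {geomean - 1:+.1%}")   # about -3%

edges = [1 / r - 1 for r in ratios]         # GTX 480's edge, per game
print(f"GTX 480 vs HD 5870, simple average: {sum(edges) / len(edges):+.1%}")  # about +4%
```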

Oh, and - PhysX "isn't allowed" in the Vantage scores when comparing with cards that can't do PhysX, so the advantage is skewed for the crippled ATI. Is CUDA allowed? Is the bokeh filter counted? 32x MSAA? Tessellated water? How many things can't we "count"?
I'll stick with 25% and I'm being generous, as always, to the disadvantaged.
If you don't know or can't understand why having PhysX on in Vantage skews the overall result, you shouldn't even be arguing in this forum in the first place (here's a hint: it's the GPU helping the CPU, artificially inflating the CPU score and therefore the overall score).
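To illustrate (with made-up weights and subscores - this is not Futuremark's actual formula, but any overall score that blends in the CPU subscore behaves the same way):

```python
# Toy model of a combined benchmark score: a weighted geometric mean of
# GPU and CPU subscores. Weights and numbers are invented for illustration;
# Vantage's real formula differs, but the effect is the same.
def overall(gpu_score, cpu_score, gpu_weight=0.75):
    return gpu_score ** gpu_weight * cpu_score ** (1 - gpu_weight)

gpu = 18000                        # GPU subscore, identical in both runs
print(round(overall(gpu, 15000)))  # PhysX running on the CPU
print(round(overall(gpu, 45000)))  # GPU PhysX inflates the CPU test, so the
                                   # overall jumps with no GPU improvement at all
```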
Yes, variability - so we have the 25%-better variability quacked about - as in, the 67xx is 25% (some are saying 35%, but we'll forget that variability)... and right away a 10k score vs. an 11k score is 10%.



Let's take yer link up there - and variability with it: the CPU 1 score for the NI secret bench is 3682.88, while on yer link the 480's CPU 1 score is 2415.47.

That's over 33% variability in the test processor, in you-know-who's direction... now that's what I call variability, and what I call "a problem", and proof for you and your "where's your links"... heck, you provided proof I'm correct, thanks for that.

Like I said, 10%.
Poor math and poor homework. Here's a direct comparison that should make it simple:
GTX 480 stock, GPU score (not total) is 18026: http://hardforum.com/showthread.php?t=1513719
Here's the 6870 reference bench, GPU score is 24056: http://img291.imageshack.us/img291/3225/vantaget.jpg
The 6870 appears to be over 33% faster than the GTX 480 in Vantage. Also note that PhysX was used with the CPU, although that doesn't affect the GPU score.
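One more arithmetic note, since this thread keeps flip-flopping between 10% and 25%: the percentage you get depends on which card you use as the baseline.

```python
# The same two Vantage GPU scores give two different percentages,
# depending on which card you divide by.
gtx480, hd6870 = 18026, 24056
print(f"6870 relative to 480: {hd6870 / gtx480 - 1:+.1%}")  # +33.4% (faster)
print(f"480 relative to 6870: {gtx480 / hd6870 - 1:+.1%}")  # -25.1% (slower)
```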
 
My only comment on this is that that's from March 26, meaning it's before the latest driver revisions, which per [H] garnered significantly higher FPS out of the Fermi series. I don't agree or disagree; I'll have to pull up some more recent reviews.

"significantly higher fps" from newer drivers? Not really: http://hardocp.com/article/2010/06/16/nvidia_forceware_25721_driver_performance/

AvP got a nice bump, and BF:BC2 with TR AA got a huge bump, but everything else was a ~1 FPS improvement, if that. Always welcome, but not significant.

But yeah, just look at the latest review, 10-15% faster is still about right. Heck, if anything the gap has *shrunk* between the 480 and 5870 over the months. 25% faster is wishful thinking.

Oh, and - PhysX "isn't allowed" in the Vantage scores when comparing with cards that can't do PhysX, so the advantage is skewed for the crippled ATI. Is CUDA allowed? Is the bokeh filter counted? 32x MSAA? Tessellated water? How many things can't we "count"?

Better yet, let's run Vantage at 5760x1200. Oh, what's that? The GTX 480 can't run at that resolution? Awww, that means even the lowly 5650 is infinitely faster than the GTX 480.

Also, why is "tessellated water" in that list? The 5xxx series does tessellation too, you know... Hell, the 9700 Pro did tessellation.
 
Hey SilDog, how about this review of the 480?
http://www.hardocp.com/article/2010/03/25/nvidia_fermi_gtx_470_480_sli_review/6
I'm just comparing the Hexus review to this one; they have the same resolution but slightly different settings for Dirt 2. I mean, we could dissect each game listed, but I only wanted to do one of them.

In Hexus, they are using 8xAA (ultra); in [H]ardOCP's they set the 480 to 4xAA, but it gets only a 42 FPS average versus Hexus's 77 FPS. Anyone seeing this kind of disparity would be very concerned.

In the Hexus review you get 77 FPS for the 480 and 59 FPS for the 5870. In the [H]ard review you get 42 FPS for the 480 and 48 FPS for the 5870.

So after seeing this, I naturally wanted to reach out for a third source, so I went to Anand. He puts the 480 at 56 FPS versus 52 FPS for the 5870 at the same resolution that Hexus quoted.

Do these numbers seem odd?

and Anand says: "To wrap things up, let's start with the obvious: NVIDIA has reclaimed their crown – they have the fastest single-GPU card. The GTX 480 is between 10 and 15% faster than the Radeon 5870 depending on the resolution, giving it a comfortable lead over AMD's best single-GPU card."

[H]ard says: "The GeForce GTX 480 is more relevant in the market but it hasn't exactly come out of the gate wowing us with performance either. There are some games where it is faster than the Radeon HD 5870, and there are some games where it is even with the Radeon HD 5870. Factor in the cost and power, and include the ability to run Eyefinity on a single GPU, the Radeon HD 5870, to us, seems like the better value for the gamer right now."

Hexus says: "Conjecturing somewhat, GeForce GTX 480 is probably 75 per cent of the high-end GPU that was imagined by NVIDIA early last year. Our numbers show that NVIDIA's finest single-GPU card is, on average, 10-20 per cent faster than AMD's Radeon HD 5870 1,024MB at a 2,560x1,600 resolution. GeForce GTX 480 is due to cost some 40 per cent more, so whilst the trade-off between extra expense and performance isn't ideal, it's not completely disastrous for NVIDIA."


In all three reviews, the systems are fairly similar: all CPUs run above 3 GHz and the systems have 6 GB of RAM.

And here you see the difference between time demos and actual gameplay. Hexus and Anand still use canned time demos. They may be ones they recorded themselves, but they are still time demos. [H] has been using the average-of-multiple-playthroughs method for quite some time.
 
And here you see the difference between time demos and actual gameplay. Hexus and Anand still use canned time demos. They may be ones they recorded themselves, but they are still time demos. [H] has been using the average-of-multiple-playthroughs method for quite some time.

I noticed that, and some time ago I used to browse HOCP before it was "on the map". The way the games are benched is fine (HWC does similar), but then Vantage cannot be spoken of, nor can Unigine. Take one way or the other; if that's the argument here, then I'll ignore Unigine and Vantage 100%.
Thanks for that.

As to all the other(s), they can cite this or that, but I've seen hundreds of reviews, and 25% faster for the 480 is correct. We can discount CUDA, PhysX, the bokeh filter, 32x MSAA, SLI scaling, and on and on and on. Then we can choose certain benches or games, and then card settings and resolutions, and then AA often shown at 0 so ATI can keep up...

When all the dice are thrown, 25% faster for the 480 is a generous number for the 5870 to settle for.

You're all welcome to your different opinions, and you certainly won't be changing mine.

http://techreport.com/articles.x/19404/11
DX11 cards charted


GTX 480: 64 FPS overall

HD 5870: 49 FPS overall

25% is generous, not to mention all the other advantages the GTX 480 has for gaming and more.
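Quick math on those overall figures, for anyone checking (a rough sketch using just the two numbers above):

```python
# Relative gap from the TechReport overall numbers quoted above.
gtx480_fps, hd5870_fps = 64, 49
print(f"GTX 480 over HD 5870: {gtx480_fps / hd5870_fps - 1:+.1%}")  # +30.6%
```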
 
The 6xxx will be faster than the 480 or 5870; I think we agree on that. We will see by how much.

You're all welcome to your different opinions, and you certainly won't be changing mine.

And yes, I think the 470 and 480 are a complete and total failure. They are too loud and use too much power compared to the 5xxx series. This shows a lack of power management in the chip design, or a failure of the chip architecture at "reasonable" speeds that would not consume as much power or require as much cooling. The 460, on the other hand, is a kick in ATI's nuts, one they needed; their pricing was inflated.
 
The 6xxx will be faster than the 480 or 5870; I think we agree on that. We will see by how much.



And yes, I think the 470 and 480 are a complete and total failure. They are too loud and use too much power compared to the 5xxx series. This shows a lack of power management in the chip design, or a failure of the chip architecture at "reasonable" speeds that would not consume as much power or require as much cooling. The 460, on the other hand, is a kick in ATI's nuts, one they needed; their pricing was inflated.

I feel bad for being a follower, but based mostly on [H] impressions I've completely written off the GTX 470/480. Yeah, the performance is OK for the "current" price, but with so many drawbacks I'd rather just wait and pick up the 6xxx series.
 
I feel bad for being a follower, but based mostly on [H] impressions I've completely written off the GTX 470/480. Yeah, the performance is OK for the "current" price, but with so many drawbacks I'd rather just wait and pick up the 6xxx series.

I hear ya. I am a Green Fanboi all the way, but it looks like a pair of 6870s will be my first non-Nvidia cards since the Radeon 9700 Pro.
 
I hear ya. I am a Green Fanboi all the way, but it looks like a pair of 6870s will be my first non-Nvidia cards since the Radeon 9700 Pro.

I ran a single 5870 for Eyefinity until recently, and the only real reason for ditching it was to avoid getting burned by the price drop on the cards and the DP adapters. I would highly recommend ATI at this point.
 
CF profiles have been fixed, so that horribly biased TechReport article is out of date. It's also outdated because prices have changed. Furthermore, that article was NV-biased, including some very NV-biased games (you need to test more than a few games, else one weird one like Far Cry 2 or Metro will throw everything off) and an overclocked 480 card against a stock 5870 card.

http://www.techpowerup.com/reviews/Axle/GeForce_GTX_460_768_MB/31.html

TPU says the 480 is faster by 13.6% over a much larger span of games than that biased TechReport garbage. Anandtech pegs it at 10-15%: http://www.anandtech.com/show/2977/...x-470-6-months-late-was-it-worth-the-wait-/20 I could cite a ton of other stats, but the point is: keep dreaming about that 25% gap, because it doesn't exist.

CUDA has no current relevance for gaming.

PhysX (which may be superseded by OpenCL anyway) has made barely any difference in every game it's been used in so far, except maybe two (Batman, Mirror's Edge), and even then it wasn't a huge difference, just a nice bonus.

AMD also does better video decoding (and bitstream audio too): http://www.techpowerup.com/reviews/HQV/HQV_2.0/8.html

Not to mention that starting with Cypress, AMD has perfect AF filtering and Fermi still doesn't (not that this matters, either, much like PhysX... nobody really notices or cares). Link: http://www.anandtech.com/show/2977/...tx-470-6-months-late-was-it-worth-the-wait-/7

And nothing more need be said about energy efficiency and thermals... no contest AMD wins that battle with current-gen cards.

AMD Eyefinity works with ONE card; it doesn't require multi-GPU or the power/heat/noise, potential microstutter, and potential need to change mobos to get SLI that NV requires. And NV still has no Surround profile hotkeys, wtf?

Both companies have their respective strengths. Competition is good for consumers, and there is no one-size-fits-all solution for everyone. You should be happy for competition and let other people choose what they want for their cards.

P.S. Vantage and Unigine aren't games. Around here, gameplay numbers are what really matters when comparing gaming cards, not stupid benches that may or may not have anything to do with reality.

 
I noticed that, and some time ago I used to browse HOCP before it was "on the map". The way the games are benched is fine (HWC does similar), but then Vantage cannot be spoken of, nor can Unigine. Take one way or the other; if that's the argument here, then I'll ignore Unigine and Vantage 100%.
Thanks for that.

As to all the other(s), they can cite this or that, but I've seen hundreds of reviews, and 25% faster for the 480 is correct. We can discount CUDA, PhysX, the bokeh filter, 32x MSAA, SLI scaling, and on and on and on. Then we can choose certain benches or games, and then card settings and resolutions, and then AA often shown at 0 so ATI can keep up...

When all the dice are thrown, 25% faster for the 480 is a generous number for the 5870 to settle for.

You're all welcome to your different opinions, and you certainly won't be changing mine.

http://techreport.com/articles.x/19404/11
DX11 cards charted


GTX 480: 64 FPS overall

HD 5870: 49 FPS overall

25% is generous, not to mention all the other advantages the GTX 480 has for gaming and more.

Back when the GTX 480 came out, heatlesssun (I think) made a spreadsheet comparing the 480 vs. the 5870 across dozens of reviews and settings. The result? ~7% faster without AA, ~15% faster with AA.

It isn't opinion; we are talking hard data here. ~15% faster is about right. 25% is high, not generous.

You also keep listing these features the 480 supports that the 5870 doesn't as if that is unique. BOTH cards have features the other doesn't support. Those should be considered on their own, not as part of performance testing. That is why Vantage using PhysX gives the 480 an artificial advantage in overall score. What if the tables were turned and it used the Stream SDK to boost the CPU score? Would you be OK with that? Of course not; it's blatantly biased. And why do you care about 32x MSAA so much? The 480 is fast, but no way in hell is it driving anything somewhat recent at a decent resolution with 32x MSAA. ATI has 24x CFAA, by the way, so you can still get stupid-high AA levels on ATI.

Likewise, the bokeh filter isn't even an Nvidia feature. It is done by developers. You can do the same thing with ATI's cards as well.
 
I don't think those results are faked; they are likely from an early build with unoptimised drivers. That doesn't mean the release card will definitely be faster, as sometimes performance tweaks don't make it into release drivers.
 
Not to mention that starting with Cypress, AMD has perfect AF filtering and Fermi still doesn't (not that this matters, either, much like PhysX... nobody really notices or cares). Link: http://www.anandtech.com/show/2977/...tx-470-6-months-late-was-it-worth-the-wait-/7
I'd like to make a slight correction here. It's true that AMD's AF filtering is almost completely angle-independent, and that is a good thing. However, angle-independence isn't the only measure of AF quality, and both RV770 and Evergreen suffer from AF that undersamples and causes shimmer/texture crawl, which is far from "perfect". For all the complaints there were against G80's AF, I had fewer issues with texture shimmering and moiré effects in Oblivion with my 8800 GTS than with my HD 4870 when using high-res texture replacements.

I sincerely hope that AMD considers this a "bug" rather than an "optimization" and has fixed it for NI, if it really is a hardware issue and not driver-related.
 
snip!
It isn't opinion; we are talking hard data here. ~15% faster is about right. 25% is high, not generous.

Yeah, if yours isn't opinion, ours isn't either. :) We are talking hard data. I provided hard data. Oh, you have it, it's not opinion, it's all over the place. Got it. Thanks.
Hard data at the link: http://techreport.com/articles.x/19404/11 It's more than 25%, without the added features. That's not opinion either, counselor; that's hard evidence. The jury will have to decide. :)
Next, what 5870 features? None to count, plenty to subtract. Eyefinity requires something other than what is being tested - extra monitors, an extra adapter. Did you see me bring up 3D Vision or triple-monitor Nvidia? No, I did not, but if that's a comparison, Nvidia wins that one as well.
I didn't bring it up because it requires a lot of added cost, just like Eyefinity does.
What I did bring up is ready in a normal setup - things ATI cards can't do, don't do. (PhysX driver hack if you like, that's great in fact, but if I'm told PhysX is nothing, not worth it, then counting it for ATI with a hack isn't quite right, is it?)
Ray-tracing Design Garage - good and free for the GTX 200 on up.
See, there's so much, I didn't even list it all. I wasn't being overt. Free PhysX screensaver. See.
Stream doesn't count for Radeon; Nvidia has a better equivalent, and Badaboom!
I mean really, is there anything at all ATI has uniquely... that doesn't cost you an extra $500 to a grand after you buy the card? No.
Nvidia? Yes. Lots.
I'd say that's why.
I certainly hope the 6000 does better. It would be nice to have another company's card with so many extra added values and fun things, many gaming-related, all attached and added on for free. I mean, that would be great. I'm not holding my breath.
 