3DMark 2005 released!

kcthebrewer said:
Anybody have any XGI scores? :p

Ummm.....*slaps* XGI....I wonder if they too have some magical drivers yet...err..wait, I meant "optimized" drivers.... :p
 
101998 said:
Also, I had to run the timedemo through twice, because the first time through (on either driver, any setting) the demo would pause every couple of seconds. The second time it always played fine.

The reason it does this is that the information needs to get cached (which is what the first run does; it pulls it off the disk into memory). So basically the first time you play any game that uses over 64MB of RAM you get skips until you rerun a level. Really annoying, but that's what happens =/.

~Adam
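
A minimal sketch of the idea described above, assuming all you want is to pre-read a level's asset files once so the OS file cache is warm before the timed run (the file names are made up for illustration):

```cpp
#include <cstdio>
#include <vector>

// Read a file start to finish and throw the bytes away. The point is the
// side effect: the OS file cache now holds the data, so the first "real"
// run of the timedemo no longer stalls on disk reads.
static void warm_cache(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    std::vector<char> buf(1 << 20);  // read in 1 MB chunks
    while (std::fread(buf.data(), 1, buf.size(), f) == buf.size()) { /* keep reading */ }
    std::fclose(f);
}

int main()
{
    // Hypothetical asset files for the level about to be benchmarked.
    const char* assets[] = { "maps/demo1.pak", "textures/demo1.wad" };
    for (const char* path : assets)
        warm_cache(path);
    return 0;
}
```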
 
I think my scores are pretty low for what I have. Any ideas what my problem might be? I'm new to the whole overclocking thing.

3DMark score: 38xx (at work, can't remember)

AMD 64 3200+ @ 2.3ghz
Chaintech Nforce3 250
1gb Crucial DDR3200
Gainward GF6800GT (non GS) @ 400/1100 66.7 drivers

Make SURE you put on some new thermal paste with these Gainward cards; you'll drop huge temps when you do. I put on some AS3 and dropped 10°C.
 
Not very impressed with 3DMark2005. The only thing I like is the cool guy with the massive gun; I want to play a game where I'm that guy with the HUGE gun killing everyone. Firefly was OK. The balloon scene was kinda boring.

I was hoping for 4 tests like in older versions. What's up with them having fewer and fewer tests? Before it was 4 game tests, some run at 2 different quality levels, then just 4 tests, now only 3.

I guess in 3dMark 2007 they'll have 2 game tests, and you can only run one, unless you pay for the other one.
 
titanium said:
I think my scores are pretty low for what I have. Any ideas what my problem might be? I'm new to the whole overclocking thing.

3DMark score: 38xx (at work, can't remember)

AMD 64 3200+ @ 2.3ghz
Chaintech Nforce3 250
1gb Crucial DDR3200
Gainward GF6800GT (non GS) @ 400/1100 66.7 drivers

Make SURE you put on some new thermal paste with these Gainward cards; you'll drop huge temps when you do. I put on some AS3 and dropped 10°C.


I am thinking a properly set up A64 rig from 2.2GHz to 2.5GHz and a 6800GT is going to be seeing lots of scores from the 3800s up to the 4700s; it's probably going to depend a lot on the driver used. I get 4400s with my rig (see sig) using the 66.70s... more testing tonight in games, I am on a fresh install of XP SP2.
 
Your score is probably low for two reasons:

1) the card isn't overclocked
2) you're probably using the 61.77 drivers
 
I'll probably overclock more when I get a different heatsink, but I was thinking that 100MHz on a stock HS and fan was pushing it on the 512K 3200s. As far as my score goes, I don't understand why it's not at least mid-4000s.
 
Batman,

Dave over at B3D gave more info on the ATI bug that, when fixed, netted 1000 pts in 3DMark. It had to do with the way ATI managed vertex data: it seems like on these AGP cards the vertex data was slipping into main memory instead of being kept in on-board memory. PCI-E cards did not see this as much due to their higher bandwidth. A guy by the name of "dio", who is an ATI D3D software engineer, said it was a memory management issue.
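
A rough illustration of the distinction being described, assuming a plain D3D9 setup (the device pointer and buffer size are placeholders): the application only asks for a pool and usage flags, and where the vertex buffer actually ends up — local video memory, AGP aperture, or system memory — is decided by the driver's memory manager, which is the part ATI says it fixed.

```cpp
#include <d3d9.h>

// Create a static vertex buffer and let the driver place it. With
// D3DPOOL_DEFAULT + D3DUSAGE_WRITEONLY the driver is free to keep it in
// on-board video memory; whether it stays there or gets demoted to
// AGP/system memory under pressure is entirely the driver's call.
IDirect3DVertexBuffer9* create_static_vb(IDirect3DDevice9* device, UINT bytes)
{
    IDirect3DVertexBuffer9* vb = nullptr;
    HRESULT hr = device->CreateVertexBuffer(
        bytes,
        D3DUSAGE_WRITEONLY,   // hint: written once, never read back
        0,                    // no FVF; a vertex declaration is used instead
        D3DPOOL_DEFAULT,      // let the driver choose the physical location
        &vb,
        nullptr);
    return SUCCEEDED(hr) ? vb : nullptr;
}
```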

As far as your statement about no other games seeing an increase, that is very true. However, please name me one game out there today that is even close to the same complexity as what 3DMark05 is doing. I will save you the time: there is no game today that even comes close.

This is not an attack on you per se; you just happened to be the most vocal. It just bothers me to see the same weak argument on many forums when the users don't have all the info.

Besides, there are other things to mention about 3DMark2k5, like how all NV cards get a speed increase from using DST, whereas ATI users that could use 3Dc don't get the chance to. Neither DST nor 3Dc is a DX9 standard, and neither should be used in a true DX9 benchmark. I mean, if you're making a fair DX9 benchmark then you should either

a) stick to all DX9 standards, right?

or

b) allow vendor-specific functions, but make sure to do it for all hardware that can take advantage of them (how an app typically detects these two extensions is sketched below).
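
Since neither feature is part of core DX9, an application has to probe for them at startup. A minimal sketch of how those checks are commonly done, assuming a D3D9 context (the adapter format here is a placeholder): DST support is inferred from whether a depth format can be created as a texture, and 3Dc from whether the 'ATI2' FourCC format is exposed.

```cpp
#include <d3d9.h>

// FourCC for ATI's 3Dc two-channel normal-map compression format.
static const D3DFORMAT FOURCC_ATI2 = (D3DFORMAT)MAKEFOURCC('A', 'T', 'I', '2');

// DST ("depth stencil textures"): if a depth format can be created as a
// texture, the hardware can sample shadow maps directly.
bool supports_dst(IDirect3D9* d3d)
{
    return SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,          // assumed desktop/adapter format
        D3DUSAGE_DEPTHSTENCIL,
        D3DRTYPE_TEXTURE,
        D3DFMT_D24S8));
}

// 3Dc: exposed through the 'ATI2' FourCC texture format.
bool supports_3dc(IDirect3D9* d3d)
{
    return SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,
        0,
        D3DRTYPE_TEXTURE,
        FOURCC_ATI2));
}
```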


OK, I will let you all return to ranting...


Brent,

While it's funny that people argue over this stuff, it's still sad, as you never have answered my question: how do you predict future performance? Sticking to current games is not a good indicator. I don't have a good answer, but having synthetics show some possible weakness in the API that those games will use seems like at least a starting point...
 
Jbirney said:
Batman,

Dave over at B3D gave more info on the ATI bug that, when fixed, netted 1000 pts in 3DMark. It had to do with the way ATI managed vertex data: it seems like on these AGP cards the vertex data was slipping into main memory instead of being kept in on-board memory. PCI-E cards did not see this as much due to their higher bandwidth. A guy by the name of "dio", who is an ATI D3D software engineer, said it was a memory management issue.

As far as your statement about no other games seeing an increase, that is very true. However, please name me one game out there today that is even close to the same complexity as what 3DMark05 is doing. I will save you the time: there is no game today that even comes close.
..

Test out Codecult's Code Creatures benchmark; it has an average of 300k polys on screen at any given time, and the new drivers have no influence on that.

How about Stalker? There was a leaked demo of that out, right?

Synthetics don't really show you the true potential, they will if and only if future games are coded the same way.

3dmark05's shaders are not hand optimized, which will of course change performance drastically in a real game situation.
 
I was really surprised to see a score of 5497 with my 6800GT..
http://people.ucsc.edu/~dvanderb/3dmark05.jpg

System specs follow:
AMD Athlon64 3200+ @ 2480mhz(225mhz FSB)
Asus K8V SE Deluxe
1gb OCZ Enhanced @ CAS 2.5 and 450mhz
PNY 6800GT w/ NV Silencer 5 @ 430mhz core/1150mhz memory - fastwrites on.. 1.4V bios

Using 66.70 drivers
 
dvanderb said:
I was really surprised to see a score of 5497 with my 6800GT..
http://people.ucsc.edu/~dvanderb/3dmark05.jpg

System specs follow:
AMD Athlon64 3200+ @ 2480mhz(225mhz FSB)
Asus K8V SE Deluxe
1gb OCZ Enhanced @ CAS 2.5 and 450mhz
PNY 6800GT w/ NV Silencer 5 @ 430mhz core/1150mhz memory - fastwrites on.. 1.4V bios

Using 66.70 drivers

What was the clock on your GT for the test?
 
Jbirney said:
Brent,

While it's funny that people argue over this stuff, it's still sad, as you never have answered my question: how do you predict future performance? Sticking to current games is not a good indicator. I don't have a good answer, but having synthetics show some possible weakness in the API that those games will use seems like at least a starting point...

Unless you are a time traveler how can you know what future games will be like?

Is 3DMark really telling you of what future games will be like?

Gamers buy video cards to play games NOW. They usually don't buy a brand spanking new video card and shelve it for 6 months and then pull it out and use it. They usually install it right then and there when they buy it and play current games on it. That is what we test, current/latest games. These gamers want to know what their brand new shiny video card will do in the current games they are playing now. By using popular games we get a good feel of current game performance.

As for predicting future game performance no one can do this with 100% certainty.

What we CAN do though is use the latest games based on the latest game engines that might give us a prediction in how a video card will perform on a game using that game engine.

For example testing DOOM 3, it has a very new engine which WILL be used by a whole lot of games in the future. By testing DOOM 3 we get a glimpse at what kind of performance we MAY see in future titles based on this engine. If nothing else at least we are testing a real game engine that WILL be a popular game engine in future titles.

Same for other games, we want to hit game engines that will be used in the future, HL2, UE3, Cry Engine, etc... These are all state of the art games and engines.

Through all of that we can form a guess at which cards might perform better in a future title using that engine.

Of course, even with that we do understand that games themselves may perform differently even if they are using the same game engine.

But we are much closer to figuring out future game performance by using real game engines that are going to be used in the future rather than some synthetic benchmark that will never be used as a game engine in any game in the future.



As Doc Brown would say: "The Future Hasn't been Written Yet".
 
dvanderb said:
I listed it in the system specs.. was 430mhz core.. and 1150mhz memory

I was asking because I get the same score with mine clocked @450/1200 with the 1.4v bios also. Wonder if fast writes turned on would make a difference.
 
3,924

Wow, I thought there was something wrong when the CPU test did not go above 1 fps. Oh well.

Oh yeah, specs:

AMD 64 3000+ at default speed
MSI K8N Neo Plat.
1 gig of cheap PC2700 mem
PowerColor X800 Pro VIVO (with softmod to X800 XT PE, 16 pipes), only running at 500/540
Audigy 2
 
I should note that the ~5500 score was achieved WITH the CPU tests on (if you look at the detailed results, you will see that the GFX test score alone is higher than the overall score with CPU tests). I noticed a lot of people on ORB only show their GFX scores, not a score that includes the CPU tests, which is how the 3DMark05 default setup works.

btf said:
I was asking because I get the same score with mine clocked @450/1200 with the 1.4v bios also. Wonder if fast writes turned on would make a difference.
 
The Batman said:
Also, about 'approved' drivers. WHO. GIVES. A. FUCK. Seriously, this is [H]ardOCP, as in Hardcore, not Hardly packing a dick. WHQL, FutureMark Approved, Microsoft Recommended, GITG, TWIMTBP, runs best on Alienware/Intel P4, they can all blow me.

Quoted for truth. Damn that was good.
 
truffle00 said:
Numbers are in the following format: Catalyst 4.9 score/framerate / 8.07 hotfix score/framerate / percent difference

3dmark01SE 21902 / 21957 / 0.25
3dmark03 11929 / 11976 / 0.39
3dmark05 4473 / 5788 / 22.72

Doom 3 - 1280x1024
Medium 48.6 / 53.3 / 8.82
High 46.0 / 51.7 / 11.03
Ultra 42.8 / 48.3 / 11.39

Doom 3 - 1280x1024 (with AA)
Medium 35.3 / 38.1 / 7.35
High 33.8 / 37.1 / 8.89
Ultra 31.9 / 34.9 / 8.60


Swing away, Batman.


Thanks a lot, bro. Must say this is much better than flaming back and forth. Though I think you misunderstood what I wanted to compare [doesn't matter, you gave the scores and that's all I really needed; I'll be sure to explain my reasoning as well].

Alright, let's get cracking at the benches and why I think your results support my "the bug excuse is FUD" theory.

Alright, let me draw your attention to the scores I've bolded in the quote. Those are what we'll be looking at for the comparison. For now let's look at the top non-AA scores. Compare the 48.6 to 46.0. That's when moving from Medium [128MB used] to High [256MB used] under the older drivers. The difference is 2.6 frames; as a ratio, High is about 0.95 of Medium (46.0/48.6). The ratios are what we'll be looking at, since we can't use a raw frame comparison: the card is doing more work in High than in Medium that is not necessarily related to textures.

Okay, now we're going to compare the same thing but using the hotfixes: 53.3 to 51.7. Difference of 1.6, a ratio of about 0.97 (51.7/53.3), making it all in all about a 0.02 difference between the ratios. Not really something to write home about. So... what happened to the bug fix? The difference between 0.95 and 0.97 is negligible at best, falling within the bench's margin of error at worst. We should be seeing a LARGER disparity. Do you guys follow me? The hotfix is supposed to fix a 256MB allocation problem [as it only affects 256MB cards]. What I did was compare the difference between using 128 [Medium] and 256 [High] under the older drivers to the difference between using 128 and 256 under the hotfix drivers. If indeed the hotfix worked as ATI claimed it did, the gap between 128 and 256 should have been noticeably larger under the older drivers, since there the 256MB mode was supposedly crippled. In other words, if this was such a serious issue, truffle should have taken a much larger hit when moving from Medium to High under the older drivers [since 256MB wasn't allocating properly]. Obviously this is not the case.

Okay, now we're going to do the same exact thing, but this time we'll use the AA scores: 35.3 to 33.8, a ratio of roughly 0.96 under the older drivers; 38.1 to 37.1, about 0.97 again.

Same story with and without AA. So yes, I call FUD: I think there was NOTHING wrong with using 256MB of VRAM under the older drivers [that in fact there was no bug to begin with], and I think truffle's scores support this in REAL GAMES. I don't want to use synthetics as an example because people with stock X800 Pros are beating people with Ultra Extremes.

If anyone can punch a hole in this comparison, please do so by all means.
 
The Batman said:
Thanks a lot, bro. Must say this is much better than flaming back and forth. Though I think you misunderstood what I wanted to compare [doesn't matter, you gave the scores and that's all I really needed; I'll be sure to explain my reasoning as well].

Alright, let's get cracking at the benches and why I think your results support my "the bug excuse is FUD" theory.

Alright, let me draw your attention to the scores I've bolded in the quote. Those are what we'll be looking at for the comparison. For now let's look at the top non-AA scores. Compare the 48.6 to 46.0. That's when moving from Medium [128MB used] to High [256MB used] under the older drivers. The difference is 2.6 frames; as a ratio, High is about 0.95 of Medium (46.0/48.6). The ratios are what we'll be looking at, since we can't use a raw frame comparison: the card is doing more work in High than in Medium that is not necessarily related to textures.

Okay, now we're going to compare the same thing but using the hotfixes: 53.3 to 51.7. Difference of 1.6, a ratio of about 0.97 (51.7/53.3), making it all in all about a 0.02 difference between the ratios. Not really something to write home about. So... what happened to the bug fix? The difference between 0.95 and 0.97 is negligible at best, falling within the bench's margin of error at worst. We should be seeing a LARGER disparity. Do you guys follow me? The hotfix is supposed to fix a 256MB allocation problem [as it only affects 256MB cards]. What I did was compare the difference between using 128 [Medium] and 256 [High] under the older drivers to the difference between using 128 and 256 under the hotfix drivers. If indeed the hotfix worked as ATI claimed it did, the gap between 128 and 256 should have been noticeably larger under the older drivers, since there the 256MB mode was supposedly crippled. In other words, if this was such a serious issue, truffle should have taken a much larger hit when moving from Medium to High under the older drivers [since 256MB wasn't allocating properly]. Obviously this is not the case.

Okay, now we're going to do the same exact thing, but this time we'll use the AA scores: 35.3 to 33.8, a ratio of roughly 0.96 under the older drivers; 38.1 to 37.1, about 0.97 again.

Same story with and without AA. So yes, I call FUD: I think there was NOTHING wrong with using 256MB of VRAM under the older drivers [that in fact there was no bug to begin with], and I think truffle's scores support this in REAL GAMES. I don't want to use synthetics as an example because people with stock X800 Pros are beating people with Ultra Extremes.

If anyone can punch a hole in this comparison, please do so by all means.


Make sure Catalyst AI is off; in the 8.07 drivers there is a registry hack for it.
 
The Batman said:
Truffle said AI was off [he doesn't have CCC installed].


Does matter, methinks; it's automatic. Not sure, haven't really checked it out myself.
 
Actually, we tested this on our engine as well; we did notice that after going above 3 million polygons there is a sudden drop on both ATI and nV cards, where the frame rates fall from 60 to 30!

Interesting; it's not really a bug, more that memory access changes fixed this. It's a flaw in 3DMark05. My engine is OGL, and this is very similar to what is happening in DX with 3DMark.

This was also noticeable on lower-end cards too; it happened with the 9800 and the FX 5900, but with a sudden drop at around 1.2 million polys. Same with the GF3, which had a big drop at 700k.

I'm thinking nV will have a fix for this as well, if Futuremark doesn't fix it first.
 
Brent_Justice said:
Unless you are a time traveler how can you know what future games will be like?

Is 3DMark really telling you of what future games will be like?

No, and I never said it would. What it does show is potential weakness based on the IHV's hardware and the way games will make use of DX9. Have you run CS:S on a 5600 series card and forced it to use the DX9 path? If so, what was its score? Then repeat with a 9600 series card. Here, let me save you the trouble:

http://www.firingsquad.com/hardware/geforce_fx_half-life2/page7.asp

See how bad the FX did in a true DX9 benchmark.

The point is, synthetics like 3DMark2k3 showed a possible weakness about 18 months ago, and now that people have some games that make use of DX9, if they have an FX card from that time and run it in DX9 mode, their frame rate is crap compared to what ATI cards of the same era were offering. So we hope the average Joe has upgraded their card by now, as DX9 games will be much slower without some serious help from NV...


Gamers buy video cards to play games NOW. They usually don't buy a brand spanking new video card and shelve it for 6 months and then pull it out and use it.

That's very true. But you also have some that don't upgrade that often. How often does the average Joe buy a new video card? Every 6 months? 12 months? 18 months? See the above example.


As for predicting future game performance no one can do this with 100% certainty.

Very true.


What we CAN do though is use the latest games based on the latest game engines that might give us a prediction in how a video card will perform on a game using that game engine.

For example testing DOOM 3, it has a very new engine which WILL be used by a whole lot of games in the future. By testing DOOM 3 we get a glimpse at what kind of performance we MAY see in future titles based on this engine. If nothing else at least we are testing a real game engine that WILL be a popular game engine in future titles.

No, that's not true at all. Doom3 tells us only Doom3 performance. Does performance in Q3 tell me how my card will perform in SOF2, JK2, RtCW, MOH:AA? The limitations found in Doom3 stand a very good chance of being much, much different in the games that are based off Doom3. Also, Doom3 is a poor example as it's a "DX7" class game at heart :)


Same for other games, we want to hit game engines that will be used in the future, HL2, UE3, Cry Engine, etc... These are all state of the art games and engines.

Look at last year for example. We had nothing that showed us how Far Cry would perform. How do we know how new games are going to do using engines that we have not seen yet? How about F.E.A.R.? How about SS2? Will the current engines have any weight in those games?


But we are much closer to figuring out future game performance by using real game engines that are going to be used in the future rather than some synthetic benchmark that will never be used as a game engine in any game in the future.

Again, that's your opinion (and I respect that). However, I don't agree with it. Those engines don't make use of future features. Do you know how PS3.0 will change things? Are today's Far Cry benches an accurate picture of actual PS3.0 usage in tomorrow's games?

My point again is that synthetics are not going to show how future game performance is going to be. But what they do show is how the hardware will react given the same methods that those games will use. And while they cannot predict, they can show hardware limitations that, so far, do show up in real games. Case in point: two years ago 3DMark showed how crappy FX cards were in true DX9 environments. Case in point: AA in GT2 of 3DMark2k3 showed how slow the ATI cards were using similar shadowing to what Doom3 used, and Doom3 also showed that speed difference.
 
:(

Same story with and without AA. So yes, I call FUD: I think there was NOTHING wrong with using 256MB of VRAM under the older drivers [that in fact there was no bug to begin with], and I think truffle's scores support this in REAL GAMES. I don't want to use synthetics as an example because people with stock X800 Pros are beating people with Ultra Extremes.

The bug was documented as being an issue with vertex data and how it was stored, which then leaked into main memory on AGP cards with 256MB of memory. Here is some homework for you.

1) How much vertex data is used in 3DMark2k5 game test 1 (the one that saw the biggest benefit from this driver)?

When you get that answer, compare it to:

2) How much vertex data is used in Doom3?

And see if there is a difference. Of course there is the obvious OpenGL vs. DX difference, and Doom being more or less a "DX7 game" vs. a "true DX9 benchmark"...
 
Jbirney said:
:(



The bug was documented as being an issue with vertex data and how it was stored, which then leaked into main memory on AGP cards with 256MB of memory. Here is some homework for you.

1) How much vertex data is used in 3DMark2k5 game test 1 (the one that saw the biggest benefit from this driver)?

When you get that answer, compare it to:

2) How much vertex data is used in Doom3?

And see if there is a difference. Of course there is the obvious OpenGL vs. DX difference, and Doom being more or less a "DX7 game" vs. a "true DX9 benchmark"...

Test out Codecult's Code Creatures; it's a DX8.1 benchmark that uses 300k polys, around 600k vertices, and from DX8.1 to 9 the vertex memory usage hasn't changed. It's a bug in the engine that is being tested, as we also came across this in OGL.
 
Jbirney said:
:(



The bug was documented as being an issue with vertex data and how it was stored, which then leaked into main memory on AGP cards with 256MB of memory. Here is some homework for you.

1) How much vertex data is used in 3DMark2k5 game test 1 (the one that saw the biggest benefit from this driver)?

When you get that answer, compare it to:

2) How much vertex data is used in Doom3?

And see if there is a difference. Of course there is the obvious OpenGL vs. DX difference, and Doom being more or less a "DX7 game" vs. a "true DX9 benchmark"...


Fine. Since you've done your homework, you can explain to me why the new drivers do very little for Far Cry [except for breaking the SM2.0b path]. It's a DX9-heavy game. It didn't see a massive jump in frames [hell, a lot of users were reporting a few frames lost, even].
 
The Batman said:
Fine. Since you've done your homework, you can explain to me why the new drivers do very little for Far Cry [except for breaking the SM2.0b path]. It's a DX9-heavy game. It didn't see a massive jump in frames [hell, a lot of users were reporting a few frames lost, even].


The bug is specific to how vertex data is handled, not texture data. Far Cry does not come close to the same amount of vertex data that 3DMark2k5 uses. No other game stresses your GPU like 3DMark2k5, and that's why you're not seeing it anywhere else. But given these facts there is a very reasonable cause for this drop, and so far there is NO evidence of any foul play going on. The fact that FM approved these drivers also seems to indicate this is a very legit reason. I am not trying to be a butthead, just looking at all of the data before I say they are "cheating".


rancor,
I don't have a 256MB AGP card to test your case. I just got that "new" Sony HP94P LCD (which is very nice), so I have to save up a bit to get a new video card.
 
The Batman said:
Fine. Since you've done your homework, you can explain to me why the new drivers do very little for Far Cry [except for breaking the SM2.0b path]. It's a DX9-heavy game. It didn't see a massive jump in frames [hell, a lot of users were reporting a few frames lost, even].

The SM2.0b pathway doesn't work correctly because you need to enable instancing with these sets; it's off by default. Also, who is reporting frames lost? I certainly didn't, and in fact my minimum fps actually went up more than my average, and I'm not using an X800.
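
For reference, a minimal sketch of how an application typically switches geometry instancing on for ATI's SM2.0b parts in D3D9. The 'INST' FourCC probe and the render-state toggle reflect my understanding of the mechanism, so treat the exact values as an assumption rather than gospel:

```cpp
#include <d3d9.h>

// FourCC ATI uses to expose (and switch on) geometry instancing on SM2.0b
// hardware; D3D9 otherwise only requires instancing of SM3.0 parts.
static const D3DFORMAT FOURCC_INST = (D3DFORMAT)MAKEFOURCC('I', 'N', 'S', 'T');

bool enable_ati_instancing(IDirect3D9* d3d, IDirect3DDevice9* device)
{
    // 1) Probe: does the driver advertise the 'INST' pseudo-format?
    if (FAILED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                      D3DFMT_X8R8G8B8,  // assumed adapter format
                                      0, D3DRTYPE_SURFACE, FOURCC_INST)))
        return false;

    // 2) Toggle: the hack is enabled through an overloaded render state,
    //    which is exactly the kind of switch a game can leave off by default.
    device->SetRenderState(D3DRS_POINTSIZE, (DWORD)FOURCC_INST);
    return true;
}
```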
 
Jbirney said:
The bug is specific to how vertex data is handled, not texture data. Far Cry does not come close to the same amount of vertex data that 3DMark2k5 uses. No other game stresses your GPU like 3DMark2k5, and that's why you're not seeing it anywhere else. But given these facts there is a very reasonable cause for this drop, and so far there is NO evidence of any foul play going on. The fact that FM approved these drivers also seems to indicate this is a very legit reason. I am not trying to be a butthead, just looking at all of the data before I say they are "cheating".


rancor,
I don't have a 256MB AGP card to test your case. I just got that "new" Sony HP94P LCD (which is very nice), so I have to save up a bit to get a new video card.



Batman, I'm leaning away from it being a cheat, especially after what I just saw with my engine, and it affects nV's cards as well. It's a bug that has to be fixed by both sides; it seems that since this many polygons haven't been used in a real-time app before, it's a common flaw in programming technique, or a driver vertex memory access error.


I did not notice any effect on the frame rates in Codecult's Code Creatures, so not all programs have this flaw.
 
rancor said:
Batman, I'm leaning away from it being a cheat, especially after what I just saw with my engine, and it affects nV's cards as well. It's a bug that has to be fixed by both sides; it seems that since this many polygons haven't been used in a real-time app before, it's a common flaw in programming technique, or a driver vertex memory access error.

That would also explain the 1000-point jumps of the 66.70 NV drivers, hmmm.....
 
Jbirney said:
The fact that FM approved these drivers also seems to indicate this is a very legit reason. I am not trying to be a butthead, just looking at all of the data before I say they are "cheating".

That's a good point. This is one of only two Catalyst drivers approved for 3DMark05. Obviously, if Futuremark can't find anything shady and neither can any users (I've seen quite a few insinuations, but they were quickly disproved), how can one continue to act so adamantly against them? Especially when they say little about the recent set NVIDIA was distributing, which isn't FM approved, provides an almost similarly significant jump in performance for the NV40, and breaks a few popular games. You can be biased and skeptical, but there's a point where it gets ridiculously excessive.
 
rancor said:
Batman, I'm leaning away from it being a cheat, especially after what I just saw with my engine, and it affects nV's cards as well. It's a bug that has to be fixed by both sides; it seems that since this many polygons haven't been used in a real-time app before, it's a common flaw in programming technique, or a driver vertex memory access error.


I did not notice any effect on the frame rates in Codecult's Code Creatures, so not all programs have this flaw.

I'm not calling it a cheat/hack/what have you. I'm just questioning the validity of this 3DMark05-only performance boost and MORE IMPORTANTLY its relevance to real-world gaming.

Like the man said, if Far Cry doesn't stress it, and it's going to be the top dog of DX9 engines [well, maybe not top dog, but I don't imagine any new engine is going to come along and utterly destroy it inside the year] until UE3 debuts... who cares? We'll all have 7800s and XI800s. It didn't do a thing for D3.
 
The Batman said:
I'm not calling it a cheat/hack/what have you. I'm just questioning the validity of this 3DMark05-only performance boost and MORE IMPORTANTLY its relevance to real-world gaming.

Like the man said, if Far Cry doesn't stress it, and it's going to be the top dog of DX9 engines [well, maybe not top dog, but I don't imagine any new engine is going to come along and utterly destroy it inside the year] until UE3 debuts... who cares? We'll all have 7800s and XI800s. It didn't do a thing for D3.


True, and this GT1 test also has 1 million to 2 million polys per viewable scene. Unreal 3 doesn't even get that high, nowhere near actually; they are at like 400k per scene.
 
The whole idea of a "Synthetic" benchmark being affected in performance by drivers completely invalidates it as a "Synthetic" benchmark that is supposed to tell you how powerful your video card is.
 
Brent_Justice said:
The whole idea of a "Synthetic" benchmark being affected in performance by drivers completely invalidates it as a "Synthetic" benchmark that is supposed to tell you how powerful your video card is.

If I wasn't already maxing out my sig lines, your post would be going in there. :D
 
Brent_Justice said:
The whole idea of a "Synthetic" benchmark being affected in performance by drivers completely invalidates it as a "Synthetic" benchmark that is supposed to tell you how powerful your video card is.

indeed indeed indeed!
 
Elios said:
That would also explain the 1000-point jumps of the 66.70 NV drivers, hmmm.....


Definitely a possibility. Where are the 66.70 drivers, are they at Guru3D? Got to test 'em out, lol.
 