Half-Life 2 video stress test out today on Steam

GabooN said:
http://www.driverheaven.net/showthread.php?p=423840&posted=1#post423840


One word: WOW!

I just got my Ultra installed today...
Oh well, not like I could find an X800 XT; been looking for months..

well we know driverheaven likes to benchmark the 6800 Ultra with optimizations off...so add 10% to the nvidia score...

and comparing 6xAA to 8xS AA is about fucking retarded...apples and oranges...driverheaven is about as blatantly biased as it gets...
 
well damnit, I guess I eat my keyboard. no benchmark for everyone today. why did I trust Valve to make the right decision? :p pass the salt please
 
^eMpTy^ said:
well we know driverheaven likes to benchmark the 6800 Ultra with optimizations off...so add 10% to the nvidia score...

and comparing 6xAA to 8xS AA is about fucking retarded...apples and oranges...driverheaven is about as blatantly biased as it gets...
nvidia opts were on. Way to fucking READ.

but yeah, 6xAA vs 8xS AA is pretty retarded.
 
Silverghost said:
nvidia opts were on. Way to fucking READ.

but yeah, 6xAA vs 8xS AA is pretty retarded.

haha...I searched the thread for the word "optimization"

they spelled it wrong..."optimisation"...

whoops
 
tranCendenZ said:
So basically a 10% advantage to ATI eh? Wonder if that could be made up with FP16pp calls and SM3.0... All in all not a bad showing for Nvidia.


Don't even think SM 3.0 is needed to cover this gap; some newer drivers should do the trick.
 
rancor said:
Don't even think SM 3.0 is needed to cover this gap; some newer drivers should do the trick.
Indeed, note that they used the just-released-yesterday Catalyst 4.8 drivers vs nVidia's month-old 61.77s.

Especially interesting given that we know the 65.xx already produce a huge FPS jump for nVidia.

By the time this game is released, ATI cards are going to be trailing nVidia nicely (especially if SM3.0 gets in the initial release - somehow, I don't see ATI letting that happen, though).
 
ok here's why I hate driverheaven...they used an FX-53, vr-zone used a 3800+, both had the same amount of ram...

1600x1200 4xaa 8xaf
vr-zone XTPE: 70.54
driverheaven XTPE: ~72

so that makes sense...but look at the 6800Ultra...

vr-zone 6800U: 63.54
driverheaven 6800U: 50

that's a 15fps disparity...it's a good thing nobody takes their shit-hole website seriously...
 
dderidex said:
Indeed, note that they used the just-released-yesterday Catalyst 4.8 drivers vs nVidia's month-old 61.77s.

Especially interesting given that we know the 65.xx already produce a huge FPS jump for nVidia.

By the time this game is released, ATI cards are going to be trailing nVidia nicely (especially if SM3.0 gets in the initial release - somehow, I don't see ATI letting that happen, though).

I seriously doubt hl2 will come with sm3 support out of the box...but between the new drivers and impending sm3 support...nvidia should easily be able to match the xtpe (which you still can't buy, blah)...
 
I only take [H] seriously now.

Not even anandtech, they did a CPU scalability test for Doom3 using the built-in timedemo.....
 
Silverghost said:
I only take [H] seriously now.

Not even anandtech, they did a CPU scalability test for Doom3 using the built-in timedemo.....

those anandtech numbers were just funny...ATi losing by as much as 50%...lol...but they were mostly worthless...

I agree that the [H] is the only one you can really trust these days...I just hope new nv drivers come out before they get a hold of hl2...
 
ok here's why I hate driverheaven...they used an FX-53, vr-zone used a 3800+, both had the same amount of ram...

1600x1200 4xaa 8xaf
vr-zone XTPE: 70.54
driverheaven XTPE: ~72

so that makes sense...but look at the 6800Ultra...

vr-zone 6800U: 63.54
driverheaven 6800U: 50

that's a 15fps disparity...it's a good thing nobody takes their shit-hole website seriously...


For one, the VR-Zone test was done on CS:Source.

The driverheaven test was done using the Video Stress Test utility (benchmark) that was released just 6 hours ago.

They are different tests, the benchmark (stress test) being the more graphically intensive one.
 
NSG said:
For one, the VR-Zone test was done on CS:Source.

The driverheaven test was done using the Video Stress Test utility (benchmark) that was released just 6 hours ago.

They are different tests, the benchmark (stress test) being the more graphically intensive one.

christ...maybe I should just stop talking tonight...
 
Ehhh, don't worry about it. All this rubbish is getting confusing for folks. I just want to get my comp working again (been out of commission for 1.5 months now...damn mobo RMA).
 
NSG said:
For one, the VR-Zone test was done on CS:Source.

The driverheaven test was done using the Video Stress Test utility (benchmark) that was released just 6 hours ago.

They are different tests, the benchmark (stress test) being the more graphically intensive one.

Their nV benchmarks' opts were off; people with GTs are getting higher scores than that.
 
rancor said:
Their nV benchmarks' opts were off; people with GTs are getting higher scores than that.


In the Driver Heaven Video Stress Test, the nVidia opts. were clearly ON. VR-Zone's were off iirc. Is that what you were saying? Don't expect opts. to magically allow nvidia to gain 10%; it probably won't happen.

Oh, and to all those naysayers expecting nvidia to get a huge boost from SM3.0, I have to ask: why wouldn't Valve implement an SM2.0b profile alongside it? SM2.0b brings nearly the same improvements for R420 as SM3.0 does for NV40 in Far Cry. Also, don't expect to see the same improvements in HL2 from enabling these profiles as we saw in Far Cry. It may or may not happen. We don't know how much benefit opts like instancing and one-pass lighting will have on Source and the implementation of it in HL2.
 
pakotlar said:
In the Driver Heaven Video Stress Test, the nVidia opts. were clearly ON. VR-Zone's were off iirc. Is that what you were saying? Don't expect opts. to magically allow nvidia to gain 10%; it probably won't happen.

Oh, and to all those naysayers expecting nvidia to get a huge boost from SM3.0, I have to ask: why wouldn't Valve implement an SM2.0b profile alongside it? SM2.0b brings nearly the same improvements for R420 as SM3.0 does for NV40. Also, don't expect to see the same improvements in HL2 from enabling these profiles as we saw in Far Cry. It may or may not happen. We don't know how much benefit opts like instancing and one-pass lighting will have.


Guess what, check your shader files: SM 2.0b is already there :). Can't say if it's in use or not, but the path is definitely there. Also, run the tests again at the highest setting; nV's opts turn off. Interesting how ya don't look into everything.
 
rancor said:
Guess what, check your shader files: SM 2.0b is already there :). Also, run the tests again at the highest setting; nV's opts turn off.


As you can see in my previous post, my comp is down unfortunately. So sm 2.0b is already being used on r420? Great! I wonder how much benefit it has over sm 2.0. Can anyone run sm 2.0 path? Is it even possible? And I understand that highest quality cp settings disable opts for nvidia. However, driver heaven ENABLED these opts. Read the preface to the benches.
 
pakotlar said:
As you can see in my previous post, my comp is down unfortunately. So sm 2.0b is already being used on r420? Great! I wonder how much benefit it has over sm 2.0. Can anyone run sm 2.0 path? Is it even possible? And I understand that highest quality cp settings disable opts for nvidia. However, driver heaven ENABLED these opts. Read the preface to the benches.


Can't say if the 2.0b path is in use or not; no way to really tell without log output from the engine. The thing is, the guys that did the benchmark probably messed up the driver settings, 'cause if not set right the opts will turn off in highest settings.
 
GabooN said:
http://www.driverheaven.net/showthread.php?p=423840&posted=1#post423840


One word: WOW!

I just got my Ultra installed today...
Oh well, not like I could find an X800 XT; been looking for months..
how about an apples-to-apples benchmark? I have no idea what they did (wrong), but I get much higher scores with a slower processor.

1280x960 0xaa/0xaf
them:
6800U ~92fps
x800XT ~135fps
me:
6800GT @ 400/1100 125fps

1280x960 4xaa/8xaf
them:
6800U ~62fps
x800XT ~93fps
me:
6800GT @ 400/1100 98fps

1600x1200 0xaa/0xaf
them:
6800U ~77fps
x800XT ~99fps
me:
6800GT @ 400/1100 90fps

1600x1200 4xaa/8xaf
them:
6800U ~51fps
x800XT ~72fps
me:
6800GT @ 400/1100 59fps

the "6x" scores on the x800XT look more like 4x, probably caused by a fallback because the amount of textures + front/back buffer in the stress test exceed the 256MB memory size in both 1280x960 and 1600x1200 with 6xaa. That can be verified by running an x800XT in 4xaa/16xaf and comparing the scores. It's impossible that 1280x960 4xaa/8xaf is only 1-2 fps faster than 1280x960 6xaa/16xaf (50% more AA subsamples than 4xaa), that has to be only the difference between 8xaf and 16xaf.
 
have you tried any benches with vsync on/off? in some games, vsync off can give quite a few more frames, depending on the area.
 
pxc said:
how about an apples-to-apples benchmark? As mentioned above, driverheaven has a funny idea about disabling optimizations for one card and not the other.

1280x960 0xaa/0xaf
them:
6800U ~92fps
x800XT ~135fps
me:
6800GT @ 400/1100 125fps

1280x960 4xaa/8xaf
them:
6800U ~62fps
x800XT ~93fps
me:
6800GT @ 400/1100 98fps

1600x1200 0xaa/0xaf
them:
6800U ~77fps
x800XT ~99fps
me:
6800GT @ 400/1100 90fps

1600x1200 4xaa/8xaf
them:
6800U ~51fps
x800XT ~72fps
me:
6800GT @ 400/1100 59fps

the "6x" scores on the x800XT look more like 4x, probably caused by a fallback because the amount of textures + front/back buffer in the stress test exceed the 256MB memory size in both 1280x960 and 1600x1200 with 6xaa. That can be verified by running an x800XT in 4xaa/16xaf and comparing the scores. It's impossible that 1280x960 4xaa/8xaf is only 1-2 fps faster than 1280x960 6xaa/16xaf (50% more AA subsamples than 4xaa), that has to be only the difference between 8xaf and 16xaf.

nice comparison, see Driver Heaven has nothin' better to do but toot ATi's horn :D
 
rancor said:
nice comparison, see Driver Heaven has nothin' better to do but toot ATi's horn :D
I edited my post because the driverheaven link claims the optimizations were left on for the nvidia card. I really have no idea what they did to the 6800U to get such low scores. :confused: Those are 15-60% differences when the performance difference in memory timings between a 6800GT @ 400/1100 and a 6800U @ 400/1100 should be 2-3% tops.
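For what it's worth, redoing the arithmetic on the numbers posted above (the setting labels are just my shorthand for the four resolution/AA combos):

# Percent gaps between my 6800GT @ 400/1100 and driverheaven's 6800U
# numbers quoted earlier in the thread.
dh_ultra = {"1280 noAA": 92, "1280 4xAA/8xAF": 62,
            "1600 noAA": 77, "1600 4xAA/8xAF": 51}
my_gt    = {"1280 noAA": 125, "1280 4xAA/8xAF": 98,
            "1600 noAA": 90, "1600 4xAA/8xAF": 59}
for setting, dh in dh_ultra.items():
    gap = (my_gt[setting] / dh - 1) * 100
    print(f"{setting}: +{gap:.0f}%")

That prints +36%, +58%, +17%, and +16%, which is where the 15-60% range comes from.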
 
pxc said:
how about an apples-to-apples benchmark? I have no idea what they did (wrong), but I get much higher scores with a slower processor.

1280x960 0xaa/0xaf
them:
6800U ~92fps
x800XT ~135fps
me:
6800GT @ 400/1100 125fps

1280x960 4xaa/8xaf
them:
6800U ~62fps
x800XT ~93fps
me:
6800GT @ 400/1100 98fps

1600x1200 0xaa/0xaf
them:
6800U ~77fps
x800XT ~99fps
me:
6800GT @ 400/1100 90fps

1600x1200 4xaa/8xaf
them:
6800U ~51fps
x800XT ~72fps
me:
6800GT @ 400/1100 59fps

the "6x" scores on the x800XT look more like 4x, probably caused by a fallback because the amount of textures + front/back buffer in the stress test exceed the 256MB memory size in both 1280x960 and 1600x1200 with 6xaa. That can be verified by running an x800XT in 4xaa/16xaf and comparing the scores. It's impossible that 1280x960 4xaa/8xaf is only 1-2 fps faster than 1280x960 6xaa/16xaf (50% more AA subsamples than 4xaa), that has to be only the difference between 8xaf and 16xaf.

now that's "WOW" worthy...

it's amazing how driverheaven manages to consistently show nvidia in such a negative light...your scores are much closer to what I was expecting...which drivers were you using?
 
pxc said:
I edited my post because the driverheaven link claims the optimizations were left on for the nvidia card. I really have no idea what they did to the 6800U to get such low scores. :confused: Those are 15-60% differences when the difference in memory timings between a 6800GT @ 400/1100 and a 6800U @ 400/1100 should be 2-3% tops.


They probably mangled their card lol
 
^eMpTy^ said:
now that's "WOW" worthy...

it's amazing how driverheaven manages to consistently show nvidia in such a negative light...your scores are much closer to what I was expecting...which drivers were you using?
65.62, 24 days old. :p

also, that's with an A64 3200+ @ stock (right now) vs an FX-53
 
pxc said:
65.62, 24 days old. :p

also, that's with an A64 3200+ @ stock (right now) vs an FX-53

well the drivers are probably helping you a bit...but that still doesn't account for the numbers they were showing...

blah...fuck em...
 
Using 61.77
Specs in SIG

1280x960 = 103 FPS

1280x960 4xAA 8xAF = 76 FPS

1600x1200 = 90 FPS

1600x1200 4xAA 8xAF = 59 FPS

I must say their numbers seem a bit off.. Oh well, I think I made the right choice..
 
Those benchmarks are blatantly off. See specs in signature.

1280x960 noAA / Trilinear = 100fps

1280x960 4xAA / 8xAF = 84fps

1600x1200 noAA / Trilinear = 80fps

1600x1200 4xAA / 8xAF = 57fps

And let me tell you, an FX-53 beats the f**k out of a 2.4GHz Athlon XP. DriverHeaven did something wrong. Whether it was intentional or not, who's to say.

(using 61.77's and opts on)
 
1600x1200 no AA and trilinear, Outlook and IE open on one screen, CS:S on main


85.9 FPS

6xAA 4xAF (opts on)

75.9 FPS


"65's"
 
DigitalEmperor said:
an FX-53 beats the f**k out of a 2.4GHz Athlon XP. DriverHeaven did something wrong. Whether it was intentional or not, who's to say.

(using 61.77's and opts on)


With the way DH is acting, seems like it was purposeful ;)
 