Valve sucks

I just had to come in here to say "wow" at the stupidity of some of the claims in this thread.

Valve programs something using dx9 full precision standard and someone says that Valve is against nvidia because they didn't use partial precision? They are against nvidia because they programmed to dx9 specifications and didn't specifically do more work to add _pp hints (which DO noticeably reduce IQ) just so that crappy hardware can run the game?
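For anyone wondering what "adding _pp hints" actually involves, here is a minimal, hypothetical HLSL pixel shader written two ways. This is not Valve's code; the sampler, constants and lighting math are made up purely to illustrate the point. In DX9 HLSL the half type is what requests partial precision: the compiler turns those declarations into _pp modifiers in the ps_2_0 output, and a developer has to decide, value by value and shader by shader, where FP16 is acceptable.

// Hypothetical specular shader at full precision (FP24 on R3x0, FP32 on NV3x, the slow case for NV3x).
sampler baseMap : register(s0);
float3  lightDir;    // assumed engine-supplied constants
float4  specColor;

float4 ps_full(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR
{
    float4 base = tex2D(baseMap, uv);
    float  spec = pow(saturate(dot(normalize(normal), lightDir)), 16.0);
    return base + spec * specColor;
}

// The same math with partial-precision hints: each half declaration becomes a _pp hint,
// which NV3x runs much faster, at the cost of dropping those values to FP16.
half4 ps_partial(float2 uv : TEXCOORD0, half3 normal : TEXCOORD1) : COLOR
{
    half4 base = tex2D(baseMap, uv);
    half  spec = pow(saturate(dot(normalize(normal), (half3)lightDir)), 16.0);
    return base + spec * (half4)specColor;
}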

Hey, the FX line sucked. Blame nvidia for the crappy performance, not the programs that code to spec. Blame yourself for getting crappy hardware when it's been known since the first benchmarks of the nv3x line that it was incredibly underpowered when it came to ps2.0 and full precision operations. The nv4x line is great, so only the nv3x applies.

I am amazed at the stupidity of some people. And shame on Kyle for even linking to a thread like this on the home page. Very poor integrity on his part to even support idiocy like this.
 
Chris_B said:
Personally I think if games companies are optimising for one gfx card or another it's total bullshit, and it's something that should be stopped before it goes too far.
It's terrible in general, but in this case valve may have had a good reason to drop a path that helped the FX cards (even ID dropped the NV30 path, which may have influenced Valve). The game was very, very late and the mixed mode path was extra work. I wrote that a couple of pages ago and there's a similar post several replies up.

Too bad HL2 became an ATI marketing tool, or nvidia may have been able to make it play better out of the box. But the same goes for other games, especially a few TWIMTBP games.
 
Jbirney said:
Again this tweak only seems to help FX users and not 6800 users. Here is one user on a 6800 that gained a whopping 2 fps on his 6800:
http://www.nvnews.net/vbulletin/showthread.php?t=41625&page=2&pp=15

Where are some real benchmarks with some more numbers.... like the system, resolutions, etc.... 2fps boost may have been b/c he was already CPU limited, and that tweak just squeezed 2 more fps out (ex 150fps to 152fps).

Without any details... that is useless information.

Or if it was at 1600x1200 with 8xAA/16xAF and he went from 10fps to 12fps that would be a 20% difference.
 
First off, the doom3 engine is a more advanced engine than the source engine, let's get that straight please.

Now, doom3 was never programmed in favor of nvidia cards; if it had been, we would have seen something like this.

Valve clearly, for whatever reasons, decided not to optimise their game for nvidia cards. You can't say that about doom3, because doom3 was optimized for a variety of different cards. The only reason it runs slightly slower (NOTE: SLIGHTLY SLOWER, THE GAME IS STILL PLAYABLE ON ATI CARDS) is because of ATI's choice not to work with opengl that much.

Now here we have HL2, using the source engine. At first, reports said it magically runs almost 40% faster on ATI cards (but it doesn't). Now, if you read what this tweak actually fixes, I think the only logical explanation you can come to is that valve decided not to optimize the game for nvidia cards. Valve even says they were working on an idea for pp, but for some reason they magically decided not to continue it.

So here is the problem of it. HL2 works SLIGHTLY BETTER (NOTICE THE SLIGHTLY BETTER, BECAUSE THE GAME IS STILL PLAYABLE ON NVIDIA CARDS) on ATI cards, yet with this tweak, nvidia cards are equal in speed (I presume; logically it seems they would be). Valve chose for whatever reason not to do whatever they needed to do for the nvidia cards (which, if 3rd-party users like us can get it to work, valve surely could have gotten to work very easily).

So to end the rambling, Valve knowingly chose not to optimize their game for the nvidia cards.
 
rashly said:
I am amazed at the stupidity of some people. And shame on Kyle for even linking to a thread like this on the home page. Very poor integrity on his part to even support idiocy like this.

This was linked because there is an easy way to almost double the performance in hl2 on the fx line of cards in dx9 mode. If you don't like hearing opinions on the matter, why did you read them?
 
chrisf6969 said:
Where are some real benchmarks with some more numbers....
Geez, I'll do some benchmarks. I can't believe no one has posted any yet. Scratch that, I do believe it, because all the HL2 benchmark threads here have been miserable failures. :mad:

I only have an FX5200 from that line, so don't expect much. The relative DX8.1, FP32 and FP16 benchmarks should at least be meaningful.
 
-=bladerunner=- said:
That's not true. In CoD, Ati cards run at least as fast as nV cards, if not faster.

I am sorry, but in every review that I have seen (even with the 5XXX nvidia cards) ati has gotten its ass handed to it in CoD and all other opengl games. Go to rage3d; myself and others have been begging for years for ati to prioritize opengl performance.
 
rashly said:
I just had to come in here to say "wow" at the stupidity of some of the claims in this thread.

Valve programs something using dx9 full precision standard and someone says that Valve is against nvidia because they didn't use partial precision? They are against nvidia because they programmed to dx9 specifications and didn't specifically do more work to add _pp hints (which DO noticeably reduce IQ) just so that crappy hardware can run the game?

Hey, the FX line sucked. Blame nvidia for the crappy performance, not the programs that code to spec. Blame yourself for getting crappy hardware when it's been known since the first benchmarks of the nv3x line that it was incredibly underpowered when it came to ps2.0 and full precision operations. The nv4x line is great, so only the nv3x applies.

I am amazed at the stupidity of some people. And shame on Kyle for even linking to a thread like this on the home page. Very poor integrity on his part to even support idiocy like this.

I can't believe you came in here with that bullshit without even reading the thread. You have missed every point and repeated everything else that has been said by others that didn't read/comprehend. Good job.
 
Mchart said:
So here is the problem of it. HL2 works SLIGHTLY BETTER (NOTICE THE SLIGHTLY BETTER, BECAUSE THE GAME IS STILL PLAYABLE ON NVIDIA CARDS) on ATI cards, yet with this tweak, nvidia cards are equal in speed (I presume; logically it seems they would be).

This only applies to the 5xxx series. The 6800 series... well... do you really want to play it at FP16? I'm sure you'll get a boost there but... whhhy?

For the 9800 and 5900, the 9800s were still better for DX games.
 
Lord of Shadows said:
This was linked because there is a easy way to almost double the performance in hl2 on the fx line of cards in dx9 mode. If you dont like to hear opinions on the matter why did you read them?
Too bad Kyle didn't do that. Saying a ridiculous claim like "Half Life 2 Coded Against NVIDIA?" in a news post is much different than saying "Increase HL2 Performance on nv3x by Lowering Quality". All he has shown is that he doesn't understand the issue at all and made an assumption based on his lack of knowledge on the subject.
 
rashly said:
I just had to come in here to say "wow" at the stupidity of some of the claims in this thread.

Valve programs something using dx9 full precision standard and someone says that Valve is against nvidia because they didn't use partial precision? They are against nvidia because they programmed to dx9 specifications and didn't specifically do more work to add _pp hints (which DO noticeably reduce IQ) just so that crappy hardware can run the game?

Hey, the FX line sucked. Blame nvidia for the crappy performance, not the programs that code to spec. Blame yourself for getting crappy hardware when it's been known since the first benchmarks of the nv3x line that it was incredibly underpowered when it came to ps2.0 and full precision operations. The nv4x line is great, so only the nv3x applies.

I am amazed at the stupidity of some people. And shame on Kyle for even linking to a thread like this on the home page. Very poor integrity on his part to even support idiocy like this.


We say that Valve is against NVIDIA not because of this but because of a consistent, deliberate pattern of anti-NV statements.

Let's review:

- ATI gives Valve a sweet bundle deal for HL2. Assuming ATI was paying $10 for each copy of HL2 (far below the retail price), and assuming that they shipped one million Radeon 9800XT/9600XT cards, that's something like $10 million. A lot of money.

- Gabe appears at ATI's "shader day" and trashes NV30

- Valve claims that antialiasing won't work on NV30 and that it can't be fixed. A month later, they reverse their position.

- Shortly after the 6800 release, Valve states that HL2 is "much faster" on next-gen ATI hardware.

- ATI releases HL2 benchmarks that strongly favor the x800 series. Other benchmarks, like [H]'s benchmarks, show a much smaller difference.

- Valve forces NV30 cards to run in DX8.1. When forced to use DX9, rendering errors occur. By changing the card ID to "ATI", these errors disappear. By forcing FP16, performance greatly increases.



These actions by Valve show a consistent and willful attempt to paint NVIDIA hardware as inferior. Valve's words and its actions show that it is firmly in the pocket of ATI.

Consider this:

- DOOM3 runs equally well on NV30 as it does on R300
- FarCry, while slower on NV30, does not display the dramatic performance difference of HL2

I'm not claiming that DX9 performance on NV30 was great. It, quite frankly, sucks. So does the AA performance.

People keep arguing that using FP16 on NVIDIA cards could lead to quality problems - and that Valve didn't want to test this. Great. Perhaps it does cause quality problems. But *so does switching to DX8*. I want the *option* of running my FX5900XT in DX9 with FP16. Let the *users* decide which quality settings are best.
 
rashly said:
Too bad Kyle didn't do that. Saying a ridiculous claim like "Half Life 2 Coded Against NVIDIA?" in a news post is much different than saying "Increase HL2 Performance on nv3x by Lowering Quality". All he has shown is that he doesn't understand the issue at all and made an assumption based on his lack of knowledge on the subject.


This thread in our forum alleges
 
First off, the doom3 engine is a more advanced engine than the source engine, let's get that straight please.

Doom 3 has this over Source - Lighting, optimized for indoor areas, a slightly cooler flashlight.

Source has these over Doom 3 - Material-level object interaction, interactive physics, easily modifiable/expandable codebase(how long did the doom 3 sdk take to come out?), ground-based vehicle code, air-based vehicle code, water-based vehicle code, realistic light reflection/refraction, optimizations for large open areas, realistic specular mapping(no melted tupperware armour), realistic bump mapped textures, character model->weapon interaction on a much higher level(i.e. disintegration, flaming, ragdoll physics, being stuck to the wall, burned corpses).

If you're a modder, you're most likely going with source... and modders generally go with the better engine.

The FX series was pretty bad, but it's good to see there's a tweak that makes them perform quite a bit better. I honestly didn't notice that much of a difference going from ti4600 to 6800GT except resolution though...
 
rashly said:
Too bad Kyle didn't do that. Saying a ridiculous claim like "Half Life 2 Coded Against NVIDIA?" in a news post is much different than saying "Increase HL2 Performance on nv3x by Lowering Quality". All he has shown is that he doesn't understand the issue at all and made an assumption based on his lack of knowledge on the subject.

Well, Kyle has shown an eagerness to post things against Valve for a while now, like that article which completely overdramatized certain issues with CS:Source, even though, at the time it was posted, a good number of the issues the article complained about had already been addressed.
 
bsoft said:
We say that Valve is against NVIDIA not because of this but because of a consistent, deliberate pattern of anti-NV statements.

Let's review:

- ATI gives Valve a sweet bundle deal for HL2. Assuming ATI was paying $10 for each copy of HL2 (far below the retail price), and assuming that they shipped one million Radeon 9800XT/9600XT cards, that's something like $10 million. A lot of money.

- Gabe appears at ATI's "shader day" and trashes NV30

- Valve claims that antialiasing won't work on NV30 and that it can't be fixed. A month later, they reverse their position.

- Shortly after the 6800 release, Valve states that HL2 is "much faster" on next-gen ATI hardware.

- ATI releases HL2 benchmarks that strongly favor the x800 series. Other benchmarks, like [H]'s benchmarks, show a much smaller difference.

- Valve forces NV30 cards to run in DX8.1. When forced to use DX9, rendering errors occur. By changing the card ID to "ATI", these errors disappear. By forcing FP16, performance greatly increases.



These actions by Valve show a consistent and willful attempt to paint NVIDIA hardware as inferior. Valve's words and its actions show that it is firmly in the pocket of ATI.

Consider this:

- DOOM3 runs equally well on NV30 as it does on R300
- FarCry, while slower on NV30, does not display the dramatic performance difference of HL2

I'm not claiming that DX9 performance on NV30 was great. It, quite frankly, sucks. So does the AA performance.

People keep arguing that using FP16 on NVIDIA cards could lead to quality problems - and that Valve didn't want to test this. Great. Perhaps it does cause quality problems. But *so does switching to DX8*. I want the *option* of running my FX5900XT in DX9 with FP16. Let the *users* decide which quality settings are best.

Where does PR translate into actual code? HL2 isn't the only game to have the nv3x use a dx8.1 path.

I don't know how you could even add the D3 numbers into this argument due to the use of PCF.

The bottom line is that the playing field is even. The same shaders are used by both IHVs. The only ATI-only feature would be 3Dc in future levels, and valve would most likely use DXT5 for nvidia cards, which has the same performance as 3Dc (just noticeably lower quality).

Why should valve put in extra work to make up for a product line's crappy performance when they could just code to a standard path? Carmack said he spent half of his time coding the nv30 path into d3 before it was scrapped just to get acceptable performance.

Again, blame nvidia for the crappy hardware and blame yourself for making a crappy purchase.
 
creedAMD said:
This thread in our forum alleges
Why even help spread an obviously incorrect rumor that the fickle public would pick up as fact? Why even bother posting something false? Why not just link it as a way to increase performance, unless he has another agenda completely...
 
CGFMaster said:
Wow! This article is amazing!

So far we have:

9,592 views

156 posts
~and the most astounding~
0 benchmark results

Ahh, the voice of reason... hey guys, before we start beating each other to a bloody pulp over this, let's see some benches.
 
pxc said:
There's a small ATI logo on the back of my HL2 box, between the Vivendi logo and closed captioned logo.

Must be a different box from mine then; I don't have the Vivendi logo, I've got a Sierra logo where you probably have the Vivendi logo. Still no ATI logo on mine though :(
 
rashly said:
Why even help spread an obviously incorrect rumor that the fickle public would pick up as fact? Why even bother posting something false? Why not just link it as a way to increase performance, unless he has another agenda completely...

He had a question mark at the end of it. If you don't know how to read, you deserve to be misinformed; it will happen more often, get used to it.
 
This is insane. I've read NINE pages of flames and seen ONE screenshot which supposedly conclusively proves beyond any doubt that the visual quality is the same. How does FX DX8 IQ compare to FP16 DX9 IQ? How are the framerates?

This is truly shameful. Someone starts a thread seemingly with the sole purpose of attacking an ISV. He makes factual claims that he spectacularly fails to prove (relating to IQ differences and framerate improvements). Then he immediately jumps from factual claims that could be proven to speculation about motive which is impossible to prove. "Valve intentionally crippled FX cards! Oh noes!!!11!!!1!"

Give me a break. First prove that there is no IQ loss on any of the shaders by using FP16 precision instead of FP24 (and no, one screenshot doesn't cut it). Then show that FP16 DX9 provides a better gaming experience on mid- to low-end FX cards than DX8.1. Even then, this is no more suspicious than Doom 3 using texture lookups instead of math on all cards. There was a reasonable explanation for why that decision was made (i.e. simplifying development/validation). It seems likely there is a reasonable explanation for Valve's decision to use FP24 everywhere (probably because their artists actually did notice IQ differences and because they didn't want to waste hundreds of hours of development time validating modified shaders for a fraction of their userbase). When you include an option in your AAA, shipping product, it is expected to work correctly in all circumstances. Making sure this hack would meet that standard would take so much time as to be prohibitive. Would you rather have waited another couple of months for the game? (How about if you have a GF6-series card or a Radeon, would you still like to delay the game so Valve could create a special path for people with bad cards?)

I might also hypothesize that the decision regarding shader precision was probably made quite early in the development cycle. Probably most of the other artistic content was created with the assumption of FP24 precision. All of that content would have had to have been re-validated. Additionally, since Valve planned on utilizing HDR, it seems reasonable that they would want to avoid flirting with low precision math that could result in ugliness down the road.
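To put a number on the HDR concern: a half float has only 10 stored mantissa bits and tops out near 65504, so once shaders start scaling and summing values well above 1.0, FP16 intermediates can clamp or band where FP24/FP32 would not. A purely illustrative sketch follows; these are not Valve's shaders, and the sampler, exposure constant and tone curve are assumptions.

sampler hdrScene : register(s0);
float   exposure;    // assumed engine-supplied exposure scale

// Full-precision intermediates: plenty of headroom for values far above 1.0.
float4 ToneMap(float2 uv : TEXCOORD0) : COLOR
{
    float3 hdr = tex2D(hdrScene, uv).rgb * exposure;      // can reach the hundreds or thousands
    float3 ldr = hdr / (hdr + float3(1.0, 1.0, 1.0));     // simple Reinhard-style curve
    return float4(ldr, 1.0);
}

// The same math with half intermediates: only 10 stored mantissa bits, so bright values
// quantize coarsely and smooth gradients are more likely to band.
half4 ToneMap_pp(float2 uv : TEXCOORD0) : COLOR
{
    half3 hdr = tex2D(hdrScene, uv).rgb * exposure;
    half3 ldr = hdr / (hdr + half3(1.0, 1.0, 1.0));
    return half4(ldr, 1.0);
}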

One of the early posters here said "don't ascribe to malice what can be explained by incompetence." That's right, but this isn't even about incompetence; it is about legitimate design decisions. There is a ton of content in games; that content needs to be validated. That takes a really long time. So if you ever want to ship your product, you try to cut down on the variables in play. That's probably what happened here. Carmack made analogous decisions during the development of Doom 3. And I doubt many of those who are so quick to say "valve sucks" would say the same thing about JC or ID. Because of the delays in HL2, Valve would probably be especially concerned about actually shipping the product, even if that means that every ounce of performance has not been eked out of every single card on the market.
 
bsoft said:
Consider this:

- DOOM3 runs equally well on NV30 as it does on R300
- FarCry, while slower on NV30, does not display the dramatic performance difference of HL2

I'm not claiming that DX9 performance on NV30 was great. It, quite frankly, sucks. So does the AA performance.

People keep arguing that using FP16 on NVIDIA cards could lead to quality problems - and that Valve didn't want to test this. Great. Perhaps it does cause quality problems. But *so does switching to DX8*. I want the *option* of running my FX5900XT in DX9 with FP16. Let the *users* decide which quality settings are best.

Maybe you really need to reconsider that, since when Carmack released his early Doom3 performance figures, it sure seemed like the NV30 performed a lot better. Heck, r300 performance has even gone up a lot with the recent shader replacements ATI did in the drivers. BTW, you obviously haven't seen farcry 1.3 benchmarks (think 9600xt being performance competitive with a 5950 Ultra :p).
 
mathyou said:
This is insane. I've read NINE pages of flames and seen ONE screenshot which supposedly conclusively proves beyond any doubt that the visual quality is the same. How does FX DX8 IQ compare to FP16 DX9 IQ? How are the framerates?.

I asked earlier if one of the fx users yapping about the implications could actually test it themselves, but so far only pxc has said he'll do it with his 5200.
 
InternationalHat said:
Doom 3 has this over Source - Lighting, optimized for indoor areas, a slightly cooler flashlight.

Source has these over Doom 3 - Material-level object interaction, interactive physics, easily modifiable/expandable codebase(how long did the doom 3 sdk take to come out?), ground-based vehicle code, air-based vehicle code, water-based vehicle code, realistic light reflection/refraction, optimizations for large open areas, realistic specular mapping(no melted tupperware armour), realistic bump mapped textures, character model->weapon interaction on a much higher level(i.e. disintegration, flaming, ragdoll physics, being stuck to the wall, burned corpses).

If you're a modder, you're most likely going with source... and modders generally go with the better engine.

The FX series was pretty bad, but it's good to see there's a tweak that makes them perform quite a bit better. I honestly didn't notice that much of a difference going from ti4600 to 6800GT except resolution though...

And the doom3 engine doesn't have that? I suggest you look at the engine's tech specs before you reply to me next time.

The only thing no one knows for sure is whether or not the engine can support large-scale outdoor levels. But quake 4 has that too, so this is why the doom3 engine is better than the source engine. What you are doing is comparing doom3 THE GAME to the source engine. It doesn't work like that. Compare both engines' tech specs and you will see that doom3 clearly comes out in the lead. And you may argue physics, but source uses Havok physics, which basically uses boxes. The doom3 physics engine, which john carmack wrote, is per-polygon. So instead of big boxes around each area of the object, the doom3 engine gives everything you see, no matter how minute, physics of its own. So a round ball has sphere-shaped physics instead of having a box around it.
 
DaveBaumann said:
If you want to hack it, then the 5 minutes of analysis that's probably gone on here will probably tell you there are no discernible IQ differences. If you want to produce a large game, verify that it operates on tens of different types of graphics chips, on thousands of different configurations, and that each of the thousands of shaders under each mode (DX8, DX8.1, DX9 full precision, DX9 partial precision) produces the correct output in each of the different scenarios you are using them in, then the validation is going to take some time. For a game that's already late, is it worth it for the 2.5% of the install base when they will get perfectly good performance and pretty good quality under the DX8.1 mode anyway?

The real question is, Dave, how did every other intensive DX9 game that is out manage to use partial precision? How was Crytek able to get their SM3.0 path with partial precision up and running in FarCry in two weeks, while HL2 is delayed a full year and can't even implement partial precision, never mind SM3.0? How come HL2 lacks partial precision, lacks SM3.0, has stuttering issues on a variety of hardware, memory read crashes, video glitches on both ATI (rainbow textures) and Nvidia (texture lighting errors) cards, etc.? Hell, the source engine doesn't even work properly with my logitech mouse drivers! Seems like the source engine is either half baked, or Valve is incompetent, if I am reading you correctly.

The Doom3 engine worked pretty much flawless out of the gate on all hardware, which if you pardon the pun is a "Far Cry" from how the Source engine works.
 
I don't see why people think Valve didn't try to get FX cards to run faster. Gabe said he spent 3 times more time on FX cards than on the ATi cards for HL2.
 
pxc said:
Geez, I'll do some benchmarks. I can't believe no one has posted any yet. Scratch that, I do believe it, because all the HL2 benchmark threads here have been miserable failures. :mad:

I only have an FX5200 from that line, so don't expect much. The relative DX8.1, FP32 and FP16 benchmarks should at least be meaningful.

Hey, I've got an FX5700 that's overclocked, a bit of a beast.

Still, I don't have HL2 and have no intention of supporting that company, so unless someone sends me a copy, there's nothing I can do to test it.

I think that's probably true of many in the thread. Every time Gabe blabbed about how much the nv3x sucks, it started to chip away at people. By the time HL2 launched, your nVidia users were in two groups - those who wanted HL2 anyway, and upgraded to GF 6-series cards; and those who refused to support a company so biased against nVidia, stuck with the GF-FX cards and didn't buy it.
 
fallguy said:
I don't see why people think Valve didn't try to get FX cards to run faster. Gabe said he spent 3 times more time on FX cards than on the ATi cards for HL2.

lol fallguy, he said he spent all that time making a partial precision path, which the game does not have. That is part of the point: either that statement was pure BS or the pp path was discarded purposely.
 
fallguy said:
I don't see why people think Valve didn't try to get FX cards to run faster. Gabe said he spent 3 times more time on FX cards than on the ATi cards for HL2.
Because he obviously lied about that. Maybe he spent 3 times more time sitting on his ass eating chicken legs while *thinking* about nV cards as he did CODING for ATI's.....? :p

Seriously:
* Running the FX cards by identifying them as ATI cards solves ALL the graphical glitches
* Running the FX cards in FP16 provides similar quality and performance to ATI's running in FP24

There is obviously NOTHING in the game specifically added for the FX users benefit right now.
 
creedAMD said:
He had a question mark at the end of it. If you don't know how to read, you deserve to be misinformed; it will happen more often, get used to it.
But why even post it at all? Why even make the insinuation that valve is purposely making the nv3x line perform bad?
How was Crytek able to get their SM3.0 path with partial precision up and running in FarCry in two weeks, while HL2 is delayed a full year and can't even implement partial precision, never mind SM3.0?
Hahahahahahaha. One of the funniest things I've ever read. Thanks for the proof that you have no idea what you are talking about, whatsoever.
 
If the excuse is that they wanted the game to run at full precision at the highest settings, it should be FP32, right? If they optimized it to run at FP24, why not go the extra mile and support FP16 as well, since the DX9 spec supports it too? Considering how easy people claimed it was to get the results, wouldn't Valve have added this little tweak during the one-year delay?

This sounds like the humus thread all over again :p
 
Please, benchmarks from people w/ 6800 and FX cards who have some spare time and have gotten 3DAnalyze working. All you need is here:

Anyone with an NVIDIA card who has tried this tweak, can you please run a regular benchmark, then an FP16 benchmark with the following timedemos and report back the FPS:

HARDOCP TIMEDEMOS:
http://www.fileshack.com/file.x?fid=5857

ATI TIMEDEMOS:
http://www.tommti-systems.de/main-Dateien/misc/HL2timedemos.zip

To do a timedemo, go to advanced keyboard options and enable developer console, then hit ~ and type timedemo demofile.dem
 
rashly said:
Hahahahahahaha. One of the funniest things I've ever read. Thanks for the proof that you have no idea what you are talking about, whatsoever.

You claim Crytek is lying then? They are the ones who made the statement that they got SM3.0 up and running in two weeks.

http://www.techreport.com/etc/2004q3/crytek/index.x?pg=1

"Some of the changes Crytek made took only a few hours, while other elements of the retrofit took a week or two to complete. In fact, after the first two weeks of effort, Crytek had the initial version of the Shader Model 3.0 code path up and running."
 
bsoft said:
We say that Valve is against NVIDIA not because of this but because of a consistent, deliberate pattern of anti-NV statements.

Let's review:

- ATI gives Valve a sweet bundle deal for HL2. Assuming ATI was paying $10 for each copy of HL2 (far below the retail price), and assuming that they shipped one million Radeon 9800XT/9600XT cards, that's something like $10 million. A lot of money.

- Gabe appears at ATI's "shader day" and trashes NV30

- Valve claims that antialiasing won't work on NV30 and that it can't be fixed. A month later, they reverse their position.

- Shortly after the 6800 release, Valve states that HL2 is "much faster" on next-gen ATI hardware.

- ATI releases HL2 benchmarks that strongly favor the x800 series. Other benchmarks, like [H]'s benchmarks, show a much smaller difference.

- Valve forces NV30 cards to run in DX8.1. When forced to use DX9, rendering errors occur. By changing the card ID to "ATI", these errors disappear. By forcing FP16, performance greatly increases.



These actions by Valve show a consistent and willful attempt to paint NVIDIA hardware as inferior. Valve's words and its actions show that it is firmly in the pocket of ATI.

Consider this:

- DOOM3 runs equally well on NV30 as it does on R300
- FarCry, while slower on NV30, does not display the dramatic performance difference of HL2

I'm not claiming that DX9 performance on NV30 was great. It, quite frankly, sucks. So does the AA performance.

People keep arguing that using FP16 on NVIDIA cards could lead to quality problems - and that Valve didn't want to test this. Great. Perhaps it does cause quality problems. But *so does switching to DX8*. I want the *option* of running my FX5900XT in DX9 with FP16. Let the *users* decide which quality settings are best.


I agree with a bunch of what you're saying here, but I still feel that folks need to remember nVidia chose to skip PS2.0b, which spec'd FP24, so I don't think anyone here is totally innocent. ATi made what was at the time a shrewd and solid business choice: give the consumer excellent graphics (FP24) without the penalty of the absolute best graphics (FP32). nVidia tried to deliver the best graphics (FP32) but instead ended up with a pig of a core that couldn't deliver when push came to shove (NV3x). nVidia should have held off on FP32 until the 6800 series, just my opinion. Also, I can't fault Valve for forcing default settings with high quality, but they definitely should have allowed the choice of FP16 for older cards....
 
I'm running 2 identical systems apart from the video cards. One has an X800 XT while the other has a 6800 Ultra. The ATi ran HL2 with nary a hiccup or stutter, but the nvidia was brutal until the HL2 stutter fix was released.

I guess if you throw someone a few million in research fees ;) they tend to be your friend. Smart move on ATi's behalf. But for the sole nvidia holder it was enough to make you see red, if you know what I mean. :mad:

I was a die-hard nvidiot till the 9700 pro hit the market; now I tend to favour ATi. Graphics in my opinion tend to look richer on ATi. As long as something is playable, fps doesn't concern me too much. Guess I'm an ATi-hole now ;)
 
tranCendenZ said:
Please, benchmarks from people w/ 6800 and FX cards who have some spare time and have gotten 3DAnalyze working. All you need is here:

Anyone with an NVIDIA card who has tried this tweak, can you please run a regular benchmark, then an FP16 benchmark with the following timedemos and report back the FPS:

HARDOCP TIMEDEMOS:
http://www.fileshack.com/file.x?fid=5857

ATI TIMEDEMOS:
http://www.tommti-systems.de/main-Dateien/misc/HL2timedemos.zip

To do a timedemo, go to advanced keyboard options and enable developer console, then hit ~ and type timedemo demofile.dem


Tran, I would love to benchmark for you guys, but I refuse to support that company and that shitty-ass game.

I have a GF4 Ti4400 and a Ti4800, also a GFFX 5900XT and a GF6800GT.

But unless someone wants to send me a copy, I will be more than happy to install it and benchmark.
 
tranCendenZ said:
Please, benchmarks from people w/ 6800 and FX cards who have some spare time and have gotten 3DAnalyze working. All you need is here:

Anyone with an NVIDIA card who has tried this tweak, can you please run a regular benchmark, then an FP16 benchmark with the following timedemos and report back the FPS:

HARDOCP TIMEDEMOS:
http://www.fileshack.com/file.x?fid=5857

ATI TIMEDEMOS:
http://www.tommti-systems.de/main-Dateien/misc/HL2timedemos.zip

To do a timedemo, go to advanced keyboard options and enable developer console, then hit ~ and type timedemo demofile.dem

Don't you have an nvidia card? Can't take time out of your busy schedule of fudding to actually test if your statements are true :p?
 
This thread is worthless. No facts, no benchmarks, no screenshots to compare.

Just another post crying about HL2 and FX cards, from someone who "doesn't have HL2 and have no intention of supporting that company".

Wake me up when some actual numbers are posted, and the crying has stopped.
 
tranCendenZ said:
You claim Crytek is lying then? They are the ones who made the statement that they got SM3.0 up and running in two weeks.
How did they do that? Did they re-write all the shaders to take advantage of ps3.0 or did they just recompile the shaders? Yeah, thought so...

To add _pp hints, one has to go into every shader and change the code wherever _pp hints are needed. This is very time consuming for ASM shaders. And please do not use the 3DA argument. Using 3DA to hack pp is much different than actually making reliable code for millions of users with no graphical glitches.
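To make that concrete, here is a tiny made-up ps_2_0 fragment (nothing from HL2; the texture and constants are invented) after the hints have been added by hand. Every instruction that can tolerate FP16 gets the _pp modifier appended individually, shader by shader, instruction by instruction, and then each edited shader has to be re-tested.

ps_2_0
dcl t0.xy                        // texture coordinate
dcl_2d s0                        // base texture
def c1, 0.3, 0.59, 0.11, 0.0     // luminance weights
texld_pp r0, t0, s0              // was: texld r0, t0, s0
dp3_pp r1, r0, c1                // was: dp3 r1, r0, c1
mul_pp r0, r0, r1                // was: mul r0, r0, r1
mov oC0, r0                      // final colour write left at default precision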
 