Valve sucks

da sponge said:
While I agree with your last point, I don't agree with the first sentence. The reason precision isn't needed at times is that some of the shaders aren't complex enough for full precision to make any difference in the output generated. The point here is that Valve should have coded it so that the card had the option to run those shaders at partial precision to improve performance where image quality wasn't affected. The shaders that DO require full precision (as seen in the comparison screenshots of the glass) simply wouldn't have this option, forcing all cards to run them at full precision. Valve bypassed this completely by forcing most of the FX cards into DX8.1 mode when, by adding partial precision hints where appropriate, they seemingly could have given the majority of those users DX9 benefits without a huge drop from DX8.1 performance.

Yep you got it dead on.

That's of course assuming we're not missing some big reason not to use partial precision on the majority of the shaders. Many people are jumping to conclusions as to why Valve didn't do this and assume that partial precision would have been fine for the majority of the computations in the Source engine. If there's some other factor that makes partial precision a moot point, then everyone getting riled up will seem pretty ridiculous, but if it is a viable, significant performance boost, Valve has some serious explaining to do.

Well, using 3D-Analyze we can see most of the shaders work fine in FP16, with a few exceptions that need FP32. Mixed mode should have been made available, as it gives the best performance you can program with no IQ loss (see the sketch below). Every other dev can do it and has done it in intensive DX9 games. Valve can do it; they just chose not to. I think ATI's dollars played into this.
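
For anyone wondering what a partial precision hint actually looks like: in DX9 HLSL the `half` type is the hint, and `float` stays full precision. Here's a minimal, purely hypothetical sketch of a mixed-precision water-style shader - the sampler names and inputs are made up for illustration, not Valve's actual code.

Code:
// Hypothetical mixed-precision pixel shader (ps_2_0-era HLSL), illustration only.
sampler2D reflectMap : register(s0);
sampler2D refractMap : register(s1);

half4 main(float4 projPos : TEXCOORD0,   // projective coords for the reflection lookup
           half   fresnel : TEXCOORD1) : COLOR
{
    // Texture-coordinate math is where FP16 rounding error becomes visible
    // (blocky/swimming reflections), so keep this divide at full precision.
    float2 uv = projPos.xy / projPos.w;

    half4 reflection = tex2D(reflectMap, uv);
    half4 refraction = tex2D(refractMap, uv);

    // The colour blend only needs to be accurate to about 1/255, so FP16 is plenty.
    return lerp(refraction, reflection, fresnel);
}

On R300-class ATI hardware the half operations still run at FP24 (the hint is simply ignored), so a path like this costs ATI owners nothing; on NV3x it lets the driver use the faster FP16 path instead of FP32.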
 
trooper11 said:
Now what I do think happened was that they simply decided, maybe in haste or just from looking at the market, that they would only need to code around the full precision mode to maximize the quality of the game, something a lot of people were anticipating as being superb. They decided to keep it as close to top quality as they could, which obviously means it would hurt older cards, and it turned out to hurt the FX cards more than the older ATI cards. This doesn't mean the intention was to favor the ATI cards; it just happened that ATI's older cards also supported 24-bit precision. But let's not forget, they implemented full precision, which means 32-bit and 24-bit, so it equally supported the current generation of cards from NVIDIA and ATI - no bias there that I can see.

This topic has been beaten to death (often by me) on this board. I'll give you the quick rundown; if you want a further explanation, you'll need an actual graphics programmer (I'm a programmer, but not in the field of graphics).

Using a partial precision shader means calculating the floating-point numbers (mantissa and exponent) with 16 bits of precision. Full precision means using 24 bits or 32 bits for the numbers. (Double precision, common for floating-point math on CPUs, uses 64 bits.)

The number of bits used determines the number of significant figures of the result; higher precisions allow for more significant figures to be retained, and generally lead to fewer rounding errors. This is a good thing, especially when you plan to perform many intermediate calculations before reaching a result.

HOWEVER...

In the end, we're currently rendering to 32-bit integer framebuffers. So, if using a lower precision shader doesn't cause enough error to throw the final color of the pixel off, then there's no point in using the higher-precision shader - the output will be EXACTLY the same.

So when would you need the extra precision? When you're dealing with numbers where a tiny error will throw off the result, or performing many calculations where the rounding errors will compound over time.
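
To put a rough number on that: an 8-bit colour channel has only 256 steps, so any shader error smaller than about 1/255 (roughly 0.004) vanishes when the pixel is written out, and FP16 carries roughly three decimal digits of mantissa precision, which is plenty for a short chain of colour math. A made-up sketch (not HL2 code, just HLSL illustrating the point):

Code:
// Hypothetical diffuse-modulation shader, for illustration only.
sampler2D baseMap : register(s0);

half4 main(float2 uv : TEXCOORD0, half4 lightColor : COLOR0) : COLOR
{
    half4 albedo = tex2D(baseMap, uv);    // the source texture is 8-bit per channel anyway
    half4 result = albedo * lightColor;   // one multiply: FP16 error is far below 1/255

    // Rounded into a 32-bit integer (8:8:8:8) framebuffer, this produces exactly
    // the same pixel whether the math ran at FP16, FP24, or FP32.
    return result;
}

Long dependent chains and large intermediate values (specular exponents, texture-coordinate arithmetic) are where FP16 falls over - the compounding-error case just mentioned.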

I wonder if Valve's got a good reason why partial precision hints weren't used in HL2. It wouldn't have hurt them any, if used properly, and it certainly could have helped Nvidia owners.
 
Kevin Lowe said:
I wonder if Valve's got a good reason why partial precision hints weren't used in HL2. It wouldn't have hurt them any, if used properly, and it certainly could have helped Nvidia owners.

It's called the $8 million ATI paid Valve for special treatment.

It's one thing to use special features to make one card run a little faster than the other; it's a completely different situation when you totally cripple a card to make another look much better.

There is NO point in using FP24 and FP32 precision in areas where FP16 will work. There is no image quality difference when it's used properly. You only have to use the higher precision in the areas where it's needed. I know VERY little about programming and yet I realize this very clearly. Why doesn't Valve?

BTW, according to the new Microsoft DirectX 9.0c spec, FP24 is now only partial precision :).
 
Kevin Lowe said:
This topic has been beaten to death (often by me) on this board. I'll give you the quick rundown; if you want a further explanation, you'll need an actual graphics programmer (I'm a programmer, but not in the field of graphics).

Using a partial precision shader means calculating the floating-point numbers (mantissa and exponent) with 16 bits of precision. Full precision means using 24 bits or 32 bits for the numbers. (Double precision, common for floating-point math on CPUs, uses 64 bits.)

The number of bits used determines the number of significant figures of the result; higher precisions allow for more significant figures to be retained, and generally lead to fewer rounding errors. This is a good thing, especially when you plan to perform many intermediate calculations before reaching a result.

HOWEVER...

In the end, we're currently rendering to 32-bit integer framebuffers. So, if using a lower precision shader doesn't cause enough error to throw the final color of the pixel off, then there's no point in using the higher-precision shader - the output will be EXACTLY the same.

So when would you need the extra precision? When you're dealing with numbers where a tiny error will throw off the result, or performing many calculations where the rounding errors will compound over time.

I wonder if Valve's got a good reason why partial precision hints weren't used in HL2. It wouldn't have hurt them any, if used properly, and it certainly could have helped Nvidia owners.

now THIS is great to see... someone who has (1) a brain, (2) the ability to use (1), (3) the common sense to see what's right, and (4) the stick-to-it-ness to tell us the truth.

major kudos.
 
burningrave101 said:
It's called the $8 million ATI paid Valve for special treatment.
you are playing politician here, pal. sorry, but ati never "paid" $8m to valve. what happened was that valve auctioned some rights off... and nvidia wouldn't go above $8m. your figure is thus off by a considerable amount of brains.

burningrave101 said:
There is NO point in using FP24 and FP32 precision in areas where FP16 will work. There is no image quality difference when it's used properly. You only have to use the higher precision in the areas where it's needed. I know VERY little about programming and yet I realize this very clearly. Why doesn't Valve?
okay... now you're starting to show some intelligence here. thing is, i've been saying the same thing (and so have several other people) for quite a while now.
what needs to happen is to say "okay, we've figured this out, now how do we fix the game to make it work right on both cards?"

burningrave101 said:
BTW, according to the new Microsoft DirectX 9.0c spec, FP24 is now only partial precision :).
this is as relevant to the discussion as my cousin in albuquerque, i.e. not one bit.
 
So I'm going to ask again... AHEM!

CrimandEvil said:
Doesn't Bloodlines run as DX9 with the FX cards? I think I saw something like that a couple pages back. If so, that should tell you something.
 
starhawk said:
you are playing politician here, pal. sorry, but ati never "paid" $8m to valve. what happened was that valve auctioned some rights off... and nvidia wouldn't go above $8m. your figure is thus off by a considerable amount of brains.

And what would those rights entail? Better performance? More exposure for ATI? A longer amount of time to test the game before release?

Valve has been screaming about 30% better performance on ATI hardware since they started this whole thing last year.

HL2 and ATI have been advertised together in nearly every gaming and PC magazine out there for months. It wouldn't look so hot on their part if the nVidia cards were faster after its release. Especially after all those performance numbers they've been pushing.

ATI also had a full week to test HL2 before release to get their drivers ready. nVidia had 48 hours.

starhawk said:
this is as relevant to the discussion as my cousin in albuquerque, i.e. not one bit.

Yeah, so are half the posts in this thread. What's your point?

I was just mentioning the fact that FP24 is no longer full precision compared to FP32 on the nVidia cards.

You also have to take into account that the NV40s are doing 32-bit precision against the X800s, which are only doing 24-bit precision. If 16-bit precision were used when FP24/FP32 wasn't needed, then I think the nVidia cards would be even faster compared to the ATI hardware.
 
burningrave101 said:
And what would those rights entail? Better performance? More exposure for ATI? A longer amount of time to test the game before release?

Valve has been screaming about 30% better performance on ATI hardware since they started this whole thing last year.

HL2 and ATI have been advertised together in nearly every gaming and PC magazine out there for months. It wouldn't look so hot on their part if the nVidia cards were faster after its release. Especially after all those performance numbers they've been pushing.

ATI also had a full week to test HL2 before release to get their drivers ready. nVidia had 48 hours.
thanks to all the spamming and flaming... i'm not entirely sure what rights were auctioned. i recall it being mentioned in size-5 text about 10 pages back tho... if you want to look.

burningrave101 said:
I was just mentioning the fact that FP24 is no longer full precision compared to FP32 on the nVidia cards.

You also have to take into account that the NV40s are doing 32-bit precision against the X800s, which are only doing 24-bit precision. If 16-bit precision were used when FP24/FP32 wasn't needed, then I think the nVidia cards would be even faster compared to the ATI hardware.
*sigh* beam me up scotty, there's no intelligent life in this thread. PRECISION DOES NOT MATTER AT THIS POINT!!! we know that there's a problem, all we need to do is find out HOW TO CODE IT INTO THE GAME SO THAT nVIDIA CARDS DEFAULT TO FP16!!!
 
starhawk said:
*sigh* beam me up scotty, there's no intelligent life in this thread. PRECISION DOES NOT MATTER AT THIS POINT!!! we know that there's a problem, all we need to do is find out HOW TO CODE IT INTO THE GAME SO THAT nVIDIA CARDS DEFAULT TO FP16!!!

Maybe if you spent half as much time using YOUR brain instead of trying to tell everyone else in this thread that they don't have one, then you could come up with something. lol :rolleyes:
 
Whew.................... 25 pages and going.

Thanks Kyle for keeping the thread going and filtering out the junk. This is one of the fastest-growing, most controversial, and most in-depth threads this forum has seen in a while.

Back to the topic,

Waaaaay back on page 4, I posted this question, which was never answered in the following 21 pages of posts:

Badger_sly said:
We're supposed to really believe that one techie guy (credit given to Presi at Guru3D), messing around with HL2 and his FX card on his home computer, figured out how to make the game run in DX9 with good performance... yet no one out of Valve's entire development team could do the same? :rolleyes:

Maybe I should get a freshly baked [H] cookie as a reward for my 21-page stumper?
 
Whoa, this thread is still going on? :p

I ran HL2 on my FX 5200 (PCI, 128-bit memory) and as described on the first page, DX9 doesn't work properly (reflections) unless dxsupport.cfg is hacked to make the 5200 look like an ATI card. I chose 9800 Pro like the other original poster at guru3d.

Before I go on, I want to say that DX8.1 mode looks fine, especially for such a low end card. At 1024x768 I get 40fps and 71fps at 800x600 in the timedemo ("newdemo" I created with lots of water, indoors and outdoors, flashlight) with default DX8.1 options. DX9 looks better, but this is very acceptable for a $40 video card. A higher end NV35/38 card owner would be disappointed, but for lower end owners this shouldn't be a problem.

DX9 mode gained 15% by going to FP16 (3D-Analyze). Not a big deal and it was still nowhere near playable framerates (low double digits either way). For a low end card (5200, 5700LE and probably 5700nu), forget about -dxlevel 90 unless you want to play at 640x480....

which brings me to an oddity. :D Using the hacked dxsupport.cfg (fake ATI id), forcing -dxlevel 81 and FP16 using 3D-Analyze gives almost a 100% increase in fps and the water reflections are still enabled. It's playable even on a FX 5200 in that mode.

I have to give back the FX 5200 PCI, but I have another system with an AGP slot to test later.
 
This thread is bullshit. I beat HL2 in 2 days with a GeForce 2 32MB, on low settings and 800x600. My card should have blown up.

2.4C P4, 512MB, DFI i875, 129GB 8MB-cache WD IDE.
 
pxc said:
Whoa, this thread is still going on? :p

I ran HL2 on my FX 5200 (PCI, 128-bit memory) and as described on the first page, DX9 doesn't work properly (reflections) unless dxsupport.cfg is hacked to make the 5200 look like an ATI card. I chose 9800 Pro like the other original poster at guru3d.

Before I go on, I want to say that DX8.1 mode looks fine, especially for such a low end card. At 1024x768 I get 40fps and 71fps at 800x600 in the timedemo ("newdemo" I created with lots of water, indoors and outdoors, flashlight) with default DX8.1 options. DX9 looks better, but this is very acceptable for a $40 video card. A higher end NV35/38 card owner would be disappointed, but for lower end owners this shouldn't be a problem.

DX9 mode gained 15% by going to FP16 (3D-Analyze). Not a big deal and it was still nowhere near playable framerates (low double digits either way). For a low end card (5200, 5700LE and probably 5700nu), forget about -dxlevel 90 unless you want to play at 640x480....

which brings me to an oddity. :D Using the hacked dxsupport.cfg (fake ATI id), forcing -dxlevel 81 and FP16 using 3D-Analyze gives almost a 100% increase in fps and the water reflections are still enabled. It's playable even on a FX 5200 in that mode.

I have to give back the FX 5200 PCI, but I have another system with an AGP slot to test later.

I figured as much: a mixed-precision path would probably be a moot point for anything lower than a 5700NU. That's probably another reason why a high-quality DX8.1 mode was chosen for the NV3x over a mixed-precision mode.
 
...Am I the only one here who recalls Gabe's lengthy dissertation explaining Valve's choices on this very specific matter over a year ago? All of this argument is simply regurgitating stuff we've known since September of 2003.

Yes, Nvidia cards prior to the release of the 6 series had several design choices that caused performance issues. One was precision, and another was instruction order. ANYONE buying an Nvidia card during the 5 series instead of a Radeon 9700/9800 was taking either a calculated risk or making an uninformed choice. NO ONE on the planet ever reviewed an Nvidia 5xxx card and claimed it was a compelling choice over its equivalent Radeon 9xxx card. The founder of Nvidia has publicly apologized for the many mistakes they made on the 5xxx series cards (the 5800 in particular).

Of course the new Nvidia cards absolutely rock and have none of the shortcomings of their predecessors. You don't need to be a rocket scientist to see that.

The argument that the older generation of Nvidia cards represents a large population of gamers is a sad truth. If you are one of those sorry souls, here's my advice: buy a new video card or be happy with DX8.1 in HL2. Valve has spent at least 5 times the effort (according to Gabe) trying to improve performance for FX 5xxx cards than they did for ATI cards, which are essentially running DX9 code without card-specific optimizations. Should HL2 have been coded with 16-bit precision everywhere instead? Should I cut off my balls because my wife doesn't want another baby?


Links added:

http://www.geek.com/news/geeknews/2003Sep/fea20030919021800.htm
http://www4.tomshardware.com/business/20030911/half-life-03.html
http://www.anandtech.com/video/showdoc.html?i=1863&p=4


This last one has a lot of relevant comparisons for the different cards, precisions, and codepaths...

http://graphics.tomshardware.com/graphic/20030912/half-life-03.html
 
Omega6_Virus said:
This thread is bullshit. I beat HL2 in 2 days with a GeForce 2 32MB, on low settings and 800x600. My card should have blown up.

2.4C P4, 512MB, DFI i875, 129GB 8MB-cache WD IDE.

That's like chugging a bottle of Opus One through a beer bong. Congratulations. The point of the journey is not to arrive..
 
rcolbert said:
That's like chugging a bottle of Opus One through a beer bong. Congratulations. The point of the journey is not to arrive..

:D :D :D :D
OMFG THAT'S HILARIOUS!!!!!!! BEST POST IN THE WHOLE THREAD!!!!
 
rcolbert said:
...Am I the only one here who recalls Gabe's lengthy dissertation explaining Valve's choices on this very specific matter over a year ago? ...........

Back when he said the game would be out in 2003? Not too many people take what Gabe says seriously anymore.
 
id used code to penalize ATI hardware, and whoever said id sucked??

Remember that beta of Doom 3 which was distributed with later FX 5900s? ATI hardware was stuck at a constant 10 FPS.

All I see is: the FX series is recognized as DX8.1 hardware because of its SM2.0 support (which, I must remind you, is 8 times slower than ATI hardware at PS2.0 instructions). Valve said they had to do a specific path for the FX series...

And by the way, HL2 runs faster on nVidia hardware than Doom 3 does on ATI. Yet ATI hardware is generally faster than nVidia hardware... so which one is more wrong???

By the way, I can't believe ANYONE bought those GeForce FXs... especially the 5800s when the 9700 was out... Turns out people are still using 9700s, while everyone has forgotten about the 5800...
 
People actually think this thread is in depth? We don't know anything other than public information.
 
that's true... but the discussion is intense as all get out... mostly because it appears that people are intent on flaming anyone who says anything that disagrees with what the flamer thinks. this is true to a small extent in every thread anywhere... but on these hl2 threads is where it really gets going.
 
rcolbert said:
Valve has spent at least 5 times the effort (according to Gabe) trying to improve performance for FX 5xxx cards than they did for ATI cards, which are essentially running DX9 code without card-specific optimizations. Should HL2 have been coded with 16-bit precision everywhere instead? Should I cut off my balls because my wife doesn't want another baby?

Well here is the problem, rcolbert, Valve claimed it took them that much longer to code a partial precision path for FX5xxx cards in DX9 than a regular path for ATI cards. But that path doesn't exist in the game, therefore the "it took X times longer" statement is invalid. So either Valve was BSing, or they purposely discarded the work they did.
 
tranCendenZ said:
Well here is the problem, rcolbert, Valve claimed it took them that much longer to code a partial precision path for FX5xxx cards in DX9 than a regular path for ATI cards. But that path doesn't exist in the game, therefore the "it took X times longer" statement is invalid. So either Valve was BSing, or they purposely discarded the work they did.

I don't believe Valve ever said they were coding a partial precision path (using hints, presumably) for DX9 for the older Nvidia cards. I do believe that their work ultimately was to create the DX8.1 path and implement it as best they could for the older Nvidia cards, and that path does exist. DX9 with FP16 is just a suggestion that comes out of this thread. I don't hold Valve accountable for writing a DX game either out of spec, heavily modified with pp hints everywhere, or based on an older version of DX altogether simply to make up for the shortcomings of bad hardware.

There's absolutely no way around the argument that 24-bit precision is part of the DX9 spec, Nvidia claims that their FX 5xxx cards are DX9 compatible, and Nvidia, yes Nvidia, made the design choice in their cards that full precision would be executed at a very costly 32-bit even when the *code* and the *spec* only call for 24-bit. That was a risk (one of several) that Nvidia took when creating their older cards. Thankfully, they have atoned for all of their past sins in the 6xxx series.

I actually am glad that this game reveals the flaws of the old Nvidia architecture. I'm very biased towards doing things the right way. My last three video cards in order have been: 9700 Pro --- 9800XT ---- 6800GT. I'm not a !!!!!! of either manufacturer. However, since the 9700 Pro was released, I was surprised that anyone bought a single Nvidia card until the 6800 series was released. That last generation was completely owned by ATI, and Nvidia blundered continually for several years in a row.

I don't think that's entirely Valve's fault.


explanation: !!!!!!? I guess HardOCP's forum replaces words like f-a-n-b-o-y assuming that it is a flame or insult. Since I was using it in the negative and only referring to myself, I don't think that's always the case. Either way, that's what the !!!!'s are all about.
 
rcolbert said:
I actually am glad that this game reveals the flaws of the old Nvidia architecture. I'm very biased towards doing things the right way.

If that's the case, then you should realize that Valve didn't do this the right way. There is no reason for them to use FP24/FP32 in situations where FP16 will work, which is the majority of the time.

Why use something that hurts performance more and yet yields absolutely no gain whatsoever?

In Far Cry, Crytek used FP16 the majority of the time except when the situation called for FP32. This allowed performance to stay much higher than it would have been if the cards were forced to run FP32 ALL the time. ATI didn't like this, though, and they did a LOT of bitching about how they felt FP16 wasn't good enough even though there was no noticeable difference at all between the two. :rolleyes:
 
Wow, this is so bad for Valve and FX card owners... What I don't understand is why Valve would do such a thing. I would not want to piss anyone off by purposely crippling the optimization of the FX cards in favor of the ATI cards like they did, considering there is a large portion of HL2 owners using FX cards. I dunno... but I'm thankful Valve released HL2 deathmatch, it rocks! :D
 
Inside Gabe's head: Shall we release the game now and start making money, or go back and code for 2.55 percent of our userbase? We all know the answer to this.
 
tranCendenZ said:
Well here is the problem, rcolbert, Valve claimed it took them that much longer to code a partial precision path for FX5xxx cards in DX9 than a regular path for ATI cards. But that path doesn't exist in the game, therefore the "it took X times longer" statement is invalid. So either Valve was BSing, or they purposely discarded the work they did.

that's all it takes, one single statement?
 
tranCendenZ said:
Well here is the problem, rcolbert, Valve claimed it took them that much longer to code a partial precision path for FX5xxx cards in DX9 than a regular path for ATI cards. But that path doesn't exist in the game, therefore the "it took X times longer" statement is invalid. So either Valve was BSing, or they purposely discarded the work they did.

Or... this sounds pretty reasonable.

DaveBaumann said:
Quite a lot of expected elements were not actually in the final game: no HDR (it was demoed), no 3Dc, no mixed-mode paths. Given the lateness of the game, it looks like they just got rid of any extraneous elements in order to focus on getting the title out in a state that everyone could run it on. Further focusing on a mixed mode for the FX path is a further dilution of effort; when you consider that they were going to use DX8 for the 5200/5600/5700 anyway, it's quite possible they looked at the Steam stats for the numbers of 5800/5900/5950 boards and made the choice that the level of increased support may not be warranted given the number of boards out there, and that the 6 series was powerful enough to run it in full precision.

I know you want to try and say Valve made FX cards slower on purpose, but give it a rest. You've gotten egg on your face before by jumping to conclusions.

The fact is, they didn't do mixed paths in the release version. Another fact is that if NV hadn't chosen only FP16 and FP32 and had used FP24, this wouldn't be a problem. Try laying some of the blame on the very poor design of the FX cards.
 
R1ckCa1n said:
Inside Gabe's head: Shall we release the game now and start making money, or go back and code for 2.55 percent of our userbase? We all know the answer to this.

You think that only 2.55 percent of the userbase have FX series nVidia cards? lol

You need to slap yourself and wake up. That 2.55 percent is more likely the userbase that has NV40 and X800 series cards. The rest are running NV30, R300 and under.
 
burningrave101 said:
You think that only 2.55 percent of the userbase have FX series nVidia cards? lol

You need to slap yourself and wake up. That 2.55 percent is more likely the userbase that has NV40 and X800 series cards. The rest are running NV30, R300 and under.

"Here you go, a quick analysis of the Steam stats indicates that there are about 245,000 FX boards in total, which represents about 18.9% of their total userbase; however, the number of boards that they stated they would not treat as DX8 for the FX series is 33,047, or 2.55% of their userbase. Conversely, the number of DX9 ATI boards is 386,956, or 29.85% of their userbase - and these will receive no benefit from extra effort in coding and supporting different precisions.

Forget IHV favoritism or who paid who; if you were running a company that has to do cost-benefit analysis, what would these stats tell you?"
 
You guys are hilarious. One guy on a forum comes up with a conspiracy, and suddenly it's gospel truth and front-page alert-the-presses news.

Who knows the circumstances that led up to the current outcome. Maybe they had problems with some of their test systems, with certain levels? I have no idea. But to just jump to the "OMG VALVE HATES NVIDIA LOVES ATI" conclusion makes you all look like assclowns.

Maybe next time you guys make a full game engine, designed to work across DX7-DX9 hardware and a multitude of system configurations.. you'll have room to talk, or jump to conclusions for that matter.
 
delusion_ said:
You guys are hilarious. One guy on a forum comes up with a conspiracy, and suddenly it's gospel truth and front-page alert-the-presses news.

Who knows the circumstances that led up to the current outcome. Maybe they had problems with some of their test systems, with certain levels? I have no idea. But to just jump to the "OMG VALVE HATES NVIDIA LOVES ATI" conclusion makes you all look like assclowns.

Maybe next time you guys make a full game engine, designed to work across DX7-DX9 hardware and a multitude of system configurations.. you'll have room to talk, or jump to conclusions for that matter.

The news is there is a huge performance increase for FX cards that was already supposed to be there, according to Valve. And believe it or not, you don't need to make a game to have an opinion.
 
just out of curiosity... has anyone tried playing hl2 on an xgi volari graphics card?
 
I'm just glad this thread finally includes the word *assclowns*.

And to reiterate - Microsoft wrote the DX9 spec. When they did, they specified that full precision means calculating to at least 24-bit precision. Nvidia created a video card series. When they did, they decided that a full-precision request would be processed at the much more expensive 32-bit precision. Valve isn't calling for 32-bit precision. FX cards are slow in this situation because they choose to do much more than the software is asking them to.

I own an Nvidia card (now) because I believe the 6800GT is the best value in a top-performance video card on the planet right now. I stand by my assertion that Nvidia's entire series of cards from the previous generation is hampered by serious design flaws. Too bad for their owners. However, this isn't news. I've been saying the same thing since way back when we were all playing BF1942 and UT2003 (remember, last summer?).

Give Valve a break. They've just released the best FPS ever created (IMO), which happens to annoy a lot of people for a lot of reasons. If you want to flame them, try the easy targets: "Steam sucks" and/or "Why SecuROM 5 with CD checks for the retail edition when a Steam download version is essentially the same thing without a CD?"
 
fallguy said:
The fact is, they didn't do mixed paths in the release version. Another fact is that if NV hadn't chosen only FP16 and FP32 and had used FP24, this wouldn't be a problem. Try laying some of the blame on the very poor design of the FX cards.

1) You cannot blame nVidia anymore; it's done with. So what, Valve knew it at the time.

2) How exactly does that explain, then, that an FX card changed to an ATi card ID runs in DX9 mode without graphical glitches?

***

That #2 is still the kicker...
 
DropTech said:
1) You cannot blame nVidia anymore; it's done with. So what, Valve knew it at the time.

2) How exactly does that explain, then, that an FX card changed to an ATi card ID runs in DX9 mode without graphical glitches?

***

That #2 is still the kicker...

#2 is easy to explain. Pay attention, I'll type slowly. When an FX card is run using the DX9 path (as in when you fool the game into thinking it's a 9800 Pro) the instructions are run and processed just fine. The problem is that it's slow as hell because all of the calculations in the areas we've been talking about are computed with 32-bit precision instead of the 24-bit precision that the spec calls for and the application expects. This is by design of the Nvidia cards. In order to speed things up, folks have been proposing that those calculations should have been done with partial precision (16-bit) instead of full precision. For an overwhelming majority of FX cards, this is a moot point since they run in DX 8.1 compatibility mode.

The thought that this is a conspiracy against Nvidia is absurd. Why? Two reasons. First off, this doesn't impact Nvidia's newest generation of cards at all, and you can readily believe that Valve has had access to them for some time now regardless of politics. Second, if this is a problem for users, it isn't Nvidia that will be hurting - Valve is.
 
rcolbert said:
#2 is easy to explain. Pay attention, I'll type slowly. When an FX card is run using the DX9 path (as in when you fool the game into thinking it's a 9800 Pro) the instructions are run and processed just fine. The problem is that it's slow as hell because all of the calculations in the areas we've been talking about are computed with 32-bit precision instead of the 24-bit precision that the spec calls for and the application expects. This is by design of the Nvidia cards. In order to speed things up, folks have been proposing that those calculations should have been done with partial precision (16-bit) instead of full precision. For an overwhelming majority of FX cards, this is a moot point since they run in DX 8.1 compatibility mode.

The thought that this is a conspiracy against Nvidia is absurd. Why? Two reasons. First off, this doesn't impact Nvidia's newest generation of cards at all, and you can readily believe that Valve has had access to them for some time now regardless of politics. Second, if this is a problem for users, it isn't Nvidia that will be hurting - Valve is.

I already know all that... but switching the name doesn't switch the physical features of the card (driver-level orientation), thus the game, regardless of the name, will recognize that the card physically can do 32-bit precision, and because it calls for full precision, it will run at 32-bit precision. So I don't buy the precision thing as a viable answer.

And I think you think I mean FPS and performance... that's not what I mean. What I mean is water disappearing, walls disappearing, parts of the map not rendering...
 
Actually, the game does not make calls that specify precision in terms of bits of precision. The game calls for full precision (not 24- or 32-bit precision) in accordance with the DirectX specification. The card subsequently processes the instruction out of spec as 32-bit. Anyway, the simple answer is that the root cause is Nvidia doesn't obey the DirectX spec. Don't even get me started on shader instruction order sequences. That's probably an equally weighty argument that condemns the FX5 series as the piles and piles of poop that it is.
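
To spell out how those precision classes map, here's a hedged summary in HLSL terms (illustrative declarations, not HL2 source):

Code:
// DX9 pixel shaders know two precision classes, not bit counts:
//
//   float -> "full precision":    at least FP24 per the PS 2.0 spec
//                                 (FP24 on ATI R300/R420, FP32 on NV3x/NV40)
//   half  -> "partial precision": at least FP16; the compiler emits the _pp
//                                 instruction modifier, which hardware is free
//                                 to ignore (ATI simply keeps running FP24)
//
float4 fullPrecValue;   // NV3x executes this at FP32 - its slow path
half4  partPrecValue;   // NV3x may execute this at FP16 - its fast path

So when a shader is compiled without any half/_pp hints, an FX card has no spec-legal option cheaper than FP32, because it has no FP24 mode at all.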
 
rcolbert said:
Actually, the game does not make calls that specify precision in terms of bits of precision. The game calls for full precision (not 24- or 32-bit precision) in accordance with the DirectX specification. The card subsequently processes the instruction out of spec as 32-bit. Anyway, the simple answer is that the root cause is Nvidia doesn't obey the DirectX spec.

I said the game calls for full precision... but anyway.

They (nVidia) still do it, and offer the same performance for, theoretically, more quality (and the future). But anyway. Yes, the FX series borked both FP and PS/VS requirements.

I just don't buy the whole FP precision thing as the cause of the graphical glitches (I don't mean shader-related; I mean parts rendered completely differently in some cases when forced to DX9 -- and fixed when the card is given an ATi ID). However, at the same time, if it was related to identification, there is no conspiracy, because as you said they would have had ample time to add the 6x00 series to the same "conspired" list. I just wanna know what it is...

W/e, lol, yeah, I've heard enough over at FM about the whole shader instruction order thing on the FX series.
 