Valve sucks

The DirectX standard demands that every card/driver that supports shader model X must also support all shader models < X.
As far as I know, SM2.0b is no exception... so the GeForce 6800 should already support SM2.0b, and future ATis and other brands will have to as well.
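To put that in code terms: a minimal sketch (the helper name and the fallback ladder are mine, purely illustrative, not any particular game's code) of how a D3D9 game picks its shader path from the caps:

#include <d3d9.h>

// Pick the highest pixel shader profile the device reports. Because each
// shader model is required to be a superset of the ones below it, checking
// the reported version alone tells you every lower profile will work too.
const char* PickPixelShaderProfile(IDirect3D9* pD3D)
{
    D3DCAPS9 caps;
    pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0)) return "ps_3_0";
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0)) return "ps_2_0"; // ps_2_b needs an extra look at caps.PS20Caps
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4)) return "ps_1_4";
    return "ps_1_1";
}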
 
Scali said:
The DirectX standard demands that every card/driver that supports shader model X must also support all shader models < X.
As far as I know, SM2.0b is no exception... so the GeForce 6800 should already support SM2.0b, and future ATis and other brands will have to as well.

But there will be no benefit to programming an SM2.0b path for any card but the X800. Cards newer than the X800 will benefit more from SM3.0, and cards that don't support SM2.0b will benefit more from SM2.0. Since the only card that benefits most from SM2.0b is the X800, it is very much a series-specific shader model. If the "percentages" are taken into account as the way to decide which card features to support, then once the X800 goes out of production and the new ATI card supports SM3.0, SM2.0b will not be worth programming and testing, because with all the new cards SM3.0 will be the new tech, and that is what enthusiasts will be looking for games to support. The X800 will become like the GeForce FX in that, to get the most out of it, you need special programming that doesn't benefit other cards much. So if you are advocating that Valve not use partial precision, realize that you are also advocating the dropping of SM2.0b, because the two are very similar - both are necessities to get the most out of one specific series of hardware, while other hardware either doesn't benefit from it or has other ways it will benefit more.
 
Question:
If Valve already knew the effect of FP16 on shaders (which ones would be ruined, which ones would suffer "acceptable" errors, and which ones would come across just fine), then why not simply implement partial precision on the DX9 path for the subset of shaders that showed no errors, and let users choose between quality and speed? That way, DX9 wouldn't be as slow on FX hardware, and users could decide for themselves whether it's worth it. (And of course, 6800 owners would gain a few FPS as well.)

Perhaps we'll see PP hints if the rumors of an SM3.0 path being patched in later are true. Unfortunately, I haven't heard that one in a while, so I'm not holding out much hope.
 
gsboriqua said:
They will, but that doesn't mean that all of a sudden ATi's new cards will no longer be able to run SM2.0b.

What would be the point of having next-gen cards run SM2.0b when they can run SM3.0?
 
tranCendenZ said:
But there will be no benefit to programming an SM2.0b path for any card but the X800. Cards newer than the X800 will benefit more from SM3.0, and cards that don't support SM2.0b will benefit more from SM2.0. Since the only card that benefits most from SM2.0b is the X800, it is very much a series-specific shader model. If the "percentages" are taken into account as the way to decide which card features to support, then once the X800 goes out of production and the new ATI card supports SM3.0, SM2.0b will not be worth programming and testing, because with all the new cards SM3.0 will be the new tech, and that is what enthusiasts will be looking for games to support.

That depends... Many shaders will not require SM3.0, and will compile and work equally well for SM2.0b (see 3DMark05 or FarCry for example).
If the X800 turns out to be popular enough, perhaps games will be written for SM2.0b rather than SM3.0. And developers can always choose to compile SM2.0 shaders to SM2.0b on cards that support it, which may give slightly better performance in some cases. So while it may not be as significant as SM2.0 or SM3.0, it can still have its uses. In many cases it's as simple as recompiling your shaders.

The X800 will become like the GeForce FX in that, to get the most out of it, you need special programming that doesn't benefit other cards much. So if you are advocating that Valve not use partial precision, realize that you are also advocating the dropping of SM2.0b, because the two are very similar - both are necessities to get the most out of one specific series of hardware, while other hardware either doesn't benefit from it or has other ways it will benefit more.

Not at all. The FX problem is that the card is simply too slow with regular SM2.0. An X800 is fast in both SM2.0 and SM2.0b, so whichever you choose, you win.
The difference is that SM2.0b is easy to use: just recompile your SM2.0 (or some/many SM3.0) shaders and you're done. There's no need to rewrite all the shaders and reduce precision by hand to try to make the hardware run at acceptable framerates. They will run at acceptable framerates with standard SM2.0 anyway (in fact the X800 series is currently the fastest SM2.0 hardware you can buy); you just have the option to make the shaders slightly faster, or slightly more advanced. It's an option which you may or may not use. For the FX series there's no option: SM2.0 simply doesn't run anywhere near as fast as on ATi's competing SM2.0 hardware. In fact, even after optimizing for FX, they're still slower (see the mixed-mode benchmarks).
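To make the "just recompile" point concrete, this is roughly all it amounts to with D3DX (the file name and entry point here are made up for illustration):

#include <d3dx9.h>

// Compile the same HLSL source against a different target profile per
// hardware tier: "ps_2_0" everywhere, "ps_2_b" on X800-class cards,
// "ps_3_0" on 6800-class cards. If the shader fits the profile, this
// really is the entire porting effort.
LPD3DXBUFFER CompileForProfile(const char* profile)
{
    LPD3DXBUFFER code = NULL, errors = NULL;
    HRESULT hr = D3DXCompileShaderFromFileA("water.fx", NULL, NULL,
                                            "WaterPS", profile, 0,
                                            &code, &errors, NULL);
    if (FAILED(hr) && errors)
        OutputDebugStringA((const char*)errors->GetBufferPointer()); // e.g. too many instructions for ps_2_0
    if (errors) errors->Release();
    return code; // NULL if this shader doesn't fit the requested profile
}

If a shader genuinely needs something the lower profile can't express, the compile simply fails and you keep a separate path for that shader.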
 
tornadotsunamilife said:
What would be the point of having next-gen cards run SM2.0b when they can run SM3.0?

Most games use a combination of various shader models. There's no need to use SM3.0 for everything. In fact, some things don't need shaders at all.
So, just as many current games use SM1.1 or the fixed-function pipeline next to SM2.0, I don't see why future games wouldn't use SM2.0b next to SM3.0.
The lower the shader version you use, the fewer fallback paths you have to implement.
 
What I would really like to see is a text file with option settings for every shader in the game. That way, we can tweak each shader to our liking, choosing dx81 for this shader, dx9 for this one, dx9 partial precision for this one, etc.

That would be the best solution for most of us, and the great thing is that it would allow Valve to focus on updates/bug hunting/bonus content while the 1337 users make shader configuration files and distribute them to others with like hardware.

Of course, more than likely all that's actually going to happen is that NVIDIA will rewrite a majority of the shaders in HL2 in such a way as to help the FX line gain performance (since it doesn't appear that the 6xxx's are much affected, at least by precision level).

Edit:
Sweet! I am now [H]Lite!
 
Scali said:
That depends... Many shaders will not require SM3.0, and will compile and work equally well for SM2.0b (see 3DMark05 or FarCry for example).
If the X800 turns out to be popular enough, perhaps games will be written for SM2.0b rather than SM3.0.

That's a rather far-fetched scenario. Even in this generation the SM3.0 6800 cards are outselling the X800 cards by a wide margin, and even the $99 GeForce 6200 supports SM3.0. And the next-gen ATI cards are going SM3.0. Come a year from now, X800 owners with SM2.0b support will definitely be in the minority, probably a similar number to GeForce FX owners.

And developers can always choose to compile SM2.0 shaders to SM2.0b on cards that support it, which may give slightly better performance in some cases. So while it may not be as significant as SM2.0 or SM3.0, it can still have its uses. In many cases it's as simple as recompiling your shaders.

Just compiling SM2.0 to SM2.0B doesn't even come close to taking full advantage of the X800.



Not at all. The FX problem is that the card is simply too slow with regular SM2.0. An X800 is fast in both SM2.0 and SM2.0b, so whichever you choose, you win.

Now it is. But next year, when longer shaders become commonplace, SM2.0 will either be slower or effects will be cut down. The year the GeForce FX came out, it was more than fast enough for all the games released. It wasn't until over a year later that a game came out (Half-Life 2) that made it too slow to be playable in DX9 without special coding.

The difference is that SM2.0b is easy to use: just recompile your SM2.0 (or some/many SM3.0) shaders and you're done.

Recompiling SM2.0 for SM2.0B doesn't really take advantage of SM2.0B, and I don't consider that a true SM2.0B implementation. SM3.0, once it is taken full advantage of next year, won't be able to simply compile down to SM2.0B due to the features it has that SM2.0B does not; some shader rewriting will need to take place.

There's no need to rewrite all the shaders and reduce precision by hand to try to make the hardware run at acceptable framerates.

??? You have it backwards there. There is *more* need to rewrite shaders with an SM2.0B path, because it does not support the full featureset of SM3.0, and no need to rewrite shaders for PP. Partial Precision just entails adding one line of code to your shaders, which could easily be batch processed and take no longer than recompiling shaders for SM2.0B. Both must be tested equally, so if anything SM2.0B is far more of a hassle than PP because SM2.0B will require some rewriting of SM3.0 shaders while PP does not require any rewriting.

They will run at acceptable framerates with standard SM2.0 anyway (in fact the X800 series is currently the fastest SM2.0 hardware you can buy); you just have the option to make the shaders slightly faster, or slightly more advanced. It's an option which you may or may not use. For the FX series there's no option: SM2.0 simply doesn't run anywhere near as fast as on ATi's competing SM2.0 hardware. In fact, even after optimizing for FX, they're still slower (see the mixed-mode benchmarks).

Just like the GeForce FX couldn't run SM2.0 games nearly as fast this year once devs took full advantage of SM2.0, X800 won't be able to run SM3.0 games nearly as fast once they take full advantage of SM3.0 next year, especially if devs do not cater to X800's exclusive sm2.0b path - which will be by far in the minority, similar to GeForce FX owners this year.
 
tranCendenZ said:
That's a rather far-fetched scenario. Even in this generation the SM3.0 6800 cards are outselling the X800 cards by a wide margin, and even the $99 GeForce 6200 supports SM3.0. And the next-gen ATI cards are going SM3.0. Come a year from now, X800 owners with SM2.0b support will definitely be in the minority, probably a similar number to GeForce FX owners.

Proof? From my understanding the 9800 series is still selling best right now, thanks to HL2. As for new-gen cards, both are in the same boat. But once again, as Scali was saying (nice new "noob," I might add - good read), it's not that hard to scale shaders back anyway. Programmers will still be writing SM2.0 shaders for a long time, so it's not a big step to the next rung towards SM3.0, which is SM2.0b - it really isn't THAT far off. And once again, we saw how easily Far Cry did it.

Now it is. But next year, when longer shaders become commonplace, SM2.0 will either be slower or effects will be cut down. The year the GeForce FX came out, it was more than fast enough for all the games released. It wasn't until over a year later that a game came out (Half-Life 2) that made it too slow to be playable in DX9 without special coding.
Nope, nope. Far Cry was already bringing it to its knees. The GeForce FX series simply wasn't "enough" for games. NVIDIA didn't feel like they had to make as many leaps and bounds before. Then ATi, much like in the AMD vs. Intel wars, came along and really brought the heat with the 9700, offering fast performance at really high resolutions with AA.


Recompiling SM2.0 for SM2.0B doesn't really take advantage of SM2.0B, and I don't consider that a true SM2.0B implementation. SM3.0, once it is taken full advantage of next year, won't be able to simply compile down to SM2.0B due to the features it has that SM2.0B does not; some shader rewriting will need to take place.
You actually don't know this yet; it's all speculation. Pacific Fighters, I might add, is an SM3.0 game that brings the 6800 series to its knees.

Just like the GeForce FX couldn't run SM2.0 games nearly as fast this year once devs took full advantage of SM2.0, X800 won't be able to run SM3.0 games nearly as fast once they take full advantage of SM3.0 next year, especially if devs do not cater to X800's exclusive sm2.0b path - which will be by far in the minority, similar to GeForce FX owners this year.
Careful. The same thing might be happening with SM3.0 now (Pacific Fighters).
Besides, on writing an exclusive path for GeForce FX users: we are really talking about one game and one game company that didn't do it, and that still hasn't added all the features they were planning on. You're talking like there is an epidemic of poorly written games. The real problem is just crappy games in general :D
 
tranCendenZ said:
That's a rather far-fetched scenario. Even in this generation the SM3.0 6800 cards are outselling the X800 cards by a wide margin, and even the $99 GeForce 6200 supports SM3.0. And the next-gen ATI cards are going SM3.0. Come a year from now, X800 owners with SM2.0b support will definitely be in the minority, probably a similar number to GeForce FX owners.

Oh please, let's not pretend that GeForce 6200 can actually RUN SM3.0. It's just like the 5200 was (and 9550/9600SE), a card with all the features, but horribly slow.
I'm not too sure about your claim of the 6800 outselling the X800 either. We can't really tell until they become widely available and less pricey (e.g. when the next generation is introduced). Currently they're all still marginal (less than 2% on the Valve Steam Survey).
I'm not saying it's going to happen, I'm just not ignoring the possibility.

Just compiling SM2.0 to SM2.0B doesn't even come close to taking full advantage of the X800.

Firstly, I never claimed it would take full advantage, I said it "may give slightly better performance in some cases"... Secondly, I like the arguments that you present on this issue.

Now it is. But next year, when longer shaders become commonplace, SM2.0 will either be slower or effects will be cut down. The year the GeForce FX came out, it was more than fast enough for all the games released. It wasn't until over a year later that a game came out (Half-Life 2) that made it too slow to be playable in DX9 without special coding.

The year the GeForce FX came out, there was no SM2.0 code at all yet.
Also, if you just need longer SM2.0 shaders, SM2.0b is perfect for the job. Just look at FarCry (as I mentioned before).

Recompiling SM2.0 for SM2.0B doesn't really take advantage of SM2.0B, and I don't consider that a true SM2.0B implementation. SM3.0, once it is taken full advantage of next year, won't be able to simply compile down to SM2.0B due to the features it has that SM2.0B does not; some shader rewriting will need to take place.

Some yes, not all. But whatever shaders can compile under SM2.0b are an advantage for the X800, at no extra effort for the developers.

??? You have it backwards there. There is *more* need to rewrite shaders with an SM2.0B path, because it does not support the full featureset of SM3.0, and no need to rewrite shaders for PP. Partial Precision just entails adding one line of code to your shaders, which could easily be batch processed and take no longer than recompiling shaders for SM2.0B. Both must be tested equally, so if anything SM2.0B is far more of a hassle than PP because SM2.0B will require some rewriting of SM3.0 shaders while PP does not require any rewriting.

Apparently you have never developed any shaders. Partial precision is a suffix that can be added to an instruction in DX9's shader assembly language. So in the case of assembly shaders, you have to figure out which instructions can get away with partial precision in your particular shader, and then add the suffix.
In the case of HLSL, you use the datatype 'half' rather than 'float' to signal that the compiler should treat these variables with partial precision. Again you have to analyze the shader and change the variables that can get away with partial precision.
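For anyone following along, this is roughly what that looks like in practice (a toy shader of my own for illustration, not one from HL2):

#include <d3dx9.h>
#include <string.h>

// Toy HLSL pixel shader: 'half' marks values the compiler may treat with
// partial (FP16) precision, while 'float' stays full precision. On FX
// hardware the half path is considerably faster.
static const char* g_src =
    "sampler2D baseMap : register(s0);                       \n"
    "float4 main(float2 uv : TEXCOORD0) : COLOR              \n"
    "{                                                       \n"
    "    half4 base = tex2D(baseMap, uv); // colour: FP16 ok \n"
    "    half3 tint = half3(1.0, 0.9, 0.8);                  \n"
    "    return half4(base.rgb * tint, base.a);              \n"
    "}                                                       \n";
// The same idea in ps_2_0 assembly is the _pp suffix on each instruction
// that can tolerate reduced precision, e.g.
//     texld_pp r0, t0, s0
//     mul_pp   r0.rgb, r0, c0

HRESULT CompileToyShader(LPD3DXBUFFER* ppCode)
{
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXCompileShader(g_src, (UINT)strlen(g_src), NULL, NULL,
                                   "main", "ps_2_0", 0, ppCode, &errors, NULL);
    if (errors) errors->Release();
    return hr;
}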

Of course you could just brute-force everything to partial precision and then do a visual inspection, but that's not doing your software or the hardware much justice in most cases. You'd just end up with a few shaders that happen to work okay-ish with partial precision (but could look better when hand-tuned), and a lot of shaders that don't work with partial precision, so they have to run at full precision, even though at least parts of them could have been running in partial precision with much better performance.

SM2.0b will never be that much of a hassle. But the point I was trying to make is that developers have the OPTION. X800 will be fast in SM2.0 anyway, so it's not crucial that they develop SM2.0b paths. GeForce FX however has to be degraded to DX8.1 level to get the performance of Radeons in SM2.0. I'm sure that you will agree that the X800 is in a much better position than the FX. Option vs necessity.

Just like the GeForce FX couldn't run SM2.0 games nearly as fast this year once devs took full advantage of SM2.0, X800 won't be able to run SM3.0 games nearly as fast once they take full advantage of SM3.0 next year, especially if devs do not cater to X800's exclusive sm2.0b path - which will be by far in the minority, similar to GeForce FX owners this year.

So far it doesn't particularly look like the 6800 series will be able to outperform the X800 much with SM3.0 either, so the full SM3.0 games will probably be reserved for the next generation of hardware for both vendors.
The point was mainly that the X800 can benefit from any shaders that will compile under SM2.0b but not under SM2.0. In the case of FarCry, it seems that all their SM3.0 shaders turned out to work in SM2.0b as well. That's a nice little bonus for X800 users, is it not? No more than a bonus though, because they ran excellently in SM2.0 already (better than on the 6800 series, ironically enough).

Let me summarize my point:
GeForce FX was sold as an SM2.0 part, and cannot live up to the expectations, not even when developers cater specifically for its needs (which could take a lot of extra development time).
X800 is sold as an SM2.0 part, and performs adequately, yet offers the option of SM2.0b to improve quality and/or performance a bit, if developers desire to do so (which doesn't have to take much time at all, depending on the case).

In other words: No X800 user is going to open a 'Valve Sucks' thread if they find that the SM3.0 path in Half-Life 3 doesn't work properly on their X800 card.
 
starhawk said:
then let's get to work... valve ain't gonna hand it to us you know!

I don't think you understand. Unless I'm mistaken, at present there are no such shader settings accessible without recompiling the shaders. I would like to see said text file with options for which water shader to use (i.e., simple reflections with dx81, reflect all with dx81, simple reflections with dx9, etc.), which glass shader to use (i.e., dx81, dx90, dx90pp, etc.), and so on.

I'm not asking for individual tweaking of the values in each shader (which I didn't clarify earlier), but I am asking for something that is not currently available in the game, as each shader path is currently self-contained. At present you cannot specify each shader individually, only which entire group of shaders you want to use.
 
that's what i thought.

that doesn't change the fact that if WE want change then WE have to implement it. the only other viable option is boycotting VALVe, and (1) i'm willing to bet that at least 3 people flame me for even suggesting this and (2) there aren't enough of us who hate hl2 to make a hint of a shadow of half a dent in VALVe's profits. they wouldn't even notice.
 
starhawk said:
that's what i thought.

that doesn't change the fact that if WE want change then WE have to implement it. the only other viable option is boycotting VALVe, and (1) i'm willing to bet that at least 3 people flame me for even suggesting this and (2) there aren't enough of us who hate hl2 to make a hint of a shadow of half a dent in VALVe's profits. they wouldn't even notice.

Even if you made a dent they'd just turn off volume maps.
 
Scali said:
Oh please, let's not pretend that GeForce 6200 can actually RUN SM3.0. It's just like the 5200 was (and 9550/9600SE), a card with all the features, but horribly slow.
I'm not too sure about your claim of the 6800 outselling the X800 either. We can't really tell until they become widely available and less pricey (e.g. when the next generation is introduced). Currently they're all still marginal (less than 2% on the Valve Steam Survey).
I'm not saying it's going to happen, I'm just not ignoring the possibility.

Looking at both the released sales figures, which showed NVIDIA capturing the high-end market with the 6800, and Steam (where ATI should be far ahead given the ATI-related promotions), NVIDIA is clearly in the lead this round.

Firstly, I never claimed it would take full advantage, I said it "may give slightly better performance in some cases"... Secondly, I like the arguments that you present on this issue.

Well, my point is not about SM2.0 games being compiled to SM2.0B; it's about games that take advantage of SM3.0 features that are not part of SM2.0B being reworked for SM2.0B. Those are the cases where the X800's/SM2.0b's minority status may be an issue.


The year the GeForce FX came out, there was no SM2.0 code at all yet.

Sure there was, just not any games that took full advantage of SM2.0. For instance, Halo, Tomb Raider, and Gunmetal all came out that year with some SM2.0 shaders. They just didn't make nearly as intense use of SM2.0 as games did this year. This is similar to how SM3.0 games next year will likely make far more intense use of SM3.0 than games did this year.

Also, if you just need longer SM2.0 shaders, SM2.0b is perfect for the job. Just look at FarCry (as I mentioned before).

Right, this is my point, but let's say a game takes advantage of other SM3.0 features too, like dynamic branching, and those need to be reworked for SM2.0B. If the X800 is the only card to benefit from that SM2.0B rework, and the percentages game is played as Valve played it for the GeForce FX, the X800 will lose out. Instead, devs will just use the SM2.0 path with it, and the full potential will be lost.

Some yes, not all. But whatever shaders can compile under SM2.0b are an advantage for the X800, at no extra effort for the developers.

As I stated above, if some shaders use SM3.0 features that can't simply be compiled down but could be reworked for SM2.0B, then reworking them, instead of just defaulting to SM2.0 for X800 cards, would be using the full potential of the X800. But since reworking an SM3.0 shader for SM2.0b only benefits the X800, it loses out if you go by "minority" status.

Apparently you have never developed any shaders. Partial precision is a suffix that can be added to an instruction in DX9's shader assembly language. So in the case of assembly shaders, you have to figure out which instructions can get away with partial precision in your particular shader, and then add the suffix.
In the case of HLSL, you use the datatype 'half' rather than 'float' to signal that the compiler should treat these variables with partial precision. Again you have to analyze the shader and change the variables that can get away with partial precision.

Of course you could just brute-force everything to partial precision and then do a visual inspection, but that's not doing your software or the hardware much justice in most cases. You'd just end up with a few shaders that happen to work okay-ish with partial precision (but could look better when hand-tuned), and a lot of shaders that don't work with partial precision, so they have to run at full precision, even though at least parts of them could have been running in partial precision with much better performance.

Oh yes, apparently I've never coded shaders, but when you restate the same process I described in your second paragraph here, it's valid! lol. Forcing everything to PP, then running through the game and seeing which shaders show artifacting from it, then removing the PP string from those shaders is going to look far superior to running DX8.1 as Valve did, and perform far better than straight full precision. And the process would take no longer than adding an SM2.0b codepath to a game that extensively uses SM3.0; in fact it would be much easier and take less time.

SM2.0b will never be that much of a hassle.

The SM3.0 and SM2.0B specs differ by a large amount, so yes, it could be quite a hassle in the future.

But the point I was trying to make is that developers have the OPTION. X800 will be fast in SM2.0 anyway, so it's not crucial that they develop SM2.0b paths. GeForce FX however has to be degraded to DX8.1 level to get the performance of Radeons in SM2.0. I'm sure that you will agree that the X800 is in a much better position than the FX. Option vs necessity.

Right, they have the option. So in the future, if they choose the option to save the complex effects for the SM3.0 shaders and use a simplified version for SM2.0, then you won't complain, right? The X800 is actually in a similar position to the FX - to get the full potential out of it, you need to code in a way that only the X800 will benefit from.

So far it doesn't particularly look like the 6800 series will be able to outperform the X800 much with SM3.0 either, so the full SM3.0 games will probably be reserved for the next generation of hardware for both vendors.

So far SM3.0 has only increased performance for the 6800 series, so full SM3.0 games may run great on 6800 cards.

The point was mainly that the X800 can benefit from any shaders that will compile under SM2.0b but not under SM2.0. In the case of FarCry, it seems that all their SM3.0 shaders turned out to work in SM2.0b as well. That's a nice little bonus for X800 users, is it not? No more than a bonus though, because they ran excellently in SM2.0 already (better than on the 6800 series, ironically enough).

FarCry's shaders just scratched the surface of SM3.0; IIRC the only things they have are one lighting shader that exceeds the SM2.0 instruction limit, plus geometry instancing, and that's it.

Let me summarize my point:
GeForce FX was sold as an SM2.0 part, and cannot live up to the expectations, not even when developers cater specifically for its needs (which could take a lot of extra development time).

X800 is sold as an SM2.0 part, and performs adequately, yet offers the option of SM2.0b to improve quality and/or performance a bit, if developers desire to do so (which doesn't have to take much time at all, depending on the case).

Heh, actually the X800 was advertised by ATI as having "unparalleled DX9 shader support" and the "latest DX9 shader technology" - not because they are being modest, but because they are behind in technology, and advertising "SM2.0B" would be more hurtful than helpful to them when NVIDIA has SM3.0.

In other words: No X800 user is going to open a 'Valve Sucks' thread if they find that the SM3.0 path in Half-Life 3 doesn't work properly on their X800 card.

We'll see, people generally don't like it when the full potential of their hardware isn't tapped :)
 
tornadotsunamilife said:
What would be the point of having next-gen cards run SM2.0b when they can run SM3.0?
Well, considering we are almost a year past the "paper" release of the 6800 and have one game with a patch that sorta does SM3.0, I'd say it's too early. Now take a look at what performance or IQ improvements SM3.0 has brought: it's all hype at this point. HDR looks like a cartoon, and an SM3.0 part gets beaten by an SM2.0 part. Go figure.

btw: why is this thread still going?
 
tranCendenZ said:
Looking at both the released sales figures, which showed NVIDIA capturing the high-end market with the 6800, and Steam (where ATI should be far ahead given the ATI-related promotions), NVIDIA is clearly in the lead this round.
LINK WITH VALID DATA?



tranCendenZ said:
We'll see, people generally don't like it when the full potential of their hardware isn't tapped :)
I would like a card that can take the full potential of the software!!!!!

But isn't it sad that I have to lower my resolution when enabling HDR on my 6800??????
 
tranCendenZ said:
Looking at both the released sales figures, which showed NVIDIA capturing the high-end market with the 6800, and Steam (where ATI should be far ahead given the ATI-related promotions), NVIDIA is clearly in the lead this round.

My point was that the entire high-end market is very small in general, and until SM2.0b/SM3.0 hit the mainstream, most developers may choose not to support either.

Well, my point is not about SM2.0 games being compiled to SM2.0B; it's about games that take advantage of SM3.0 features that are not part of SM2.0B being reworked for SM2.0B. Those are the cases where the X800's/SM2.0b's minority status may be an issue.

Only if the SM2.0 reworkings of the shaders aren't sufficient. The way it looks now, X800s have little trouble keeping up with 6800s, even when the X800s run SM2.0 and the 6800s run SM3.0. To me it's more likely that the 6800s will run into problems when the next generation of SM3.0 hardware comes around and software starts using SM3.0 more extensively.
And unlike X800 owners, 6800 owners would have a right to complain, especially if they chose the 6800 over an X800 mainly because of the SM3.0 advantage.

Sure there was, just not any games that took full advantage of SM2.0. For instance, Halo, Tomb Raider, and Gunmetal all came out that year with some SM2.0 shaders. They just didn't make nearly as intense use of SM2.0 as games did this year. This is similar to how SM3.0 games next year will likely make far more intense use of SM3.0 than games did this year.

I don't consider them SM2.0 games. They're SM1.x games with some spiced-up shaders tacked on. The FX series already ran into trouble there, though.
I also think you cannot compare the move from SM1.x to SM2.0 with the move from SM2.0 to SM3.0. For starters, the move from SM1.x to SM2.0 was more significant from a technical point of view... And then there's the difference in speed between SM1.x parts and SM2.0 parts. Currently there is virtually no speed difference between SM2.0 and SM3.0 parts, which means that if a shader is too complex for SM2.0, it will most probably be too complex for SM3.0. And on the 6800, the things that could theoretically make a difference, such as conditional branching, aren't implemented all that efficiently. We've seen a Humus demo that emulated the branching with the stencil buffer and multipass on SM2.0, and got very good performance.
So I think that in most cases, if the code can run on a 6800, you can make it run on an X800 (with reasonably little effort).
When we get to 'real' SM3.0 games, we'll have the next generation of hardware from both IHVs to deal with them.

Right, this is my point, but let's say a game takes advantage of other SM3.0 features too, like dynamic branching, and those need to be reworked for SM2.0B. If the X800 is the only card to benefit from that SM2.0B rework, and the percentages game is played as Valve played it for the GeForce FX, the X800 will lose out. Instead, devs will just use the SM2.0 path with it, and the full potential will be lost.

If a game uses dynamic branching, it will obviously have to be reworked for SM2.0 anyway. In that case, it's only a small step to SM2.0b.
Besides, you seem to be grossly overrating the full potential of SM2.0b. The main advantage is longer shaders. If you have dynamic branching, you need to cut the shader up into multiple passes anyway, so you end up with small shaders for each pass.

Oh yes, apparently I've never coded shaders, but when you restate the same process I described in your second paragraph here, it's valid! lol. Forcing everything to PP, then running through the game and seeing which shaders show artifacting from it, then removing the PP string from those shaders is going to look far superior to running DX8.1 as Valve did, and perform far better than straight full precision. And the process would take no longer than adding an SM2.0b codepath to a game that extensively uses SM3.0; in fact it would be much easier and take less time.

I've already explained that this will get far from optimal results, both in terms of visual quality and performance. It is basically the equivalent of recompiling SM2.0 shaders for SM2.0b. You may get some gain, but not the full potential. The proper way is to hand-tune the shaders, which takes time, especially for FX (the low precision can be a big problem for image quality).

Anyway, the comparison is invalid, since the FX is already SM2.0 hardware: effectively you're only optimizing, not targeting a different architecture, as with SM2.0b vs SM3.0.
Also, your claim that it would take a lot of time to add an SM2.0b codepath to a game that extensively uses SM3.0 is only valid if there is no SM2.0 codepath at all. Otherwise, most of the work for the SM2.0b codepath has already been done for the SM2.0 path (no precision problems either). Basically the only thing left to do is combine the shaders for multiple passes into longer single-pass shaders, which is little more than some copy-pasting of shader code.

The SM3.0 and SM2.0B specs differ by a large amount, so yes, it could be quite a hassle in the future.

In theory yes. In practice, most shaders will not use all the features of SM3.0, they will in fact not use all features of SM2.0, so most shaders should not pose a problem.

Right, they have the option. So in the future, if they choose the option to save the complex effects for the SM3.0 shaders and use a simplified version for SM2.0, then you won't complain, right? The X800 is actually in a similar position to the FX - to get the full potential out of it, you need to code in a way that only the X800 will benefit from.

Not true at all. As I tried to explain before, the FX is basically nothing but a GF4, with the option to execute ps1.4/SM2.0 code with very low performance. With a lot of effort, you can add some SM2.0-ish effects at reduced image quality, and on a 5900Ultra you may even get playable framerates... but that's about it. Problem is that apparently some people think it's a fully functional SM2.0 part, because that's what NVIDIA wants them to believe (and they didn't see the writing on the wall after 3DMark03, 3DMark05, TRAOD, etc?).

The X800 on the other hand has plenty of potential even with SM2.0 code. SM2.0b doesn't make a lot of difference. The step is way smaller than from ps1.3 to ps2.0, and ATi never pretended that the X800 would be capable of much more than the 9800. I suppose most people see it as little more than a faster version of the 9800. So there shouldn't be issues between what people think the hardware is capable of, and what it is actually capable of.

So far SM3.0 has only increased performance for the 6800 series, so full SM3.0 games may run great on 6800 cards.

That makes absolutely no sense.

Heh, actually the X800 was advertised by ATI as having "unparalleled DX9 shader support" and the "latest DX9 shader technology" - not because they are being modest, but because they are behind in technology, and advertising "SM2.0B" would be more hurtful than helpful to them when NVIDIA has SM3.0.

If ATi was speaking in the context of their product line, I see nothing wrong about those statements.
Also, I have yet to see the first real-world example of NVIDIA's SM3.0 hardware outperforming ATi's SM2.0b hardware.
By the time we actually get SM3.0 software, ATi's new SM3.0 cards may grossly outperform NVIDIA's parts again, just like they did with SM2.0, and your 6800 may struggle just as badly in SM3.0 as the FX series does in SM2.0.
To me it seems like ATi just isn't rushing their SM3.0 part. There is no reason to anyway. If this means we'll get another top performer like the 9700Pro was, nobody will be talking about ATi being behind with SM2.0b.

If we actually have some SM3.0 games, which actually run faster and look better than on ATi's hardware, then we can say ATi is behind in technology. Until then, I see no problems with ATi.

We'll see, people generally don't like it when the full potential of their hardware isn't tapped :)

Then they should buy consoles, not PCs.
 
R1ckCa1n said:
btw: why is this thread still going?

If something works on some small level then usually people are too lazy to change the status quo.

In other words, real new information came up around post #699 and if one follows that statistical data, something else important may happen in this thread by post #1398.

:D
 
Well, guys, here are my two cents (I admit I could handle only the first 10 pages though, so forgive me if what I'm about to say was already discussed in the last 30)....

To claim that Valve intentionally hurt the performance of the FX line (or of firm X's product line Y) is absurd. Valve is first and foremost a business organization. I am sure that for them, as for id, whether or not to make an optimization is strictly a cost/profit matter. Given how many people own the high-end NV3x products (if Dave is right, 30 pages or so earlier in this thread), they probably made the right choice.

Arguments such as "Crytek did it [made optimizations for nVidia products], why can't Valve do it?" are rather weak, imho. Crytek was in one position during the development of their game, Valve in another. Perhaps Crytek designed their engine to allow easy addition of features like the new shader model? I don't know. I guess after all the work Valve put into HL2, after the deadline fiascos and all the fuss, they just wanted to deliver the best-looking game they could. After all, everyone was expecting a kick-ass game, weren't we? Especially after all that waiting...

About Gabe's comments on nv3x performance a while ago - to claim he was lying, especially based on benchmark results from third parties, is to forget when Valve made those benchmarks (driver versions), how they made them (I don't think anyone out there has simulated the exact conditions Valve ran their tests under), and to assume nVidia were sitting on their hands the whole time.

About the comparisons between engines and OpenGL performance - completely irrelevant here (but I can't resist commenting ;) ). To compare HL2 with Doom3 is a classic case of comparing apples with oranges. HL2 is a DirectX game and would run pretty decently on my machine (if I believe every benchmark I've seen so far), whereas Doom3 is an OpenGL game and forced me to use 800x600 on my 9800 Pro... (meaning: yes, Doom3 does what it does very well, but the user is paying dearly for every fps). It's the same as comparing the graphics of WoW with those of EQ2 - different approaches altogether. And yes, nVidia does have the better OpenGL code overall (any ATI owner who's played KotOR or NWN would agree, I'm sure... On the other hand, I've already forgotten what graphics look like without AA and AF, thanks to my ATI card... Oh well... *shrugs*)

Let's all chill and try to do things in a civilized manner. By the way - has anyone tried to get Valve's opinion on this?
 
I'm just referring to CHxATON's power source, the "CHxATYST" driver.

This driver is "greatly effective" in games and benchmarks.
At the time, the CHxATON fanboys took no notice of that CHxAT driver's image quality.
It looks worse than FP16!

Wow! People need TRUE filtering!
Now, enable true filtering (by changing the registry or using a tweak)...
Oh my, it dropped 50 FPS? Great!

But the CHxATON fanboys take no notice of this....

So very, very "greatly effective"!
 
I'm just referring to CHxATON's power source, the "CHxATYST" driver.

This driver is "greatly effective" in games and benchmarks.
At the time, the CHxATON fanboys took no notice of that CHxAT driver's image quality.
It looks worse than FP16!

Wow! People need TRUE filtering!
Now, enable true filtering (by changing the registry or using a tweak)...
Oh my, it dropped 50 FPS? Great!

But the CHxATON fanboys take no notice of this....

So very, very "greatly effective"!

:eek: Whoa there. Are you even on the same page as everyone else, or even on the same planet, maybe? :p
 
I investigated the FP16 vs. FP32 performance in Half-Life 2. I got really tired of all the debate and the absence of real numbers so I decided to look into this myself. I grabbed the 3dAnalyze program, and the time demos from Anandtech.com. I have the retail CD version of Half-Life 2 and it's patched to whatever the latest Steam desired to give me.

To start off here's my system specs:

Abit NF7-S version 2.0
AMD 2500+ (overclocked to 1992 MHz)
1024 MB of PC3200 memory
MSI GeForce FX 5700 with 128 MB of memory
MSI version 66.72 drivers with no overclocking

I ran the at_canals_08-rev7 demo in four different ways. The Canals demo does contain a lot of water and metal but I didn't see any glass during the timedemo.

I ran the game at 1024x768 with the recommended settings in the advanced panel.

The first time I ran with all the defaults for my hardware -- DX 8.1 -- and I got 53.36 fps average with 3.446 variability.

The second time I ran HL2 from the command line with the -dxlevel 90 flag, but otherwise all settings the same as the 8.1 run. The water did not display correctly during the timedemo and I got a 14.98 fps average with 0.441 variability.

The third time I ran, as above, but using 3dAnalyze and forcing the settings you recommended -- low precision pixel shaders, force HOOK.DLL, and performance settings. The first time I used the 9800 Pro device ID listed in 3dAnalyze and it hard crashed my system. I dug out the 9600 Pro device ID from the config.txt in HALO for the PC (which is 16720) and this time the timedemo ran through. I got a 22.49 fps average and 1.071 variability. Also, the water rendered correctly during this run.

The fourth time I ran using 3dAnalyze with all the settings you recommended, but this time I removed the low precision pixel shader checkbox just to compare to the 2nd run with the -dxlevel 90. I kept everything else the same (HOOK DLL, performance, and the 9600 Pro device ID). The water did display correctly during this run and I got a 14.34 fps average with 0.578 variability. That's certainly within the margin of error for the vanilla -dxlevel 90 run.

Generally I did not see any visual artifacts during the runs except for the water not rendering correctly in the 2nd run when I wasn't using 3dAnalyze to fake out my device ID. Again note that I didn't see any glass during the timedemo. I did check the advanced video options in each run to make sure HL2 was using the appropriate hardware level that I expected -- i.e.; DX 8.1 in run #1, and DX 9.0 in runs 2, 3, and 4.

So basically to sum up:

DX 8.1 -- 53.36 fps
DX 9.0 vanilla -- 14.98 fps
DX 9.0 low precision checked -- 22.49 fps
DX 9.0 low precision NOT checked -- 14.34 fps

Have fun,

help_me_spock
 
help_me_spock said:
I investigated the FP16 vs. FP32 performance in Half-Life 2. I got really tired of all the debate and the absence of real numbers so I decided to look into this myself. I grabbed the 3dAnalyze program, and the time demos from Anandtech.com. I have the retail CD version of Half-Life 2 and it's patched to whatever the latest Steam desired to give me.

To start off here's my system specs:

Abit NF7-S version 2.0
AMD 2500+ (overclocked to 1992 MHz)
1024 MB of PC3200 memory
MSI GeForce FX 5700 with 128 MB of memory
MSI version 66.72 drivers with no overclocking

I ran the at_canals_08-rev7 demo in four different ways. The Canals demo does contain a lot of water and metal but I didn't see any glass during the timedemo.

I ran the game at 1024x768 with the recommended settings in the advanced panel.

The first time I ran with all the defaults for my hardware -- DX 8.1 -- and I got 53.36 fps average with 3.446 variability.

The second time I ran HL2 from the command line with the -dxlevel 90 flag, but otherwise all settings the same as the 8.1 run. The water did not display correctly during the timedemo and I got a 14.98 fps average with 0.441 variability.

The third time I ran, as above, but using 3dAnalyze and forcing the settings you recommended -- low precision pixel shaders, force HOOK.DLL, and performance settings. The first time I used the 9800 Pro device ID listed in 3dAnalyze and it hard crashed my system. I dug out the 9600 Pro device ID from the config.txt in HALO for the PC (which is 16720) and this time the timedemo ran through. I got a 22.49 fps average and 1.071 variability. Also, the water rendered correctly during this run.

The fourth time I ran using 3dAnalyze with all the settings you recommended, but this time I removed the low precision pixel shader checkbox just to compare to the 2nd run with the -dxlevel 90. I kept everything else the same (HOOK DLL, performance, and the 9600 Pro device ID). The water did display correctly during this run and I got a 14.34 fps average with 0.578 variability. That's certainly within the margin of error for the vanilla -dxlevel 90 run.

Generally I did not see any visual artifacts during the runs except for the water not rendering correctly in the 2nd run when I wasn't using 3dAnalyze to fake out my device ID. Again note that I didn't see any glass during the timedemo. I did check the advanced video options in each run to make sure HL2 was using the appropriate hardware level that I expected -- i.e.; DX 8.1 in run #1, and DX 9.0 in runs 2, 3, and 4.

So basically to sum up:

DX 8.1 -- 53.36 fps
DX 9.0 vanilla -- 14.98 fps
DX 9.0 low precision checked -- 22.49 fps
DX 9.0 low precision NOT checked -- 14.34 fps

Have fun,

help_me_spock

Based on his tests, it seems they made the right decision by going with DX8.1. Mixed mode or not, DX9 is far too slow to be enjoyable on his system.

Got any pics to show a difference, if any, in DX9 vs. DX9 and mixed mode?
 
Yes, clearly DX9.0 is unacceptable, even when forced completely to low precision.
You get less than half the framerate of DX8.1.
And well, 22 fps average is not a whole lot. Most people will probably want a bit more for a smooth gaming experience.
If you compare to the benchmarks here: http://www.anandtech.com/video/showdoc.aspx?i=2281&p=4, it seems that the 5700 is slower than anything they tested there, including the 9550, which should be as low-end as the 5200.
 
Incidentally, HL2 uses an ARGB16 integer format for the reflection and diffuse maps in its "DX9 path".
By the way, if you think this is a lie, contact Mr. Bill Van Buren (Valve HL2 designer).
[Attached image: bvb.jpg]

But I don't know of any other game that uses this "ARGB16 integer" format, because integer is a fairly useless format in the DX9 generation.
(It was the DX8 generation in which integer buffers were used often.)

We know DX9's advantage, and many developers demanded floating-point precision buffers.
However, Valve chose integer...
This format is not supported on the DeltaChrome, Volari, or FX;
it is implemented only on CHEATON.

In short, it is not a "DX9 path".
This is a "CHEATON path", I think.
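For what it's worth, whether a given card exposes such a surface format as a render target is just a caps question. A rough sketch of the check (my own helper and an assumed desktop mode, not Valve's code):

#include <d3d9.h>

// Ask D3D9 whether a 16-bit-per-channel render target format is usable on
// this adapter: the integer flavour (A16B16G16R16) versus the floating-point
// flavour (A16B16G16R16F).
bool SupportsRenderTarget(IDirect3D9* pD3D, D3DFORMAT fmt)
{
    return SUCCEEDED(pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                             D3DFMT_X8R8G8B8,       // assumed desktop/backbuffer format
                                             D3DUSAGE_RENDERTARGET,
                                             D3DRTYPE_TEXTURE, fmt));
}

// Example:
//   bool intRT   = SupportsRenderTarget(pD3D, D3DFMT_A16B16G16R16);   // the integer format in question
//   bool floatRT = SupportsRenderTarget(pD3D, D3DFMT_A16B16G16R16F);  // the FP16 alternative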
 
Sorry I don't have any screen shots. The only difference I saw was with the water as mentioned in my earlier post.

dderidex said that the 68.xx and 70.xx versions of the Nvidia drivers might give more of a performance boost. When official versions of those series of drivers come out I'll give them a try, post the numbers, and try to get some screen shots too.

As for the Anandtech mainstream tests I was a little disappointed that they didn't include a 5700 in the mix just to compare numbers. The 5900XT is at the bottom of the pile in those tests in DX 9 at 26.9 fps and with my tests showing the 5700 at 14 fps it's easy to see why they wouldn't bother testing the 5700. It's interesting that with the FP16 forced I get in the same ballpark as the 5900XT. In DX 8, the 5700 falls in nicely with the 6200.

--help_me_spock
 
Where are you getting your numbers for the % of FX cards? I just took the steam survey and it says the following:

FX 5600 3.7%
FX 5700 2.05%
FX 5900(series) 1.95%
FX 5700LE 1.02%
FX 5900 .9%
FX 5500 .55%
FX 5950 .51%


Total = 10.68%

Though I believe this is based on steam and not particularly HL2, meaning some of these people might only have HL1.
 
How dare they program Doom 3 in OpenGL *knowing* that ATI users would suffer! ID should have coded a DX9 path for Radeon users! All they would have had to do was ...blah ...blah ..blah..

:eek:
 
rcolbert said:
How dare they program Doom 3 in OpenGL *knowing* that ATI users would suffer! ID should have coded a DX9 path for Radeon users! All they would have had to do was ...blah ...blah ..blah..

:eek:

QFT

~Adam
 
PiratePowWow said:
Where are you getting your numbers for the % of FX cards? I just took the steam survey and it says the following:

FX 5600 3.7%
FX 5700 2.05%
FX 5900(series) 1.95%
FX 5700LE 1.02%
FX 5900 .9%
FX 5500 .55%
FX 5950 .51%


Total = 10.68%

Though I believe this is based on steam and not particularly HL2, meaning some of these people might only have HL1.

Hmm, how did HL1 get into your closing sentence there? As far as I understand it, the Steam survey... surveyed (Austin Powers moment there) the specs of computers playing multiplayer Valve games. If you look in other parts of the survey, it shows you that those games include CS, DoD, NS... etc.

And again, a bunch of folks with the really beefed-up machines to play CS competitively DID NOT take the Steam survey... so think of it as a sample. And though samples are supposed to be representative, it's my opinion that a crapload of folks out there had 5900XTs and either didn't take the survey or didn't get their card recognized properly.
 
help_me_spock said:
dderidex said that the 68.xx and 70.xx versions of the Nvidia drivers might give more of a performance boost. When official versions of those series of drivers come out I'll give them a try, post the numbers, and try to get some screen shots too.
There were some improvements in those drivers.

I've always used the betas with no problems, so....
 
fallguy said:
Based on his tests, it seems they made the right decision by going with DX8.1. Mixed mode or not, DX9 is far too slow to be enjoyable on his system.

I am willing to concede that they made the right decision for the 5700. But there are still the 5800, 5900, and 5950, in vanilla, Ultra, and XT flavors, that can handle the regular DX9 path with water on screen at 1024x768 at greater than 30 fps. A mixed-mode path would only increase this number, allowing a playable increase in framerate.

help_me_spock,
Were there any problems attempting the test using 3dAnalyze? Were any files adversely changed, or did you have to reinstall anything to return HL2 to normal?
 
Optimus said:
I am willing to concede that they made the right decision for the 5700. But there are still the 5800, 5900, and 5950, in vanilla, Ultra, and XT flavors, that can handle the regular DX9 path with water on screen at 1024x768 at greater than 30 fps. A mixed-mode path would only increase this number, allowing a playable increase in framerate.

The problem, though, is that the 5800 and higher will get considerably LOWER framerates than the 5700, and let's not even get started on the competing Radeons in the price range of the 5800 and higher.
I don't think many people would accept that, not after all the money they spent on their card, thinking they were buying decent framerates at high resolutions.
 
Optimus said:
I am willing to concede that they made the right decision for the 5700. But there are still the 5800, 5900, and 5950, in vanilla, Ultra, and XT flavors, that can handle the regular DX9 path with water on screen at 1024x768 at greater than 30 fps
Until Valve dropped the mixed-mode path in the spring, that was exactly Valve's plan.

What Scali doesn't understand above is that the 5800 has twice as many pipelines as the 5700, a faster core speed than the 5700, and more memory bandwidth than the 5700. :rolleyes:
 
Scali said:
The problem, though, is that the 5800 and higher will get considerably LOWER framerates than the 5700, and let's not even get started on the competing Radeons in the price range of the 5800 and higher.
I don't think many people would accept that, not after all the money they spent on their card, thinking they were buying decent framerates at high resolutions.

By your logic, the 9600PRO/XT should be running in DX8.1 mode also:
http://www.tomshardware.com/business/20030911/half-life-02.html#mixed_mode_for_nvidia_cards

The 5900 Ultra in mixed mode is at least as fast as, if not faster than, the 9600PRO when playing HL2. And according to Valve's benchmarks, the 5900 in DX9.0 mixed mode would be significantly faster than the 5700 in DX8.1.
http://www.tomshardware.com/business/20030911/half-life-03.html

The graphs are a bit tricky, but basically what they show is that in mixed mode the 5900 is likely somewhere in between the 9600PRO and the 9800PRO, and that the 5900 doesn't benefit much from going from mixed mode to DX8.1 mode.

It's like this:

Initial HL2 benchmarks:
9800PRO DX9.0: 60fps
5900 Ultra DX8.1: 52fps
5900 Ultra DX9.0 Mixed Precision: 48fps
9600PRO DX9.0: 48fps
5900 Ultra DX9.0 Full precision: 31fps
5600 Ultra DX8.1: 26fps

Clearly, mixed precision should have been implemented for the 5800/5900 series. And note, these were using old NV drivers that were not yet optimized for DX9.
 
I think the bullet points at the bottom of page one of that same report are aimed squarely at the FX5xxx cards and drivers. Looking at all the pissed off FX5xxx owners out there, I'd say Gabe was right on target.

And of course we all know the mixed mode path that existed on Shader Day 2003 does not exist today. Hence the conspiracy theory in the first place.
 