Valve sucks

Optimus said:
Since "-dxlevel 82" is implemented for all intents and purposes with the exception of full reflection on water, it would only make sense that they would finish this part and then release a patch which enabled mixed mode at default. Finishing this is no light task, but it is neither impossible nor hard. It is simply a matter of placing the very same camera view shader on the water with the camera placed at the correct angle with respect to the players view and then running the water shader on that output. The hardware is doing all the work. The programmer only needs to optimize the code. But I do not believe that any major optimization is necessary since the two are probably already optimized individually for dx9. Reflection is hard. Unreal has been doing it for nearly a decade.

The reflection/refraction code in the water and other objects seems to be by far the heaviest in all of HL2... Perhaps they intentionally left out the 'full' option, because that is the main reason why FX cards can't run the full DX9 path, FP16 or not?

What I meant by "theoretically impossible" is that if an unoptimized shader that uses standard dx9 calls is implemented in either FP16 or FP24, it should theoretically show the same relative speed no matter the complexity of the shaders used. In other words, a six-pipeline GPU should, theoretically, always run 1.5 (6/4) times as fast as a four-pipeline GPU.

Not at all. You won't get perfect parallelism in all cases. Video cards render in 2x2 blocks... At the edges of polygons, part of these blocks is processed in vain, because those pixels don't belong to the polygon itself. So you'll never get a 100% gain from the extra pixel processing power, and in bad cases (thin/small polygons) it becomes quite inefficient. So even theoretically, six pipelines won't always run 1.5 times as fast as four.
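To put a rough number on that 2x2-block point, here's a quick Python toy I knocked together (a back-of-the-envelope model only, nothing to do with how any real chip or driver counts things): it rasterizes one thin sliver triangle and compares the pixels that are actually inside it with the pixels that get shaded when everything is processed as full 2x2 quads.

[code]
# Toy model of 2x2 quad shading efficiency. Purely illustrative, not real hardware.
# Pixel centres are tested against the triangle with simple edge functions.

def inside(px, py, tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    def edge(x0, y0, x1, y1, x, y):
        return (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)
    e0 = edge(ax, ay, bx, by, px, py)
    e1 = edge(bx, by, cx, cy, px, py)
    e2 = edge(cx, cy, ax, ay, px, py)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def quad_efficiency(tri, width=64, height=64):
    covered = {(x, y) for y in range(height) for x in range(width)
               if inside(x + 0.5, y + 0.5, tri)}
    # Any 2x2 quad touched by at least one covered pixel is shaded in full.
    quads = {(x // 2, y // 2) for (x, y) in covered}
    return len(covered), 4 * len(quads)

# A long thin sliver: lots of partially covered quads, so lots of wasted shading.
useful, shaded = quad_efficiency([(1.0, 1.0), (60.0, 3.0), (1.0, 2.5)])
print(useful, shaded, useful / shaded)   # efficiency comes out well under 100%
[/code]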

While I understand the fundamentals of what you are trying to say, you should be more specific. What I believe you mean is that ATI cards are much more powerful with DirectX shaders. On the other hand, nVidia cards are much more powerful with OpenGL shaders. Anyone can verify that for themselves... Download the OpenGL SDK, write some shaders, and benchmark them (that way you know there won't be any driver 'optimizations')... and you'll see exactly what you see with Doom 3, Unreal Tournament 2004, and Far Cry... Oh wait! UT2k4 and FC are DirectX games... hmmm...

This is nonsense, actually. There's really no such thing as 'DirectX shaders' or 'OpenGL shaders'. The hardware only supports one kind of shader, and D3D's SM2.0 is very close to the ARB vertex/fragment programs... which makes perfect sense, since they're both designed to run on the same kind of hardware.
The difference is that NV offers specific shader extensions for NV3x, which could improve performance... except Doom3 doesn't use them.
The reason why Doom3 runs well on FX cards is simple: driver optimizations. Even John Carmack himself mentioned that. He said that originally the ARB shaders ran much slower on FX than on the R300. The NV3x path ran slightly faster than the R300, at a slight cost of image quality. At some point, the ARB path started performing about the same as the NV3x path, so the NV3x path was removed... However, whenever a shader was modified even slightly, performance would drop back to the old level.
Anyone can modify the Doom3 shaders for themselves and verify that... the shaders are stored in one of the zip files, in source code form.
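In case anyone wants to try that, here's a quick Python sketch for peeking at them; I'm assuming the pak files are ordinary zip archives and that the ARB programs live under a glprogs folder (the install path below is hypothetical, and paths/patch levels may differ on your copy):

[code]
# List the Doom3 ARB vertex/fragment programs so you can pull one out and edit it.
# Assumptions: pak files are plain zips, programs sit under "glprogs/", and the
# install path is hypothetical -- point it at your own copy.
import zipfile

pak = zipfile.ZipFile(r"C:/Doom3/base/pak000.pk4")
for name in pak.namelist():
    if name.startswith("glprogs/") and name.endswith((".vfp", ".vp", ".fp")):
        print(name)
# IIRC the engine prefers loose files over pak contents, so dropping an edited
# copy into base/glprogs/ should be enough to re-run the comparison yourself.
[/code]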
So Doom3 doesn't prove anything, except that NV is still up to its old tricks with 'damage control' for the FX.
And by the way, there *IS* no OpenGL SDK...

Exactly, and all I'm asking is that they do this. They should go back and finish their mixed mode path so that nVidia GPUs can get the same IQ as their ATI counterparts, even if at a slower framerate.

As I said many times before, the choice is either good framerates or good IQ, on FX cards. Apparently Valve chose good framerates over good IQ. Which is probably the best choice for most gamers. It's much nicer to play a game that runs smoothly.
You may not agree with the choice Valve made, but that doesn't mean that Valve made the wrong choice for everyone. I think most people don't want the same IQ... Especially not the people with the slower FX models, like the 5200/5600. The game would probably be so slow that it's no longer enjoyable. DX8 looks very good, and gets excellent framerates on all FX cards. The kind of framerates people would expect from the kind of money they paid for their cards.
 
Why in the world did we revive a thread that was left to die peacefully back in December? And it's really great that we all learned computer history by watching the same 3 documentaries on G4/TechTV.
 
Optimus said:
On this point, though it is slightly off topic, I must respectfully disagree. I have an FX card and I have yet to see a card that looks better than it at the same resolution, AA, AF. Granted, other cards will have a higher framerate, but while framerate is important, I believe it was ATI themselves that stated that framerate isn't everything.
Take off your green-o-vision; ATI clearly has better AA, if not AF. Seriously, look at some reviews comparing screenshots - their gamma-corrected AA is jawsome.


I never argued this point. I'd be the first to agree. I don't even use either in games because it slows my system to a slide show.
That's pretty sad; I played HL2 with 16x AF on my 8500, with the game maxed minus reflect all.
Finally, you're on topic. I really don't see why they'd drop something entirely when it's almost finished. I'd understand dropping it until the first or second patch, but the only thing missing from the "-dxlevel 82" path that I've seen so far is full reflections in water. This actually suggests that they were bending over backwards for ATI rather than simply snubbing nVidia.
What do you mean? The whole DX8 path looks different: different filtering, different lighting, different bump mapping, etc. IGN has an article about the different paths.


Optimus said:
Could not agree more. In fact, I think it is not humanly possible to agree with this statement more than I do at this moment. This may be why my other nickname is Captain Obvious.
Yet you say
I have an FX card and I have yet to see a card that looks better than it at the same resolution, AA, AF
Can you at least acknowledge ATI's AA being far superior?
ATI's FSAA is even superior to the 6XXX series, which has "gamma adjustment"; what makes you think the FX series is going to be better?



I'd honestly like to know when we will get some mature adults into this discussion. Do I go around calling those, like yourself, who clearly prefer ATI "ATwIts" or something equally asinine? No, I don't. This is because I am a big boy now and use words that you can look up in a dictionary to prove a point, rather than lobbing mindless insults to avoid the issue since you can't prove your argument.

You need to grow up.
Why don't you act like an adult, admit the flaws of the FX series, and stop defending nVidia and slamming Valve for decisions whose reasons we can only speculate about?
It's their game. They wanted to keep NV3X performance consistent with other recently released games, such as Far Cry; we all know how much the FX loves that game ;)



I agree. Yadda yadda yadda. Insert comment about wasting time and being off topic. Get mad and flame me. Move on.
Who's getting mad? Not I. It's the internet, sir; saying harsh things doesn't mean you're pissed.


Now there is a flaw in this logic. If one only uses FP32 in places where FP16 does not look the same as FP24, then the FX series will look better, simply by the definition of these terms. I grant you that it may not run as fast as an ATI card of the same technology level, but it will approach it, and according to the leaked beta of HL2, at least in my experience, it was faster. I have noticed something odd about the leaked beta benchmarks and the official beta benchmarks. Even on tests where the ATI cards seem to increase in performance, the ratio of percent change in fps of ATI vs. nVidia is not a 1:1 ratio... Interesting... (In other words, ATI gets a bigger percentage of performance boost than nVidia does, when that is theoretically impossible.)
Evidence:
http://www.digit-life.com/articles2/digest3d/1003/itogi-video-hl2_2-w2k.html
http://www.digit-life.com/articles2/digest3d/1003/itogi-video-hl2_1-w2k.html
http://anandtech.com/video/showdoc.aspx?i=1863&p=8
Thanks for proving how little you know about the whole ordeal.
FP24 will never look worse than FP32 when you simply don't need the precision FP32 has.
There are places where FP16 looks perfectly fine, indistinguishable from FP32; is it that hard to think FP24 would do the same?
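For anyone who wants to see what the precision argument looks like in actual numbers, here's a small numpy sketch. There's no FP24 type on a CPU (ATI's FP24 has a 16-bit mantissa, sitting between the two), so this only shows the FP16 and FP32 extremes:

[code]
# FP16 vs FP32 in plain numbers. FP16 has a 10-bit mantissa, FP32 has 23;
# FP24 (16 mantissa bits) sits in between and isn't available in numpy.
import numpy as np

# A colour value: FP16 is plenty, the error is invisible on an 8-bit display.
c = np.float32(0.73456789)
print(np.float16(c) - c)                      # about -2e-4, far below 1/255

# A texture coordinate across a big surface: FP16 falls apart.
u = np.float32(512.37)
print(np.float16(u))                          # prints 512.5 -- the .37 is gone
print(np.float16(512.5) - np.float16(512.0))  # 0.5: the step size at this range
[/code]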

I have this same problem with people who misquote the Bible. If you don't pay attention to context, then you might have heard one of the great Dr Martin Luther King, Jr.'s speeches and thought he was a racist, or you might hear a football coach's speech and think we're at war with some enemy country of tigers or elephants... a little Alabama/Auburn humor there for you.

If you go to the page just previous to your link there, you may notice the following statement on the very first line:



Now why would Valve require a certain test system? And why only Intel? Why not AMD? What was Valve afraid of?
Maybe because nvidia are/were cheaters back then.
 
fallguy said:
I still feel sorry for people with FX's. Im glad I sold mine long ago.
Why did you have one in the first place?
Weren't there DX9 benchmarks (synthetic... oh noes... those suck bad... because they have shown how inaccurate they can be when it comes to inferior DX9 cards) by then?
When did you buy it?
It's even sadder when people think of the 5800 Ultra as the Voodoo 5 6000.
 
Moloch said:
Why did you have one in the first place?
Weren't there DX9 benchmarks (synthetic... oh noes... those suck bad... because they have shown how inaccurate they can be when it comes to inferior DX9 cards) by then?
When did you buy it?
It's even sadder when people think of the 5800 Ultra as the Voodoo 5 6000.

No real DX9 games were out at the time. I got it a few months after they were released. The 5900NU was easily flashed to a 5950U. I got it for like $180, and it became a $500 card, minus 128mb of ram. With older games it was very fast, as long as you didn't try to use AA. When newer games came out that were DX9 shader heavy, it became very apparent it wasn't a good card for those games. I sold it at that time.

At the time, and with certain games, it was a great card for a great price. However, AA would kill it, and DX9 games just brought it to a crawl.
 
fallguy said:
No real DX9 games were out at the time. I got it a few months after they were released. The 5900NU was easily flashed to a 5950U. I got it for like $180, and it became a $500 card, minus 128mb of ram. With older games it was very fast, as long as you didn't try to use AA. When newer games came out that were DX9 shader heavy, it became very apparent it wasn't a good card for those games. I sold it at that time.

At the time, and with certain games, it was a great card for a great price. However, AA would kill it, and DX9 games just brought it to a crawl.

Excuse me... but even though there were no DX9 games out, we did have 3DMark03, which made the FX's weakness painfully obvious.
Frankly, I find it rather pathetic if people actually bought all that anti-Futuremark hype (which HardOCP was spreading as well) and trusted NVIDIA over Futuremark when it came to the performance of the FX series. I hope that by now everyone (except Optimus perhaps) finally sees that Futuremark was indeed being fair, and their results were spot-on. And NVIDIA was nothing more than a lying and cheating loser.
Well, at least the people that were silly enough to trust NVIDIA were punished plenty when they actually bought the FX, despite the writings on the wall.

But to be fair, NVIDIA redeemed itself nicely with the 6800 series. And they no longer have to cheat in 3DMark either.
 
Scali said:
Excuse me... but even though there were no DX9 games out, we did have 3DMark03, which made the FX's weakness painfully obvious.
Frankly, I find it rather pathetic if people actually bought all that anti-Futuremark hype (which HardOCP was spreading as well) and trusted NVIDIA over Futuremark when it came to the performance of the FX series. I hope that by now everyone (except Optimus perhaps) finally sees that Futuremark was indeed being fair, and their results were spot-on. And NVIDIA was nothing more than a lying and cheating loser.
Well, at least the people that were silly enough to trust NVIDIA were punished plenty when they actually bought the FX, despite the writings on the wall.

But to be fair, NVIDIA redeemed itself nicely with the 6800 series. And they no longer have to cheat in 3DMark either.
QFT.
I can't really take Kyle seriously on the whole FM debate.
 
I don't depend on 3DMark to show me anything. Game benches showed it to be a pretty good card, and it was a very hot item for a few months. When I got into the Far Cry beta, my 5900 sucked compared to my 9800XT. As I said, it became apparent that it didn't do well in DX9 games. I sold it not long after that, end of story.
 
fallguy said:
I don't depend on 3DMark to show me anything. Game benches showed it to be a pretty good card, and it was a very hot item for a few months. When I got into the Far Cry beta, my 5900 sucked compared to my 9800XT. As I said, it became apparent that it didn't do well in DX9 games. I sold it not long after that, end of story.
And look where not depending on it got you.
You could have at least looked at ShaderMark benchmarks - those showed how slow it was.
But it's not a game :rolleyes:
That's why being anti-synthetic is flawed.
It showed the FX had really low performance in DX9 applications.
 
And then some drivers came out and increased the 3DMark score by a lot. Look at it nowadays. An SLI config will get a huge score, much faster than a single ATi card. Will it be that much faster in games? Not if SLI doesn't work in the game, it won't. Making 3DMark not a valid benchmarking tool.

Relying on games to test is good enough for me. Because you actually play games.
 
fallguy said:
And then some drivers came out and increased the 3DMark score by a lot.

And you thought this sudden increase in score wasn't in the least suspicious?
Especially since 3DMark seemed to be pretty much the only application that received any kind of performance increase at all?

Look at it nowadays. An SLI config will get a huge score, much faster than a single ATi card. Will it be that much faster in games? Not if SLI doesn't work in the game, it won't. Making 3DMark not a valid benchmarking tool.

Actually, 3DMark is still valid, just not for games that don't work with SLI. Of course, 3DMark was one of the first applications where NVIDIA made sure that SLI worked.

Relying on games to test is good enough for me. Because you actually play games.

Apparently it wasn't good enough, because you still bought the FX, and later found out it wasn't suitable for DX9 games, so you had to get rid of it and buy a new card.
And to get back to your SLI-issue... Testing one game still can't tell whether another game will work with SLI or not, so the problem is a general problem, not just a problem of 3DMark.
 
For the games it was reviewed with, it played them fine. I will admit that 3DMark did show the FX's weakness with DX9. However, when a score can be altered by a different driver and get a huge score increase, it doesn't make me want to trust it very much. The same thing happened with the 6800 and X800 with 2005. A simple driver update increased the score, a lot, for both cards. Did it reflect in real games? No.

I didn't have to sell it and get another card. I already had a 9800XT in my main machine; the 5900NU was in my daughter's, and for me to play with. I didn't have to sell it because it was slow in DX9, I had to sell it because her SFF didn't have enough power to run it. Although it ran my 9800XT just fine.

Then there is the fact that it doesn't rely on anything but the video card for the most part. According to it, a P4 2.0 with an X850XT/PE will be a better gaming machine than an A64 3500+ and an X800 Pro. Which I hope we can all agree is false.

You like 3DMark? Good for you. I don't rely on it, and I won't.
 
fallguy said:
For the games it was reviewed with, it played them fine. I will admit that 3DMark did show the FX's weakness with DX9. However, when a score can be altered by a different driver and get a huge score increase, it doesn't make me want to trust it very much. The same thing happened with the 6800 and X800 with 2005. A simple driver update increased the score, a lot, for both cards. Did it reflect in real games? No.

That is not a problem with 3DMark, but with the driver developers. They do the same for games, replacing shaders to artificially inflate performance, usually at the cost of image quality.
As for the X800, that was only one particular model (I believe the 256 mb X800 Pro), and this was a bugfix rather than an optimization... If it was an optimization, it would affect all models in the X800 series, since they all use the same chip. Only the number of activated pipelines and clockspeeds differ.

Then there is the fact that it doesn't rely on anything but the video card for the most part. According to it, a P4 2.0 with an X850XT/PE will be a better gaming machine than an A64 3500+ and an X800 Pro. Which I hope we can all agree is false.

If 3DMark's scenes run faster on the P4 2.0 GHz, then any game with the same workload will run faster on the P4 2.0 GHz. It's just a case of this particular situation being GPU-limited on at least CPUs as fast as the P4 2.0 GHz.
That your favourite game doesn't reflect this behaviour with your favourite settings doesn't mean that 3DMark is false.
Just crank the resolution and AA/AF up far enough, and most games will probably indeed run faster on the P4 system than the A64.
So no, I don't agree that 3DMark is false. I just think you don't know how to interpret 3DMark's results properly.

You like 3DMark? Good for you. I don't rely on it, and I won't.

Hey, you're the one wasting lots of money on rubbish, not me. You're just being silly by not admitting your mistake, and even implying that you'd do the same all over again.
 
The only way a game will run faster on a P4 over an A64 is if both are compiling a video while running a game. I for one would not even consider doing that, nor did I ever when I owned a P4.

I just moved to the AMD platform and I will say this: the A64 3500+, as it relates to gaming, absolutely spanks the P4 3.2 at least. I can only make that statement since I have only owned the 3500+ and just recently owned a 3.2 P4.
 
You don't have enough of a clue to even have a discussion with. Do you really think there are any games that will run faster with a P4 2.0 and X800XT/PE than with an FX-55 and an X800 Pro? Do you think it will even be close? Not to mention the amount of ram doesn't really make a difference either. Put 256 megs in that 2.0 system and a gig in the FX system, and see what the scores are. It's flawed in many ways.

I am not wasting money on anything. I can play all of my games with high detail, which is what I like.

Have fun playing 3DMark.

/done
 
fallguy said:
You don't have enough of a clue to even have a discussion with. Do you really think there are any games that will run faster with a P4 2.0 and X800XT/PE than with an FX-55 and an X800 Pro? Do you think it will even be close? Not to mention the amount of ram doesn't really make a difference either. Put 256 megs in that 2.0 system and a gig in the FX system, and see what the scores are. It's flawed in many ways.

I didn't say there are any games that do such. I just said that it is perfectly possible.
I also said that if you put the resolution and AA/AF up high enough, you may be able to create this situation with most games.

I have written plenty of code like 3DMark, which actually does run faster on the faster card, regardless of CPU. Why? Because the code was completely GPU-limited.
You have to understand that 3DMark03 does pretty much EVERYTHING on the GPU, unlike for example Doom3, which uses the CPU to calculate the shadow volumes (the whole point of 3d acceleration is obviously to do as much as possible on the GPU, if that is the fastest way... so actually 3DMark is ahead of its time, and games will use this strategy as well in the future).
So 3DMark03 rules out the CPU in the heaviest part of the Doom3 rendering algorithm. What is left is just a CPU that has to do a handful of render calls and state changes per frame. And yes, even a P4 2.0 GHz is perfectly capable of that; it is 'fast enough'. The actual shadow volumes etc. are all done on the GPU, and obviously the fastest GPU will win.

Basically neither GPU is fast enough to execute the rendering calls faster than the CPU can send them to the hardware, not even if this CPU is 'only' a P4 2.0 GHz.
And again, as I say, crank the resolution and AA/AF up high enough, and you will probably see the same in most games, since the GPU will have to do more work per frame, so the framerate goes down, and the CPU has to process less frames per second.

So basically, for CPU speed, 'enough is enough'. The same goes for memory by the way. Would you be surprised if a game requires 512 mb to run, and it's not faster if you use 16 gb instead?
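The 'enough is enough' point is easy to model with a toy max() of CPU and GPU time per frame. The numbers below are invented purely to illustrate the GPU-limited case; they aren't measurements of anything:

[code]
# Toy model: the frame rate is set by whichever of CPU or GPU takes longer per frame.
# All numbers are invented for illustration.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Say submitting the render calls costs 4 ms on a P4 2.0 and 1.5 ms on an A64,
# while the GPU needs 20 ms (faster card) or 28 ms (slower card) for the scene.
print(fps(4.0, 20.0))   # P4 2.0 + faster card  -> 50 fps
print(fps(1.5, 28.0))   # A64    + slower card  -> ~35.7 fps
# The faster GPU wins even behind the slower CPU, because neither CPU is the
# bottleneck. Raising resolution/AA only grows gpu_ms, so games behave the same.
[/code]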

Now who doesn't have enough of a clue to have a discussion with?
 
Wow this thread is still going on, funny :)

In summary,

A) The FX5900 with a partial precision path would have been at least as fast as a 9600PRO in Half Life 2 DX9 with similar visual quality according to Valve's initial benchmarks. They also showed that the FX5900 didn't lose much performance (5-10%) going from DX8.1 to DX9.0 w/pp.
B) Valve had the path implemented enough to run initial benchmarks on a number of levels
C) Valve complained that they spent a long time implementing the path.
D) The path did not make it into the final release despite over a year passing between A, B, & C and the game's release.
E) It's possible they dumped the path because they never 100% finished it and ran out of time working on other stuff, though that is pretty amazing considering they had over a year since they released the benchmarks with their initial pp path enabled.
F) It's also possible they dumped the path because of ATI money, since partial precision not only does not help ATI cards but it also makes their primary competitors cards run significantly faster.

In conclusion:

1. I wouldn't be surprised if E) were true, because now, 3 months after release, there are still a huge number of people who get stuttering & crashing in Half Life 2. The Source engine does not appear well coded compared to others released around the same time (Doom3 engine, CryEngine), as the other two engines have no problem running on most varieties of hardware; on the other hand, plenty of high-end spec'd machines have compatibility problems with HL2 that appear to be related to sound and texture management.

2. I also wouldn't be surprised if F) were true, as ATI paid big money for the game, Valve delivered the game a year late, and Valve has been very aggressive hyping ATI's cards. They are even releasing an expansion pack called "the ati levels." I don't see it as a particular reach that they purposely disabled or declined to finish the partial precision path because of ATI's cash involvement, especially when the only IHV it gives speed boosts to is the primary competitor of the one that paid Valve all the money. In addition, even the brute-force method of forcing partial precision and then testing the shaders one by one couldn't possibly take more than a week or two with a competent programming staff; most users in this forum were able to identify the shaders that needed full precision within 2 days (see the sketch just below). A more elegant method could take more time, but nothing all too significant, especially when Valve already had a good chunk of the path implemented over a year before the game's release. This strengthens the argument that it was done because of ATI money.
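For what it's worth, the 'test the shaders one by one' workflow is mechanically trivial. Everything in the Python sketch below is a hypothetical stand-in (the render functions, error threshold and shader names are invented for illustration, not anything in Valve's tools); the point is only how little logic the triage takes:

[code]
# Brute-force precision triage, sketched with made-up stand-ins.

def render_reference(shader_name):
    """Pretend: render a test scene with the shader at full precision."""
    return [0.5, 0.5, 0.5]

def render_partial(shader_name):
    """Pretend: render the same scene with _pp hints forced on everywhere."""
    # Pretend the water shader shows banding at reduced precision.
    return [0.53, 0.5, 0.5] if shader_name == "water_reflect" else [0.5, 0.5, 0.5]

def max_pixel_error(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

shaders = ["water_reflect", "lightmapped_generic", "vertexlit_generic"]  # illustrative names
needs_full_precision = [
    s for s in shaders
    if max_pixel_error(render_reference(s), render_partial(s)) > 1.0 / 255.0
]
print(needs_full_precision)   # everything else could carry the _pp hint safely
[/code]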

However, either way we will never know whether the true answer is E) or F). All we really know for sure is that according to Valve's benchmarks, it could have made HL2 as playable on a FX5900 as it is on a 9600PRO at a minimum in DX9 mode with similar graphics quality.

Personally, I returned HL2 last week to Vivendi because of the technical mess that is the Source engine, which still stutters to date on my relatively high end sig rig, and on a large number of others' high end rigs with various brand components, including ATI cards - Valve now admits it's an engine problem that they will continue to attempt to fix in patches over the next few months. I also returned it because of the bloat/spyware that is Steam. Even if Valve doesn't suck, their programming, DRM/content delivery system, and customer service this go-around do suck.

The links speak for themselves:
http://www.halflife2.net/forums/showthread.php?t=69715
http://soulcake.freestarthost.com/poll.htm
http://www.blep.net/hl2stutter/
http://www.halflife2.net/forums/showthread.php?t=51549
http://www.halflife2.net/forums/showthread.php?t=70427
http://www.hardforum.com/showthread.php?t=861497
http://forums.steampowered.com/forums/forumdisplay.php?s=&forumid=43
http://forums.steampowered.com/forums/forumdisplay.php?s=&forumid=14
http://www.theinquirer.net/?article=21105
http://www.thebbb.org/commonreport.html?bid=22005081

Eh, nevermind OP is right, Valve sucks ;)
 
tranCendenZ said:
However, either way we will never know whether the true answer is E) or F). All we really know for sure is that according to Valve's benchmarks, it could have made HL2 as playable on a FX5900 as it is on a 9600PRO at a minimum in DX9 mode with similar graphics quality.

We also know that only a very small share of the FX cards sold are 5900 models or better... meaning that the majority of FX cards are slower, and would not even reach 9600Pro levels of performance. Hard to justify developing a specific path for such a small audience anyway... especially when your game is already running a year late.

Other than that, owning a 9600Pro myself, I can say that the performance in HL2 is not all that hot in some areas. And if I had bought an FX5900, which is at least 3 times as expensive as a 9600Pro, I would expect better performance than that.
 
Scali said:
We also know that only a very small share of the FX cards sold are 5900 models or better... meaning that the majority of FX cards are slower, and would not even reach 9600Pro levels of performance. Hard to justify developing a specific path for such a small audience anyway... especially when your game is already running a year late.

This is why games have "Video Options," and "default configurations." Valve could have used these high tech features to enable PP DX9 for the 5900 and DX8.1 for anything less than that by default. ;) Valve didn't have to develop a path when the game was a year late, because they had already developed most of it a year prior, they just had to finish it.

Other than that, owning a 9600Pro myself, I can say that the performance in HL2 is not all that hot in some areas. And if I had bought an FX5900, which is at least 3 times as expensive as a 9600Pro, I would expect better performance than that.

Well using DX8.1 isn't going to help much, because Valve's partial precision benchmarks also showed that dx8.1 wasn't more than 5-10% faster than DX9 w/ pp on the FX5900. Therefore your argument here is not a good one if you are trying to imply that the FX5900 got some big speed boost going from DX9 pp to DX8.1, because it didn't.

Bottom line, partial precision should have been made available on HL2 like every other major DX9 game released to date. It would have helped the FX5900 and even could have given a small boost to the new GeForce 6 series even if unnecessary with the latter.
 
tranCendenZ said:
This is why games have "Video Options," and "default configurations." Valve could have used these high tech features to enable PP DX9 for the 5900 and DX8.1 for anything less than that by default. ;) Valve didn't have to develop a path when the game was a year late, because they had already developed most of it a year prior, they just had to finish it.

Finishing it is still developing it.

Well using DX8.1 isn't going to help much, because Valve's partial precision benchmarks also showed that dx8.1 wasn't more than 5-10% faster than DX9 w/ pp on the FX5900. Therefore your argument here is not a good one if you are trying to imply that the FX5900 got some big speed boost going from DX9 pp to DX8.1, because it didn't.

Excuse me, but it has already been shown many times that forcing partial precision on NV cards will result in bad blocky aliasing problems on flat surfaces and such.
So even if DX8.1 is not much faster, at least it looks a lot better, because there's no aliasing.

Bottom line, partial precision should have been made available on HL2 like every other major DX9 game released to date. It would have helped the FX5900 and even could have given a small boost to the new GeForce 6 series even if unnecessary with the latter.

As I already said, it won't work with HL2 because partial precision is not enough and results in bad aliasing.
And it has also been shown many times that the GF6 gets little or no boost from partial precision.
Just face it, the FX is a flop. The whole partial precision should not have been necessary in the first place, since both the Radeons and the GF6 can run all DX9 code at full precision without a problem, and on GF6, partial precision is barely faster.. so apparently there is something wrong in the FX. Nothing you can say will change that. These are simple facts.
 
Scali said:
Excuse me, but it has already been shown many times that forcing partial precision on NV cards will result in bad blocky aliasing problems on flat surfaces and such.
So even if DX8.1 is not much faster, at least it looks a lot better, because there's no aliasing.

lol. "Forcing partial precision" here means using FP16 on everything, which results in the artifacting you see. By using a mixed precision path, like the one Valve developed and benchmarked in 2003 for Half Life 2, you can get the same quality as full precision by using full precision only on shaders that need it. Valve's 2003 benchmarks showed that with a mixed precision path, there was little gain going from DX8.1 to DX9 w/ pp in Half Life 2 in the FX5900, and when using mixed precision there should be no quality loss if programmed properly. I really think you are smart enough to know this, so I assume you are just arguing something you know is false to attempt to prove your point.


As I already said, it won't work with HL2 because partial precision is not enough and results in bad aliasing.

Partial precision looks fine as long as it is programmed properly, FarCry 1.3 is a good example of this.

And it has also been shown many times that the GF6 gets little or no boost from partial precision.

No it hasn't; there have been no extensive benchmarks of the GF6 with mixed precision on shader-intensive scenes in HL2.

Just face it, the FX is a flop.

That is irrelevant to this discussion.

The whole partial precision should not have been necessary in the first place, since both the Radeons and the GF6 can run all DX9 code at full precision without a problem, and on GF6, partial precision is barely faster.. so apparently there is something wrong in the FX. Nothing you can say will change that. These are simple facts.

I'm not debating the gffx is slow with dx9 code, it is. But you seem to be applauding Valve for sloppy, lazy, or downright unoptimized coding. Partial precision does give speed boosts even to the GF6 in shader intensive scenes (at least it does in other games, so I assume it would in HL2 also) and it should be used for one very simple reason: if used properly, it can offer a performance boost with no quality loss. This is something every good developer wants. Half Life 2, however, does not appear to be coded well. It is not optimized graphically in terms of shader precision nor in terms of texture management, it has stutter problems, sound problems, crashing/memory problems, etc.

Like I said, I wouldn't be surprised if the lack of a pp path was due to Valve's sloppy programming of this game, which has come out in so many more places than just the video area. But I also wouldn't be surprised if they deliberately decided not to support optimizing their shader code for pp simply because their paycheck source did not benefit from it. We will never really know which was the case.

One thing is for sure. In its current state, no informed, sane developer would choose the source engine over the Doom3 Engine or CryEngine. The Source engine is plagued with problems all the way from video to audio and through general stability. The Source engine is a customer support nightmare waiting to happen for any company unlucky enough to license it.
 
tranCendenZ said:
lol. "Forcing partial precision" here means using FP16 on everything, which results in the artifacting you see. By using a mixed precision path, like the one Valve developed and benchmarked in 2003 for Half Life 2, you can get the same quality as full precision by using full precision only on shaders that need it. Valve's 2003 benchmarks showed that with a mixed precision path, there was little gain going from DX8.1 to DX9 w/ pp in Half Life 2 in the FX5900, and when using mixed precision there should be no quality loss if programmed properly. I really think you are smart enough to know this, so I assume you are just arguing something you know is false to attempt to prove your point.

Yes I know about the mixed path... And no, I don't agree that it's almost as fast as DX8.1.
So I'm not arguing something I know is false. I just think you are not looking at the mixed mode results objectively.
http://www.anandtech.com/video/showdoc.aspx?i=1863&p=8
Look there, the 5900U is still way behind the ATis in its price range, while with DX8.1 mode, it would be about equal. It struggles to keep up with the 9600Pro.

Partial precision looks fine as long as it is programmed properly, FarCry 1.3 is a good example of this.

Shows how much you know about programming shaders.
Some things are just impossible to do with only fp16 precision. You can't compare the FarCry shaders with the HL2 ones, because they implement very different lighting models.
HL2 uses a novel bumpmapped radiosity scheme, using cubemaps as 'lightprobes'. No other engine uses that approach afaik.
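For the curious, the general idea behind that kind of scheme is to precompute incoming light along a few fixed directions and then blend those samples per pixel by how much the bump-mapped normal faces each direction. The Python sketch below is only my rough illustration of that idea; the basis vectors and weighting are written from memory and shouldn't be taken as Valve's actual shader.

[code]
# Rough sketch of blending directional light samples by a bump-mapped normal.
# Basis and weighting are illustrative, not Valve's exact implementation.
import numpy as np

basis = np.array([
    [-1/np.sqrt(6), -1/np.sqrt(2), 1/np.sqrt(3)],
    [-1/np.sqrt(6),  1/np.sqrt(2), 1/np.sqrt(3)],
    [ np.sqrt(2/3),  0.0,          1/np.sqrt(3)],
])

def bumped_radiosity(normal_ts, light_samples):
    """normal_ts: tangent-space bump normal; light_samples: 3 precomputed RGB colours."""
    n = normal_ts / np.linalg.norm(normal_ts)
    w = np.maximum(basis @ n, 0.0) ** 2     # one common weighting choice
    w = w / w.sum()
    return (w[:, None] * light_samples).sum(axis=0)

samples = np.array([[1.0, 0.2, 0.2], [0.2, 1.0, 0.2], [0.2, 0.2, 1.0]])
print(bumped_radiosity(np.array([0.0, 0.0, 1.0]), samples))  # flat normal: even mix
print(bumped_radiosity(np.array([0.6, 0.0, 0.8]), samples))  # leans toward sample 3
[/code]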

No it hasn't; there have been no extensive benchmarks of the GF6 with mixed precision on shader-intensive scenes in HL2.

Search around on this forum... People have reported that forcing fp16 in HL2 only increased the framerate by a handful of frames. Mixed precision would only be slower than full fp16.

That is irrelevant to this discussion.

Actually it is the very reason why this whole thread was started. So it is highly relevant.

But you seem to be applauding Valve for sloppy, lazy, or downright unoptimized coding. Partial precision does give speed boosts even to the GF6 in shader intensive scenes (at least it does in other games, so I assume it would in HL2 also) and it should be used for one very simple reason: if used properly, it can offer a performance boost with no quality loss. This is something every good developer wants. Half Life 2, however, does not appear to be coded well. It is not optimized graphically in terms of shader precision nor in terms of texture management, it has stutter problems, sound problems, crashing/memory problems, etc.

You must be kidding me. Sure, HL2 may not be perfect... but if you see the framerates it can pull from simple 9600Pro cards (or 6600 for that matter), you can't possibly say the shaders are unoptimized. In fact, it seems that of the three 'big shader games', FarCry, Doom3 and HL2, it is HL2 that gets the highest framerates. And not because it looks that much uglier than the others, either.

Like I said, I wouldn't be surprised if the lack of a pp path was due to Valve's sloppy programming of this game, which has come out in so many more places than just the video area. But I also wouldn't be surprised if they deliberately decided not to support optimizing their shader code for pp simply because their paycheck source did not benefit from it. We will never really know which was the case.

My theory is that the gain from the mixed mode path wasn't worthwhile... Sure, you get 40% gain... but over what? Over absolutely appalling performance.
Judging from the benchmarks on Anandtech I can understand perfectly that Valve didn't want to use the mixed mode path... Not many people would have appreciated their game running slower AND uglier than a 9600Pro, on NVIDIA's top card, which costs 3 times as much!

One thing is for sure. In its current state, no informed, sane developer would choose the source engine over the Doom3 Engine or CryEngine. The Source engine is plagued with problems all the way from video to audio and through general stability. The Source engine is a customer support nightmare waiting to happen for any company unlucky enough to license it.

We'll have to see about that... there are already quite a few Source licensees, and even some games out. As far as I know, there are no Doom3 or CryEngine games out, other than Doom3 and FarCry themselves.
 
Scali said:
Yes I know about the mixed path... And no, I don't agree that it's almost as fast as DX8.1.

Well, Valve's official benchmarks showed otherwise, that DX9 mixed mode wasn't much slower than DX8.1:

http://techreport.com/etc/2003q3/valve/index.x?pg=2

FX 5900 Ultra DX8.1 - 52fps
FX 5900 Ultra DX9 Mixed Precision - 48fps
FX 5900 Ultra DX9 Full Precision - 31fps

Only around a 5-10% loss going from DX8.1 -> DX9 mixed mode according to Valve. If you read the beginning of the anand review you posted, it shows similar figures.
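Just to make the percentages explicit from those three figures (plain arithmetic, nothing more):

[code]
dx81, mixed, full = 52.0, 48.0, 31.0   # Valve's FX 5900 Ultra numbers quoted above
print((dx81 - mixed) / dx81 * 100)     # ~7.7% lost going DX8.1 -> DX9 mixed mode
print((mixed - full) / full * 100)     # ~55% gained going full precision -> mixed
[/code]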

So I'm not arguing something I know is false. I just think you are not looking at the mixed mode results objectively.
http://www.anandtech.com/video/showdoc.aspx?i=1863&p=8
Look there, the 5900U is still way behind the ATis in its price range, while with DX8.1 mode, it would be about equal. It struggles to keep up with the 9600Pro.

The 5900U is performing fine at 1024x768, faster than the 9600PRO even. It's slower at 1280x1024, but I'm sure DX8.1 is also since none of the DX8.1/mixed mode benchmarks have shown mixed mode to be more than 10% slower than DX8.1.

Shows how much you know about programming shaders.

:eek:

Some things are just impossible to do with only fp16 precision.

Right... which is where the FP32 precision part of mixed precision comes in :)

You can't compare the FarCry shaders with the HL2 ones, because they implement very different lighting models.
HL2 uses a novel bumpmapped radiosity scheme, using cubemaps as 'lightprobes'. No other engine uses that approach afaik.

Even if hypothetically there was some artifacting, some banding in DX9 mode would still look superior to DX8.1 mode.

Search around on this forum... People have reported that forcing fp16 in HL2 only increased the framerate by a handful of frames. Mixed precision would only be slower than full fp16.

Haven't seen anyone who tried to actually run any of the shader intensive HL2 benchmarks with FP16 on HL2, just general fraps impressions which really isn't useful info.

You must be kidding me. Sure, HL2 may not be perfect... but if you see the framerates it can pull from simple 9600Pro cards (or 6600 for that matter), you can't possibly say the shaders are unoptimized. In fact, it seems that of the three 'big shader games', FarCry, Doom3 and HL2, it is HL2 that gets the highest framerates. And not because it looks that much uglier than the others, either.

Hello? Tons of people like me have super fast machines that the game runs like crap on due to shoddy programming. It stutters and sputters like a $9.99 budget game. Forget 9600PRO, the Source engine doesn't even work properly on many Athlon64 systems with 6800GTs and X800XTs.

Examples:
http://www.halflife2.net/forums/showthread.php?t=69715
http://soulcake.freestarthost.com/poll.htm
http://www.blep.net/hl2stutter/

The writer of the 270 page WindowsXP Tweak Guide at tweakguides.com can't get his HL2 to run without stutter despite his highly spec'd out system. The engine, very simply, has serious problems.

At least Doom3 and FarCry reliably run well on mid to high end systems, while HL2 is like a crap shoot on any system.

My theory is that the gain from the mixed mode path wasn't worthwhile... Sure, you get 40% gain... but over what? Over absolutely appalling performance.

Again, the "apalling performance" isn't fixed by DX8.1 mode according to Valve's benchmarks. The game just ran slow on the FX series period.

Judging from the benchmarks on Anandtech I can understand perfectly that Valve didn't want to use the mixed mode path...

Apparently you did not examine the benchmarks closely enough in your own link, because the ones comparing DX8.1 to DX9 mixed mode show less than a 10% increase going from DX9 Mixed Mode to DX8.1 in HL2 - usually the difference being between 0-4fps. In fact on the 5600 Ultra it shows a difference of less than 1fps.

Not many people would have appreciated their game running slower AND uglier than a 9600Pro, on NVIDIA's top card, which costs 3 times as much!

Right, which is why Valve should have implemented mixed mode for the FX series. From the benchmarks provided by Valve it is apparent using DX8.1 mode showed little performance gain over mixed mode, while reducing the image quality. In case you didn't notice, this thread contains numerous FX owners, who are all requesting guess what, DX9 mixed mode.

We'll have to see about that... there are already quite a few Source licensees, and even some games out. As far as I know, there are no Doom3 or CryEngine games out, other than Doom3 and FarCry themselves.

The only major non-Valve game that is using source currently is Vampire: Bloodlines, and Troika is currently experiencing the technical support nightmare I detailed above. And the stutter from the Source engine has bled over into Troika's game.

Avault on Vampire Bloodlines said:
Even an act as simple as turning a street corner can cause heavy stutter; I’m still undecided as to whether it’s a symptom of the display or the sound. Both chop with each other, and so far adjusting the settings downward for either one doesn’t seem to make any difference.

Source was supposed to be "the engine." It turned out to be "the (massively technically flawed) engine." Wouldn't be surprised if devs originally interested dropped it for Doom3 or CryEngine simply because of the horrendous stutter bug Valve still has been unable to fix 3 months after the game's release. The "memory cannot be read" crash bug is equally as bad, and also hasn't been fixed.
 
I think HL2 is sweet payback for the whole nvidia + id = DoomIII fiasco. It also shows what's wrong with game makers bending over for one company's hardware.
 
tranCendenZ said:
Well, Valve's official benchmarks showed otherwise, that DX9 mixed mode wasn't much slower than DX8.1:

http://techreport.com/etc/2003q3/valve/index.x?pg=2

FX 5900 Ultra DX8.1 - 52fps
FX 5900 Ultra DX9 Mixed Precision - 48fps
FX 5900 Ultra DX9 Full Precision - 31fps

Only around a 5-10% loss going from DX8.1 -> DX9 mixed mode according to Valve. If you read the beginning of the anand review you posted, it shows similar figures.

But why are all the other benchmark results from the mixed mode beta different, then?

Right... which is where the FP32 precision part of mixed precision comes in :)

Again, shows how little you know about programming shaders.
If you have to write most of the shader in fp32, because of precision problems... there won't be any way to actually gain performance.
I would also like to point out that the 'mixed mode' of Valve is not just mixed precision, but rather DX8 and DX9 shaders used together.
So are you sure you even know what you're talking about?

Even if hypothetically there was some artifacting, some banding in DX9 mode would still look superior to DX8.1 mode.

It's not hypothetical, and it's not what most people said after seeing the artifacts.

Haven't seen anyone who tried to actually run any of the shader intensive HL2 benchmarks with FP16 on HL2, just general fraps impressions which really isn't useful info.

Ah I see... Just disqualify the evidence if it doesn't suit your agenda.

Hello? Tons of people like me have super fast machines that the game runs like crap on due to shoddy programming. It stutters and sputters like a $9.99 budget game. Forget 9600PRO, the Source engine doesn't even work properly on many Athlon64 systems with 6800GTs and X800XTs.

We were discussing SHADER performance... not the problem of the game having to load textures and sounds in the middle of something.
Please try to at least read my post and understand what I'm saying.

Right, which is why Valve should have implemented mixed mode for the FX series. From the benchmarks provided by Valve it is apparent using DX8.1 mode showed little performance gain over mixed mode, while reducing the image quality. In case you didn't notice, this thread contains numerous FX owners, who are all requesting guess what, DX9 mixed mode.

I was talking about the mixed mode!
Mixed mode is slower and uglier than DX9 mode.
Again, do you understand that Valve's mixed mode is DX8 and DX9 combined, and not just DX9 with some partial precision added?
 
Mister E said:
I think HL2 is sweet payback for the whole nvidia + id = DoomIII fiasco. It also shows what's wrong with game makers bending over for one company's hardware.

Despite what some people may think, I can assure you that Valve did not 'bend over' for ATi, and the FX series actually ARE this bad.
While I could go into a big technical debate, it is probably lost on most people here... But I know from my own experience with developing shaders that the FX series is indeed very poor at ps2.0... even with partial precision.

But just think about this then... If Valve was out to get NVIDIA, why did they stop with FX? The game runs fine on GF6, in full DX9 mode.
If I were out to get NVIDIA, I'd plan ahead, and make sure that their new generation of cards would perform badly as well... the FX series wasn't popular in the first place, and before long, most people will have upgraded to GF6.

As for ID 'bending over' for NVIDIA... Unlike Valve, they actually used some NVIDIA-specific optimizations, while they included no specific optimizations for any other vendor.
So it would be slightly easier to argue that ID was 'bending over' for NVIDIA... However, the reality is probably closer to something like this: NVIDIA added some technology to make the Doom3 shadowing method of choice more efficient, and ID chose to use this. No other vendor offers this technology, so they could not use it there, even if they wanted to.
The choice of OpenGL over Direct3D could also be seen as NVIDIA-specific... however, strictly speaking, that's only the fault of the other vendors for not being up to NVIDIA's standards when it comes to OpenGL drivers. And in practice we see that ATi got its drivers together quite well, at least for Doom3.
 
Moloch said:
Take off your green-o-vision,
Grow up.


Optimus in post #576 said:
Okay, look this is exactly what I was trying to get away from.

Stop acting like little children and discuss the issue like adults...please.

You guys have missed quite a bit, apparently, and have not caught up on what you missed.
Optimus said:
The thread topic was intended to get recognition, at which it succeeded, but the point of the thread is to give compelling evidence that a future patch is needed to allow partial precision for at least the FX line of cards, since no others seem to benefit greatly.

If you have constructive counter-evidence, then please stay and submit it.

However, if all you're going to do is bash some poor folks' decisions on hardware purchases, and thereby contribute nothing constructive or helpful, then leave. This includes those who keep saying that Valve or ATI sucks, because that gets us nowhere pleasant.

Moloch said:
ATI clearly has better AA, if not AF. Seriously, look at some reviews comparing screenshots - their gamma-corrected AA is jawsome.

First of all, I don't play games in slide-show mode (when I can help it), so screenshots are not a good way to judge AA or AF. If it can't make the game look good while moving, it is worthless. Second of all, the topic is "Valve Sucks," not "ATI sucks," so I apologize for stepping out of bounds. :p


Moloch said:
That's pretty sad; I played HL2 with 16x AF on my 8500, with the game maxed minus reflect all.

Well, aren't you something special? Do you have a lolly, too? Do you have a shoe size of 17, too?


Moloch said:
Can you at least acknowledge ATI's AA being far superior?
ATI's FSAA is even superior to the 6XXX series, which has "gamma adjustment"; what makes you think the FX series is going to be better?

No, I can't. First of all:
Optimus in post #795 said:
Moloch said:
the AF and AA is much better.
I never argued this point. I'd be the first to agree. I don't even use either in games because it slows my system to a slide show.
Second, AA is never as good as the real thing: a higher resolution. AA was originally designed back when games had to specifically support the resolution that you wanted to use. It made it possible to play games in what seemed a higher resolution even though it wasn't supported. In fact, the video card is running at a higher resolution as it is processing the information and then dropping back down by averaging the colors together to pick a color for each particular, less resolute pixel. 8x roughly equals twice the resolution being calculated with the current resolution actually being rendered on screen. Honestly, it's a waste of processing power if your screen supports the higher resolution, because it is calculating some undisplayed pixels multiple times.


Moloch said:
Why don't you act like an adult, admit the flaws of the FX series, and stop defending nVidia and slamming Valve for decisions whose reasons we can only speculate about?
It's their game. They wanted to keep NV3X performance consistent with other recently released games, such as Far Cry; we all know how much the FX loves that game ;)

Why don't you act like an adult and read?
While you're at it, why don't you stop putting words in my mouth? I have already admitted on countless occasions in this very topic that the FX line sux. I never suggested it didn't. I'd seriously consider shooting myself if I suggested as much accidentally.
How many times do I have to repeat it for you guys to stop suggesting that I have ties to a specific company? Seriously, how many times? I'll make a post devoted to repeating it as many times as you deem necessary.
I may prefer NVidia, but that's more of a price/performance issue than an IQ/performance issue.
Allow me to quote:
Optimus in post #795 said:
Moloch said:
No matter how you cut the cake, the FX series was a flop.
Could not agree more. In fact, I think it is not humanly possible to agree with this statement more than I do at this moment. This may be why my other nickname is Captain Obvious.
Optimus in post #617 said:
rcolbert said:
3) The fact that NO ONE will ever admit the video card they bought sucks so long as they still own it.

With this I will have to disagree.

The FX line utterly and totally sux when compared to what came out at the same time. It was cutting edge for its intended release date; however, it did not make its intended release date. As for currently, it sux more than a Dyson on a 240 volt power supply.... don't ask.... I personally want to upgrade now, but I'm waiting for the dust to settle on all these interface changes lately. And while all this is true, it doesn't mean it's not capable of better when it is babied.

I'm not saying they did anything wrong or that they should have done anything different. I'm just saying they should devote a little more time to finishing their engine if they hope to sell it to other companies, when those companies can easily opt for Doom 3, which can run well on all DX9-compliant hardware with one shader path.
Optimus in post #617 said:
rcolbert said:
And finally, "optomized instruction order" ONLY means optomized for the bad architecture of the FX5xxx series cards which will kick a pixel back to the beginning of the pipeline when it receives shader instructions in a sequence that it can't digest properly. All other cards can handle multiple shaders in a single pass through the pipeline in whatever sequence they happen to arrive. THAT is the way it's meant to be played, mon frere.

On the nose, mon frere; completely and totally nail-on-the-head logic.
Optimus in post #566 said:
I set myself, before I purchased the 5900, to upgrade once every two NVidia development cycles... by that I mean nv3x, nv5x, nv7x, etc. unless something fundamentally changes in the industry, because I expected that by the time I was ready to upgrade, I'd be hitting the end of the video card's natural life cycle. For the most part this is true, except that my current card has horrible dx9 performance unless it is specifically programmed for in a certain way. Granted I knew that this was the case when I bought it, I just hoped that, since the dx9 spec provided for such a possibility, it would never become a problem. Honestly, I'm surprised it hasn't b4 now.

Actually, on that rambling note, I wonder if the TWIMTBP marketing (propaganda) was just a way to encourage the game companies to write their code in such a way as to clean up after their mistake... hmmmmm....
Optimus in post #558 said:
rcolbert said:
So the question again is, how exactly is Valve hurting NVidia in all of this?

Actually, no. The question is: will Valve submit to the will of 10% of their Steam userbase and add partial precision capability to their shader logic? The supposition that Valve was intentionally jabbing at NVidia was an unwarranted, unfounded (albeit believable), reactionary statement. I'm sure most of them will wish they could take it back when the patch is released.

Until then, we let Alien Conspiracy theorists and Government Conspiracy theorists continue to talk and we just say they're crazy and ignore them, so I request:

Please, call us crazy and ignore us. That way we can move on and get this problem fixed.
If this is not enough evidence, please, let me know, and I will consume more of hardforum's precious database storage space to fulfill your futile whims.


Moloch said:
Who's getting mad? Not I. It's the internet, sir; saying harsh things doesn't mean you're pissed.

No, saying harsh things is impolite, dishonorable, and insulting. I respectfully request that you come up to the level of polite discussion and refrain from a gestapo, "shut up, stupidhead" attitude while debating this topic. Insults do not prove points. All they do is display how childish you are.


Moloch said:
Thanks for proving how little you know about the whole ordeal.
FP24 will never look worse than FP32 when you simply don't need the precision FP32 has.
There are places where FP16 looks perfectly fine, indistinguishable from FP32; is it that hard to think FP24 would do the same?

Yet again, grow up.
Normally, I would stop at that, and say no more for fear of giving you too much credit. But I would like to take away credit by pointing out that after attempting to insult me you followed up by agreeing with me. Now, maybe it's just me, but this particular method of insulting is starting to get confusing.


Moloch said:
Maybe because nvidia are/were cheaters back then.

Yeah, too bad ATI couldn't be the bigger company and play fair now as well as then. Honorable companies will not stoop to their opponent's dishonorable level.
For an example of an honorable company: AMD. No matter what Intel says to insult AMD, AMD never even mentions Intel, I believe for fear of accidentally insulting Intel.


Scali said:
As I said many times before, the choice is either good framerates or good IQ, on FX cards. Apparently Valve chose good framerates over good IQ. Which is probably the best choice for most gamers.

This is exactly what I mean. Valve chose for us, rather than letting us choose for ourselves. The Source engine was touted as one of the most customizable 3d game engines in history. Truth in advertising: "The Source engine is one of the most customizable 3d game engines in history, unless you have an NV3x based video card."

Optimus in post #727 said:
What I would really like to see is a text file with option settings for every shader in the game. That way, we can tweak each shader to our liking, choosing dx81 for this shader, dx9 for this one, dx9 partial precision for this one, etc.

That would be the best solution for most of us, and the great thing is that it would allow Valve to focus on updating/bug hunting/bonus content while the 1337 users make shader implementation configuration files and distribute them to others of like hardware.

Of course, more than likely, all that's actually going to happen is that NVidia is going to rewrite a majority of the shaders in HL2 in such a way as to help FX line gain performance (since it doesn't appear that the 6xxx's are much affected, at least by precision level).

--------------------------------
Off Topic:
Scali said:
I hope that by now everyone (except Optimus perhaps) finally sees that Futuremark was indeed being fair, and their results were spot-on.
I agree. Futuremark was being perfectly fair in that they allowed the same modifications NVidia makes in all games into 3dMark2kX, so long as they were optional. It is a despicable thing to add modifications to a benchmark if they are not either (1) available in all games or (2) optionally implemented through a simple IQ setting or some other easily understandable method (unlike their current SLI implementation).

Regarding the snide remark: allow me to refer you to my first sentence in this post.
 
Optimus said:
Second, AA is never as good as the real thing: a higher resolution. AA was originally designed back when games had to specifically support the resolution that you wanted to use. It made it possible to play games in what seemed a higher resolution even though it wasn't supported. In fact, the video card is running at a higher resolution as it is processing the information and then dropping back down by averaging the colors together to pick a color for each particular, less resolute pixel. 8x roughly equals twice the resolution being calculated with the current resolution actually being rendered on screen. Honestly, it's a waste of processing power if your screen supports the higher resolution, because it is calculating some undisplayed pixels multiple times.

Erm... You are talking about supersampling, but all modern cards use multisampling instead, which is quite different from rendering the screen at a higher resolution. And for the record... you need 4x (ordered grid) supersampling to render at twice the screen resolution (width*2 and height*2).
With modern multisampling it's often much faster to render with 4xAA than to increase the resolution by 1 or 2 notches, to get the same perceived smoothness.
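A back-of-the-envelope comparison of what that means for shading work (toy Python with invented numbers; bandwidth costs and polygon-edge effects are ignored):

[code]
# Shader invocations per frame under different AA schemes. Toy numbers only;
# bandwidth and polygon-edge effects are ignored.

def shader_invocations(width, height, mode, n=4):
    pixels = width * height
    if mode == "noaa":
        return pixels
    if mode == "supersample":   # render n times the samples, then filter down
        return pixels * n
    if mode == "multisample":   # colour shaded once per pixel, only coverage/depth x n
        return pixels
    raise ValueError(mode)

print(shader_invocations(1024, 768, "multisample"))   # ~0.8M shades (plus 4x samples stored)
print(shader_invocations(1024, 768, "supersample"))   # ~3.1M shades
print(shader_invocations(1600, 1200, "noaa"))         # ~1.9M shades at the higher resolution
[/code]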

This is exactly what I mean. Valve chose for us, rather than letting us choose for ourselves. The Source engine was touted as one of the most customizable 3d game engines in history. Truth in advertising: "The Source engine is one of the most customizable 3d game engines in history, unless you have an NV3x based video card."

I don't think Valve meant that kind of 'customizable'.
I think they were talking about the ability of developers to customize the engine to suit their needs. You are talking about the users customizing a game to suit their needs. That is quite a different thing.
Obviously any developer can choose to add NV3x-specific paths to their Source-engine powered game, if they wish to do so.
I just think you're making a huge deal out of nothing.
 
Wow never heard anything about this before. Not TOO surprising, but still interesting none-the-less.
 
Not to mention that increasing the resolution alone still stacks all the pixels in a perfectly horizontal/vertical grid, whereas most AA subpixel processing uses rotated grids. The net result is that jaggies are somewhat more prevalent in the higher-resolution, no-AA configuration for lines that are nearly, but not quite, horizontal or vertical.

Best answer is to go 1600x1200 with 4XAA on. Everyone can do that, right?
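A tiny Python illustration of the rotated-grid point (the sample offsets are made up for illustration; they aren't any particular chip's real pattern):

[code]
# Count how many distinct sample heights exist inside one pixel: more heights
# means more coverage levels (shades of grey) along a nearly-horizontal edge.

ordered = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
rotated = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

for name, pattern in (("ordered", ordered), ("rotated", rotated)):
    rows = len({y for _, y in pattern})
    print(name, "grid:", rows + 1, "coverage levels on a near-horizontal edge")
# ordered: 3 levels, rotated: 5 levels -- the rotated grid gives a smoother gradient.
[/code]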
 
Scali said:
Erm... You are talking about supersampling, but all modern cards use multisampling instead, which is quite different from rendering the screen at a higher resolution. And for the record... you need 4x (ordered grid) supersampling to render at twice the screen resolution (width*2 and height*2).
With modern multisampling it's often much faster to render with 4xAA than to increase the resolution by 1 or 2 notches, to get the same perceived smoothness.



I don't think Valve meant that kind of 'customizable'.
I think they were talking about the ability of developers to customize the engine to suit their needs. You are talking about the users customizing a game to suit their needs. That is quite a different thing.
Obviously any developer can choose to add NV3x-specific paths to their Source-engine powered game, if they wish to do so.
I just think you're making a huge deal out of nothing.
QFT.
Closing arguments: Valve chose what they did for a reason; whatever that reason is, no one knows, but nothing can change that.
Having a HUUUGE thread about it won't change that.
We've established the NV3X products were a failure; what more is there to say?
You want your DX8/9 mixed mode back? I'm afraid your whining falls on deaf ears.
 
Are you guys still flaming each other? GG I'm unsubscribing to this thread lmao, have fun with your flame show.

~Adam
 
Moloch said:
QFT.
Closing arguments: Valve chose what they did for a reason; whatever that reason is, no one knows, but nothing can change that.
Having a HUUUGE thread about it won't change that.
We've established the NV3X products were a failure; what more is there to say?
You want your DX8/9 mixed mode back? I'm afraid your whining falls on deaf ears.

Why are you saying this in reply to my post? I'm not the one whining about wanting the mixed mode back, or defending the NV3x products!
 
Scali said:
Why are you saying this in reply to my post? I'm not the one whining about wanting the mixed mode back, or defending the NV3x products!

Pssstt, I think he is agreeing with you and informing the others that a partial precision mode for HL2 just is not going to happen :)
 