Which is better: X1900 XT or 7800 GTX 512MB?

No, we don't. Doom3 (and every game based on its engine) supports the depth-range extension of UltraShadow. The other part of UltraShadow (doubled output of Z values) is enabled automatically when rendering to a shadowmap, so every game with shadowmapping benefits from it. PCF (percentage-closer filtering) is used at least by Far Cry, and probably by most other games that use shadowmaps (compare Far Cry on NV and ATi cards and you'll see a slight difference in shadow quality). It is also used by 3DMark.
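Roughly, PCF compares each shadowmap sample against the fragment's depth first and only then filters, i.e. you average pass/fail results instead of depths. A CPU-side sketch with made-up types, untested (the hardware does the four compares inside the texture unit):

Code:
// 2x2 percentage-closer filtering, emulated in software.
#include <algorithm>
#include <vector>

struct ShadowMap {
    int w, h;
    std::vector<float> depth;                    // light-space depths
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return depth[y * w + x];
    }
};

// Returns a shadow factor in [0,1]: 0 = fully shadowed, 1 = fully lit.
float pcf2x2(const ShadowMap& sm, float u, float v, float fragDepth) {
    float fx = std::max(0.0f, u * sm.w - 0.5f);
    float fy = std::max(0.0f, v * sm.h - 0.5f);
    int x0 = (int)fx, y0 = (int)fy;
    float wx = fx - x0, wy = fy - y0;            // bilinear weights
    // Compare first, then filter the pass/fail results; filtering the
    // depths themselves and comparing once would give wrong edges.
    float s00 = fragDepth <= sm.at(x0,     y0)     ? 1.0f : 0.0f;
    float s10 = fragDepth <= sm.at(x0 + 1, y0)     ? 1.0f : 0.0f;
    float s01 = fragDepth <= sm.at(x0,     y0 + 1) ? 1.0f : 0.0f;
    float s11 = fragDepth <= sm.at(x0 + 1, y0 + 1) ? 1.0f : 0.0f;
    float top = s00 + (s10 - s00) * wx;
    float bot = s01 + (s11 - s01) * wx;
    return top + (bot - top) * wy;               // soft shadow edge
}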
Vertex texturing is the only thing we haven't seen used in games yet, afaik, but I think we will, since you can do some nice effects with it, like (animated) displacement mapping.

I had forgotten that they were still getting nice speed increases in the Doom3/Quake4 games from that, thanks for pointing it out. Still, the ATi cards were not far behind even without that feature, and with inferior OpenGL driver support. If it is going to show me true performance in the future, it sure isn't showing it right now.

I'm not trying to say the 7800s aren't going to be useful in the future, I'm just saying that with its tripled ALU power the X1900 is the more substantial card right now.
 
Trimlock said:
I had forgotten that they were still getting nice speed increases in the Doom3/Quake4 games from that, thanks for pointing it out. Still, the ATi cards were not far behind even without that feature, and with inferior OpenGL driver support. If it is going to show me true performance in the future, it sure isn't showing it right now.

What's an inferior driver when it's completely optimized for Doom3?
And while stencil shadows will probably disappear, we still have the shadowmap advantage on NV cards.
Sure, ATi can currently compete with NV because it has more raw processing power, and not all advanced features are being used... but I'm not interested in performance per se. Both cards can run all the latest games in insanely high resolutions with all eyecandy turned on, so who cares?
I just think that NV cards might have more/better eyecandy in future games.
 
I really cannot understand people claiming that the GTX 512 is currently a better buy than the X1900, because it simply is NOT. It IS pricier, rarer, slower, and simply older technology.

and I own a GTX.
 
Suflex said:
I really cannot understand people claiming that the GTX 512 is currently a better buy than the X1900, because it simply is NOT. It IS pricier, rarer, slower, and simply older technology.

I'm only arguing against the 'older technology' bit. And I've given plenty of arguments for that.
 
What's an inferior driver when it's completely optimized for Doom3?
And while stencil shadows will probably disappear, we still have the shadowmap advantage on NV cards.
Sure, ATi can currently compete with NV because it has more raw processing power, and not all advanced features are being used... but I'm not interested in performance per se. Both cards can run all the latest games in insanely high resolutions with all eyecandy turned on, so who cares?
I just think that NV cards might have more/better eyecandy in future games.

They have optimized the use of AA with this engine; their overall OpenGL support is still lacking, but not nearly as bad as it was with the X800s and prior products.

You can argue back and forth about raw processing power, especially on the GTX 512 with its insane memory clocks, but for an architecture like the X1900 to support such a massive increase in raw ALU processing is fairly impressive.

I won't argue with the idea that NV could have the better future, but the same could be said of ATi. Right now it's a matter of trying to read the minds of the developers and where they want to go; personally, I feel safer with the path ATi took.
 
Since the release of ATI’s X1000 series of products we’ve seen a couple of different takes on Shader Model 3.0 from the two main vendors. The first caused a small controversy with NVIDIA clearly believing Vertex Texturing to be part of the VS3.0 specification, but ATI (and apparently Microsoft’s WHQL certification process) disagreeing such that this wasn’t included in the X1000 series, with “Render to Vertex Buffer” being provided as an alternative. Another divergence has been highlighted with the recent X1900 release and ATI keeping a comparatively low number of texture units in their high end, whilst scaling up their math processing capabilities significantly.

We’ve quizzed both id Software’s John Carmack and Epic’s Tim Sweeney on their thoughts on these differing directions between the two vendors:

In our interview with Eric and Richard of ATI they mentioned they went in the direction of tripling the ALUs versus the TMUs after talking with you for instance. My question would be: do you see this as a good direction? Are you working on shaders which require a lot of ALU while keeping TMU usage constant to today?

John: I think it is clear that the ratio of math to texture fetches is increasing.

Tim: It's a definite trend that ALU usage in shaders is going up at a faster rate than TMU usage, so it's reasonable that the hardware should increase ALU's faster than TMU's. What ratio is ideal is debatable; it depends on a whole lot of variables, but fortunately it's easy to see whose tradeoffs win at a given price level by running some benchmarks.

The X1000 series of ATI cards don't implement an actual texture fetch in the vertex shader, unlike NVIDIA's GeForce 6 and GeForce 7 series, preferring instead to have the programmer render the vertex data into a vertex buffer with the pixel shader ("Render to Vertex Buffer"). Which implementation do you prefer?

John: For vertexes, I think more often about looking up data in a table rather than indexing an image, but I can see either perspective.

Tim: We don't use vertex texture fetch in UE3 right now, but I expect we'll be using it in the future for moving more of our displacement-mapped terrain logic to the GPU.

Tim also dropped the following comment to us with regards to Unreal Engine 3:

Tim: We'll be making a UE3 benchmark available several months before shipping UT2007 on PC, in order to encourage the hardware folks to optimize their drivers. We're not doing this now, because at our stage in development many aspects of our rendering pipeline aren't fully optimized, and if we encouraged IHV's to optimize for it now (by releasing a benchmark), they would end up wasting a lot of time optimizing code paths that aren't reflective of a final, shipping UE3 project. Regarding the timeline, we'll be actively developing Unreal Engine 3 throughout the current hardware generation -- all the way through 2009.


http://www.beyond3d.com/#news27836
 
John: For vertexes, I think more often about looking up data in a table rather than indexing an image, but I can see either perspective.

Tim: We don't use vertex texture fetch in UE3 right now, but I expect we'll be using it in the future for moving more of our displacement-mapped terrain logic to the GPU.

So they're basically agreeing with me on the vertex texture part.
 
John said he can see it from either perspective. And, "with “Render to Vertex Buffer” being provided as an alternative", can't you use that and achieve the same results?
 
It will work, and in theory it has better performance, but it is harder to code for.

But going by that interview, it seems the two biggest devs (John and Tim) don't pay much attention to this feature; it doesn't really shape how they code their engines.
 
"Render to Vertex Buffer" assumes you are actually rendering something in the first place.
If you just want to use a displacement-map, and perhaps scroll or rotate it, you don't actually have to render anything with vertex texture fetch. You just rotate or scroll the coordinates. This is far more efficient than having to update the vertex buffer in an extra pass. Not to mention more intuitive to implement.
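To make that concrete, here's roughly what the vertex shader ends up doing, sketched on the CPU with made-up types (untested). The scroll is nothing more than an offset added to the lookup coordinates, so nothing ever has to write to the vertex buffer:

Code:
// Vertex displacement via a (scrolled) height-map lookup,
// i.e. what vertex texture fetch does per vertex.
#include <vector>

struct Vec3 { float x, y, z; };

struct HeightMap {
    int w, h;
    std::vector<float> height;
    float sample(float u, float v) const {       // nearest, wrapping
        int x = (int)(u * w) % w; if (x < 0) x += w;
        int y = (int)(v * h) % h; if (y < 0) y += h;
        return height[y * w + x];
    }
};

// Per-vertex "shader": displace along the normal by a scrolled lookup.
Vec3 displace(const Vec3& pos, const Vec3& n, float u, float v,
              const HeightMap& map, float time, float scale) {
    float d = map.sample(u + 0.05f * time, v) * scale;  // scrolled fetch
    return { pos.x + n.x * d, pos.y + n.y * d, pos.z + n.z * d };
}

With render-to-vertex-buffer you'd instead have to render the displaced data into a buffer in a separate pass every time the offset changes, before the vertex pipeline could consume it.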
 
Scali said:
I'm not sure about the LCD support and high-res z-buffer, and dual link DVI, but I'm talking about rendering features here.
I don't consider higher resolutions a feature per se. You can't render more realistic surfaces and effects with that. Games will look sharper, not more realistic, with higher resolutions.
So I think we're not arguing about the same here.
Let's stick strictly to rendering quality, and disregard variables such as resolution. Resolution is limited by performance anyway.


Scali, let's not forget, people are selling GTX 512s to get better high-res support. That's the reason you see them on eBay.

My friend did in fact sell his GTX 512 for over $750, and then won an auction for an XTX for $550. You know why? Because the GTX does not support high-res LCDs like the Apple and Dell 30" models (of which he bought two).

As for high-res gaming, the GTX does not hold up because it is Z-buffer limited.

So, if you're stuck at 12x10 on a small monitor, that's not my problem, but that's definitely NOT the future. The future is HIGH RES. Plus, the ONE weakness you claim ATi has doesn't actually make a difference in performance anyway.

I think you are holding onto last week's news.

... and the ATi is only going to get better with newer drivers.
 
This is a little off topic, but I am looking to upgrade my system with an X1900 XTX and want to wait a little while until the price drops. Does anyone have any guesses on when noticeable cuts might happen? I am going to upgrade the mobo, CPU and video card, and I'm looking to do it for around $1000. Right now it looks like it will run $1500 at current prices. Just a tad too rich for my blood.
 
BBA said:
So, if you're stuck at 12x10 on a small monitor, that's not my problem, but that's definitely NOT the future.

I thought the future was more realistic graphics. I never cared about resolution at all. That's just a variable in my source code.

BBA said:
The future is HIGH RES. Plus, the ONE weakness you claim ATi has doesn't actually make a difference in performance anyway.

Which one weakness is that? Vertex fetch, texture filtering, PCF filtering, UltraShadow?
 
THEREALJMAN said:
This is a little off topic, but I am looking to upgrade my system with an X1900 XTX and want to wait a little while until the price drops. Does anyone have any guesses on when noticeable cuts might happen? I am going to upgrade the mobo, CPU and video card, and I'm looking to do it for around $1000. Right now it looks like it will run $1500 at current prices. Just a tad too rich for my blood.


I can do that right now for $900
 
X1900 XT FTW.

I asked myself the same question as the poster did. I started looking... and never found a GTX 512. So the choice was easy for me: price and availability. Not to mention the better card.


FWIW, I've always been a big NVIDIA !!!!!!. :rolleyes:
 
Scali said:
I thought the future was more realistic graphics. I never cared about resolution at all. That's just a variable in my source code.



Which one weakness is that? Vertex fetch, texture filtering, PCF filtering, UltraShadow?

Scali, do you work for Nvidia's community outreach program, by any chance? It's run by a company called AEG that basically does what you're doing right now (marketing their product no matter what). If you're not, then you should get in contact with them and get some money out of this.
http://www.beyond3d.com/forum/showthread.php?t=26199


Oh and btw, I'd take higher resolution over anything you've claimed so far, because it makes a bigger difference in image quality. You game at 640x480 and I'll game at 2048x1536, and let's see who'll enjoy the same game more.

I'd also like an example of these claimed better, more realistic graphics from Nvidia.

Lol, Nvidia didn't even have better image quality back when they supported SM 3.0 and ATI only had 2.0, and you think either one is going to look noticeably better now? Why do you think reviewers only contrast AA, AF, HDR and high-resolution gaming? Maybe because those are the only things that make a difference in the image?

So far ATI seems to do better with all the bells and whistles turned on (Super AA, anyone?), so I'll let you decide which product has more “realistic graphics” at this point.
 
rackus said:
Scali, do you work for Nvidia's community outreach program, by any chance? It's run by a company called AEG that basically does what you're doing right now (marketing their product no matter what).

Funny, it feels like the opposite. I name 4 technologies that NVIDIA supports and ATi lacks, yet everyone acts as if ATi is the undisputed king, and NV is 'older technology'.
And as I said, I am actually an ATi user at this moment.

rackus said:
Oh and btw, I'd take higher resolution over anything you've claimed so far, because it makes a bigger difference in image quality. You game at 640x480 and I'll game at 2048x1536, and let's see who'll enjoy the same game more.

Talk about "marketing no matter what". The X1900 is not THAT much faster than the 7800. It's not like you have to game at 640x480 with a 7800. Perhaps you have to game at one resolution lower, but probably not even that.

rackus said:
I'd also like an example of these claimed better, more realistic graphics from Nvidia.

An example is a bit hard to give at this point, since there are no games yet that use this extra technology, which is why I used the term 'future games'.

rackus said:
Why do you think reviewers only contrast AA, AF, HDR and high-resolution gaming? Maybe because those are the only things that make a difference in the image?

No, because they don't have a clue about graphics technology, and these things are the only things they can test with their current drivers and games.
And I still don't get why nobody seems to be concerned about unfiltered textures. Try disabling texture filtering for all textures in a game, and see how you like it.
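To put a number on the 'emulation' cost: doing even plain bilinear filtering in the shader turns one filtered fetch into four point fetches plus the lerp arithmetic, per pixel, for every texture that needs it. A rough CPU-side sketch, made-up types, untested:

Code:
// Bilinear filtering emulated in ALU: four point samples + three lerps,
// versus a single fetch when the TMU can filter the format natively.
#include <vector>

struct Texture {
    int w, h;
    std::vector<float> texel;                // one channel for brevity
    float fetch(int x, int y) const {        // point sample, clamped
        if (x < 0) x = 0; if (x >= w) x = w - 1;
        if (y < 0) y = 0; if (y >= h) y = h - 1;
        return texel[y * w + x];
    }
};

float bilinearEmulated(const Texture& t, float u, float v) {
    float fx = u * t.w - 0.5f; if (fx < 0) fx = 0;
    float fy = v * t.h - 0.5f; if (fy < 0) fy = 0;
    int x0 = (int)fx, y0 = (int)fy;
    float wx = fx - x0, wy = fy - y0;
    float a = t.fetch(x0, y0),     b = t.fetch(x0 + 1, y0);
    float c = t.fetch(x0, y0 + 1), d = t.fetch(x0 + 1, y0 + 1);
    float top = a + (b - a) * wx;
    float bot = c + (d - c) * wx;
    return top + (bot - top) * wy;           // the lerp math is pure ALU
}

Multiply that by every channel and every texture involved, and the 'extra' ALU power is being spent on work the other card's texture units do for free.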
 
Scali said:
Talk about "marketing no matter what". The X1900 is not THAT much faster than the 7800. It's not like you have to game at 640x480 with a 7800. Perhaps you have to game at one resolution lower, but probably not even that.

Depends on the game, to be honest, and on the IQ. At 1920x1200 in FEAR (or something like that :p ) the X1900 XT performs on par with 7800 GTX 512s in SLI. In other games sometimes it's 20% faster, sometimes it's 2%. Also, several of the things you mentioned were developed by Nvidia and aren't really part of a 3D specification, such as UltraShadow. I don't really think you're an NV !!!!!!, just that you're stating differences; however, I think you're ranking UltraShadow and the vertex fetch thing a little too highly for future games. Carmack and Sweeney themselves didn't place much focus on them; if anything, they said that a higher ALU:TMU ratio is a good thing. They seemed to place more focus on that.
 
sabrewolf732 said:
Also, several of the things you mentioned were developed by Nvidia and aren't really part of a 3D specification, such as UltraShadow.

Does that matter though? It's already being used in today's games, as I said before.

sabrewolf732 said:
I think you're ranking UltraShadow and the vertex fetch thing a little too highly for future games. Carmack and Sweeney themselves didn't place much focus on them; if anything, they said that a higher ALU:TMU ratio is a good thing. They seemed to place more focus on that.

Yes, well, I think in Carmack's case we can ignore his comments altogether anyway. He releases one game every 10 years or so. The game he's currently working on is probably still in VERY early stages of development, and won't be released until everyone has upgraded their X1900/7800 already.
Sweeney, on the other hand, will release a game that will be played on both the X1900 and the 7800.
But this 'ALU:TMU ratio' just doesn't mean anything by itself. As I said before, if you need the extra ALU power just to emulate texture/shadowmap filtering, what's the point?
We will probably move towards more ALU-heavy games, but slowly; not in the X1900/7800 era, I think. This is still the era of lots of textures and fancy effects. How about that realtime water effect that NV implemented with their vertex texturing?
http://download.nvidia.com/developer/SDK/Individual_Samples/samples.html#VertexTextureFetchWater

Or cloth: http://download.nvidia.com/developer/SDK/Individual_Samples/samples.html#Cloth

That's the sort of thing that vertex texture fetch can do, and that's what I think may be in future games. That's what makes games more realistic. Not a higher resolution.
(and yes, those are GeForce6 demos, talk about 'older technology').
 
Scali said:
Does that matter though? It's already being used in today's games, as I said before.



Yes, well, I think in Carmack's case we can ignore his comments altogether anyway. He releases one game every 10 years or so. The game he's currently working on is probably still in VERY early stages of development, and won't be released until everyone has upgraded their X1900/7800 already.
Sweeney, on the other hand, will release a game that will be played on both the X1900 and the 7800.
But this 'ALU:TMU ratio' just doesn't mean anything by itself. As I said before, if you need the extra ALU power just to emulate texture/shadowmap filtering, what's the point?
We will probably move towards more ALU-heavy games, but slowly; not in the X1900/7800 era, I think. This is still the era of lots of textures and fancy effects. How about that realtime water effect that NV implemented with their vertex texturing?
http://download.nvidia.com/developer/SDK/Individual_Samples/samples.html#VertexTextureFetchWater

Or cloth: http://download.nvidia.com/developer/SDK/Individual_Samples/samples.html#Cloth

That's the sort of thing that vertex texture fetch can do, and that's what I think may be in future games. That's what makes games more realistic. Not a higher resolution.
(and yes, those are GeForce6 demos, talk about 'older technology').

Sweeney said there won't be vertex fetch in UT2007, and that a higher ALU:TMU ratio is good. Games don't need a higher ALU:TMU ratio? Look at FEAR, a game where, in certain situations, the X1900 is as fast as GTX 512s in SLI. And ignore Carmack? Possibly one of the greatest programmers? He said either way is feasible. Nothing about utilizing it in his game engines.
 
sabrewolf732 said:
Sweeney said there won't be vertex fetch in UT2007, and that a higher ALU:TMU ratio is good. Games don't need a higher ALU:TMU ratio? Look at FEAR, a game where, in certain situations, the X1900 is as fast as GTX 512s in SLI.

I'm not saying games don't need a higher ALU:TMU ratio, I'm just disagreeing on the order of importance.


sabrewolf732 said:
And ignore Carmack? Possibly one of the greatest programmers? He said either way is feasible. Nothing about utilizing it in his game engines.

I thought we all agreed that he was no longer one of the greatest programmers after Doom3?
I just meant that his next game probably won't be released until 2008 or 2009 or so, which means:
1) By then most people will have upgraded their X1900 or 7800 already, so his next game is of no interest to this discussion.
2) He himself probably hasn't worked out the exact specifications of the rendering algos used in his game yet, so he can't really give any detailed comments on what he will need and what not.

But just look at the pretty NV demos that I've linked to, and tell me if you think this will make games more realistic or not.
And as you can see, these are possible on GF6 already, so there are a lot of people with capable hardware at home. Good reason to start using those features.
 
Do people like "insert name here" ever really admit that they are wrong, even if the debatable issue is clear? No, I don't think so.

I must add that in today's situation, I can't see how somebody could recommend the 7800 over the X1900. Maybe the 7900, but that remains to be seen.
 
a-lamer said:
Do people like "insert name here" ever really admit that they are wrong, even if the debatable issue is clear? No, I don't think so.

Perhaps it's not as clear to those who aren't biased towards ATi, and are not just looking at the framerates in some game benchmarks.
 
Well, to ignore frame rates is to ignore playability :p

Why would Doom3 degrade Carmack from his legendary spot?
 
Scali said:
I'm not saying games don't need a higher ALU:TMU ratio, I'm just disagreeing on the order of importance.




I thought we all agreed that he was no longer one of the greatest programmers after Doom3?
I just meant that his next game probably won't be released until 2008 or 2009 or so, which means:
1) By then most people will have upgraded their X1900 or 7800 already, so his next game is of no interest to this discussion.
2) He himself probably hasn't worked out the exact specifications of the rendering algos used in his game yet, so he can't really give any detailed comments on what he will need and what not.

But just look at the pretty NV demos that I've linked to, and tell me if you think this will make games more realistic or not.
And as you can see, these are possible on GF6 already, so there are a lot of people with capable hardware at home. Good reason to start using those features.

Idk, I have to disagree. I don't think vertex fetch is that important a feature. Guess we're going to have to wait and see? I think the X1900 is better suited for performance in future games than the 7800 series, as in it will be a decent card for a much longer time.
 
Trimlock said:
Well, to ignore frame rates is to ignore playability :p

Not in these days, where high-end cards tend to score hundreds of fps.

Trimlock said:
Why would Doom3 degrade Carmack from his legendary spot?

'Too little, too late', I'd say, with games like Far Cry and Half Life 2 to compete against.
Doom3 is a fine game, but it was nowhere near as big an impact as some of Carmack's earlier games.
 
Scali said:
Not in these days, where high-end cards tend to score hundreds of fps.



'Too little, too late', I'd say, with games like Far Cry and Half Life 2 to compete against.
Doom3 is a fine game, but it was nowhere near as big an impact as some of Carmack's earlier games.

I was talking about the engine, the engine is excellent IMO.
 
sabrewolf732 said:
I was talking about the engine, the engine is excellent IMO.

I was talking about the engine as well. I think Far Cry is the most impressive and flexible engine of the three. Doom3 comes in last on my list.
 
sabrewolf732 said:
I was talking about the engine, the engine is excellent IMO.
Exactly.
He makes the engines that power the games.

Not the games themselves.

The doom3 engine is a great engine.
 
'Too little, too late', I'd say, with games like Far Cry and Half Life 2 to compete against.
Doom3 is a fine game, but it was nowhere near as big an impact as some of Carmack's earlier games.

Well, you could argue that no engine out now, or coming out, has been as revolutionary as Carmack's previous releases. I'd have to say the engine was great, which is not to say the Crytek engine wasn't, or that UE2.5 isn't either. The use of stencil shadows, as well as how awesomely it scales, is proof enough to me that it is a fine piece of software, though nowhere near the level of innovation of the Quake2 engine, or the Quake3 engine for that matter (yay, real-time reflections).

To tell you the truth, I'm not at all impressed with the Source engine. It's fine, it works, and it's an improvement over the engine they used for HL1, but it in no way impressed me.
 
Trimlock said:
The use of stencil shadows, as well as how awesomely it scales

Except that Far Cry also uses stencil shadows, and stencil shadows are already out of fashion now. It's shadowmaps these days. Which Far Cry also uses.
I also think Doom3 scales quite badly to older hardware. It's not really playable on a GF2, and barely playable on an R8500. Half Life 2 runs very smoothly on both, and still looks great too.

Trimlock said:
To tell you the truth, I'm not at all impressed with the Source engine. It's fine, it works, and it's an improvement over the engine they used for HL1, but it in no way impressed me.

The most impressive thing about the Source engine is that it doesn't look impressive. I'll explain what I mean by that. The real world doesn't look impressive. You don't have shiny surfaces everywhere, and not everything is nice and smooth and reflective, etc.
Doom3 has a very 'clinical' look to it. Everything looks like brand-new metal or plastic (including people). Half Life 2 looks realistic. Metal is rusted, tiles are broken and chipped, etc. It doesn't look impressive; it looks realistic. And that's the beauty of Half Life 2. Where Doom3 basically has one material for the entire world, which is tweaked a bit with bumpmaps and colourmaps, Half Life 2 actually has true materials for everything: metal, porcelain, cloth, wood, brick, etc. It all looks different. And thanks to the physics engine, it also behaves and sounds different. The thing is just that the game is so realistic that you have to know what to pay attention to, or else you'll miss it, because it looks so 'common'. The effects don't jump out and grab you. Because that's not what happens in real life, is it?
 
I also think Doom3 scales quite badly to older hardware. It's not really playable on a GF2, and barely playable on an R8500. Half Life 2 runs very smoothly on both, and still looks great too.

Well, the Doom3 engine is a complete generation above HL2. To tell you the truth, I don't think HL2 will work on a GF2 without some heavy modding of the config files. And as for the R8500 comment, I think ATi is to blame for that one :(

I know FC has stencil shadows, and I know they are on their way out, but I have never been more impressed by realistic shadowing in any other game than I was in Doom3. In FEAR, everything seemed to meld with the environment so well, yet I still wasn't very drawn in by the shadows.

The most impressive thing about the Source engine is that it doesn't look impressive. I'll explain what I mean by that. The real world doesn't look impressive. You don't have shiny surfaces everywhere, and not everything is nice and smooth and reflective, etc.
Doom3 has a very 'clinical' look to it. Everything looks like brand-new metal or plastic (including people). Half Life 2 looks realistic. Metal is rusted, tiles are broken and chipped, etc. It doesn't look impressive; it looks realistic. And that's the beauty of Half Life 2. Where Doom3 basically has one material for the entire world, which is tweaked a bit with bumpmaps and colourmaps, Half Life 2 actually has true materials for everything: metal, porcelain, cloth, wood, brick, etc. It all looks different. And thanks to the physics engine, it also behaves and sounds different. The thing is just that the game is so realistic that you have to know what to pay attention to, or else you'll miss it, because it looks so 'common'. The effects don't jump out and grab you. Because that's not what happens in real life, is it?

So what you are saying is that they have really good artists :D which they do. I love high-res textures, and that is one of the reasons I will always keep high AF on instead of AA if I can't have both. id did their models extremely well, but their textures are really lacking. Actually, in Quake4 you can see how different things can look with the same engine: same basic models, but the textures were so different it was like another world (literally).
 
Trimlock said:
Well, the Doom3 engine is a complete generation above HL2

I'd say the exact opposite. The use of materials, physics and character animation in HL2 was a step ahead of Doom3, if you ask me. Doom3 looked just like Quake with shadows enabled. Sorta like Tenebrae.
The only thing that Doom3 does better than HL2 is shadows. But since shadows are the main visual component in Doom3, I suppose the effect gets amplified.
But adding better shadows to Source is much easier than making Doom3 do everything that HL2 does.

Trimlock said:
To tell you the truth, I don't think HL2 will work on a GF2 without some heavy modding of the config files. And as for the R8500 comment, I think ATi is to blame for that one :(

I've actually played it on a GF2. Even on my laptop with Radeon IGP340M.
I've also played it on a PII 400 with an R8500. Scales very well. Doom3 was barely doable on my 1800+ with R9600Pro. Completely CPU-limited, because it uses the CPU where it could be using the GPU, like HL2 does (mainly skinning and such).

Trimlock said:
So what you are saying is that they have really good artists :D which they do. I love high-res textures, and that is one of the reasons I will always keep high AF on instead of AA if I can't have both. id did their models extremely well, but their textures are really lacking. Actually, in Quake4 you can see how different things can look with the same engine: same basic models, but the textures were so different it was like another world (literally).

It's not just the artists. It's also the engine, which is so flexible that it allows all these different materials and lets you combine effects such as reflection and refraction on a per-pixel basis. And all that with excellent performance as well.
 
I have to agree; IMO the HL2 engine is the most impressive engine there is. When I played HL2: Lost Coast I was stunned, especially in the church, wow. Also, HL2 scales EXTREMELY well. I would have to say Quake 4 also looked very impressive and ran well, but I ran it on a highly OC'ed X800 Pro.
 
Scali, the problems I have with your arguments are manifold. But the principal one is this:

First, you point out all the features that the 7800GTX has that X1900 doesn't.

Then, when someone points out a feature that the X1900 has that the 7800 does not (like a 50% larger Z-buffer that helps at high resolutions), you say that you only care about rendering quality, and not resolution.

Then you also admit (as far as I can tell) that the X1900 has work-arounds available for basically all the features you brought up, but with the caveat that the X1900 is slowed down when running these work-arounds.

(Correct me if I am wrong, but the end result of these work-arounds is the X1900 producing essentially the same image as the 7800GTX, most likely at lower resolutions because of the reduced speed.)

So if you don't care about resolution, just image, and both cards can likely produce the same image, just at a lower resolution on the X1900, then what exactly is your problem with the X1900?

You can't have it both ways: either resolution matters, or, as long as the X1900 can produce the same image (even if only at 320x240), you have no argument.
 
Scali said:
The only thing that Doom3 does better than HL2 is shadows. But since shadows are the main visual component in Doom3, I suppose the effect gets amplified.
But adding better shadows to Source is much easier than making Doom3 do everything that HL2 does.

You forget HL2 is actually built on modifications of the original Quake engine Carmack made. (Maybe you weren't around back then to know?)


Doom3 was barely doable on my 1800+ with R9600Pro. Completely CPU-limited, because it uses the CPU where it could be using the GPU, like HL2 does (mainly skinning and such).

Right there... you showed your complete lack of understanding. The Doom3 engine has not yet been CPU-limited, not even on SLI 7800 GTs.

I guess you go on folklore instead of logic. Look, if it were CPU-limited, then a 20% increase in CPU speed would produce more than the 3-6% (5-10 FPS) difference you see going from a 3800+ to an FX-60 on an SLI 7800 GT rig. On a 9600 you're video-card limited no matter what the CPU; even a 1GHz Athlon would do as well as your 1800+.

BUT: since you stated that was your hardware, I believe you have NO experience with new hardware, so your whole argument thus far is based on second-hand knowledge, whereas people here actually have the hardware and have first-hand knowledge (myself included).

You just need to push another one of your 'I believe' buttons and absorb what people here are trying to tell you. It will help your credibility recover from where you have put it in this one thread.
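The reasoning is simple bottleneck math: a frame costs you the slower of the two stages, so a faster CPU only helps when the CPU is the slow stage. A toy model with made-up per-frame costs (untested sketch):

Code:
// Simple bottleneck model: a frame costs the slower of CPU and GPU work.
#include <algorithm>
#include <cstdio>

double fps(double cpuMs, double gpuMs) {
    return 1000.0 / std::max(cpuMs, gpuMs);
}

int main() {
    // CPU-limited case: CPU 12 ms, GPU 8 ms per frame.
    // A 20% faster CPU scales the frame rate almost linearly.
    std::printf("cpu-bound: %.1f -> %.1f fps\n",
                fps(12.0, 8.0), fps(12.0 / 1.2, 8.0));   // 83.3 -> 100.0
    // GPU-limited case: CPU 4 ms, GPU 8 ms per frame.
    // The same CPU upgrade changes nothing.
    std::printf("gpu-bound: %.1f -> %.1f fps\n",
                fps(4.0, 8.0), fps(4.0 / 1.2, 8.0));     // 125.0 -> 125.0
    return 0;
}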




 
You forget HL2 is actually built on modifications of the original Quake engine Carmack made. (Maybe you weren't around back then to know?)

Well, actually, HL1 was a modified Quake engine, but HL2 (the Source engine) was built from the ground up and is independent of any other engine.
 
Moofasa~ said:
Well, actually, HL1 was a modified Quake engine, but HL2 (the Source engine) was built from the ground up and is independent of any other engine.

HL1 was an almost stock Quake engine, with OpenGL and Direct3D renderers added.

HL2 even still has Quake 1 libraries in it... :D
 