X1900XT or 7900GTX for TES4: Oblivion

FiringSquad? Wouldn't surprise me if these numbers were true, I suppose. But the graph is very little to go by - is this outdoors, indoors, both?

5% seems like a very large performance hit to me, but my tastes are without a doubt different than the tastes of many of you. I prefer framerate stability (and vertical sync) to image quality.
 
phide said:
I prefer framerate stability (and vertical sync) to image quality.

I'll second that. That's one of my beefs with the reviews done at [H].. a lot of their "playable settings" dip into the 20's quite often, and that's far from playable IMO. I prefer to remain above 50 fps at all times. Anything below 40 (even just dips here and there) is unacceptable.
 
Sabrewulf165 said:
I'll second that. That's one of my beefs with the reviews done at [H].. a lot of their "playable settings" dip into the 20's quite often, and that's far from playable IMO. I prefer to remain above 50 fps at all times. Anything below 40 (even just dips here and there) is unacceptable.


lol , you must game on nvidia cards
 
Bona Fide said:
Not on Oblivion...I bet 16xAF would result in a ~30% framerate loss, at least.

Gonna depend on the amount of off-angle surfaces in the scene being rendered.

ManicOne:

I doubt either company considers their ROPs as part of their pixel shader pipeline. And ATI does not consider the texture units a part of the pipeline with the 1xxx parts since they were decoupled (though they are grouped amongst themselves and assigned to quads). An ATI shader pipe should simply consist of the ALUs and branch execution unit. For NVIDIA it should just be the ALUs since one of them also operates as the texture unit.

It's easy to get all this mixed up, as is apparent from the endless discussions on what now constitutes a pixel pipeline.
 
Apple740 said:
What could make the X850 almost as fast as the 7900GT :confused:
Hard to believe those numbers...
Completely ignore that graph and wait for some real world analysis.
 
retard said:
lol , you must game on nvidia cards

What on earth are you talking about? I got the numbers directly from the [H] review :rolleyes:

What does my vsync and framerate preference have to do with ATI or nVidia anyway?
 
dderidex said:
Linkage



All I gotta say is...."so much for the XBox and PC version 'looking identical'"

HDR + AA is confirmed to work fine on the XBox360

Course, the dev summed it up better than I:



Yup. Clear enough. Now that I understand where they sit with PCs vs console graphics quality, me = passing on this game. Too bad, looked cool, too. :mad:


Sounds like a dumb reason to not get what looks like the most kick ass game in a while.

Suit yerself. That's just one more copy for somebody else.
 
kcthebrewer said:
Completely ignore that graph and wait for some real world analysis.

An explanation could be that the X850 is running an SM2.0 path (with less IQ) while the others are SM3.0.
 
ATI pipe = TMU-ROP, with ALUs not in pipeline stage.

NV pipe = TMU-ALU, with ROPs not in pipeline stage.

Why would you have TWO pipeline definitions instead of one for both companies? Then you are comparing apples and oranges.

Then you are also saying an X1600XT is a four pipe card. So I guess that makes it equal to a GF4? Yeah right it would kill a GF4. Because it is really a 12 pipe card. Because it has 12 pixel pipes.

And it also doesn't make sense. ATi and NV pixel pipes are much more similar. Both contain two ALU's. Except NV has the texturing on one. A minor difference.

They are much more similar than TMU-ROP and TMU-ALU. Which contain totally different units and make no sense at all. Also, your way has no relevance to performance. AKA, a X1800 and X1900 are both 16 pipes. Whereas the latter is much faster due to 48 pixel pipes.

Oh well, it seems you guys will never admit you are wrong. I believe what started this was Phide claiming ATI has 48 ALU's not pipes. In which case X800 was 16 ALU's and 6800 was 32 ALU's which doesn't make sense.

I don't mind the wording; my problem is people slighting ATI. Whatever number ManicOne agrees on strangely always makes ATI the lower number of pipes. That is what bugs me.

I believe by normal standards an X1900 must be considered a 48 pipe card.

As to the Oblivion graph, that's clearly a FiringSquad-style graph. Unless their review was accidentally posted early then pulled back or something... then again, FS is a site I would expect to run Oblivion benches.

The performance difference is about what I'd expect, though.
 
I really doubt those performance #s will be anything close to the truth. I don't think this game will be nearly as hard on hardware as people think.

35fps for an X1900XTX with those settings is a bit far-fetched. I bet it will be much better than that. Plus, for the 7900GTX the drivers are still early for this card, so I bet that will be up significantly too.
 
John Reynolds said:
I doubt either company considers their ROPs as part of their pixel shader pipeline. And ATI does not consider the texture units a part of the pipeline with the 1xxx parts since they were decoupled (though they are grouped amongst themselves and assigned to quads). An ATI shader pipe should simply consist of the ALUs and branch execution unit. For NVIDIA it should just be the ALUs since one of them also operates as the texture unit.

It's easy to get all this mixed up, as is apparent from the endless discussions on what now constitutes a pixel pipeline.


Thanks for the response. This actually does make a lot more sense.

Sharky, it would be fair to say then that X1900s are 48 shader pipe parts. They have 16 texture units though, which is where the 16 pipeline statement comes from, i.e. 16 textures per pass (AFAIK).
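To make the decoupling concrete, here's a rough way to tally it (a minimal Python sketch I put together; the unit counts are the commonly quoted figures for these parts, and "pixel_shaders" follows how this thread counts them, so treat it as approximate rather than official):

# Commonly quoted unit counts (approximate).
cards = {
    "X1800 XT":  {"pixel_shaders": 16, "tmus": 16, "rops": 16},
    "X1900 XTX": {"pixel_shaders": 48, "tmus": 16, "rops": 16},
    "7900 GTX":  {"pixel_shaders": 24, "tmus": 24, "rops": 16},
}

for name, u in cards.items():
    ratio = u["pixel_shaders"] / u["tmus"]
    print(f"{name}: {u['pixel_shaders']} shader units, {u['tmus']} TMUs, "
          f"{u['rops']} ROPs (shader:TMU ratio {ratio:.0f}:1)")

Run it and the X1900 comes out 3:1 while the X1800 and 7900 GTX stay 1:1, which is exactly why "16 pipelines" and "48 pipes" both get thrown around for the same card.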
 
Lord_Exodia said:
I really doubt those performance #s will be anything close to the truth. I don't think this game will be nearly as hard on hardware as people think.

35fps for an X1900XTX with those settings is a bit far-fetched. I bet it will be much better than that. Plus, for the 7900GTX the drivers are still early for this card, so I bet that will be up significantly too.

At 1600x1200 with 4xAA? Why would that be unusual? In FEAR at those settings, the X1900XTX only gets 48 fps, and 'Oblivion' is said to use MUCH more complicated shaders than 'FEAR'.

Those numbers seem very much in line with what we were expecting. The X850XT is definitely doing suspiciously well, though - must be some SM1.x codepath in there or something as a fallback...
 
I doubt the 7900 will improve with drivers. It's the same ol' core. Minor improvements, yes; major, no.
 
Apple740 said:
An explanation could be that the X850 is running an SM2.0 path (with less IQ) while the others are SM3.0.

But that explanation doesn't make sense. It was confirmed by developers that there is no visual difference between SM2.0 and SM3.0 except for the lighting, Bloom and HDR.

I don't see those numbers as being that far-fetched, though I would love to know the CPU that put up those kinds of numbers.
 
PCMusicGuy said:
there is no visual difference between SM2.0 and SM3.0
You have heard wrong. There is no visual difference between PS2.0 and PS3.0. VS3.0 allows visual improvement over VS2.0

(PS = pixel shader, VS = vertex shader, SM = shader model, which encompasses both pixel shader and vertex shader)
 
I vote to make it a [H]ard rule to never compare the number of pipelines between two cards, as it would clear up this whole argument. I think if you want to compare theoretical performance you should have to state the count of all stages of the pipeline. I think the word pipeline is becoming something like the word planet in the scientific world, with all the debate about Pluto.
 
{NG}Fidel said:
I doubt the 7900 will improve with drivers. It's the same ol' core. Minor improvements, yes; major, no.

Same core maybe, but it's a brand new game, and there are almost always game improvements in later drivers. Hell, look at FEAR: the latest Nvidia driver just fixed a bug in that game that was killing Nvidia's performance with AA enabled.
 
dderidex said:
At 1600x1200 with 4xAA? Why would that be unusual? In FEAR at those settings, the X1900XTX only gets 48 fps, and 'Oblivion' is said to use MUCH more complicated shaders than 'FEAR'.

Those numbers seem very much in line with what we were expecting. The X850XT is definitely doing suspiciously well, though - must be some SM1.x codepath in there or something as a fallback...

Regardless of all of that, it still seems a bit too low on both cards. I mean, c'mon, that means the lowest SM3 cards you could actually run this game on would be a 7900 and an X1900. Let's all just throw our 7800GTs and X1800XLs in the garbage then. Those cards will get like 18fps. I can understand these types of #s at 1920x1200 or even 2560x1600 with 8xAA, 16xAF, everything maxed, possibly. I just have a feeling that the game will be a bit less strenuous on hardware than that.
 
{NG}Fidel said:
I doubt the 7900 will improve with drivers. It's the same ol' core. Minor improvements, yes; major, no.

Nvidia always improves with drivers. Always :D When the official driver set comes out there are gonna be performance improvements all around the board for the 7600GT and 7900GT/GTX. These cards are a refresh, but they didn't just rub a magic wand over the card and poof, it shrunk from 110nm to 90nm. There was a restructuring of the transistors and circuitry; the pipelines were laid out again and are more efficient. They will throw out a driver fairly soon that will tap into that. It's not too hard to believe.
 
You need to remember to wipe your boots after reading some posts in this thread. :p

The game comes out in a little over a week. No need to make up stuff...
 
pxc said:
You need to remember to wipe your boots after reading some posts in this thread. :p

The game comes out in a little over a week. No need to make up stuff...

Who's making stuff up?
 
bbordwell said:
I vote to make it a [H]ard rule to never compare the number of pipelines between two cards, as it would clear up this whole argument.

If you say so, then let it be so. Dissolve the word. Shift the focus to ROPs and ALUs almost entirely (somewhat underway already). Publish the numbers and leave it at that. This entire debate was really about semantics, and I'm not too thrilled about those sorts of debates.

Still very curious about the supposed leaked numbers. We'll find out what's what in a few days, though.
 
Sharky974 said:
Why would you have TWO pipeline definitions instead of one for both companies? Then you are comparing apples and oranges.

Then you are also saying an X1600XT is a four pipe card. So I guess that makes it equal to a GF4? Yeah right it would kill a GF4. Because it is really a 12 pipe card. Because it has 12 pixel pipes.

And it also doesn't make sense. ATi and NV pixel pipes are much more similar. Both contain two ALU's. Except NV has the texturing on one. A minor difference.

They are much more similar than TMU-ROP and TMU-ALU. Which contain totally different units and make no sense at all. Also, your way has no relevance to performance. AKA, a X1800 and X1900 are both 16 pipes. Whereas the latter is much faster due to 48 pixel pipes.

Oh well, it seems you guys will never admit you are wrong. I believe what started this was Phide claiming ATI has 48 ALU's not pipes. In which case X800 was 16 ALU's and 6800 was 32 ALU's which doesn't make sense.

I don't mind the wording; my problem is people slighting ATI. Whatever number ManicOne agrees on strangely always makes ATI the lower number of pipes. That is what bugs me.

I believe by normal standards an X1900 must be considered a 48 pipe card.

As to the Oblivion graph, that's clearly a FiringSquad-style graph. Unless their review was accidentally posted early then pulled back or something... then again, FS is a site I would expect to run Oblivion benches.

The performance difference is about what I'd expect, though.
The X1800 and X1900 are both 16 pipeline cards; the difference is that the X1900 has 3x the number of pixel processors per pipeline, since they are decoupled from the texturing units now and no longer need to be in 1:1 ratios.

If ATI had a "true" 48 pipeline card, then it would flat out have to be 2x the performance of the 7900 GTX 24 pipeline card, across the board; this just isn't the case at all.

You can't just simplify things now, as Nvidia and ATI are going in different directions with their architectures. The definition of "pipeline" is starting to disappear, so it would just be easier to define each card by all 3 parameters now and not use the word pipeline anymore.

And as was said, the difference between ATI's and Nvidia's pixel processors is not just a little: NV uses two full ALUs with texturing linked to one, or something like that, while ATI uses a full + mini ALU design. So they aren't equal.

Then tell me why Nvidia's 7600 GT defeats the X1600 XT so easily when both are "12 pipeline" cards by your definition, clocked at similar core and memory speeds. If the X1600 XT were indeed a 12 pipeline card, it would have to defeat a 6800 Ultra fairly easily given its fillrate. The X1600 XT doesn't satisfy this condition, while the 7600 GT does, as it's basically even with the 7800 GS up to 1600x1200, which itself defeats the 6800 Ultra.

Can the X1600 Pro kill a 6600 GT? It SHOULD have no problem and do it with ease, as they are both at 500MHz core, so the X1600 should be 50% faster for the most part. This just doesn't happen; the X1600 XT is barely 20-30% faster than the 6600 GT, if that.
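For what it's worth, the fillrate arithmetic behind that comparison is easy to rough out (a throwaway Python sketch; the clocks and unit counts are approximate stock figures I'm assuming, so the numbers are ballpark only):

# Approximate stock core clocks (MHz) and unit counts; ballpark figures, not official specs.
cards = {
    "6600 GT":  {"core_mhz": 500, "tmus": 8,  "pixel_shaders": 8},
    "X1600 XT": {"core_mhz": 590, "tmus": 4,  "pixel_shaders": 12},
    "7600 GT":  {"core_mhz": 560, "tmus": 12, "pixel_shaders": 12},
}

for name, c in cards.items():
    texel_rate = c["core_mhz"] * c["tmus"] / 1000             # Gtexels/s of texturing
    shader_index = c["core_mhz"] * c["pixel_shaders"] / 1000  # crude shader-throughput proxy
    print(f"{name}: ~{texel_rate:.1f} Gtexel/s texture rate, shader index ~{shader_index:.1f}")

On those rough numbers the X1600 XT has far less texturing power than the 7600 GT but a comparable shader figure, which is the whole disagreement in a nutshell: it only looks like a "12 pipe" card when the shaders are the bottleneck.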
 
If ATI had a "true" 48 pipeline card, then it would flat out have to be 2x the performance of the 7900 GTX 24 pipeline card, across the board; this just isn't the case at all.

If it leads 35-25 in that one Oblivion graph, it might be getting close. Or how about instances where one XTX can outrun 7800s in SLI? Yes, there are a few of those. Plus, isn't the memory speed lower on a stock X1900XTX than a 7900? That accounts for some more of the difference. But you will need the games to be shader limited to really see the benefits of 48 pipes.

Also, yes, one ATI pipe is not as strong as one Nvidia pipe. Even though the X800 was 16 just like the 6800 Ultra, the X800 ran at 500 MHz and the Ultra only needed 400. Still, people had no problems calling the X800 16 pipes.

48 pipes in an ATI card would only be like 1.7X 24 Nvidia pipes. And yes, in some games you probably start to get close to that ratio of lead by the X1900XTX.

The cards are probably TMU or bandwidth limited. That doesn't mean in the right game they don't have 1.5-1.7X the power, though. The newer games will show this better.

And like what was said the difference between ATI's and Nvidia Pixel Processors are not just a little, NV uses a 2 Full ALU with Texturing linked to one or something like that, and ATI uses a Full + mini ALU design. So they aren't equal.

No they aren't. My guess is an ATI pipe is about 85% as strong as an Nvidia pipe.

Then tell me why does Nvidia's 7600 GT defeat the X1600 XT so easily as both are "12 pipeline" cards by your definition clocked at similar core and memory speeds. A X1600 XT if indeed were a 12 Pipeline card, would have to defeat a 6800 Ultra fairly easily given it's fillrate, the X1600 XT doesn't satisfy this condition, while the 7600 GT does as it's basically even with 7800 GS up to 1600x1200 which itself defeats the 6800 Ultra.

In some games the X1600XT probably does defeat the 7600GT. I think you might very well see this in Oblivion as well. I think one of the supposed leaked graphs showed just that. The point is, once you are shader limited, the X1600 will behave like... guess what... a 12 pipe card.

http://www.xtremesystems.org/forums/attachment.php?attachmentid=44838&stc=1&d=1142559572

Well, this slide looks to be from Nvidia, and touting SLI, but even in an Nvidia slide (if it's legit, which I believe it is) you can see the X1800XL almost beating a 24 pipe 7900GT. So an X1600 should do pretty well against a 7600.

Also, an X1600XT can get pretty close to a 7600GT in FEAR.

Even if they are both 12 pipes, I think ATI pipes are weaker, as I said. 85% or something. So clocked the same and both 12 pipes, I'd still expect it to lose... which even in FEAR it does... but it gets somewhat close...

If it's NOT 12 pipes, then you've got a problem explaining those results where it nearly does match a 6800GS or 7600GT.

A lot of times it is texture limited, I guess. But it's still 12 pipes where it counts.
 
Sharky974 said:
If it leads 35-25 in that one Oblivion graph, it might be getting close. Or how about instances where one XTX can outrun 7800s in SLI? Yes, there are a few of those. Plus, isn't the memory speed lower on a stock X1900XTX than a 7900? That accounts for some more of the difference. But you will need the games to be shader limited to really see the benefits of 48 pipes.

Also, yes, one ATI pipe is not as strong as one Nvidia pipe. Even though the X800 was 16 just like the 6800 Ultra, the X800 ran at 500 MHz and the Ultra only needed 400. Still, people had no problems calling the X800 16 pipes.

48 pipes in an ATI card would only be like 1.7X 24 Nvidia pipes. And yes, in some games you probably start to get close to that ratio of lead by the X1900XTX.

The cards are probably TMU or bandwidth limited. That doesn't mean in the right game they don't have 1.5-1.7X the power, though. The newer games will show this better.



No they aren't. My guess is an ATI pipe is about 85% as strong as an Nvidia pipe.



In some games the X1600XT probably does defeat the 7600GT. I think you might very well see this in Oblivion as well. I think one of the supposed leaked graphs showed just that. The point is, once you are shader limited, the X1600 will behave like... guess what... a 12 pipe card.

http://www.xtremesystems.org/forums/attachment.php?attachmentid=44838&stc=1&d=1142559572

Well, this slide looks to be from Nvidia, and touting SLI, but even in an Nvidia slide (if it's legit, which I believe it is) you can see the X1800XL almost beating a 24 pipe 7900GT. So an X1600 should do pretty well against a 7600.

I don't believe that graph at all. It shows a full 92% increase in performance from SLI. That's just not possible.
 
Sharky974 said:
If it leads 35-25 in that one Oblivion graph, it might be getting close. Or how about instances where one XTX can outrun 7800s in SLI? Yes, there are a few of those. Plus, isn't the memory speed lower on a stock X1900XTX than a 7900? That accounts for some more of the difference. But you will need the games to be shader limited to really see the benefits of 48 pipes.

Also, yes, one ATI pipe is not as strong as one Nvidia pipe. Even though the X800 was 16 just like the 6800 Ultra, the X800 ran at 500 MHz and the Ultra only needed 400. Still, people had no problems calling the X800 16 pipes.

48 pipes in an ATI card would only be like 1.7X 24 Nvidia pipes. And yes, in some games you probably start to get close to that ratio of lead by the X1900XTX.

The cards are probably TMU or bandwidth limited. That doesn't mean in the right game they don't have 1.5-1.7X the power, though. The newer games will show this better.

No they aren't. My guess is an ATI pipe is about 85% as strong as an Nvidia pipe.

In some games the X1600XT probably does defeat the 7600GT. I think you might very well see this in Oblivion as well. I think one of the supposed leaked graphs showed just that. The point is, once you are shader limited, the X1600 will behave like... guess what... a 12 pipe card.

http://www.xtremesystems.org/forums/attachment.php?attachmentid=44838&stc=1&d=1142559572

Well, this slide looks to be from Nvidia, and touting SLI, but even in an Nvidia slide (if it's legit, which I believe it is) you can see the X1800XL almost beating a 24 pipe 7900GT. So an X1600 should do pretty well against a 7600.
F.E.A.R. is an exception, as it was programmed predominantly on ATI hardware through its development cycle. Oblivion is looking to be the same. These 2 games are going to show favoritism to ATI until Nvidia can get some optimizations in to level the playing field, as they did with the Forceware 84.20 drivers for F.E.A.R., so I think there is plenty of improvement to be had in Oblivion as well.

What part of "in all cases" don't you get? 35 over 25 is an increase of 40%, which is in no way "close" to a 100% increase at all. You can't have 0.85 or something of a pipeline.

What I am saying is that you only need to be shader limited in ATI's design because they aren't full pipelines, if you can even use "pipelines" anymore; if they were, then you would automatically see a 2x performance increase across the board unless the bottleneck is elsewhere. Since what was increased between the X1800 and X1900 was only shader processors, not texturing power, only when shader limits come into play do you see massive performance increases; if you increase pixel shaders and texturing power together, you will see performance improvements across the board in basically every case.

If you want to hand-pick your games, I can do that with Nvidia's cards as well: in Quake 4, for example, the 6600 GT is EVEN with the X1600 XT, while the 7600 GT is twice as fast at 16x12. I would not recommend hand-picking your games; it doesn't prove any point. Only on average, across the board with all games available, is it valid to compare.

One X1900 XTX can outrun a 7800 in what setup? Be specific please: GT? GTX? GTX 512?

Of course they had no problem: the 6800 Ultra and X800 XT both had 16 pipelines because everything on the high end in that era was still in a 1:1:1 pixel shader to texture mapping unit to raster operator ratio. In the current era plenty of cards are not in this 1:1:1 ratio, hence the definition of a pipeline is becoming less and less valid.

A real pipeline would not require a shader-limited crutch to show a performance benefit. It would show across-the-board performance improvements in predictable amounts.

I would prefer a 7600 GT, as it behaves like a 12 pipe card across the board, rather than something like the X1600 XT, which behaves like a 12 pipe card only when it is shader limited. Why should I limit myself to 12 pipe performance only in shader-limited games when I can have 12 pipe performance across the entire board?

You actually think a 1.55GHz to 1.6GHz memory speed difference between the X1900 XTX and 7900 GTX is gonna make a difference? Ha, this is really grasping at straws; 3% is barely a difference.
 
F.E.A.R. is an exception, as it was programmed predominantly on ATI hardware through its development cycle. Oblivion is looking to be the same. These 2 games are going to show favoritism to ATI until Nvidia can get some optimizations in to level the playing field, as they did with the Forceware 84.20 drivers for F.E.A.R., so I think there is plenty of improvement to be had in Oblivion as well.

You mean continually decreasing image quality to get back anywhere near competitive? Sure, I believe Nvidia will probably do that with Oblivion, just like they did with FEAR.

But ATI does well in ALL the most demanding, newer games. Be it Call of Duty 2, FEAR, Battlefield 2, Oblivion, or what have you. Because those games are more shader intensive. This is the trend. I am sure whenever Medal of Honor or what have you comes out ATI will kick tail there too.

What part of "in all cases" don't you get? 35 over 25 is an increase of 40%, which is in no way "close" to a 100% increase at all. You can't have 0.85 or something of a pipeline.

No, but ATI pipes have never been 100% as good as Nvidia pipes, even going back to the X800 days. Nvidia pipes now have two full strength ALUs where ATI pipes have one full strength ALU and one "mini" ALU.

Somebody said if ATI has 48 pipes they should be twice as fast. I was merely showing no, it will be more like 1.7X theoretical max (just a number I made up, mind you) because ATI pipes aren't as good. And you aren't going to reach theoretical numbers most times. However 1.4X is starting to get close. In the right, completely shader limited game you might get 1.7X eventually.
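The arithmetic there is easy to check (a throwaway calculation; the 0.85 per-pipe factor is the made-up number from above, not anything measured):

# Observed lead in the leaked Oblivion graph, if it's real.
ati_fps, nv_fps = 35, 25
print(f"Observed lead: {ati_fps / nv_fps:.2f}x")  # 1.40x

# Hand-wavy shader-limited ceiling: 48 ATI pipes at ~85% the strength of 24 NV pipes.
per_pipe_factor = 0.85  # made-up, purely illustrative
print(f"Ceiling: {48 * per_pipe_factor / 24:.1f}x")  # 1.7x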


What I am saying is that you only need to be shader limited in ATI's design because they aren't full pipelines, if you can even use "pipelines" anymore; if they were, then you would automatically see a 2x performance increase across the board unless the bottleneck is elsewhere. Since what was increased between the X1800 and X1900 was only shader processors, not texturing power, only when shader limits come into play do you see massive performance increases; if you increase pixel shaders and texturing power together, you will see performance improvements across the board in basically every case.

Sure, but you also have to add 60 million transistors just to get 8 more pipes, whereas ATI can add 32 pipes in the same transistor budget and have the X1900XTX be the faster card when all is said and done. It's all tradeoffs, man...


If you want to hand-pick your games, I can do that with Nvidia's cards as well: in Quake 4, for example, the 6600 GT is EVEN with the X1600 XT, while the 7600 GT is twice as fast at 16x12. I would not recommend hand-picking your games; it doesn't prove any point. Only on average, across the board with all games available, is it valid to compare.

Quake 4 was programmed predominantly on Nvidia cards by the makers throughout its development cycle. That's why Nvidia does well.

Hey, if you can use it for your needs, then I can turn it around on you...

But yeah, it's tradeoffs. And I think ATI was stupid with its designs at some points, especially with the tail-kicking they get in the mid-range. They are shooting for a 3:1 math to texture ratio and most old games don't have that. Newer games will. The newer the game, the better an X1600XT should compete with that 7600GT, for example.
 
kcthebrewer said:
You have heard wrong. There is no visual difference between PS2.0 and PS3.0. VS3.0 allows visual improvement over VS2.0

(PS = pixel shader, VS = vertex shader, SM = shader model, which encompasses both pixel shader and vertex shader)

No, I didn't. I'm pretty sure the developers know what they did on their own game. The ONLY thing SM3.0 is used for is performance optimizations and SM3.0 HDR, period. All textures, parallax mapping, and bump mapping will look the same regardless, granted you have an SM2.0 video card.

And I think it's pretty safe to say that Oblivion will run better on an ATI card. After all, they sent their preview PCs out with all ATI cards.
 
pxc said:
You need to remember to wipe your boots after reading some posts in this thread. :p

The game comes out in a little over a week. No need to make up stuff...

Contributory as always. You rock, dude.
 
Sharky974 said:
You mean continually decreasing image quality to get back anywhere near competitive? Sure, I believe Nvidia will probably do that with Oblivion, just like they did with FEAR.

But ATI does well in ALL the most demanding, newer games. Be it Call of Duty 2, FEAR, Battlefield 2, Oblivion, or what have you. Because those games are more shader intensive. This is the trend. I am sure whenever Medal of Honor or what have you comes out ATI will kick tail there too.

No, but ATI pipes have never been 100% as good as Nvidia pipes, even going back to the X800 days. Nvidia pipes now have two full strength ALUs where ATI pipes have one full strength ALU and one "mini" ALU.

Somebody said if ATI has 48 pipes they should be twice as fast. I was merely showing no, it will be more like 1.7X theoretical max (just a number I made up, mind you) because ATI pipes aren't as good. And you aren't going to reach theoretical numbers most times. However 1.4X is starting to get close. In the right, completely shader limited game you might get 1.7X eventually.

Sure, but you also have to add 60 million transistors just to get 8 more pipes, whereas ATI can add 32 pipes in the same transistor budget and have the X1900XTX be the faster card when all is said and done. It's all tradeoffs, man...

Quake 4 was programmed predominantly on Nvidia cards by the makers throughout its development cycle. That's why Nvidia does well.

Hey, if you can use it for your needs, then I can turn it around on you...

But yeah, it's tradeoffs. And I think ATI was stupid with its designs at some points, especially with the tail-kicking they get in the mid-range. They are shooting for a 3:1 math to texture ratio and most old games don't have that. Newer games will. The newer the game, the better an X1600XT should compete with that 7600GT, for example.
Any proof that Forceware 84.20 reduces quality in order to get performance in F.E.A.R.? Or are you just making an assumption here? Oh yeah, it has to be noticeable in motion during gameplay, not something you have to blow up in size 4x to see in a still shot.

The ATI X1600 XT still loses in all those games; in comparison to the 7600 GT the margin of loss is smaller than in other games, but it still loses nonetheless. Like I said, there's no reason to get a card that only performs well in shader-limited games when you can get one that performs well in all of them.

The ATI Radeon X1900 XTX vs 7900 GTX is a different comparison though, as it has twice as many pixel shaders as the 7900 GTX.

The X1900 XTX is a marginally faster card in certain shader-intensive games, at a much larger cost in comparison to Nvidia. Nvidia's 7900 GTX, remember, is only 278 million transistors with a die size of 196mm², in comparison to the Radeon X1900 XTX with 384 million transistors and a die size of 352mm², which is much more expensive to make than the 7900.
Nvidia could have easily added 8 more pipelines to this card and still come in under the transistor count of the X1900 XTX, and STILL be cheaper to make, as Nvidia has better transistor density on a given process than ATI. Why should they, though, when in the present situation they are even with ATI anyway and make tons more money in the process?
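The transistor-density part of that is simple division (a quick check using the transistor counts and die sizes quoted above, taken at face value):

# Transistor counts (millions) and die sizes (mm^2) as quoted in this post.
gpus = {
    "7900 GTX (G71)":   {"mtrans": 278, "die_mm2": 196},
    "X1900 XTX (R580)": {"mtrans": 384, "die_mm2": 352},
}

for name, g in gpus.items():
    print(f"{name}: ~{g['mtrans'] / g['die_mm2']:.2f} million transistors per mm^2")
# Roughly 1.4 M/mm^2 for G71 vs 1.1 M/mm^2 for R580 on those figures.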

Quake 4 is a Doom 3 engine based game, so of course it favors Nvidia, just as Oblivion and F.E.A.R. are ATI-programmed games. Not denying anything here. I don't see how this is turning it around on me; I said I would like to bench a large suite of games to get an average, so we don't have to use these specific examples.

Pointless now anyway for the X1600 XT: the 7600 GT wins against this product and is cheaper to make anyway, with a die size of 125mm² for the 7600 vs 149mm² for the X1600, even though Nvidia has 178 million vs ATI's 157 million transistors.

I am well aware of tradeoffs, though I believe Nvidia has done well in that regard, in cutting costs and maintaining performance at the same time.
 
Even if they are both 12 pipes, I think ATI pipes are weaker, as I said. 85% or something. So clocked the same and both 12 pipes, I'd still expect it to lose... which even in FEAR it does... but it gets somewhat close...

If it's NOT 12 pipes, then you've got a problem explaining those results where it nearly does match a 6800GS or 7600GT.

A lot of times it is texture limited, I guess. But it's still 12 pipes where it counts.
In what results does it nearly match a 6800GS or 7600 GT? I don't think I have seen any where it almost matches the 7600 GT, but "almost" is subjective. What do you mean?

It has 12 pixel shader units, so if the load is shader limited then it can come close, but with only 4 TMUs and ROPs it isn't a real 12 pipeline card like the 7600 GT or 6800 GS. It is not 12 pipeline; there is a difference.

The 6800 GS you can explain somewhat, as it has a considerably lower clock than the X1600 on its pixel shader units, so the X1600 can act like 12 pipelines in the shader-limited scenarios. But as I said, a real 12 pipeline card should act at that level in all scenarios, not just the shader-limited ones. The 6800 GS and 7600 GT are real 12 pipeline GPUs and hence act like that across the board, given their core frequencies and memory bandwidth constraints.

To me a "pipeline" is 1 PS / 1 TMU / 0.5-1 ROP, as this won't hinder the performance of the full pipeline since the ROPs aren't saturated.
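Under that definition you can count it out mechanically (a toy sketch; the unit counts are the usual quoted ones, and using min(PS, TMU) as the pipeline count is just my reading of the 1 PS / 1 TMU rule):

# Toy "pipeline" counter under the 1 PS : 1 TMU reading (ROPs allowed to be fewer).
cards = {
    "7600 GT":   {"ps": 12, "tmu": 12, "rop": 8},
    "X1600 XT":  {"ps": 12, "tmu": 4,  "rop": 4},
    "X1900 XTX": {"ps": 48, "tmu": 16, "rop": 16},
}

for name, c in cards.items():
    pipes = min(c["ps"], c["tmu"])  # a "full pipeline" needs both a PS and a TMU
    print(f"{name}: {pipes} pipelines ({c['ps']} PS / {c['tmu']} TMU / {c['rop']} ROP)")

Which gives 12 for the 7600 GT, 4 for the X1600 XT and 16 for the X1900 XTX, i.e. the same counts being argued over in this thread.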
 
coldpower27 said:
Nvidia could have easily added 8 more pipelines to this card and still come in under the transistor count of the X1900 XTX, and STILL be cheaper to make, as Nvidia has better transistor density on a given process than ATI. Why should they, though, when in the present situation they are even with ATI anyway and make tons more money in the process?

yes, they could have added 8 more pipes, but they didn't.

you know why? because they knew you were gonna pay for anything they come out with anyway,

so they just make a whole bunch of money off of you right now, then they add the 8 pipes in the next gen and make you pay for it again.

they also could have fixed the hardware-based "performance optimizations" since they were improving on the core, but they didn't.
 
coldpower27 said:
Quake 4 is a Doom 3 engine based game, so of course it favors Nvidia, just as Oblivion and F.E.A.R. are ATI-programmed games. Not denying anything here.

Are you implying that FEAR is an ATI optimised game? I most certainly hope you're not.
 
ivzk said:
Are you implying that FEAR is an ATI optimised game? I most certainly hope you're not.
F.E.A.R was programmed predominantly on ATI hardware up until the public beta.
This is one of the games where an X850 XT PE was competitive with a 7800 GTX before Nvidia released Forceware 84.20.
 
coldpower27 said:
F.E.A.R was programmed predominantly on ATI hardware up until the public beta.
This is one of the games where an X850 XT PE was competitive with a 7800 GTX before Nvidia released Forceware 84.20.


So Nvidia wasted money on this game, as FEAR is in the TWIMTBP program?
 