ATI 2950PRO DX 10.1 Launch Nov 19

Quite interesting. What is its expected performance compared to the new nVidia cards?
 
Well, that link does not say anything about this card's texture filling rate, which is the R600's main weakness.
If nothing changes, this will perform essentially the same as an HD 2900 Pro / XT, with more features and the obvious differences in clock frequencies.
 
Well, that link does not say anything about this card's texture filling rate, which is the R600's main weakness.
If nothing changes, this will perform essentially the same as an HD 2900 Pro / XT, with more features and the obvious differences in clock frequencies.

I find it unlikely that they would increase pixel pipes and ROPs within the same generation product. Without a die shrink, the amount of power and heat would only get worse; I want to say that the 2900 has something like 770 million transistors.

As much as I've enjoyed my ATi cards and still enjoy my X1950XTX (my main card), ATi has always seemed to buck trends. When the 7000 series came out, the cards had 2 pixel pipes but 3 texture units per pipe. Unfortunately, no one had adopted that level of multi-texturing in games, so it suffered performance hits on pure fill rate. Fast forward to the newer generations and ATi is at it again: they are pushing massive shader power, but no one is utilizing that much shader power yet, and most games still demand more fill rate. Meanwhile, they are still using the same 16 pixel pipe and ROP configuration they have used since the X800 series.

In other words, I agree that ATi could do with less shader power and more fill rate.
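
To put very rough numbers on the shader-power-versus-fill-rate trade-off, here is a quick back-of-the-envelope sketch in Python. The unit counts and clocks are commonly quoted figures assumed here for illustration, not anything confirmed by the linked article:

# Theoretical texel fill rate and shader throughput from unit counts and clocks.
# All specs below are assumptions (commonly quoted figures), not confirmed numbers.

def texel_fill_rate(tex_units, core_mhz):
    # GTexels/s: texture units multiplied by core clock
    return tex_units * core_mhz / 1000.0

def shader_gflops(stream_processors, shader_mhz):
    # GFLOPS, counting one MADD (2 ops) per stream processor per clock
    return stream_processors * 2 * shader_mhz / 1000.0

# HD 2900 XT, assuming 16 texture units, 320 stream processors, 742 MHz
print(texel_fill_rate(16, 742))      # ~11.9 GTexels/s
print(shader_gflops(320, 742))       # ~475 GFLOPS

# 8800 GTX, assuming 64 texture filtering units at 575 MHz, 128 SPs at 1350 MHz
print(texel_fill_rate(64, 575))      # ~36.8 GTexels/s
print(shader_gflops(128, 1350))      # ~346 GFLOPS

The point of the comparison: on these assumed figures the R600 leads on raw shader math but trails badly on texturing, which is the fill rate weakness being discussed.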
 
According to a resource from the same site, it will be faster than the 8800GT, so Nvidia might increase the clock speed of the 8800GT to compete with it.
 
I find it unlikely that they would increase pixel pipes and ROPs within the same generation product. Without a die shrink, the amount of power and heat would only get worse; I want to say that the 2900 has something like 770 million transistors.

The RV670 is on a 55nm process ;).
 
According to a resource from the same site, it will be faster than the 8800GT, so Nvidia might increase the clock speed of the 8800GT to compete with it.

Of course it's faster... according to this rumor, it's exactly the same as an HD 2900 XT, which is roughly equal to an 8800 GTS 320/640.
An 8800 GT is meant to fill the gap between the 8600 GTS and 8800 GTS 320 and thus is not meant to break any records. It's just a good mid-range card with a good price/performance ratio.
 
If this is true, I am glad I did not sell my soul for an 8800 or a 2900XT :D
 
I find it unlikely that they would increase pixel pipes and ROPs within the same generation product. Without a die shrink, the amount of power and heat would only get worse; I want to say that the 2900 has something like 770 million transistors.

As much as I've enjoyed my ATi cards and still enjoy my X1950XTX (my main card), ATi has always seemed to buck trends. When the 7000 series came out, the cards had 2 pixel pipes but 3 texture units per pipe. Unfortunately, no one had adopted that level of multi-texturing in games, so it suffered performance hits on pure fill rate. Fast forward to the newer generations and ATi is at it again: they are pushing massive shader power, but no one is utilizing that much shader power yet, and most games still demand more fill rate. Meanwhile, they are still using the same 16 pixel pipe and ROP configuration they have used since the X800 series.

In other words, I agree that ATi could do with less shader power and more fill rate.

ATI is remarkably bad at taking a realistic look at what games will require. It's like they think that graphics needs three years down the road are more important than current requirements.
 
The only problem ATI is having with this generation is a lack of efficiency and their trying to brute force their way out of it. It almost seems as if they got themselves out of a pinch by adding certain features that increase performance when the program is written to use them. All this means is that the card will probably not match the performance of the G80, but it should get close and be a PITA to code for.

Once they break away from the vector shaders and use something more like Nvidia's system, they will start to release more compelling enthusiast cards. ATI has always managed to wow us in some way after certain pitfalls; the slides we have about the R700 are certainly exciting, and being set for 55nm is just as awesome. Hopefully we'll see a repeat of the X1800/X1900 release!

And with all the doom and gloom people keep spreading about this card, they are still releasing damn good cards for the sub-$150 market.
 
Of course it's faster... according to this rumor, it's exactly the same as an HD 2900 XT, which is roughly equal to an 8800 GTS 320/640.
An 8800 GT is meant to fill the gap between the 8600 GTS and 8800 GTS 320 and thus is not meant to break any records. It's just a good mid-range card with a good price/performance ratio.

It's actually faster ;). A 55nm die shrink, higher stock clocks and much more overclocking headroom, some small architectural improvements, and rumored on-die AA bug fixes will stack up to quite a beast at $250ish each :D.
 
It's actually faster ;). A 55nm die shrink, higher stock clocks and much more overclocking headroom, some small architectural improvements, and rumored on-die AA bug fixes will stack up to quite a beast at $250ish each :D.

65nm, not 55. You can't just go from 80nm to 55nm in one jump.
 
It's only got a 256-bit memory bus - half that of a 2900XT.

Considering how badly the 2900XT suffered when you turned on things like anti-aliasing, and how traditionally anti-aliasing has been greatly affected by memory bandwidth, I wonder how much it will hurt the 2950XT?
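
For reference, peak bandwidth is just bus width times effective memory clock. A quick sketch; the memory clocks below are assumptions for illustration, not confirmed specs:

# Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective memory clock.
# Memory clocks here are assumptions for illustration, not confirmed specs.

def bandwidth_gbs(bus_bits, effective_mhz):
    # bytes per transfer times transfers per second, reported in GB/s
    return bus_bits / 8 * effective_mhz / 1000.0

print(bandwidth_gbs(512, 1650))   # HD 2900 XT class, 512-bit: ~105.6 GB/s
print(bandwidth_gbs(256, 1650))   # same memory clock on a 256-bit bus: ~52.8 GB/s
print(bandwidth_gbs(256, 2250))   # faster GDDR4 on 256-bit: ~72 GB/s

So a 256-bit card needs substantially faster memory just to close part of the gap.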
 
It's only got a 256-bit memory bus - half that of a 2900XT.

Considering how badly the 2900XT suffered when you turned on things like anti-aliasing, and how traditionally anti-aliasing has been greatly affected by memory bandwidth, I wonder how much it will hurt the 2950XT?

The R600 was limited by the number of ROPs, not the memory bandwidth. It should have little to no effect, and with some small on-die bug fixes to AA it should perform better. Plus, the clock speeds will be ramped way up, especially when overclocking ;), not to mention some other small things.

2x RV670 will cost less than one GTX, consume less power, and far outperform it. 1x RV670 will cost less than one G92/8800GT, and outperform it. Nothing not to like :D!
 
It's only got a 256-bit memory bus - half that of a 2900XT.

Considering how badly the 2900XT suffered when you turned on things like anti-aliasing, and how traditionally anti-aliasing has been greatly affected by memory bandwidth, I wonder how much it will hurt the 2950XT?

I thought the poor AA performance was due to the fact that the R600 uses the shaders to do AA?
 
The R600 was limited by the number of ROPs, not the memory bandwidth. It should have little to no effect, and with some small on-die bug fixes to AA it should perform better. Plus, the clock speeds will be ramped way up, especially when overclocking ;), not to mention some other small things.

2x RV670 will cost less than one GTX, consume less power, and far outperform it. 1x RV670 will cost less than one G92/8800GT, and outperform it. Nothing not to like :D!

Any tests to back up these claims?
 
I hope it is good, and it seems like it will have some significant improvements over the R600. We need some new products to fuel innovation. NVidia has gotten way too comfortable, and it is time to knock them off the throne.
 
I hope it is good, and it seems like it will have some significant improvements over the R600. We need some new products to fuel innovation. NVidia has gotten way too comfortable, and it is time to knock them off the throne.
In Q1 '08, rumor has it that AMD will release a dual RV670 on a single PCB. If true, I think NVidia will start sweating then.
 
So why bother with the 2900 Pro for $50 less? It doesn't sound like that one "fixes" anything, and this one could be the perfect replacement for those of us with 1950s. I'm definitely passing on the 2900 Pro since it's just not enough of a performance jump from my current card, but this 2950 Pro, if the rumors are true, sounds like the perfect upgrade, with CrossFire later on.
 
I have everything for my new build minus the CPU and GPU. I can wait another month or 2.
 
How certain is the DirectX 10.1 part? That has got to piss off some people.
 
I thought the poor AA performance was due to the fact that the R600 uses the shaders to do AA?

No one knows what is hurting their AA performance. The ROPs are the same as the generation prior, but with higher bandwidth and clock speed it's doing worse than the prior release. They are running the algorithms through the shader process, but with 320 shaders? It could also be that the lack of a dedicated AA engine on the card, with the resolve being emulated through the shaders, is the culprit. :confused: Who knows.
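
For anyone wondering what "doing AA through the shaders" boils down to, here is a minimal sketch of a box-filter MSAA resolve. It is purely illustrative, not ATI's actual resolve code:

# Minimal sketch of a shader-style MSAA resolve: each pixel's sub-samples are
# averaged in a shader pass instead of in fixed-function ROP hardware.
# Purely illustrative; not ATI's actual implementation.

def resolve_pixel(samples):
    # Box filter: average the (r, g, b) sub-samples of one pixel
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

# 4x MSAA edge pixel: two red samples, two blue samples
samples = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
print(resolve_pixel(samples))   # (0.5, 0.0, 0.5), a blended edge colour

Doing this per pixel in the shader array costs ALU cycles and bandwidth that a dedicated resolve unit would otherwise hide, which is one theory for the AA hit.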

How certain is the DirectX 10.1 part? That has got to piss off some people.

There are no image quality differences between 10 and 10.1; 10.1 just has stricter requirements, some of which even last-gen products could meet. It shouldn't piss off anyone, and if it does, I know someone who will sell you a violin.
 
I wonder if this will be any good. I don't know what the revisions are, but it would seriously piss me off if it was like the Shader Model 2 to Shader Model 3 jump.
 
For image quality it mainly pertains to AA and AF; cards have to be able to do them to a certain extent to match 10.1. There is a new sound model it uses, though. That's about it.
 