AMD ATI Radeon HD 4870 X2 Preview

So the GDDR5 doesn't help the 4870 at all ... why did they use it then? It has to be at least part of the reason the 4870 is faster than the 4850.

The design teams will know what to optimise for, and the products will be limited by those designs.
For example, if your design team is told they can have GDDR5, they will optimise for higher memory bandwidth.
NVidia will have optimised for GDDR3, and ATI's 4870 is optimised for GDDR5.
It's not sensible to suggest that NVidia's designs will benefit as much as ATI's from using GDDR5 when their hardware isn't designed to make best use of it.
 
So the GDDR5 doesn't help the 4870 at all ... why did they use it then? It has to be at least part of the reason the 4870 is faster than the 4850.
Did you even read what I said? I never mentioned ONE thing about the 4870, which of course is 256-bit. I said that a 512-bit bus and GDDR5 would be a waste on current cards because they aren't strong enough to utilize that much bandwidth.
 
So the GDDR5 doesn't help the 4870 at all ... why did they use it then? It has to be at least part of the reason the 4870 is faster than the 4850.

I don't think anyone said it didn't help, just not that much. At the same core speed the 4870 is still 8 to 10% faster (I've heard different numbers), and that is probably going to increase as the core speed does.
 
There are professionals out there who can help you with your problems. You are probably one of the saddest fanboys I have ever seen in all my years here and on any other forum. You, sir, need a break from computers.

Edit: I feel bad saying this to anyone, but the honest truth is that we could all use a break from computers...

You haven't seen anything. If you want some mild amusement from denial or interesting cases, just check out the EVGA forums. :D
The DSM-IV-TR is going to have to add this growing disorder to its diagnostic criteria.
 
GDDR5's additional bandwidth will only benefit bandwidth-limited situations like full-scene anti-aliasing; otherwise, the performance difference between the HD 4850 and HD 4870 should be closer and more proportional to their core speed differences (except when anti-aliasing is on). That explains why HD 4870 users can use 4x FSAA pretty much for free, and sometimes even 8x. CFAA uses shaders, so bandwidth doesn't matter there. I don't think the GTX 280 would benefit greatly from GDDR5 with its already wide memory bus; the GTX 280's bottlenecks don't come from anti-aliasing performance but from other internal bottlenecks that faster RAM will not solve. But it should be cheaper to use a 256-bit bus with GDDR5 than a 512-bit bus with GDDR3 in the long run. Right now, since GDDR5 is so new, the price difference probably isn't that much, but it's enough to be $100 cheaper than the rival's SKU.
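For what it's worth, here's a rough back-of-the-envelope sketch of the narrow-bus-fast-memory vs wide-bus-slow-memory trade-off. The effective data rates below are assumptions for illustration, not figures taken from any spec sheet:

```python
# Rough peak-memory-bandwidth sketch: bandwidth = bus width (in bytes) x effective data rate.
# The data-rate figures are illustrative assumptions, not official specs.

def peak_bandwidth_gbs(bus_width_bits, effective_data_rate_mhz):
    """Peak theoretical bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_data_rate_mhz * 1e6 / 1e9

# 256-bit bus with GDDR5 (assumed ~3600 MT/s effective), roughly the HD 4870 case
narrow_gddr5 = peak_bandwidth_gbs(256, 3600)
# 512-bit bus with GDDR3 (assumed ~2200 MT/s effective), roughly the GTX 280 case
wide_gddr3 = peak_bandwidth_gbs(512, 2200)
# 256-bit bus with GDDR3 (assumed ~2000 MT/s effective), roughly the HD 4850 case
narrow_gddr3 = peak_bandwidth_gbs(256, 2000)

print(f"256-bit GDDR5 : {narrow_gddr5:.0f} GB/s")
print(f"512-bit GDDR3 : {wide_gddr3:.0f} GB/s")
print(f"256-bit GDDR3 : {narrow_gddr3:.0f} GB/s")
```

On those assumed figures, the narrow GDDR5 bus lands in the same ballpark as the wide GDDR3 one while needing a much simpler (cheaper) PCB and memory controller, which is the cost argument above.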
 
You haven't seen anything. If you want some mild amusement from denial or interesting cases, just check out the EVGA forums. :D
The DSM-IV-TR is going to have to add this growing disorder to its diagnostic criteria.

they are giving me a special section
 
GDDR5's additional bandwidth will only benefit bandwidth-limited situations like full-scene anti-aliasing; otherwise, the performance difference between the HD 4850 and HD 4870 should be closer and more proportional to their core speed differences (except when anti-aliasing is on). That explains why HD 4870 users can use 4x FSAA pretty much for free, and sometimes even 8x. CFAA uses shaders, so bandwidth doesn't matter there. I don't think the GTX 280 would benefit greatly from GDDR5 with its already wide memory bus; the GTX 280's bottlenecks don't come from anti-aliasing performance but from other internal bottlenecks that faster RAM will not solve. But it should be cheaper to use a 256-bit bus with GDDR5 than a 512-bit bus with GDDR3 in the long run. Right now, since GDDR5 is so new, the price difference probably isn't that much, but it's enough to be $100 cheaper than the rival's SKU.

So you're saying that this could be why (at least with AoC), AA is basically free (as in zero performance penalty for enabling it)? Notice that the AoC tests have not only FSAA maxed, but CFAA maxed as well (and while there is no bandwidth penalty, the shaders doing AA aren't usable for anything else, so you would think there would be some impact somewhere). But the fact is....there's no impact elsewhere. That means that the DAAMIT engineers have a rather hefty shader budget with the 4870X2 (and it's far larger than the similar budget for GTX280, despite the supposed disadvantage X2 has in terms of shaders). Now I'm in an area where I will admit I'd normally be far at sea because it gets heavily into GPU design (specifically, the design of a single shader): what is the per-shader capacity (which is not the same as bandwidth) of each GPU? And could that be why nVidia never considered taking the DAAMIT approach of shader-based AA?
 
So you're saying that this could be why (at least with AoC), AA is basically free (as in zero performance penalty for enabling it)? Notice that the AoC tests have not only FSAA maxed, but CFAA maxed as well (and while there is no bandwidth penalty, the shaders doing AA aren't usable for anything else, so you would think there would be some impact somewhere). But the fact is....there's no impact elsewhere. That means that the DAAMIT engineers have a rather hefty shader budget with the 4870X2 (and it's far larger than the similar budget for GTX280, despite the supposed disadvantage X2 has in terms of shaders). Now I'm in an area where I will admit I'd normally be far at sea because it gets heavily into GPU design (specifically, the design of a single shader): what is the per-shader capacity (which is not the same as bandwidth) of each GPU? And could that be why nVidia never considered taking the DAAMIT approach of shader-based AA?

The shader core used on the RV770 is pretty much identical to the one used on the RV670. The main difference is that anti-aliasing is resolved on the ROPs instead of on the shaders as on the RV670, so it benefits more from the additional bandwidth than the RV670, which was more ROP bound. ATi was at a great disadvantage before because, in reality, the RV670 had only 64 stream processors (they would work as 320 when it was possible to execute 5 operations per cycle, since it is a superscalar architecture), against 128 from nVidia (which would perform 128 operations per cycle no matter what, because it is a scalar architecture). Now the difference is smaller: ATi has 160 (which work as 800 when it is possible to perform the same 5 operations per cycle) while nVidia has 240 (which can execute 240 instructions regardless). So having a superscalar architecture gives you more raw power to process shaders without using lots of transistors, but since not all shaders in games are equal, maximizing shader core usage on ATi hardware requires optimizations at the game and driver level and its performance is unpredictable, while the nVidia GPU, being a scalar design, has a fat shader which requires little or no optimization and whose performance is more consistent and predictable, even though its raw power is lower. And yes, CFAA is usable in many games, especially on the HD 4800 series (even though the RV670 has a more optimized datapath for performing all anti-aliasing operations on shaders, which explains why the HD 4800 takes a bigger performance hit than previous generations when CFAA is enabled, especially when Edge Detect is used, which makes the HD 4800 series perform almost identically to the HD 3800 series).
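A toy model of the superscalar-vs-scalar point above may help. The unit counts come from the post; the "packing" utilization figures are made-up assumptions to show why ATi's throughput is less predictable:

```python
# Toy throughput model: ATi's 5-wide (VLIW-style) units can issue up to 5 ops per clock
# per unit, but only when the compiler/driver can pack 5 independent ops together.
# nVidia's scalar units issue 1 op per clock per unit regardless of the shader code.
# Unit counts are from the post above; the packing figures are assumptions.

def ops_per_clock_superscalar(units, avg_slots_filled):
    """Effective ops/clock for a 5-wide unit at a given average packing."""
    return units * avg_slots_filled

def ops_per_clock_scalar(units):
    return units  # one op per unit per clock, largely independent of the shader code

rv770_units, gt200_units = 160, 240

for packing in (5.0, 3.5, 2.0):   # perfectly packed, decently packed, poorly packed (assumed)
    eff = ops_per_clock_superscalar(rv770_units, packing)
    print(f"RV770 at {packing}/5 slots filled: {eff:.0f} ops/clock")
print(f"GT200 scalar, any shader     : {ops_per_clock_scalar(gt200_units)} ops/clock")
```

The spread between the best and worst RV770 cases is exactly the "unpredictable unless the driver/game is optimized" behaviour described above, while the scalar figure never moves.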
 
What PSU is recommended for the 4870X2? Can anyone link me to one on Newegg that already has all the necessary connectors? I have around $200 to spend on a power supply.
 
What PSU is recommended for the 4870X2? Can anyone link me to one on Newegg that already has all the necessary connectors? I have around $200 to spend on a power supply.
You do know there's more than a video card in your PC? Anyway, list the rest of your specs so people can actually suggest a couple of units.
 
You do know there's more than a video card in your PC? Anyway, list the rest of your specs so people can actually suggest a couple of units.

Current Specs:
Monitor: Gateway XHD3000 30-inch
Mobo: DFI LANParty http://www.newegg.com/Product/Product.aspx?Item=N82E16813136044
Video Card: GeForce 8600 256MB (waiting for the 4870X2 to upgrade :( ), playing all games at 1400x900 res :-( on low/medium with no anti-aliasing or filtering
CPU: AMD Phenom 9850
Memory: 6GB DDR2-800 G.Skill
OS: Windows Vista Ultimate 64-bit
Hard Drives: Western Digital Raptor 10,000 RPM 150GB, one 500GB Seagate, and one 250GB Western Digital
PSU: not sure; it came with the case and it sucks. It barely has any connectors (I had to buy 4-pin-to-SATA adapters) and it's really outdated.
 
650-750W for a single X2 card. For CrossFire X2 I don't know much.

A Silencer if you can, a Silverstone if you can't.
 
Current Specs:
Monitor: Gateway XHD3000 30-inch
Mobo: DFI LANParty http://www.newegg.com/Product/Product.aspx?Item=N82E16813136044
Video Card: GeForce 8600 256MB (waiting for the 4870X2 to upgrade :( ), playing all games at 1400x900 res :-( on low/medium with no anti-aliasing or filtering
CPU: AMD Phenom 9850
Memory: 6GB DDR2-800 G.Skill
OS: Windows Vista Ultimate 64-bit
Hard Drives: Western Digital Raptor 10,000 RPM 150GB, one 500GB Seagate, and one 250GB Western Digital
PSU: not sure; it came with the case and it sucks. It barely has any connectors (I had to buy 4-pin-to-SATA adapters) and it's really outdated.
You have almost the same setup as I do except the video card. (Need to update my sig.)

Corsair 750TX FTW: good price and quality. There is a recent thread on this: http://www.hardforum.com/showthread.php?t=1328789.

Good rule of thumb: any PSU that comes with a case is junk.
 
What PSU is recommended for the 4870X2? Can anyone link me to one on Newegg that already has all the necessary connectors? I have around $200 to spend on a power supply.

Hate to sound like a spokesperson for Anand, but they have a nice buyer's guide for PSUs that was posted just about a week ago. You want to look for at least a "mid range" PSU, starting on this page. I would go ahead and jump for the 650W variety. You can make do with less (see the power consumption numbers in the [H] preview for proof), but it's always nice to have some juice left to spare for future upgrades. It might also be a good idea to take a look at the high-end page as well. The PC Power & Cooling 750 is pretty attractive given the price point and quality of the unit, and it's well within your budget.
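If you want to sanity-check the 650W suggestion yourself, the usual approach is just to add up rough per-component draws and leave some headroom. The wattages below are ballpark assumptions for illustration, not measurements:

```python
# Ballpark PSU sizing: sum estimated peak component draws, then add headroom.
# All wattages here are rough assumptions, not measured numbers.
estimated_draw_watts = {
    "4870 X2":            290,   # assumed board power
    "Phenom 9850":        125,   # rated TDP
    "motherboard + RAM":   60,
    "hard drives (x3)":    30,
    "fans / misc":         25,
}

total = sum(estimated_draw_watts.values())
headroom = 1.3  # ~30% spare capacity for efficiency sweet spot and future upgrades
print(f"Estimated peak draw: {total} W")
print(f"Suggested PSU size : {total * headroom:.0f} W")
```

With those guesses you land right in the 650-750W range suggested earlier in the thread.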
 
The shader core used on the RV770 is pretty much identical to the one used on the RV670. The main difference is that anti-aliasing is resolved on the ROPs instead of on the shaders as on the RV670, so it benefits more from the additional bandwidth than the RV670, which was more ROP bound. ATi was at a great disadvantage before because, in reality, the RV670 had only 64 stream processors (they would work as 320 when it was possible to execute 5 operations per cycle, since it is a superscalar architecture), against 128 from nVidia (which would perform 128 operations per cycle no matter what, because it is a scalar architecture). Now the difference is smaller: ATi has 160 (which work as 800 when it is possible to perform the same 5 operations per cycle) while nVidia has 240 (which can execute 240 instructions regardless). So having a superscalar architecture gives you more raw power to process shaders without using lots of transistors, but since not all shaders in games are equal, maximizing shader core usage on ATi hardware requires optimizations at the game and driver level and its performance is unpredictable, while the nVidia GPU, being a scalar design, has a fat shader which requires little or no optimization and whose performance is more consistent and predictable, even though its raw power is lower. And yes, CFAA is usable in many games, especially on the HD 4800 series (even though the RV670 has a more optimized datapath for performing all anti-aliasing operations on shaders, which explains why the HD 4800 takes a bigger performance hit than previous generations when CFAA is enabled, especially when Edge Detect is used, which makes the HD 4800 series perform almost identically to the HD 3800 series).
Thanks for supporting my statements (somewhat). I'm a Computational Physics major and I feel no need to tell some of these lower-life forms how it exactly goes down. Being taught the future of coding in my field, I'm glad someone as kind as you has broken this down for the anxious ATI fanboys who are frothing at the mouth about how great this card really is. One should also disregard that Bjorn3D review I read today; they completely ignore the GTX 280 on the charts and just say what it performs similarly to. Also, they give it 177.35 drivers. Real classy there, Bjorn.

Anyway, since you guys have now got the full scoop on how GDDR5 optimization works and that it's not just about memory bandwidth, I'm glad we have reached an impasse and you guys can realize that ATi will leave you after the launch and you'll have another 3870X2. Great on paper, great at launch and then... oh wait... driver support tails off and the card performs mediocre because of the amount of optimization it requires for "free AA", and oh boy, don't think driver support is going to be iron-clad with AMD's quarterly margins; who are we kidding? Sure, we need ATi to keep costs down, but who said we need a card that's completely dependent upon programming? I can't wait to see a full review where Kyle and Brent throw in more than just the modern games that this card is optimized for. We'll see how your beloved card performs in, let's say, any game from 1-2 years ago that's still intensive. I'm quite excited, because a lot of you will do what you do best: you'll run for the bushes and post on other threads while living down the mockery and grief you've put me through in this thread.

Enjoy your last words.
 
Thanks for supporting my statements (somewhat). I'm a Computational Physics major and I feel no need to tell some of these lower-life forms how it exactly goes down. Being taught the future of coding in my field, I'm glad someone as kind as you has broken this down for the anxious ATI fanboys who are frothing at the mouth about how great this card really is. One should also disregard that Bjorn3D review I read today; they completely ignore the GTX 280 on the charts and just say what it performs similarly to. Also, they give it 177.35 drivers. Real classy there, Bjorn.

Anyway, since you guys have now got the full scoop on how GDDR5 optimization works and that it's not just about memory bandwidth, I'm glad we have reached an impasse and you guys can realize that ATi will leave you after the launch and you'll have another 3870X2. Great on paper, great at launch and then... oh wait... driver support tails off and the card performs mediocre because of the amount of optimization it requires for "free AA", and oh boy, don't think driver support is going to be iron-clad with AMD's quarterly margins; who are we kidding? Sure, we need ATi to keep costs down, but who said we need a card that's completely dependent upon programming? I can't wait to see a full review where Kyle and Brent throw in more than just the modern games that this card is optimized for. We'll see how your beloved card performs in, let's say, any game from 1-2 years ago that's still intensive. I'm quite excited, because a lot of you will do what you do best: you'll run for the bushes and post on other threads while living down the mockery and grief you've put me through in this thread.

Enjoy your last words.

There are all kinds of responses one could give to that post, using eloquent words like narcissism, or maybe some stinging witticisms.

But I think "meh" covers it nicely.
 
[...] I'm a Computational Physics major and I feel no need to tell some of these lower-life forms how it exactly goes down. Being taught the future of coding in my field, [...]

Congratulations on not quite having an undergraduate degree in a field marginally related to computer architectures. Suffice to say, there are those on these boards who are much more in-the-know regarding gpu architecture than you.

AMD has managed to produce a very competitive chip at ~1/3 the area of nvidia's. You say nvidia could have used GDDR5; well, AMD could have tripled the area of their chip too (in effect tripling the number of SPs, ROPs, etc.).

These are all design decisions; it really doesn't make sense to discuss what could have been done differently as if it's a defense for a chip's shortcomings.
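To make the per-area point concrete, here's a rough sketch; both the die areas and the relative-performance number are placeholders I'm assuming purely for illustration, not measured values:

```python
# Rough performance-per-area comparison. The die areas and the relative
# performance figure below are assumed placeholders for illustration only.
rv770_area_mm2 = 260      # assumed
gt200_area_mm2 = 576      # assumed
relative_perf  = 0.85     # assumed: one RV770 at ~85% of one GT200 in some workload

perf_per_area_rv770 = relative_perf / rv770_area_mm2
perf_per_area_gt200 = 1.0 / gt200_area_mm2
ratio = perf_per_area_rv770 / perf_per_area_gt200
print(f"On these assumptions, RV770 delivers {ratio:.1f}x the performance per mm^2 of GT200")
```

Even with generous assumptions in GT200's favour, the smaller die wins comfortably on a per-area (and therefore per-cost) basis, which is the point being made above.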
 
ATi was at a great disadvantage before because, in reality, the RV670 had only 64 stream processors (they would work as 320 when it was possible to execute 5 operations per cycle, since it is a superscalar architecture), against 128 from nVidia (which would perform 128 operations per cycle no matter what, because it is a scalar architecture). Now the difference is smaller: ATi has 160 (which work as 800 when it is possible to perform the same 5 operations per cycle) while nVidia has 240 (which can execute 240 instructions regardless). So having a superscalar architecture gives you more raw power to process shaders without using lots of transistors, but since not all shaders in games are equal, maximizing shader core usage on ATi hardware requires optimizations at the game and driver level and its performance is unpredictable, while the nVidia GPU, being a scalar design, has a fat shader which requires little or no optimization and whose performance is more consistent and predictable, even though its raw power is lower.

Now I don't know how forthright Wavey is on this subject (I'd bet pretty truthful), but he claims it's very easy to utilize all those stream processors in games. However you are correct that it's a bit difficult for developers to use them with their GPGPU applications.

Enjoy your last words.

We get it, you bought a GTX 280. You don't have to justify your purchase.
 
I find that often it's the less learned ones who feel the need to rattle off credentials. Humility goes a long way :)
 
Now I don't know how forthright Wavey is on this subject (I'd bet pretty truthful), but he claims it's very easy to utilize all those stream processors in games. However you are correct that it's a bit difficult for developers to use them with their GPGPU applications.

My experience with gpgpu is fairly limited (I toyed around with Brook a while back), but it really isn't difficult to make proper use of SIMD processors (as per the AMD gpus). The whole point of gpgpu programming is to take tasks that need to be done for a lot of data, and then run them in parallel on the gpu. SIMD instructions work very well for this by their very nature.

Yes SISD ensures a higher average "load" on the processor, but for graphics and gpgpu applications this is not all that significant.
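For what it's worth, the data-parallel pattern being described looks roughly like this. It's a minimal NumPy sketch of the programming model only, not actual Brook or stream code:

```python
# The GPGPU idea in miniature: apply the same operation to a large array of data.
# On a GPU the elements would be processed by many SIMD lanes at once; NumPy here
# just illustrates the programming model, it is not actual Brook/stream code.
import numpy as np

def saxpy(a, x, y):
    """Classic data-parallel kernel: the same multiply-add applied to every element."""
    return a * x + y

x = np.random.rand(1_000_000).astype(np.float32)
y = np.random.rand(1_000_000).astype(np.float32)
result = saxpy(2.0, x, y)   # conceptually, one "kernel" run over a million elements
print(result[:5])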
 
Did you even read what I said? I never mentioned ONE thing about the 4870, which of course is 256-bit. I said that a 512-bit bus and GDDR5 would be a waste on current cards because they aren't strong enough to utilize that much bandwidth.

Um. I don't think you read what I said ... hehe. I don't think we're really arguing anything here. All I was pointing out was that an analogy by another poster didn't really make a whole lot of sense, but I kinda know what he was trying to say ... errrr... think i do anyways.

The design teams will know what to optimise for, and the products will be limited by those designs.
For example, if your design team is told they can have GDDR5, they will optimise for higher memory bandwidth.
NVidia will have optimised for GDDR3, and ATI's 4870 is optimised for GDDR5.
It's not sensible to suggest that NVidia's designs will benefit as much as ATI's from using GDDR5 when their hardware isn't designed to make best use of it.

That's what I was after. nVidia's current setup wouldn't benefit as much from GDDR5 as ATI's because the ATI cards are optimized to take advantage of it more.

GDDR5's additional bandwidth will only benefit bandwidth-limited situations like full-scene anti-aliasing; otherwise, the performance difference between the HD 4850 and HD 4870 should be closer and more proportional to their core speed differences (except when anti-aliasing is on). That explains why HD 4870 users can use 4x FSAA pretty much for free, and sometimes even 8x. CFAA uses shaders, so bandwidth doesn't matter there. I don't think the GTX 280 would benefit greatly from GDDR5 with its already wide memory bus; the GTX 280's bottlenecks don't come from anti-aliasing performance but from other internal bottlenecks that faster RAM will not solve. But it should be cheaper to use a 256-bit bus with GDDR5 than a 512-bit bus with GDDR3 in the long run. Right now, since GDDR5 is so new, the price difference probably isn't that much, but it's enough to be $100 cheaper than the rival's SKU.

And that is why it matters to me at all. I'm a huge fan of anti-aliasing in my games. I love the soft edges and the realism it adds, at least for me, hence the interest in the benefits of the 512-bit bus. I'm really excited to see what the 4870 X2 does with some finalized drivers.
 
Congratulations on not quite having an undergraduate degree in a field marginally related to computer architectures. Suffice to say, there are those on these boards who are much more in-the-know regarding gpu architecture than you.

AMD has managed to produce a very competitive chip at ~1/3 the area of nvidia's. You say nvidia could have used GDDR5; well, AMD could have tripled the area of their chip too (in effect tripling the number of SPs, ROPs, etc.).

These are all design decisions; it really doesn't make sense to discuss what could have been done differently as if it's a defense for a chip's shortcomings.

Who says I need to know the most about architecture? There are electrical engineers and computer engineers who know much more than I do. The beauty of my profession is that I'll be the one coding for the future while the computer science undergrads are scouring for jobs, because their math is only first-year calculus and a bunch of statistics classes. There are tons of people who know more about architecture, but only a handful who will be able to code for Larrabee and the other future GPUs using ray tracing and great technology that involves complex mathematics and physics. I know I will be there, and I know I'll be getting paid well. Mathematical anomalies such as FSAA with the use of Physics/Math/Computing Science are totally appealing to me.
 
Who says I need to know the most about architecture? There are electrical engineers and computer engineers who know much more than I do. The beauty of my profession is that I'll be the one coding for the future while the computer science undergrads are scouring for jobs, because their math is only first-year calculus and a bunch of statistics classes. There are tons of people who know more about architecture, but only a handful who will be able to code for Larrabee and the other future GPUs using ray tracing and great technology that involves complex mathematics and physics. I know I will be there, and I know I'll be getting paid well. Mathematical anomalies such as FSAA with the use of Physics/Math/Computing Science are totally appealing to me.

I really don't care; this isn't the place for spouting.
Some people are good to listen to; you haven't grasped that one yet.
Your virtual head is bigger than my 42" telly!
 
Well Nenu,

If you read the thread (I'm sure you have, you seem intelligent), then you would see the personal attacks on me for merely stating that memory bandwidth is one component of what GDDR5 offers and *how* the memory is more than compensation for the 256-bit bus.
But again, I was attacked. Nonetheless, I'm okay now. People realize their mistakes.
 
I find that often it's the less learned ones who feel the need to rattle off credentials. Humility goes a long way :)

I wish more people would understand this. Rattling off credentials shows nothing more than that you're insecure.
 
Who says I need to know the most about architecture? [...]
To my knowledge nobody is saying that.

If you're going to be spouting off your credentials whilst insulting people, you should be damn sure they mean something significant, which as it turns out, they do not.

CharlieHarper said:
I'm a Computational Physics major and I feel no need to tell some of these lower-life forms [...]
 
Well, you would do the same if your mental health was being questioned and personal attacks were flying because I am not on the wagon for this card. Those are uncalled for.
Rattling off credentials sometimes brings merit, which is what my original intention was. When I said "some of these lower life forms" I was referring to those fanatics who continually insult me, say my knowledge is limited, and call me an "nvidiot". Totally uncalled for. If I'm not allowed to speak maturely, mention my qualifications, and explain how I deduce my arguments without being called insecure or some other sort of real-life insult, then I don't know what you guys want me to do.
Shit if I do, shit if I don't, it seems.
 
The fact that you've been doing nothing but trashing ATI and the 4870X2 ever since you joined is a clear indication that you are an "nvidiot".

Are you surprised people aren't taking you seriously after the whole "GDDR5 is cheating" incident?
 
Thanks for supporting my statements (somewhat). I'm a Computational Physics major and I feel no need to tell some of these lower-life forms how it exactly goes down. Being taught the future of coding in my field, I'm glad someone as kind as you has broken this down for the anxious ATI fanboys who are frothing at the mouth about how great this card really is. One should also disregard that Bjorn3D review I read today; they completely ignore the GTX 280 on the charts and just say what it performs similarly to. Also, they give it 177.35 drivers. Real classy there, Bjorn.


Is there a converse to Poe's law? I thought this was sarcasm initially :rolleyes:.
 
No disagreeing there, Brent, but even GDDR5 on the current PCB would freaking smoke the 4870.
I just get offended when people talk to me as if the 4870 is better than the 280; it's not. It may not be inferior by the ratio of the price difference, but it still is. People also forget the GTX 280 is a SINGLE GPU CARD! It's not TWO, it's ONE GPU! It's still very powerful for ONE GPU. Yes, I understand that dollar/performance ratio is what people want, but here we're talking a single GPU versus multi-GPU.
I'll call you an Nvidiot for being offended about GPU products.
It's inferior because even with all that die size it doesn't manage to do that much better than the 4870. Case in point: two 4870s can be stuck on one board, take fewer transistors than a GTX 280, and trounce it. It's a single GPU, wow, well, good for Nvidia. ATi managed to make such a powerful chip in a small area that they can put 2 in one card or 4 in one system for around the same price and give you much better performance. And 2 gigs of GDDR5? Get out of here. Sure, it has 2 gigs physically, but that's only 1 gig in reality.
Nvidiot..
 
I'll call you an Nvidiot for being offended about GPU products.
It's inferior because even with all that die size it doesn't manage to do that much better than the 4870. Case in point: two 4870s can be stuck on one board, take fewer transistors than a GTX 280, and trounce it. It's a single GPU, wow, well, good for Nvidia. ATi managed to make such a powerful chip in a small area that they can put 2 in one card or 4 in one system for around the same price and give you much better performance. And 2 gigs of GDDR5? Get out of here. Sure, it has 2 gigs physically, but that's only 1 gig in reality.
Nvidiot..

Dear Fidel,

If you took the time to read the thread and you weren't being a DealDaddy, as I say (when people get so excited about a product that is allegedly cost-effective and superior to those that were better before release), you would understand that the reason for the superior performance is contingent upon coding for the hardware. That is why, architecturally, the nVidia series IS superior; however, the ATI architecture is efficient if programmers optimize performance for each game. In the industry there are always ways to compensate for a lesser piece of hardware: it's called programming to optimize it. As the person earlier in the thread stated, if the ATI programmers keep up by specifying the computations for their card in each new game, then all the power to them.

However, with launches like the 3870X2, which was equally impressive in benchmarks, you will see that 2-3 months after the launch, support just falls off. A full review by Kyle will show that ATI's cards cannot perform as dominantly as they do now if the games are older and thus not programmed for (to map the hardware's abilities). The GTX 280 offers shaders that aren't contingent upon programming for superior performance, so its performance relative to other cards in all non-CPU-bound games will be similar to the competition. The 4870X2 does not offer this. If we look at a game that might be 2 years old and not programmed for, what will we see then? I don't even know, but I'm quite excited to see, since this offers the real-world-gameplay aspect and also considers more than the things we see now. It will show how great the card truly is if the game doesn't have the driver or profile to allow this performance.
 
Is there a converse to Poe's law? I thought this was sarcasm initially :rolleyes:.

I don't know where you got the idea that I'm trying to justify my purchase. I bought 2 video cards for a 2-GPU configuration. A 9800GX2 SLI or a 4870X2 CrossFireX setup does not intrigue me. I am like many who prefer my games, when they do not have an SLI profile or multi-GPU optimization, to get brute single-GPU performance.
 
That is why, architecturally, the nVidia series IS superior; however, the ATI architecture is efficient if programmers optimize performance for each game.

What has led you to believe that the performance is contingent on driver optimizations for AMD but not nvidia? If you're referring strictly to SLI/Crossfire then you have a point; if you're talking about the actual gpu architecture then you are totally off base.

What metric are you using that leads you to believe that the nvidia architecture "IS superior"? Certainly by performance/area & performance / cost metrics it isn't. So in what respect is it so definitively superior?
 
I'm basing my logic on the notion of some <insert game here>: if we were to throw in 2 of each company's best without any of the "optimized" drivers that typically come out for popular games, the GTX 280s would outperform the 4870X2s.
However, if both have each company's "best" revision of the driver and the game is recent, then ATI will win. When I say architecturally superior, I am referring to the hardware's innate ability to play the game without optimizations, on base drivers. I am only saying the nVidia card will win due to the lack of a requirement to "program" for the shaders, something the 4xxx series benefits from if the designer programs for it. I guess it's a subjective thing.
You could believe that makes the ATi architecturally superior because it has the ability to be more efficient if the hardware is properly programmed for, whereas I believe that if it needs that sort of specialized coding there is a loss in the development of the cards, since the money saved on production is then spent on time to program.
That is why I liked the nVidia solution: it does not require specialized attention at the single-card level to perform in a respectable range. The ATI can go both ways, but if it does have that optimization it will spank the nVidia. Totally subjective.

It's the chicken and the egg for this kind of argument. One calls special optimization for an architecture genius, and I would agree if the company was on firm financial ground. This card could be great if long-lived, reliable drivers and support for the hardware come through. If ATI goes from respectable to tremendous with their drivers, then this card will be a beast. Until then I'm holding my breath, since there are only so many games one can optimize for; and if you're ATi you've got to hope that the little Timmy buying the card will not play games they haven't, and probably won't, optimize for.
 
However, if both have each company's "best" revision of the driver and the game is recent, then ATI will win. When I say architecturally superior, I am referring to the hardware's innate ability to play the game without optimizations, on base drivers. I am only saying the nVidia card will win due to the lack of a requirement to "program" for the shaders, something the 4xxx series benefits from if the designer programs for it.
Like I said earlier: if you're talking about SLI/Crossfire, then you have a point. Are you?

If you're just talking about the GPU architecture itself, I'm very curious what makes you think that nvidia's chip requires less optimization, because I don't believe this is the case.
 
Not an SLI argument.
The architecture of the ATI hardware requires optimizations in the drivers to perform as well as it does.
The GTX 2xx does not, because its architecture has the 240 shaders as exactly that: 240 shaders.

The shaders in the 8 series and the GTX 2xx series do not differ in the way they compute the mathematics; what nVidia did with the GT200 is that there are now 3 streaming multiprocessors per Texture Processing Cluster, and there are now 10 TPCs as opposed to 8.
This gives a higher shader count of 240 (10 × 3 × 8), which isn't really more efficient.
So per TPC we have:
24 shader processing units
8 texture fetch units
10 × 24 = 240 shaders
8 × 10 = 80 texture fetch units
plus a 512-bit memory bus
This all adds up to a huge and overpowered single-GPU solution.
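Just to sanity-check those counts (a quick sketch; the GT200 numbers are the arithmetic above, the RV770 numbers come from the earlier post in this thread):

```python
# GT200 unit counts as laid out above: 10 TPCs, 3 SMs per TPC, 8 SPs per SM,
# and 8 texture fetch units per TPC.
tpcs, sms_per_tpc, sps_per_sm, tex_per_tpc = 10, 3, 8, 8

gt200_shaders = tpcs * sms_per_tpc * sps_per_sm   # 240
gt200_texture = tpcs * tex_per_tpc                # 80
print(f"GT200: {gt200_shaders} shaders, {gt200_texture} texture fetch units")

# RV770 counterpoint from the earlier post: 160 superscalar units, up to 5 ops each.
rv770_units = 160
print(f"RV770: {rv770_units} units, up to {rv770_units * 5} ops/clock when fully packed")
```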

ATI, on the other hand, changed the way their streaming processors work. As the person earlier in the thread stated, if ATI codes the drivers for a game, it allows them to map their shaders and lets their architecture become more effective.

What I am saying is that in a world where we can't depend on driver updates, the ATI card will win when the drivers are kept up to date for the latest games, but will always fall short in older games because of the inability to create profiles for the card for most games. The difference in this argument between a single-GPU nVidia card and a single-GPU ATi 4xxx series card is that to get the performance the 4870 is capable of, you will NEED to have the profiles for each game, whereas an nVidia card would not.
 
I'm curious: if you think drivers don't matter for the 280, then why did you complain about Bjorn using the wrong ones in their review?
 
Logic of the day:
ATI is more efficient because they optimize their drivers for each game (?), therefore nVidia's architecture is superior because it can run fast without the need for optimizations.

Okaay... do you actually have any idea what you're talking about? The fact that the 48xx handles AA on its SPs has nothing to do with driver optimizations.
 