AMD ATI Radeon HD 4870 X2 Preview

Yeah, and with newer driver releases its performance gets closer to the 8800GT, which proves there's still untapped power left in this GPU (even though its weakest point is its lack of texture power). The funny thing is that we know enabling FSAA tanks the card, but at 8x and ultra-high resolutions it doesn't dip that much compared to its rivals; most of the damage to its performance has probably already been done.
 
That means that in the worst-case scenario (which pretty much never happens), the GPU will only execute 1 instruction per stream processor, i.e. 64 in total. But when conditions are ideal and the optimizations are done (especially at the game-engine level), 5 instructions can be packed and executed in parallel on each stream processor, making a total of 320 instructions per cycle.
Yes, in the worst-case scenario only a single operation is executed, but as you noted, this rarely happens, and it doesn't rely on game-specific optimizations.
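To put rough numbers on the worst/best case described above, here's a minimal C++ sketch, assuming an R600-class part with 64 five-wide blocks (the 64/320 figures come from the post above; the code is just the multiplication, not anything AMD ships):

```cpp
#include <iostream>

int main() {
    // Hypothetical figures for an R600-class GPU: 64 blocks of 5 stream processors.
    const int blocks = 64;          // five-wide VLIW blocks ("stream processors" above)
    const int slots_per_block = 5;  // scalar slots per VLIW bundle

    // Worst case: the compiler can only fill 1 of the 5 slots per block.
    std::cout << "worst case: " << blocks * 1 << " ops/clock\n";                // 64
    // Best case: all 5 slots filled with independent scalar operations.
    std::cout << "best case:  " << blocks * slots_per_block << " ops/clock\n";  // 320
}
```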

Taking proper advantage of a SIMD instruction set does not require application-specific optimizations. Shader programs are typically passed to the GPU driver in a higher-level language (for example GLSlang, HLSL, or Cg), and the driver then compiles the shader for the target architecture; this is not specific to any graphics card vendor; this is how it works for everyone. The compiler then assembles machine code for the target architecture with architecture-specific optimizations, just like any other (non-GPU) compiler.

Take, for example, a three-dimensional vector addition. On a SISD instruction set this will be compiled to 3 separate scalar adds; on a SIMD instruction set this will compile to a single vector add. There is nothing application-specific about this. Look at any other compiler which supports MMX/SSE optimizations. These are SIMD instructions, which according to you require application-specific optimizations. Clearly this is incorrect; the compiler is able to handle this.
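For what it's worth, here's a rough C++/SSE illustration of that scalar-vs-packed distinction (ordinary CPU intrinsics, not shader code, and not how any particular GPU driver compiler is implemented): the scalar version issues three separate adds, the SIMD version does the whole vector in one packed add, and a compiler can make that mapping without knowing anything about the application.

```cpp
#include <xmmintrin.h>  // SSE intrinsics
#include <cstdio>

// Scalar version: three separate adds, the SISD equivalent.
void add3_scalar(const float a[3], const float b[3], float out[3]) {
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
    out[2] = a[2] + b[2];
}

// SIMD version: one packed add handles all components at once.
// (SSE registers are 4 wide, so the vectors are padded to 4 floats.)
void add3_sse(const float a[4], const float b[4], float out[4]) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));
}

int main() {
    float a[4] = {1.0f, 2.0f, 3.0f, 0.0f};
    float b[4] = {4.0f, 5.0f, 6.0f, 0.0f};
    float r[4];
    add3_sse(a, b, r);
    std::printf("%g %g %g\n", r[0], r[1], r[2]);  // prints: 5 7 9
}
```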

I'm not attempting to downplay the advantages of coding directly for the instruction set you're running on; but that doesn't really happen at all these days. Shaders are typically coded in a higher level language, and the performance is quite good.

I don't doubt that both AMD and Nvidia optimize specifically for games, but there is nothing about either architecture which necessitates it.
 
Of course the GTX280 requires optimizations, but its shader processing is very similar to the rest of nVidia's offerings. nVidia's optimizations aren't to the extent where, upon the release of a new video game, drivers must be rolled out for adequate performance. I'd love to see a 4870X2 use Catalyst drivers from before its release to see how good it is. You won't see it. I can tell you that a GTX280 on previous drivers will not perform as badly as you think.

Well, you can run old drivers all you want, but nobody else is going to. Those people who purchase a 4870X2 will be running the latest drivers and be getting GTX280 SLI level performance in current games. And in older games in which the drivers aren't specifically optimized for the 4870X2? It will have more than enough raw "unoptimized" power to run them with full eyecandy anyhow since older games are usually less GPU intensive.

So you can just sit there and "what if" all day long and in the end you will accomplish...


absolutely nothing.
 
My friend, that may come sooner than you think.
ATi's driver support has never been a selling point.
Just think about what company this is and who their hardware partners are.
Maybe Visiontek has a decent warranty, but with the rest you have to jump through hoops.
Half of ATI's drivers were rushed; what makes you believe that after a strong launch they're not going to tail off? Why shouldn't they? They have limited resources and labor, and you're the one who purchased the card; it's not like you can return it.
They'll have made their money by then.
 
My friend, that may come sooner than you think.
ATi's driver support has never been a selling point.

Welcome to pre-2002. Since then, however, ATI's driver support has been better than Nvidia's. ATI puts out WHQL-certified drivers on a monthly basis (sometimes even quicker). Nvidia releases theirs, what? Bi-monthly at best, quarterly at worst, with a few betas thrown in?

Just think about what company this is and who their hardware partners are. Maybe Visiontek has a decent warranty, but with the rest you have to jump through hoops.

So buy from Visiontek then. What's the big deal? Does it matter if there is one company with a lifetime warranty or ten? As long as there's at least one then you're set. Visiontek's prices are comparable to the rest of their competitors. You can buy an Nvidia card with a poor warranty as well, you know.

Half of ATI's drivers were rushed; what makes you believe that after a strong launch they're not going to tail off? Why shouldn't they? They have limited resources and labor, and you're the one who purchased the card; it's not like you can return it. They'll have made their money by then.

You're implying that ATI's drivers are rushed out the door before they're actually ready. :rolleyes: And your proof of this is.....?

I have the feeling you're simply trolling because you're feeling depressed at having spent so much money on your GTX280 SLI setup now that a single ATI card is coming out that can equal it for less than half the price you paid.

You know what? GET OVER IT! I'm selling an XFX 512MB 8800GT for $99 shipped that I spent $250 on eight months ago. That has got to be the fastest hardware value depreciation I've ever experienced. But it's all part of the game. If you don't like it, then don't buy top-of-the-line equipment as soon as it comes out. Otherwise, be prepared to take your financial lumps with the rest of us.
 
My friend, that may come sooner than you think.
ATi's driver support has never been a selling point.
Just think about what company this is and who their hardware partners are.
Maybe Visiontek has a decent warranty, but with the rest you have to jump through hoops.
Half of ATI's drivers were rushed; what makes you believe that after a strong launch they're not going to tail off? Why shouldn't they? They have limited resources and labor, and you're the one who purchased the card; it's not like you can return it.
They'll have made their money by then.

Am I to infer that you're saying NVidia's drivers *are* a key selling point for them? The same company that was blamed for 30% of all Vista crashes? Logic does not compute.

"Why shouldn't they [driver improvements] tail off?" Because it's the developers job to keep developing, comprende? I don't think ATi wants to have a bunch of developers sitting around with their thumbs up their asses waiting for the hardware guys to make the 5800 series. They're still adding fixes to current drivers for the X1xxx series cards, read a release note or two. What in the world would make you think they would all of the sudden stop developing software for a product as important as their flagship 4870X2? Not to mention that driver improvements to the 4870X2 would also very likely cause improvements in all Crossfire solutions from the 2900 and up because of architecture similarities.

I think you should stop trying to find reasons to hate ATi for making such a badass piece of hardware and try to see the benefits that putting a product like this on the market has provided. Now you've resorted to insulting not only the competence of the forum members here, but also that of the professional driver developers and business managers at ATi who have been doing this for years, by saying that they have questionable ethics.
 
These arguments are stupid and inane. Both companies will continue to optimize their drivers for games so trying to argue which one would win if they both suddenly stopped is stupid and ignorant. Your hypothetical situations are contrived and there is no point in discussing this nonsense.

Ugh...
 
You still haven't addressed the fact that all the commentary about SIMD processors requiring application-specific optimizations is a load of nonsense.
 
Welcome to pre-2002. Since then, however, ATI's driver support has been better than Nvidia's. ATI puts out WHQL-certified drivers on a monthly basis (sometimes even quicker). Nvidia releases theirs, what? Bi-monthly at best, quarterly at worst, with a few betas thrown in?
8.7 hotfix is out. More performance improvements in COD4 (~ 7%) and other games.
 
Welcome to pre-2002. Since then, however, ATI's driver support has been better than Nvidia's. ATI puts out WHQL-certified drivers on a monthly basis (sometimes even quicker). Nvidia releases theirs, what? Bi-monthly at best, quarterly at worst, with a few betas thrown in?



So buy from Visiontek then. What's the big deal? Does it matter if there is one company with a lifetime warranty or ten? As long as there's at least one then you're set. Visiontek's prices are comparable to the rest of their competitors. You can buy an Nvidia card with a poor warranty as well, you know.



You're implying that ATI's drivers are rushed out the door before they're actually ready. :rolleyes: And your proof of this is.....?

I have the feeling you're simply trolling because you're feeling depressed at having spent so much money on your GTX280 SLI setup now that a single ATI card is coming out that can equal it for less than half the price you paid.

You know what? GET OVER IT! I'm selling an XFX 512MB 8800GT for $99 shipped that I spent $250 on eight months ago. That has got to be the fastest hardware value depreciation I've ever experienced. But it's all part of the game. If you don't like it, then don't buy top-of-the-line equipment as soon as it comes out. Otherwise, be prepared to take your financial lumps with the rest of us.

Not really. I've stated numerous times here that I can afford to buy whatever I want. I live a good life and I am thankful every day for it. The difference between your statements and mine about this video card is that you're assuming I'm bashing a card I desire and, conversely, that I'm trying to justify my purchase. That isn't the case.

Pre-2002? How long have you been building? ATI's support was solid back when they made their own cards, and they used to make a whack load of them. If the company you buy from isn't important and your belief is that it's "just" about a lifetime warranty, then you must take the time to look at the horror stories of how crappy the likes of PowerColor and HIS are. These guys avoid you when you need to replace stuff!

To address your statement about drivers, just because nVidia doesn't release drivers every week or two doesn't mean that they're not there. I love your touch with "WHQL" to specify that nVidia releases, at best, one or two WHQL drivers every quarter. So? Everyone I know who's been using 177.41 is happy. Those who have had the courage to try the higher-level betas available at nVnews.net are also happy. Your argument is fallacious. It's not the quantity of drivers you put out, it's the quality. Why not search the graphics forum here to see how many people are having trouble with their new 4-series cards, with either drivers, fan settings or god knows what else. I know I'll never have that problem if I buy a card from a solid company. It's a shame.

To address your statement, jimmy: evolucion8's stance is pretty much mine as well. You're not going to get any further argument out of me, because you seem to like a roundabout argument and to invite the dealdaddy trolls to "fight" with me, since I apparently care a lot about not having a video card I don't want.

What you also failed to address, Creig, is the poor continuing driver support. Look at the reviews for the 3870X2, and now look at its performance relative to the class it was put up against earlier this year. You will see it getting spanked by ONE 8800 GTX. ONE, not two, ONE. How did that happen? What happened to those early signs of dominance that it showed, just like its successor the 4870X2 does now? ATi tailed off on support. There is no way a card with the potential to do what it did against the GTX in the review benchmarks should tail off the way it did. This should raise massive questions for you as buyers. Would you really take that chance if ATi has already done it before? I don't understand why you guys are making baseless accusations about me being an ignorant, in-denial fanboy. I am a big fan of nVidia, yes, but that is predicated on the decisions they make about who's going to support their product. I would buy an ATi product if they had more North American based card manufacturers. I would buy an ATi card if it gave me better value for my dollar and their previous company history showed that this card was a CONSISTENT performer.

It's not about nVidia being great; it's more about nVidia being a better company which, at one time, competed with themselves to make a better card (GeForce 3/4, anyone?). I want a company that's like Intel, but in the GPU sector. Yes, we all know Intel got lazy and AMD had the edge, but the bottom line is Intel got its wake-up call and never again relinquished the edge it was losing in the enthusiast market. I wouldn't see the same argument from those who would be looking at a Phenom that cost a fraction more than a C2Q and maybe performed a tad better (in a hypothetical dream that AMD could muster such a thing). You would remember recent history and recall what a failure the launch of Barcelona was; regardless of whether a "new" (again, hypotheticals here) Phenom with good price/performance was released, you would be wary. Why? Because this company's previous attempt showed the chip was a complete bust. History plays a deadly role in the decisions we make about our future purchases. It also shows how much brand discrimination exists in today's world. I wasn't happy with nVidia's price fixing, but no one said the industry had to be fair, did they? Instead I see the villainizing of people who support nVidia video cards for their personal decisions. Instead of being treated maturely, I get attacked for being proud of a purchase.

Whether the FX 5xxx was a failure to you or not is a personal decision. I personally felt that the technology of the card was there but it didn't put out the performance it should have and it's something that nVidia learned from. They got caught sleeping and the 9700 made them wake up. What generation have they lost in this sector then?
The 6 series was a walk; the 6800 GT is considered one of the best price/performance cards of all time.
The 7 series came out in June and, again, provided considerably higher performance (albeit at a higher cost). The X1800 XT came out and barely beat a 256MB 7800 GTX. Then came the 7800 GTX 512 (great card, overpriced though) in November. Oh, then January rolls around and ATI, typically, soft-launches their X1900 XT. Soon came the 7900 GTX, which was the SAME architecture and STILL tied the X1900 XT (both won and lost some categories).
The 8 series, well I don't really need to say anything here.
The 9 series, great price to performance of the above series with a "die shrink". The 8800 GT is now the modern day 6800 GT.
The GTX 200 series: you believe it's not the best for the price. I do. I believe a single GPU that loses to a dual GPU even by 20-30 percent (this is with the assumption that ATI keeps up their driver support) is acceptable. There are things a single GPU offers that can't be quantified. I could have easily waited, but I do not want this card. I would never buy quad SLI or quad Crossfire. Both solutions are a very tiny portion of the market, and thus the support by nVidia for the 9800 GX2 in some games is blatantly lazy.

I have equal qualms with nVidia as I do with ATi. The argument was, and still is, about how ATI's architecture works. This somehow tailed off into an attack on me because of my prediction of this card starting off strong in games and then tailing off. You guys believe that this card, compared to the GTX 280, is less driver-dependent. I'm just going to let you find out the hard way. I'm not saying buy a GTX280. I'm saying wait. I never tried to say, "Oh well, you know, my card loses in all benchmarks but I believe it's better." My definition of better is the ability to have consistent performance. When, later in the year, the 4870X2's driver profiles are lacking for its shader mapping, then we can rehash my post and discuss. In fact, I'll rehash it myself. I just hope you will all answer the bell, be ready to swallow your pride, and let your haplessness be extinguished. I'm not arguing this topic anymore with you, jimmy. I have to let chronology play its hand before I can reintroduce this situation.
 
To address your statement, jimmy: evolucion8's stance is pretty much mine as well. You're not going to get any further argument out of me, because you seem to like a roundabout argument and to invite the dealdaddy trolls to "fight" with me, since I apparently care a lot about not having a video card I don't want.
[...]
The argument was, and still is, about how ATI's architecture works.
Just to be clear, I really couldn't care less about what video card people end up using. I'm currently running an AGP 6800 with no intention of upgrading.

As you said, the argument is about the architecture, and I take issue with your claims that SIMD processors require application-specific optimizations to achieve reasonable performance when there is no evidence to suggest that's the case. If you had said SLI/Crossfire requires application-specific optimizations then I would have agreed, but you explicitly said you were not talking about that.

I'm really not trying to invite people to come and "fight" you (and honestly I don't know how you could interpret my posts that way, since I'm just asking for some data to back up your claims). Really, I'm genuinely interested in seeing what evidence you have to support your claim, because everything I know about gpu and computer architecture suggests that you are incorrect. If you know what it is in the architecture that causes this optimization requirement, then surely you could whip together a simple shader which exploits this issue in about 10 minutes and post it.
 
Pre-2002? How long have you been building?

I've been building PCs since ISA slots were considered a novel idea. How about you? I mention 2002 since (IIRC) that is the year ATI started putting out monthly driver updates, although I might be wrong about that. It has been quite a while, at the very least.

If the company you buy from isn't important and your belief is that it's "just" about a lifetime warranty, then you must take the time to look at the horror stories of how crappy the likes of PowerColor and HIS are. These guys avoid you when you need to replace stuff!

I'm confused. What exactly are you trying to say here? Nearly all of the various ATI cards are based on the reference design, so the only real difference comes down to warranty and bundle. That has NOTHING to do with ATI, but with the individual vendor. I'm sure there are many "horror stories" regarding Nvidia manufacturers as well since not all Nvidia manufacturers offer lifetime warranties.

To address your statement about drivers, just because nVidia doesn't release drivers every week or two doesn't mean that they're not there. I love your touch with "WHQL" to specify that nVidia releases, at best, one or two WHQL drivers every quarter. So? Everyone I know who's been using 177.41 is happy. Those who have had the courage to try the higher-level betas available at nVnews.net are also happy. Your argument is fallacious. It's not the quantity of drivers you put out, it's the quality. Why not search the graphics forum here to see how many people are having trouble with their new 4-series cards, with either drivers, fan settings or god knows what else. I know I'll never have that problem if I buy a card from a solid company. It's a shame.

That's odd because that's not what you said earlier:

CharlieHarper said:
As it is, I believe that there are a few things that nVidia needs to iron out when it comes to this card. I'll be waiting for the next killer driver update. I feel the GTX280 was rushed, and the drivers we have now aren't fully optimized, in the sense that UDA is providing decent and feasible results, but not the best.

From this statement it sounds as if you consider Nvidia's current driver to be mediocre at best.

Many people now consider ATI drivers to be either comparable or superior to Nvidia's. As has been previously mentioned, 28.8% of all Vista crashes were related to Nvidia drivers while only 9.3% were attributed to ATI. Oddly enough, those figures almost make it look as if ATI has better drivers than Nvidia.

Both companies have driver issues with certain hardware and games. Maybe you should pop on over to the official Nvidia forums and see for yourself if you don't believe me.

What you also failed to address, Creig, is the poor continuing driver support. Look at the reviews for the 3870X2, and now look at its performance relative to the class it was put up against earlier this year. You will see it getting spanked by ONE 8800 GTX. ONE, not two, ONE. How did that happen? What happened to those early signs of dominance that it showed, just like its successor the 4870X2 does now? ATi tailed off on support. There is no way a card with the potential to do what it did against the GTX in the review benchmarks should tail off the way it did. This should raise massive questions for you as buyers. Would you really take that chance if ATi has already done it before?

Dual-GPU cards are a work in progress. As each generation is designed and built, they slowly eliminate the glaring issues of the previous generation and become more attractive to the end user. The consensus from reviewers so far is that the 4870X2 is a very worthy successor to the 3870X2 and should do quite nicely, both in performance and in sales. Can I say with certainty that the 4870X2 will do well? Of course not. But neither can you look at the 3870X2 and instantly declare the 4870X2 a failure.

SLI/Crossfire both require driver support also. Are you saying that SLI has been a complete failure? Because there is VERY little difference between a pair of cards in SLI/Crossfire and a dual-GPU card. And I don't foresee SLI/Crossfire support to be ending anytime soon. If anything, it is becoming increasingly popular.


I don't understand why you guys are making baseless accusations about me being an ignorant, in-denial fanboy.

Um.... Maybe comments like:

"I just offended when people talk to me as if the 4870 is better than the 280, it's not."
"IT's funny how no one argues the fairness of gddr5."
"I just get frustrated when it takes ATI 2GB of GDDR5 on a crap architecture to beat nvidia"
"We can all be the best by being lazy, but innovation is what will get us somewhere and achieve much more both in technology and life..."
"I'm glad we have reached an impasse and you guys can realize that ATi will leave you after the launch and you'll have another 3870x2"
"We'll see how your guys' beloved card performs"

I am a big fan of nVidia, yes, but that is predicated on the decisions they make about who's going to support their product. I would buy an ATi product if they had more North American based card manufacturers. I would buy an ATi card if it gave me better value for my dollar and their previous company history showed that this card was a CONSISTENT performer.

The only thing Nvidia was supporting when the GTX280/260 were released was their own bank account. Yes, that showed GREAT concern for their customers. Since that time, the 4870 and 4850 have been selling like crazy and nearly everybody is recommending them. Do you think that everybody here who is purchasing an ATI card is somehow deluded and has been brainwashed into purchasing these cards because we simply don't know any better?

We're not stupid. We can read reviews and we understand video card architecture better than the majority of people on the street. And yet, we're still buying ATI 4850s/4870s. That tells you something. If they were complete junk, they would still be sitting on shelves and everybody here would be saying "meh". The same goes for our (and reviewers') excitement regarding the 4870X2. There would hardly be this much fuss if the card didn't appear to be promising in both specs and pre-release previews.

I'm not going to bother with the rest of your rambling recitation.
 
Charlie please read this thread: http://forum.beyond3d.com/showthread.php?t=49295

You're really off here. The 5-way VLIW design that ATI has used from R600 onward does not require game-specific optimizations in order to extract a good amount of performance. It just doesn't. I challenge you to provide any evidence to the contrary.
 
Don't feed the troll guys. I don't see why everyone got so worked up. If anyone should be offended by Charlie's comments, it should be me. And you don't see me sweating it :)
 
Don't feed the troll guys. I don't see why everyone got so worked up. If anyone should be offended by Charlie's comments, it should be me. And you don't see me sweating it :)



Ya, I put CH to rest awhile ago. :p
 
Charlie please read this thread: http://forum.beyond3d.com/showthread.php?t=49295

You're really off here. The 5-way VLIW design that ATI has used from R600 onward does not require game-specific optimizations in order to extract a good amount of performance. It just doesn't. I challenge you to provide any evidence to the contrary.
Yeah I've been challenging him to do the same. He's arguing against the last 40 years of computer science and compiler technology.

Someone posted an interesting slide in the thread showing the average utilization in various games. In Bioshock (the most efficient) it was 4.2, which corresponds to 672 scalar operations per clock. In Lost Planet (the least efficient, and typically a poor performer on AMD cards) it was 3.3, which corresponds to 528 scalar operations per clock.

I would think that 3 would be a reasonable worst case scenario, since typically most vectors and pixel data in games are at least 3 components. And that assumes the compiler isn't able to compress like-operations between the different processors.
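A quick sanity check of those numbers, assuming the RV770's 160 five-wide VLIW units (800 SPs / 5); the 160 figure is my assumption, while the 4.2/3.3 averages are from the slide mentioned above:

```cpp
#include <cstdio>

int main() {
    const int vliw_units = 160;  // assumed RV770 (HD 4870): 800 SPs / 5 slots per unit

    // Average filled slots per VLIW bundle, from the slide referenced above.
    const double bioshock = 4.2;
    const double lost_planet = 3.3;

    std::printf("Bioshock:    %.0f scalar ops/clock\n", vliw_units * bioshock);    // 672
    std::printf("Lost Planet: %.0f scalar ops/clock\n", vliw_units * lost_planet); // 528
}
```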
 
It's a pity that a utilization level of 5 is almost unreachable. The funny thing is that Lost Planet is an Xbox 360 port, and the Xbox 360 uses an ATi GPU, so Xbox 360 ports usually run better on ATi hardware; but since Lost Planet is under the TWIMTBP program, I wouldn't be surprised if nVidia did some optimizations inside the game engine (or at the shader level) to make it perform better on their hardware.
 
If the ATi offerings aren't more driver-dependent than nVidia cards, then how come there are hotfixes for certain games that boost performance numbers by an unreal proportion? 7 percent is a lot.
 
Charlie please read this thread: http://forum.beyond3d.com/showthread.php?t=49295

You're really off here. The 5-way VLIW design that ATI has used from R600 onward does not require game-specific optimizations in order to extract a good amount of performance. It just doesn't. I challenge you to provide any evidence to the contrary.

That's not completely true; there are always ways to increase stream processor utilization (look at the last 5 driver releases). They didn't just solve some issues; they managed to increase the performance of some games, and it wasn't because of a driver bug. After all, Catalyst A.I. does optimizations like shader replacement to increase the performance of their cards. I'm pretty sure that if the compiler could extract 5 operations for every VLIW instruction, it would be as fast as or faster than the 8800GTX all the time, and it isn't.

http://www.anandtech.com/showdoc.aspx?i=2988&p=4
Stream Processors: AMD's R600

Things are a little different on R600. AMD tells us that there are 320 SPs, but these aren't directly comparable to G80's 128. First of all, most of the SPs are simpler and aren't capable of special function operations. For every block of five SPs, only one can handle either a special function operation or a regular floating point operation. The special function SP is also the only one able to handle integer multiply, while other SPs can perform simpler integer operations.

This isn't a huge deal because straight floating point MAD and MUL performance is by far the limiting factors in shader performance today. The big difference comes in the fact that AMD only executes one thread (vertex, primitive or pixel) across a group of five SPs.

What this means is that each of the five SPs in a block must run instructions from one thread. While AMD can run up to five scalar instructions from that thread in parallel, these instructions must be completely independent from one another. This can place a heavy burden on AMD's compiler to extract parallel operations from shader code. While AMD has gone to great lengths to make sure every block of five SPs is always busy, it's much harder to ensure that every SP within each block is always busy.

http://www.anandtech.com/showdoc.aspx?i=2988&p=5

Bringing it Back to the Hardware: AMD's R600

AMD implements their R600 shader core using four SIMD arrays. These SIMD arrays are issued 5-wide (6 with a branch) VLIW instructions. These VLIW instructions operate on 16 threads (vertices, primitives or pixels) at a time. In addition to all this, AMD interleaves two different VLIW instructions from different shaders in order to maximize pipeline utilization on the SIMD units. Our understanding is that this is in order to ensure that all the data from one VLIW instruction is available to a following dependent VLIW instruction in the same shader.

Based on this hardware, we can do a little math and see that R600 is capable of issuing up to four different VLIW instructions (up to 20 distinct shader operations), working on a total of 64 different threads. Each thread can have up to five different operations working on it as defined by the VLIW instruction running on the SIMD unit that is processing that specific thread.

For pixel processing, AMD assigns threads to SIMD units in 8x8 blocks (64 pixels) processed over multiple clocks. This is to enable a small branch granularity (each group of 64 pixels must follow the same code path), and it's large enough to exploit locality of reference in tightly packed pixels (in other words, pixels that are close together often need to load similar data/textures). There are apparently cases where branch granularity jumps to 128 pixels, but we don't have the data on when or why this happens yet.

If it seems like all this reads in a very complicated way, don't worry: it is complex. While AMD has gone to great lengths to build hardware that can efficiently handle parallel data, dependencies pose a problem to realizing peak performance. The compiler might not be able to extract five operations for every VLIW instruction. In the worst case scenario, we could effectively see only one SP per block operating with only four VLIW instructions being issued. This drops our potential operations per clock rate down from 320 at peak to only 64.

On the bright side, we will probably not see a shader program that causes R600 to run at its worst case performance. Because vertices and colors are still four components each, we will likely see utilization closer to peak in many common cases.

If I'm wrong, please enlighten me; knowledge is power.
 
That's not completely true; there are always ways to increase stream processor utilization (look at the last 5 driver releases). They didn't just solve some issues; they managed to increase the performance of some games, and it wasn't because of a driver bug. After all, Catalyst A.I. does optimizations like shader replacement to increase the performance of their cards. I'm pretty sure that if the compiler could extract 5 operations for every VLIW instruction, it would be as fast as or faster than the 8800GTX all the time, and it isn't.
[...]
If I'm wrong, please enlighten me; knowledge is power.
My understanding is that the r6xx series was bottlenecked by a ROP and TU deficiency, not anything to do with its SPs (in particular compared to the 8800 series).

I don't think you've posted anything incorrect. Note that nothing here says that game specific optimizations are required though. They correctly mention that the compiler does have to target the instruction set and architecture of the hardware, but again, these are standard compiler requirements and optimizations. This is not specific to AMD hardware either.

You'll never see 100% utilization on any processor (AMD, Nvidia, Intel, etc.). Cache misses and other pipeline stalls are going to ensure this. To pick out one aspect of the hardware which causes less-than-perfect utilization and suggest that this necessitates game-specific optimizations is ridiculous.

As I mentioned earlier, and as the articles confirm, you can probably expect a baseline utilization of 3/5 because most vectors are going to be at least 3 components, and then with other compiler optimizations this probably gets closer to 4 (as per the slides).
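If it helps, here's a toy C++ sketch of what "the compiler extracts parallel operations" means (a greedy in-order packer, nothing like AMD's actual compiler): independent component-wise ops pack into one 5-wide bundle, while a chain of dependent ops forces one op per bundle, which is the worst case the article describes.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// One scalar operation: writes `dst`, reads `srcs`.
struct Op {
    std::string dst;
    std::vector<std::string> srcs;
};

// Greedy in-order packer: fill a bundle (max 5 slots) until the next op
// depends on a result produced inside the current bundle, then start a new one.
std::vector<std::vector<Op>> pack(const std::vector<Op>& ops) {
    std::vector<std::vector<Op>> bundles;
    std::vector<Op> current;
    for (const Op& op : ops) {
        bool depends = false;
        for (const Op& placed : current)
            for (const std::string& s : op.srcs)
                if (s == placed.dst) depends = true;
        if (depends || current.size() == 5) {
            bundles.push_back(current);
            current.clear();
        }
        current.push_back(op);
    }
    if (!current.empty()) bundles.push_back(current);
    return bundles;
}

int main() {
    // Independent component-wise math (e.g. a vec4 add) packs into one bundle...
    std::vector<Op> vec_add = {
        {"r0.x", {"a.x", "b.x"}}, {"r0.y", {"a.y", "b.y"}},
        {"r0.z", {"a.z", "b.z"}}, {"r0.w", {"a.w", "b.w"}},
    };
    // ...while a dependent chain forces one op per bundle (the worst case).
    std::vector<Op> chain = {
        {"t0", {"a", "b"}}, {"t1", {"t0", "c"}}, {"t2", {"t1", "d"}},
    };
    std::printf("vec4 add: %zu ops in %zu bundle(s)\n", vec_add.size(), pack(vec_add).size());
    std::printf("chain:    %zu ops in %zu bundle(s)\n", chain.size(), pack(chain).size());
}
```

The point being that the packing falls out of ordinary dependency analysis on whatever shader the game ships, not out of per-game hand tuning.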

If the ATi offerings aren't more driver-dependent than nVidia cards, then how come there are hotfixes for certain games that boost performance numbers by an unreal proportion? 7 percent is a lot.

In the case of the X2/GX2/SLI/Crossfire cards I would assume this is because of the need to create SLI/Crossfire profiles, but then you specifically said you weren't talking about this.
 
Yeah, its ROPs and texture units were weak; its performance halves when you activate texture filtering. I wonder how the story would have gone if its ROPs and texture units were as strong as the RV770's.
 
Oh, the beauty of foot-in-mouth disease :) Thx evo.
Btw jimmy, I'm referring to someone above who said a hotfix increases COD4 performance by 7%.
 
I'm talking about CoD4, though.
And Jimmyb, you got owned man. Evolucion8 made you guys pack for the hills.
I just hope you guys are happy with that card you buy. And don't say we didn't tell you so.
 
I'm not sure where you're getting that idea. Everything he posted agrees with my claim that a SIMD architecture doesn't necessitate application-specific optimizations. Did you even read the articles he linked?

Like I said earlier, I'm running a very old AGP-era 6800 with no intention of upgrading soon. This isn't personal at all; I saw that you were saying that SIMD processors required application specific optimizations, so I felt obliged to point out that that particular claim is incorrect.
 
I'm talking about CoD4, though.
What about it? Do you know what changes were made? Are you also suggesting that we shouldn't improve drivers? The performance numbers were what they were when people bought the cards; surely continued improvements are what good support of a product is all about.

Additionally, think of a DX10 title that was released over the past 12-14 months and search for an NVIDIA beta driver of that name - you'll get a very high hit rate.
 
I'm talking about CoD4, though.
And Jimmyb, you got owned man. Evolucion8 made you guys pack for the hills.
I just hope you guys are happy with that card you buy. And don't say we didn't tell you so.
Us deal daddys were right Charlie.
Looks like you need to read the thread again. :p
 
Us deal daddys were right Charlie.
Looks like you need to read the thread again. :p

Don't ask for clarification/assistance on a mature board for something you are pretty much ignorant about then return here and repeatedly act like a child saying, "I told you so!", as if you knew everything all along. You still don't know why Charlie is wrong.
 
Don't ask for clarification/assistance on a mature board for something you are pretty much ignorant about then return here and repeatedly act like a child saying, "I told you so!", as if you knew everything all along. You still don't know why Charlie is wrong.

I'm not ignorant about it; I just admitted that, compared to the B3D forum members, I am less knowledgeable about it. Repeatedly said "I told you so"?
A. Charlie acted like a child in this thread; looks like you need to read it again.
B. I have not repeatedly said "I told you so"
C. I do know why Charlie is wrong now. It's called READING, something you have not done in this thread, it seems.
 
LOL. Arguing against Dave and others who actually work on this card makes me laugh... I'd imagine they know quite a bit more about how the card *gasp* actually works.
 
LOL. Arguing against Dave and others who actually work on this card makes me laugh... I'd imagine they know quite a bit more about how the card *gasp* actually works.

Which one is Dave? A colleague in our midst? :)
 
I'm not ignorant about it; I just admitted that, compared to the B3D forum members, I am less knowledgeable about it. Repeatedly said "I told you so"?
A. Charlie acted like a child in this thread; looks like you need to read it again.

Who cares if he did or not? You are acting like a child.
Thank you Fun Duck.
Rofl Charlie.



B. I have not repeatedly said "I told you so"

Us deal daddys were right Charlie.
Looks like you need to read the thread again. :p



C. I do know why Charlie is wrong now. It's called READING, something you have not done in this thread, it seems.

No, it's obvious it's still just some sort of electronic magic in your mind. Quick, tell some more people to read the thread once more.
 
So you got two quotes from me responding to your post.
Good work?

And then you have two of my posts to Charlie but none of Charlie's posts to me?
Come on, be fair. :p
Charlie has made unfounded claims which he writes as truth.
Charlie accused me of buying this card over an 8800GTX (which I would have if money hadn't been an issue.. sigh) because I like 3DMark06, even though I don't have it installed.
Charlie coined the term "deal daddy" and labeled many people with it.
Charlie is showing blatant signs of fanboyism, which most people here can see.

All in all, you're really not acting mature yourself.
So I'm not going to bother with you anymore. Goodbye.
 
The Crysis driver, in my experience, really didn't pan out. I am not trying to tell a future buyer to go nVidia or ATi. Jimmy, if you read the post by evolucion again you'll see that ATi's card has an advantage in certain games. How can you be so sure that this advantage won't disappear? Sure, I was wrong in saying that driver profiles for each game necessitated optimization, but the card has an advantage when ATi can use all 5 of its slots. The real question now is: do you think 64 true shaders will last? The way each cluster handles Crysis poorly shows the card's architectural disadvantage. Do you honestly believe future games like Far Cry 2 won't be the same? Sure, they'll be more efficient on the new engine, but what if shader-intensive games are the future? Right now it's Crysis, but what about later? Right now this card is a preliminary, undisputed king. It seems giving it a true workload of shader-heavy games is the warning sign of an unpredicted demise of this architecture, since I believe shader-heavy games are the future. Looks like we gotta wait and see.
 
Stop trying to tell us anything; you mix fact and fiction, and while you don't know what you are talking about, we do.
BS has no place here.
There are forums around to cater for those that haven't reached adulthood yet.
Perhaps you can impress someone there :)
 