The Crysis driver, in my experience, really didn't pan out. I am not trying to tell a future buyer to go nVidia or ATI. Jimmy, if you read the post by evolucion again you'll see that ATI's card has an advantage in certain games. How can you be so sure that this advantage won't disappear? Sure, I was wrong in saying driver profiles for each game necessitated optimization, but the card has an advantage when ATI can use all 5 of its threads. The real question now is, do you think 64 true shaders will last? The way each cluster handles Crysis poorly explains the card's architectural disadvantage. Do you honestly believe future games like Far Cry 2 won't be the same? Sure, they'll be more efficient on the new engine, but what if shader intensive games are the future? Right now it's Crysis, but what about later? Right now this card is a preliminary undisputed king. It seems that giving it a true workload of shader heavy games is the warning sign of an unpredicted demise of this architecture, since I believe shader heavy games are the future. Looks like we gotta wait and see.
DaveBaumann
How did it come about that the ATi HD architecture suddenly got better and was able to reduce the performance difference vs nVidia, considering their previous performance differences? And now with newer driver sets, the HD 3870 performs very close to the 8800GT, and the 9600GT, which was supposed to be the direct rival of the HD 3870, is starting to show its limitations in newer games, something that was unnoticeable when it debuted and it was slightly faster overall; but now in more shader intensive games it is getting outperformed by the HD card, and the HD 4800 architecture is here to stay until ATi takes another approach.
Of course not.
Hi Dave, I see your name in Outlook, at least I think that's the right one! You mind if I shoot you an email?
By the time the HD 4800 hit we had gotten to a point of much greater development with the DX10 driver, and when you combine that with an architecture that's 2.5x more powerful in places, the benefits can really be seen!
This is really sad, people.
Can't we just be objective and let the in-game frame rates be the extent of things?
I agree. Are we 2 years old? "nuh uh, yeah huh, nuh uh, yeah huh" sheesh.
As for my take on this whole thing: they both have their benefits and their drawbacks. I'm not loyal to any company; the one that gets my money is the one that performs. Hence why I have an Intel processor and an nVidia GPU. That said, I will be buying a 4870 X2. It'll be a great improvement over my 8800 GT, especially at higher resolutions. There's also nothing wrong with the GTX 280, except that when it was released it was WAY overpriced for what you got, and no one can deny that. ATI really did a great job this time around. Thankfully, there's competition and we all get to benefit!
Anandtech used an Intel Core 2 Extreme QX9770 @ 3.20GHz.
I also doubt Anand runs benches with an overclocked CPU.
OK, bump time; once again [H]'s numbers go head to head with everyone else's, although apparently nobody cares about these ones. I mean, we're all expected to own nuclear PSUs today, right?
Who screwed up?
No one screwed up, the rigs are different.
This thing needs too much power :O. I'm planning to buy after the 12th of August, but I want to know: will it run with a 750W CM PSU?
Planning to OC after getting the 4870 X2... (if it's safe)
Different enough to warrant a swap of losers? I'm not interested in the figures themselves, simply in the ratios they provide. According to HardOCP the HD 4870 consumes as much as a GTX 280; according to Anand it's more than 25W less.
Yeah, you're fine.
What about my CPU, an E6400? Will it be a bottleneck with the 4870 X2?
Simple answer would be yes. To what extent would depend on the game and settings.
I don't understand one thing: why would my processor be the bottleneck? Usually when you are playing a game it all depends on the 3D card; the processor doesn't play much of a role when you game. Suppose I OC my processor to 3GHz, will it still be the bottleneck or will it do fine?
Edit: If it's a bottleneck, is it negligible or noticeable?
I'm sure someone can explain it to you a lot better than me, so I'll let them do it. Also, if the CPU didn't matter, then gamers would all buy a Celeron and be done with it.
Whether or not a game will be CPU-limited or GPU-limited depends on so many factors. I would say that unless you are playing at low resolutions (1280x1024 or less) you stand a very low chance of being CPU-limited. And if you're playing at that resolution your framerates are probably going to be quite high anyway, so you wouldn't even notice it. Don't worry.
It has been proven through many reviews that a C2D running at 2.13GHz and above shows little additional performance gain across games (a GeForce 8 series card was used), so considering that the X2 is much faster, I would say that the sweet spot for this card is 3.0GHz or more. A bottleneck may be present, but it would be hard to pinpoint (if it does happen). Even in CPU dependent games, the performance difference isn't worth the expense of buying a $1K CPU like the QX9750 or similar.
I agree, with the newer single GPU cards, 3GHz is really the minimum you need.
With a dual GPU / SLI, you may want a bit faster.
No. First off, as has been proven, an absolute number like 3GHz means nothing when the Athlon 64 is 30% more efficient clock-for-clock than a Pentium 4 and the Core 2s are 40% more efficient clock-for-clock than the Athlon 64s. There's also the obvious case of dual core vs single core (vs quad core).
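To put rough numbers on that, here's a back-of-the-envelope sketch using the clock-for-clock figures above (the percentages are just the ones quoted in this thread, and the model ignores core count, cache, memory and so on):

```python
# Crude "work per second" model: clock speed x per-clock efficiency.
# The multipliers are the rough figures quoted above, not measured values.
P4_IPC    = 1.00              # Pentium 4 as the baseline
A64_IPC   = P4_IPC * 1.30     # Athlon 64: ~30% more per clock than a P4
CORE2_IPC = A64_IPC * 1.40    # Core 2: ~40% more per clock than an Athlon 64

def effective_perf(clock_ghz, ipc_factor):
    """Relative throughput estimate: clock times per-clock efficiency."""
    return clock_ghz * ipc_factor

print(effective_perf(3.0, P4_IPC))      # 3.00 "units" for a 3GHz Pentium 4
print(effective_perf(3.0, CORE2_IPC))   # ~5.46 for a 3GHz Core 2
print(effective_perf(1.65, CORE2_IPC))  # ~3.00 - a ~1.65GHz Core 2 already matches the 3GHz P4
```

So "you need 3GHz" only means something once you say which architecture (and how many cores) you're talking about.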
OK, so let's step back to the basics. The GPU can only render so much, right? Let's say in our primitive 3D game that the graphics card can handle 4 shader effects and 10 polygons while maintaining 60FPS. So, on your screen you're looking at 3 shader effects and 8 polygons when, wham, you look left and there are two more shader effects (for a total of 5). Since your graphics card can't handle that load at 60FPS, the FPS drops to 48 (60 x 4/5). Moral is that your graphics card can only do a certain amount of work at 60FPS.
Same story with a CPU, only with different terms, but of course things get a little more complicated with the OS thrown into the mix. Let's say the CPU can handle 2 players' worth of networking (meaning on screen you're looking at two other people who are playing with you online) and 2 buildings collapsing (physics being done on the CPU) at the same time. So you're in a game with your friend (who's on screen ahead of you), and you're throwing grenades at buildings. At one moment you've got two buildings collapsing in front of you and your friend on screen, when suddenly the enemy team pops up and now 3 people are on screen. Your CPU can't handle the networking load, and as such the FPS drops by 25%.
There are millions if not billions of factors which make up a real game, but my example gets the point across. Some engines are CPU intensive: maybe they have lots of physics or a heavy networking load, but the graphics aren't too impressive, so the GPU just coasts through the load. Most engines are GPU intensive, where the CPU is always able to keep up, as it has no trouble handling everything the game engine can throw at it, but the visuals are excellent and as such the GPU gets bogged down. There is a third scenario, one that's often overlooked, and that's the effect of a slow hard drive and/or not enough RAM. Constantly having to move things out of RAM and into the page file because your RAM is full will slow you down considerably, and having an old 5400 RPM drive that just can't keep up with these big texture files is another issue some people have. Now of course the obvious answer is to pre-cache, but that can mean long load times and/or a CPU bottleneck!
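If it helps to see the arithmetic, here's a toy model of the same idea (the numbers are entirely made up, just picked so the GPU case lands on the 60FPS/48FPS figures from the example above): each frame the CPU and the GPU each need some amount of time, and whichever one takes longer sets your frame rate.

```python
# Toy bottleneck model: per-frame work divided by per-second throughput gives
# the time each unit needs; the slower of the two caps the frame rate.

def fps(cpu_work, gpu_work, cpu_speed, gpu_speed):
    cpu_time = cpu_work / cpu_speed   # seconds per frame for networking, physics, AI...
    gpu_time = gpu_work / gpu_speed   # seconds per frame for shaders, polygons...
    return 1.0 / max(cpu_time, gpu_time)

# GPU-bound: the 5th shader effect pushes the GPU past its 60FPS budget -> 48 FPS
print(fps(cpu_work=1.0, gpu_work=5.0, cpu_speed=100.0, gpu_speed=240.0))
# CPU-bound: extra players/physics load the CPU while the GPU coasts -> 50 FPS
print(fps(cpu_work=2.0, gpu_work=3.0, cpu_speed=100.0, gpu_speed=240.0))
```

Upgrading the GPU only helps in the first case; in the second the GPU is already waiting on the CPU, which is what a CPU bottleneck means (and the RAM/page-file/disk stalls above are just a third thing that can become the longest step in a frame).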
As you guys are saying, you need a minimum 3GHz processor, so why don't I just overclock my processor to 3GHz? Don't you think that will do the trick?
Do reviews go up at midnight???
Hexus said: AMD refers to the inter-GPU communication ability as CrossFireX SidePort, and it's a feature that, as the name suggests, offers high-bandwidth - bi-directional 5GB/s - transfers from GPU to GPU, should they be required. We were informed that the feature will not be enabled until a later date, though.