AMD's ATI Radeon HD 5870 Video Card Review @ [H]

Does anyone have a good guess at how this card would run Crysis at 1680x1050? That might not be relevant in the future, but for now it's what I have. Maybe 20 FPS minimum, 40 average, 60 max at the same "playable" settings that were used for 1920x1200, or is that being overly generous? But really, how many of those eye-candy effects make such a noticeable difference graphically that you couldn't skimp on a few to increase performance?
 
Very true, but I wasn't just talking about the memory bandwidth; I was talking about the memory bus in general. I'm sure you can clock the memory high and get the same bandwidth, but wouldn't a wider memory bus be more efficient? We could test it: take a 512-bit bus with GDDR5, say the unreleased GT300, downclock it to match the bandwidth of the 5870's 256-bit bus, and see what the results are. I'm sure the card with the wider bus would be more efficient. It looks good for games right now, but my point was: say they move to 28nm with 3200 SPs and double everything, don't you think a 256-bit memory bus would limit it? I'm not trying to argue here, as I'll probably get this card or the 5850, which is priced really nicely at $259, but I wouldn't want to see a card with a 256-bit bus and double the shader count, ROPs, and texture units.

I'm tellin ya, it really doesn't matter that much. In fact, the GTX 285 we tested has a 512-bit memory bus and higher memory bandwidth than the HD 5870, yet the HD 5870 was faster in every game we tested, with and without high levels of AA at 2560.
 
lol, it probably doesn't matter, but the question I was really getting at is the future of memory bandwidth compared to the GPU itself. There has to be a point where GDDR5 becomes outdated. I'm sure they'll have GDDR6 or 7, but by that time it will probably be easier and cheaper for them to use a 512-bit bus and stick with GDDR5.
 
Does anyone have a good guess at how this card would run Crysis at 1680x1050? That might not be relevant in the future, but for now it's what I have. Maybe 20 FPS minimum, 40 average, 60 max at the same "playable" settings that were used for 1920x1200, or is that being overly generous? But really, how many of those eye-candy effects make such a noticeable difference graphically that you couldn't skimp on a few to increase performance?

Actually, I forgot to mention this in the evaluation: I did test Warhead at 1680x1050 on the HD 5870. I was able to MAX THE GAME OUT at 1680x1050. That is, I was able to enable 8X AA/16X AF and all Enthusiast settings. So the game can be maxed out; you just have to do it at 1680x1050. Framerates hovered from the upper 20s to the lower 30s at these settings.
 
lol, it probably doesn't matter, but the question I was really getting at is the future of memory bandwidth compared to the GPU itself. There has to be a point where GDDR5 becomes outdated. I'm sure they'll have GDDR6 or 7, but by that time it will probably be easier and cheaper for them to use a 512-bit bus and stick with GDDR5.

Increasing bus width adds PCB cost. A 256-bit memory bus makes for a less complex board that is cheaper in both materials and manufacturing.
 
I wish we had a DX11 game that used the hardware well, so we could see how it performs.

The problem is that so many people still use XP or Vista, so no true DX11 games are out; companies want to appeal to the largest base.


I would love it if some standard were established that ATI/NVIDIA would use when they release new cards to test the new features: some unbiased entity that would truly test each card's DX abilities, from DX9 to DX11.

It's something 3DMark, say, could do to get their name back in the game. Even though it's a synthetic benchmark, it could be used to evenly compare NVIDIA to ATI.
 
I'm fairly happy for AMD; the economy needs good products people want to buy. Anyway, maybe I'll get one once I'm no longer happy with my GTX 280.
 
The problem is that so many people still use XP or Vista, so no true DX11 games are out; companies want to appeal to the largest base.

Vista is going to get DX11, but you are right, XP is stuck with DX9. I personally think DX10 would have received wider adoption if it had been supported on XP.
 
Looks like a great card. I'm going to wait and see what nVidia does first this time around. I'll probably want DX 11 before I build a new rig when the i9s come out.

Right now, however, PhysX is at least implemented in a fair number of good games, like Arkham Asylum. I've seen some video comparisons showing what PhysX added to that game, and it did add some noticeable, if not spectacular, detail; you could at least easily tell the difference. Batman's cape is rendered as cloth with PhysX, and it's an improvement, from what I saw in a video at least.

Looks like a great card, so now it's nVidia's turn to answer.
 
lol, it probably doesn't matter, but the question I was really getting at is the future of memory bandwidth compared to the GPU itself. There has to be a point where GDDR5 becomes outdated. I'm sure they'll have GDDR6 or 7, but by that time it will probably be easier and cheaper for them to use a 512-bit bus and stick with GDDR5.
The width of the memory bus in and of itself doesn't matter. What matters is the amount of data that can be written to or read from the memory in a given amount of time, which is the total bandwidth. Whether a particular amount of bandwidth is achieved with a 512-bit bus and 2000MHz RAM or a 256-bit bus and 4000MHz RAM is irrelevant; both will result in the same overall performance.
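That tradeoff is easy to sanity-check against the two cards discussed here. The sketch below assumes the commonly quoted effective memory rates (4800 MHz GDDR5 on the HD 5870, roughly 2484 MHz GDDR3 on the GTX 285); it just multiplies bus width by transfer rate:

```python
def bandwidth_gb_s(bus_width_bits, effective_mhz):
    """Peak memory bandwidth: bus width (in bytes) times effective transfer rate."""
    return bus_width_bits / 8 * effective_mhz / 1000  # GB/s

# HD 5870: narrow 256-bit bus, fast GDDR5
hd5870 = bandwidth_gb_s(256, 4800)   # 153.6 GB/s
# GTX 285: wide 512-bit bus, slower GDDR3
gtx285 = bandwidth_gb_s(512, 2484)   # ~159.0 GB/s
print(hd5870, gtx285)
```

Despite the GTX 285's bus being twice as wide, the two cards land within a few percent of each other in peak bandwidth, which is why the comparison comes down to the GPUs rather than the buses.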
Batman's cape is rendered as cloth with PhysX, and it's an improvement, from what I saw in a video at least.
The cape animations are the same with PhysX enabled or disabled. If it does use PhysX, it doesn't require any hardware acceleration.
 
ZERO, next time I have a bandwidth dilemma I'm gonna ask you. LOL
 
Excellent review, though I wish some other games had been tested off the bat.

However, just to clarify...

- 5870 (single PCB) = GTX 295 (dual PCB), basically?

Two other questions I have:

- Does the 256 Bus width hurt it in any way?
- Does it handle true MSAA as well?

I prefer true MSAA over CSAA etc. ...was true MSAA used in the Crysis: Warhead test? It's not mentioned in the review, unless I missed it somewhere.

I've never seen the GTX 285 really "trounce" the GTX 280, but the 5870 runs Crysis: Warhead on Enthusiast settings and gets the same fps as my GTX 280 on Gamer settings.

So essentially, the 5870 trounces the GTX 280?

If so, would that be with true MSAA?

Unless Nvidia comes out with something astounding that's not a "sandwich" card, and depending on the responses here, I'm seriously considering moving from the GTX 280 to the 5870.
 
Thanks again, AMD, for not trying to rob us with extravagant MSRPs. If only you could pull another Athlon out of your hat.
 
Thanks again, AMD, for not trying to rob us with extravagant MSRPs. If only you could pull another Athlon out of your hat.

I agree. I haven't had an AMD processor since my X2 4200+ during the Pentium D days. I hope they can manage to pull something off, hopefully taking a route similar to what ATI did to get back into the game: with the 3000 series they made affordable cards that fit a larger market and were great for the price. Right now, Intel seems to have won on all levels.
 
Anyone see the 5850 for sale anywhere, or even the 5870 in-stock :eek:?


They won't go on sale in stores till the end of the week; right now it's just OEM system builds, but you may see them show up online before then.

Thanks again, AMD, for not trying to rob us with extravagant MSRPs. If only you could pull another Athlon out of your hat.

This is their Athlon, no question about it.


Very true, but I wasn't just talking about the memory bandwidth; I was talking about the memory bus in general. I'm sure you can clock the memory high and get the same bandwidth, but wouldn't a wider memory bus be more efficient? We could test it: take a 512-bit bus with GDDR5, say the unreleased GT300, downclock it to match the bandwidth of the 5870's 256-bit bus, and see what the results are. I'm sure the card with the wider bus would be more efficient. It looks good for games right now, but my point was: say they move to 28nm with 3200 SPs and double everything, don't you think a 256-bit memory bus would limit it? I'm not trying to argue here, as I'll probably get this card or the 5850, which is priced really nicely at $259, but I wouldn't want to see a card with a 256-bit bus and double the shader count, ROPs, and texture units.

Yes, 512-bit GDDR5 would double the bandwidth of 256-bit, but it would also double if not triple the price of the card, so overall there's absolutely no reason for it at this point except to say "look at me, I have a bigger e-pen0r than you." I'm really having my doubts that the GT300 will even have a 512-bit GDDR5 bus; if it does, you might as well get ready to take out a loan just to buy the card. And don't forget, GDDR5 as a whole hasn't even reached its max potential: they have already said that GDDR5 can be taken up to 7 GHz, and we are nowhere near that yet. Give it another year or so and we may see some insane bandwidth numbers.
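To put that 7 GHz ceiling in perspective, here's a quick back-of-the-envelope sketch (the 7 Gbps effective rate is the projection mentioned above, not a shipping spec, and the 512-bit figure is purely hypothetical):

```python
def peak_bandwidth_gb_s(bus_bits, effective_mbps_per_pin):
    # bus width in bytes * effective per-pin data rate = peak bandwidth in GB/s
    return bus_bits / 8 * effective_mbps_per_pin / 1000

print(peak_bandwidth_gb_s(256, 4800))  # HD 5870 as shipped: 153.6 GB/s
print(peak_bandwidth_gb_s(256, 7000))  # same 256-bit bus at 7 Gbps: 224.0 GB/s
print(peak_bandwidth_gb_s(512, 4800))  # hypothetical 512-bit GDDR5 card: 307.2 GB/s
```

In other words, GDDR5 clock headroom alone would buy roughly 45% more bandwidth on the existing 256-bit board, without the PCB cost of a wider bus.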
 
The cape animations are the same with PhysX enabled or disabled. If it does use PhysX, it doesn't require any hardware acceleration.

I stand corrected. As I said, I've not played it with PhysX off, as the game runs very well on my sig rig maxed out at 4xAA; it averages 55 FPS at 1920x1200, which is more than fine for this game.

Nonetheless, my point was that for the foreseeable future PhysX will be better supported than DX11. Of course, that can all change quickly.
 
What do you all think about the AF IQ and addition of Supersampling AA? Like/Dislike/Don't Care? Give me your opinions.

Awesome article, really enjoyed the read.

Supersampling: I have waited for this feature since I first saw a Voodoo 5 review back in the day. I have to disagree with you guys; I think the SSAA shots are spectacular. The complete lack of aliasing is more immersive, and the texture degradation you mention actually looks a little better to my eyes, with fewer errant pixels jumping out where they shouldn't.

AF is sort of meh; I've been pretty satisfied with the AF since the 9700 series, but at least now it's perfect, so they don't need to devote resources to improving it :)

Excellent review, though I wish some other games had been tested off the bat.

However, just to clarify...

- 5870 (single PCB) = GTX 295 (dual PCB), basically?

Two other questions I have:

- Does the 256 Bus width hurt it in any way?
- Does it handle true MSAA as well?

I prefer true MSAA over CSAA etc. ...was true MSAA used in the Crysis: Warhead test? It's not mentioned in the review, unless I missed it somewhere.

So essentially, the 5870 trounces the GTX 280?

If so, would that be with true MSAA?

256 bus: Not in the least, GDDR5 memory on the 5870 enables double the bandwidth GDDR3 is capable of.

MSAA: ATI's AA is MSAA, except for the new supersampling modes. ATI has CFAA, which adds bonus shader enhancements on top of the base MSAA. CFAA = Custom Filter Anti-Aliasing, not to be confused with Nvidia's CSAA. Edge detect is what the review refers to when it speaks of the 24x and 12x modes: it uses shaders to increase AA only on polygon edges, effectively tripling the MSAA level (4x becomes 12x, 8x becomes 24x). ATI has other CFAA modes that can come in handy in certain situations, but they usually result in unpleasant texture blurring, so most performance reviews don't use them.

Trouncing: Yes, and yes.
 
Another great review. Eagerly awaiting more reviews with other games, as promised.
 
I haven't finished the article yet, but from what I've seen so far (the specs), all I can say is:
bye-bye 4850, 5850 here I come...
 
I won't be jumping on the 5xxx wagon right now. First I want to see how it performs on the retail version of Win7, then I want to see how the GT300 does, and for the next few weeks my pair of 280s is more than enough :)

I hope the GT300 launch will be overshadowed by a 5870X2 or 5890, or at least a big price cut :) Besides, I want the 2GB version, as I believe insane AA levels will benefit greatly from the extra RAM on this card, and I want custom cooling, quieter and more efficient.

Now Sapphire, bring out your 5870 2GB Vapor-X :D
 
Sorry to change the subject a bit but does anyone know if the HD4800 series will be getting the Supersampling AA modes as well?

edit: nm found the answer on page one sorry.
 
256 bus: Not in the least, GDDR5 memory on the 5870 enables double the bandwidth GDDR3 is capable of.

MSAA: ATI's AA is MSAA, except for the new supersampling modes. ATI has CFAA, which adds bonus shader enhancements on top of the base MSAA. CFAA = Custom Filter Anti-Aliasing, not to be confused with Nvidia's CSAA. Edge detect is what the review refers to when it speaks of the 24x and 12x modes: it uses shaders to increase AA only on polygon edges, effectively tripling the MSAA level (4x becomes 12x, 8x becomes 24x). ATI has other CFAA modes that can come in handy in certain situations, but they usually result in unpleasant texture blurring, so most performance reviews don't use them.

Trouncing: Yes, and yes.

Wow, thanks for the in-depth response.

Are there any bad "repercussions" when using Edge Detect, like blurring other textures?

So, in terms of both visuals and performance, the GTX 280 basically gets crushed by this beast then? Worth the move to ATi at this point?

It almost sounds like if you ran Crysis in Gamer mode with the 5870, as opposed to Enthusiast (just to make a comparison) then you'd probably be able to hit around 50 fps, since the 5870 runs at around 30 fps in Enthusiast. If so, it'd be the first GPU to be able to pull that off.

I always liked ATi's color a bit more than Nvidia's; it always seemed to have a little more depth. So with all this on the table, man... I'm close to pre-ordering one right now.

Also, I'm wondering what to go with if I do switch... I've always had good luck with Sapphire, but there's also ASUS (I won't touch HIS)... probably Sapphire.

I don't know, what do you think?

Currently running a GTX 280, E8600, 2GB RAM @ 1920x1200 res and would love to push things to the max visually while gaining performance. Would this really do it in both realms?
 
Hey Brent and Kyle, I've been thinking about the blurry-texture problem with SSAA; perhaps there would be a way to force a negative LOD (Level of Detail) bias on the textures to balance it out? Nvidia's CP allows us to choose "Clamp" instead of "Allow", but Clamp does not allow the LOD to go below 0 (anisotropic filtering takes care of the rest). RivaTuner allows manual tweaking of the LOD when Clamp is chosen, but AF does not allow for a negative LOD, right? Anyway, ATI could implement a -2 LOD when 2xSSAA is used, -4 LOD for 4xSSAA, and -6 or the maximum negative LOD for 8xSSAA, or something like that.

2xSSAA effectively doubles AF, while 4xSSAA gives 4x AF by itself if no AF is being used (or quadruples it anyway). Say we choose 16x AF with 2xSSAA; we're actually getting 32x AF, as shown in the D3D AF Tester program. But if the textures are being blurred out by the supersampling, then the only logical solution is a much sharper LOD to try to alleviate the problem, no?
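For what it's worth, the offsets suggested above are stronger than the textbook value: for an N-sample ordered-grid supersample, the LOD bias that keeps texture sharpness neutral is usually given as -0.5 * log2(N), i.e. -0.5 for 2x, -1.0 for 4x, -1.5 for 8x. A quick sketch of that rule of thumb:

```python
import math

def ssaa_lod_bias(samples):
    """Conventional mip LOD bias for N-sample supersampling: -0.5 * log2(N).
    (Assumes the samples are spread evenly across both screen axes.)"""
    return -0.5 * math.log2(samples)

for n in (2, 4, 8):
    print(f"{n}xSSAA -> LOD bias {ssaa_lod_bias(n):+.1f}")
```

Anything much more negative than that (like -4 for 4xSSAA) tends to reintroduce the texture shimmering that supersampling was supposed to remove.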

I'm looking forward to SSAA because I just cannot stand the shader aliasing in Unreal 3 engine games (Mass Effect, R6 Vegas, BioShock, etc.). It's as if FSAA is actually "HSAA", or half-screen AA, because only half of the things on the screen are being AA'ed. It was horrible with Lost Planet too!
 
Good review, but I feel a bit underwhelmed. I was expecting near double the performance of previous-gen cards, but all I see is an Nvidia GTX 295 slapped with DX11.

I was hoping games like Crysis and ArmA 2 would be playable at an average framerate of around 50 FPS at above-1080p resolutions, but it seems this card can't quite do it yet.

What will the performance of a 5850 be? The numbers shown in your results are what I would expect from a 5850, not a 5870.

A 5850 should be as fast as the fastest GPU of the previous gen, if not slightly faster, while the 5870 should be a no-brainer.
 
4850->5850
4870->5870

Did the GTX 280 double the performance of the 9800GX2? When the G80 came out there wasn't really any dual-GPU card around, as far as I know; that's why it felt like the best thing ever. Once again, this card performs nearly double the previous gen. Stop expecting it to beat a dual-chip card by a 50% margin; that's just not going to happen. Just imagine there were no HD 4870X2 or GTX 295: how would you be thinking then? I never expect a card to outperform a dual-GPU card from the last gen by a big margin; that's just silly.
 
Great review. That said, had to share this. :)

[attached screenshot: fullscreencapture917200u.jpg]
 
4850->5850
4870->5870

Did the GTX 280 double the performance of the 9800GX2? When the G80 came out there wasn't really any dual-GPU card around, as far as I know; that's why it felt like the best thing ever. Once again, this card performs nearly double the previous gen. Stop expecting it to beat a dual-chip card by a 50% margin; that's just not going to happen. Just imagine there were no HD 4870X2 or GTX 295: how would you be thinking then? I never expect a card to outperform a dual-GPU card from the last gen by a big margin; that's just silly.


Before the 9800GX2 there was the 7950GX2 (I might have the name wrong), which was the first dual-PCB/GPU card from Nvidia, and it was a total flop.


Good review, but I feel a bit underwhelmed. I was expecting near double the performance of previous-gen cards, but all I see is an Nvidia GTX 295 slapped with DX11.

I was hoping games like Crysis and ArmA 2 would be playable at an average framerate of around 50 FPS at above-1080p resolutions, but it seems this card can't quite do it yet.

What will the performance of a 5850 be? The numbers shown in your results are what I would expect from a 5850, not a 5870.

A 5850 should be as fast as the fastest GPU of the previous gen, if not slightly faster, while the 5870 should be a no-brainer.


Want to explain to me where you see a GTX 295 slapped with DX11? Because all I see is a single-PCB, single-GPU card outperforming a dual-PCB/GPU card by a long shot.

And why do you expect so much from a card in two poorly coded games? I mean, hell, explain to me where you see a single-GPU card running ArmA II with all the settings maxed out except view distance AND running normal AA. There is none except the 5870. Then, on top of that, explain to me where you see a single-GPU card running Crysis Warhead with all the settings on Enthusiast while running AA at 1920x1200. None, that's right, NONE. Use some friggin common sense and look at the numbers. If you want an apples-to-apples comparison between the 5870 and the GTX 285, which are now officially the two fastest single-GPU cards on the market barring the released 5850 numbers, you'd easily see that the 5870 is leaps and bounds better.

There hasn't been a new generation of graphics cards that has EVER doubled the performance of the generation before it; not even the GTX series could do that against the 9800GTX or 9800GTX+. Look at the numbers and the in-game settings and maybe you'd realize just how much better the 5870 is than the GTX 285 and GTX 295.


Argh, I wish there were an easy button to slap some common sense into people.


Oh, and nice picture, serpico.
 
Why is it silly? A dual GPU doesn't mean double the performance; a 4870X2 is not twice as powerful as a 4870, maybe only 40-50% more powerful IF the game supports it, so in theory this 5870 is only 50-60% more powerful than a single 4870. Sorry, but I was expecting double the "performance" of a 4870.

Look at the 8800GTX; that was nearly double the performance of a 7800GTX. The 6800 series also had a GIGANTIC gain over the 5800 cards.

I think I'll wait for a card that doubles the performance of my GTX 260 for £200 before I dive in.

Next up is Nvidia. Maybe they can grant me my wish, or else I may just wait for the refreshes, such as a 5890 from ATI.

To this day, Crysis still can't run at 50 FPS, let alone a 60 FPS "average", at 1080p or higher. :(
 
Before the 9800GX2 there was the 7950GX2 (I might have the name wrong), which was the first dual-PCB/GPU card from Nvidia, and it was a total flop.





Want to explain to me where you see a GTX 295 slapped with DX11? Because all I see is a single-PCB, single-GPU card outperforming a dual-PCB/GPU card by a long shot.

And why do you expect so much from a card in two poorly coded games? I mean, hell, explain to me where you see a single-GPU card running ArmA II with all the settings maxed out except view distance AND running normal AA. There is none except the 5870. Then, on top of that, explain to me where you see a single-GPU card running Crysis Warhead with all the settings on Enthusiast while running AA at 1920x1200. None, that's right, NONE. Use some friggin common sense and look at the numbers. If you want an apples-to-apples comparison between the 5870 and the GTX 285, which are now officially the two fastest single-GPU cards on the market barring the released 5850 numbers, you'd easily see that the 5870 is leaps and bounds better.

There hasn't been a new generation of graphics cards that has EVER doubled the performance of the generation before it; not even the GTX series could do that against the 9800GTX or 9800GTX+. Look at the numbers and the in-game settings and maybe you'd realize just how much better the 5870 is than the GTX 285 and GTX 295.


Argh, I wish there were an easy button to slap some common sense into people.


Oh, and nice picture, serpico.

Chill, man, no need to get so defensive. Jesus Christ, am I wrong to post my opinions? OK, what you said in terms of SINGLE-card solutions is correct, but I'm ignoring whether it's a single-card or dual-card solution and just comparing it against previous GPU generations.

You're right, it does look a bit impressive. Can anybody else share your wisdom before I'm fully convinced?
 
Why is it silly? A dual GPU doesn't mean double the performance; a 4870X2 is not twice as powerful as a 4870, maybe only 40-50% more powerful IF the game supports it, so in theory this 5870 is only 50-60% more powerful than a single 4870. Sorry, but I was expecting double the "performance" of a 4870.

Look at the 8800GTX; that was nearly double the performance of a 7800GTX. The 6800 series also had a GIGANTIC gain over the 5800 cards.

I think I'll wait for a card that doubles the performance of my GTX 260 for £200 before I dive in.

Next up is Nvidia. Maybe they can grant me my wish, or else I may just wait for the refreshes, such as a 5890 from ATI.

To this day, Crysis still can't run at 50 FPS, let alone a 60 FPS "average", at 1080p or higher. :(

Yeah, I don't see the point in upgrading to a card that only gives me around 15% more performance. The only game that is unplayable at 2560x1600 with my 275 is, of course, Crysis, but the 5870 doesn't improve that situation by a great deal. I hope Nvidia's next line of cards manages to wow me; I'd like to see a card with double the performance of the 285. This more or less reminds me of the jump from the GeForce 6xxx to 7xxx series, which was decent but nothing major. The 8xxx series, however, changed everything, finally making Oblivion, for example, playable at 1080p with a single card. I'd like to see that happen with Crysis :)

there hasnt been a new generation of graphics cards that has EVER doubled the performance of the card generation before it.. not even the GTX series could do that against the 9800GTX or 9800GTX+.. look at the numbers and the settings in game and maybe you would realize just how much better the 5870 is against the GTX 285 and GTX 295..

oh and nice picture serpico

It has happened, the 8800GTX was twice as fast as its predecessor.
 
GTX295 w/DX11?

Did you look at the power consumption and noise levels?

It's similar to two GPUs on a single package in terms of performance, but nothing like a dual-GPU card otherwise. 188W max draw for this performance is outstanding.
 
Yeah, I don't see the point in upgrading to a card that only gives me around 15% more performance. The only game that is unplayable at 2560x1600 with my 275 is, of course, Crysis, but the 5870 doesn't improve that situation by a great deal. I hope Nvidia's next line of cards manages to wow me; I'd like to see a card with double the performance of the 285. This more or less reminds me of the jump from the GeForce 6xxx to 7xxx series, which was decent but nothing major. The 8xxx series, however, changed everything, finally making Oblivion, for example, playable at 1080p with a single card. I'd like to see that happen with Crysis :)



It has happened, the 8800GTX was twice as fast as its predecessor.

No it wasn't.

http://techreport.com/articles.x/11211/11

Just an example: 7900GTX SLI beat an 8800GTX. And SLI drivers weren't even that great at the time, so SLI probably gave about a 50% increase in performance; if 7900GTX SLI beat an 8800GTX while only gaining 50%, then no, the 8800GTX was not close to doubling the performance.
 
Why is it silly? A dual GPU doesn't mean double the performance; a 4870X2 is not twice as powerful as a 4870, maybe only 40-50% more powerful IF the game supports it, so in theory this 5870 is only 50-60% more powerful than a single 4870. Sorry, but I was expecting double the "performance" of a 4870.

Look at the 8800GTX; that was nearly double the performance of a 7800GTX. The 6800 series also had a GIGANTIC gain over the 5800 cards.

I think I'll wait for a card that doubles the performance of my GTX 260 for £200 before I dive in.

Next up is Nvidia. Maybe they can grant me my wish, or else I may just wait for the refreshes, such as a 5890 from ATI.

To this day, Crysis still can't run at 50 FPS, let alone a 60 FPS "average", at 1080p or higher. :(

If you want to see double the performance you need to look at reviews other than the [H] review, because here it's not just about numbers; they do gameplay experience. So look at other reviews around the net to see the pure numbers.

Here's the truth: when the 8800GTX came out, we badly needed a card because the games demanded it, and it did really well in new games that the older cards just could not handle, period. When the DX11 games come out and start stressing the hardware, I'm sure you'll see close to double the performance. Right now, other than Crysis, there isn't really any game that would stress even an HD 4890 enough; that's why the higher the resolution and the more AA you apply, the bigger the performance difference you see between the 4870 and the 5870. We're also getting new-gen cards at a faster rate than we did before.
 
If you want to see double the performance you need to look at reviews other than the [H] review, because here it's not just about numbers; they do gameplay experience. So look at other reviews around the net to see the pure numbers.

Any review showing double the performance is full of shit. The 5870 is ~70% faster than a 4870 and ~50% faster than a 4890 (at best). That's it.
Not worth spending over €350 for me. Maybe later.
 