Upgrade from 6870 advice

ritch1

Weaksauce
Joined: Sep 1, 2005
Messages: 112
I’ve just upgraded my rig to an i5 2500K and now want a better-performing card. I currently have an AMD 6870 and I like to game at my monitor’s native res of 1920 x 1200. I find this card is OK but it struggles a bit in some games, so I’m looking for something higher end.
Ideally I’d like a single-GPU solution, as I don’t like the idea of driver problems. I’m looking to spend around £400 ($650). Would you say the GTX 580 would be a good upgrade? I was thinking of this one –

http://www.overclockers.co.uk/showproduct.php?prodid=GX-149-GW&groupid=701&catid=1914&subcat=1812

This is about as high as I could go price-wise, and I’d expect not to have to upgrade for at least two years. I know it’s probably been asked a ton of times, but is it worth going to this from a 6870? I’d also consider getting another 6870 and going down the CF route, but is that an easy process, or would I be in for a few headaches?

Also, I have a decent Corsair PSU, but it’s only a 650W. Would that be enough to run these cards?
 
It's honestly up to you whether you want to CF or not. The 6000-series cards scale extremely well in CrossFire and outperform a single 580. 6870s are an incredible deal right now as well, so you can pick another one up extremely cheap (i.e. $115-150 after rebates/coupons), especially considering the power you are adding for so little money. I would say if you know you are going to be staying at 1920 x 1200 on one monitor, then you should seriously consider 6870s in CF.

On the flip side, though, a lot of people prefer the simplicity of a single-card solution, and a 580 is certainly a beast and a drastic jump over a single 6870.

I think your Corsair 650W PSU should be able to handle either configuration with ease.
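
For what it's worth, here's a back-of-envelope load estimate. The wattages are vendor TDP/board-power figures from memory and the 60W for the rest of the system is just a guess, so treat it as a sketch:

[code]
# Rough system-load estimate against the 650W PSU. All wattages approximate.
CPU_W   = 95   # i5-2500K TDP
OTHER_W = 60   # motherboard, RAM, drives, fans -- a generous guess
PSU_W   = 650

configs = {
    "1x HD 6870": 151,      # AMD board-power rating
    "2x HD 6870": 2 * 151,
    "1x GTX 580": 244,      # Nvidia TDP figure
}
for name, gpu_w in configs.items():
    load = CPU_W + OTHER_W + gpu_w
    print(f"{name}: ~{load}W load, ~{PSU_W - load}W headroom")
[/code]

Even the heaviest configuration lands around 460W, so the 650W unit has comfortable headroom.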

It honestly comes down to what you prefer...you could pick up another 6870 on the cheap or sell the one you currently have and put it towards the 580.
 
Well, my recommendation is generally to spend less money and upgrade more often. Nothing wrong with a 580, and it is indeed the baddest thing out there. I also appreciate not wanting to go dual-GPU. It is more problems; don't ever let anyone tell you it isn't. It works, of course, and many are happy with it, but it is more problems, that's for sure. Some games pitch a fit with dual cards for whatever reason.

However, graphics hardware moves fast, faster than most things. Not only do new cards tend to make big jumps in performance, but in features too. So I tend to recommend picking a price point at which you can get a new card every 12-18 months. That usually makes you happier than buying a higher-end card and waiting longer.

Now I understand if for whatever reason you need to purchase less often and thus have to get the best you can, but unless you have a special case like that, I recommend figuring out what you can spend in that 12-18 month range and then budgeting there.

In terms of right now, consider that you could save nearly a hundred pounds by getting a 570, which is still a very fast card. If you saved that money, could you then purchase a new card in a year to a year and a half, when we very well might have something like DirectX 11.1 or 12 out?

Just something to consider. You might find that a better use of your money than waiting 24+ months because you wanted the highest end.
 
Now that the HD6870 has come down in price a lot in the UK, a second card would be much better value than a single GTX580. The only thing you may run into is memory limitations, but with a single 1920x1200 monitor, that's not going to be an issue.
 
Since you are upgrading for the next 2 years, I would at least wait until the next AMD and/or Nvidia series are out. They are said to be made on a 28nm process, which will probably give you twice the speed for the same price.
Adding another 6870 would be a cheaper option that might give you the extra performance you are looking for without paying too much.

That said, I have the 570 Phantom edition and am very pleased with the card. For me, noise/performance ratio matters and the fan profile on this card is pleasant. If I go Nvidia next time, I would probably look for a Phantom edition again. :)

The Phantoms are 2.5 slots wide, so be aware that in many cases SLI wouldn't be possible. The card feels solid and "expensive", and Gainward added extra power phases, something Nvidia cheaped out on with the reference 570. As a single high-end card for someone who wants a quiet system, these cards can be recommended.
 
Since you are upgrading for the next 2 years, I would at least wait until the next AMD and/or Nvidia series are out. They are said to be made on a 28nm process, which will probably give you twice the speed for the same wattage.
fixed that for you :p

On first release, you are unlikely to see dramatic improvements in value, just some more expensive cards initially. Of course there'll be some savings, but usually the big increases in value over the previous generation don't arrive until the ranges fill out from both manufacturers.
 
fixed that for you :p

On first release, you are unlikely to see dramatic improvements in value, just some more expensive cards initially. Of course there'll be some savings, but usually the big increases in value over the previous generation don't arrive until the ranges fill out from both manufacturers.

I don't think so. I expect AMD's 7870 to be in the same price segment as the 6870, but with the performance of 6870 CrossFire, due to the process change and architectural improvements.
 
I agree with the 2x 6870 sentiment. Probably the cheapest way to get the best performance. It may require a mobo/PSU upgrade as well, so keep that in mind.
 
I don't think so. I expect AMD's 7870 to be in the same price segment as the 6870, but with the performance of 6870 CrossFire, due to the process change and architectural improvements.

It's tricky to guess at, because we don't know whether AMD will repeat the last generation by introducing the upper-midrange cards first, or go top-down like previous generations. Assuming they introduce the upper midrange first, I'd expect the HD7850 and HD7870 to perform similarly to the HD6950 and HD6970, but use around 110W and 130W respectively, and probably cost a little less than the HD6950/70 do now, making the current top-end pair redundant. That's purely on the basis that they'd essentially be the same cards as the 6950/6970 as far as specification goes (though perhaps with 1GB models predominating over 2GB ones), but built on the 28nm process; applying the ~55% efficiency increase we saw from 55nm to 40nm, that's about what I'd expect. The new '6770 and 6850', as it were, but almost twice as fast.
For the new top-end cards, I've no idea really; it all depends how well the new architecture works out and how far up the scale AMD want to go. It seems likely they'll go as high as they can, because Nvidia will certainly be doing the same as they always have. If AMD can reach the 200W TDP mark again, we should hopefully be looking at something a clear 50% faster than the HD6970, and I would hazard a guess at that coming out with a $400-$500 price tag, certainly more than the HD6970 currently sells for.
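
To put rough numbers on that, here's the arithmetic as a sketch. The 1.55x per-watt gain is the 55nm-to-40nm analogy from above, and treating the HD6970/HD6950 as roughly 200W cards is my own assumption, so none of this is data:

[code]
# Sketch of the projection above: fix the power budget and scale
# performance by an assumed per-watt gain from the 40nm -> 28nm shrink.
EFFICIENCY_GAIN = 1.55  # assumed, by analogy with 55nm -> 40nm

def projected_perf(baseline_perf, baseline_watts, new_watts):
    """Relative performance if perf-per-watt improves by EFFICIENCY_GAIN."""
    return baseline_perf / baseline_watts * EFFICIENCY_GAIN * new_watts

# New top-end card at the ~200W mark, HD6970 (~200W) as the 1.0 baseline:
print(projected_perf(1.0, 200, 200))  # ~1.55x -- "a clear 50% faster"

# HD6950-class performance squeezed into a ~110W part:
print(projected_perf(1.0, 200, 110))  # ~0.85x -- roughly HD6950 level
[/code]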
 
AMD has been running a "sweet spot" strategy since the 4000 series. We will see a bigger performance increase now than we did from the 5000 series to the 6000 series, something closer to the 4000 -> 5000 jump. The 6000 series was said to be originally planned as a shrink from 40nm to 32nm, but 32nm was abandoned. Now we will see a shrink to 28nm instead with the 7000 series, plus at least the improvements we saw in the 6000 series over the 5000 series.

Due to the change in naming scheme, the 57XX wasn't replaced by the 68XX, but the 68XX is expected to be replaced by the 78XX, which I expect to sit in the same $150-$250 price segment the 68XX was/is in.

[AMD "sweet spot" strategy slide]


I doubt the 78XX cards will offer less than 68XX CrossFire performance, considering the 40nm -> 28nm shrink plus the improvements in architecture.
 
I doubt the 78XX cards will offer less than 68XX CrossFire performance, considering the 40nm -> 28nm shrink plus the improvements in architecture.
Given that the scaling of the HD6 cards is often in excess of 90%, I highly doubt that. Typically a ~30% die-width reduction like this provides roughly 50-60% extra performance per watt, so if they're going for a 90%+ performance increase, the TDP is going to go up quite significantly.
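
To show why: if the per-watt gain is in the 1.5-1.6x range (my speculative figure) and the target is ~1.9x performance, the power budget has to make up the difference. A quick sketch:

[code]
# If perf-per-watt only improves ~1.5-1.6x, hitting a ~1.9x performance
# target (i.e. matching 90%+ CrossFire scaling) forces the TDP up.
TARGET = 1.9
for per_watt_gain in (1.5, 1.55, 1.6):
    tdp_increase = TARGET / per_watt_gain - 1
    print(f"{per_watt_gain:.2f}x per watt -> TDP up ~{tdp_increase:.0%}")
[/code]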
 
Given that the scaling of the HD6 cards is often in excess of 90%, I highly doubt that. Typically a ~30% die-width reduction like this provides roughly 50-60% extra performance per watt, so if they're going for a 90%+ performance increase, the TDP is going to go up quite significantly.

Those numbers seem far off, so I need to ask you to provide some documentation, or at least links.

Kepler/GTX 6XX, the competitor to AMD's 7XXX, is estimated to give 3-4 times the performance per watt in going from Fermi's 40nm to Kepler's 28nm, according to Nvidia. I doubt AMD's die shrink is so much worse that it only gives 50-60% more performance per watt, as you speculate:

[Nvidia roadmap slide: Fermi -> Kepler -> Maxwell, GFLOPS per watt]

http://www.tomshardware.com/news/fermi-kepler-maxwell-gigaflop-watt,11339.html

When it comes to relative gaming performance, you can see that AMD's 55nm 4870X2 is approximately equal to the 40nm 5870:
http://www.techpowerup.com/reviews/ATI/Radeon_HD_5870/30.html

The 4870X2 is approximately equal to 4870 CrossFire:
http://www.xbitlabs.com/articles/graphics/display/radeon-hd4870-crossfire_17.html#sect0


Here's 6870 CrossFire scaling in games:
http://www.techpowerup.com/reviews/ATI/Radeon_HD_6870_CrossFire/23.html

So:
A single 4870, from the links above, = 63% of a 5870
The 4870X2 is equal (101%) to a 5870

A single 6870 = 73% of 6870 CrossFire
Chances are that 6870 CrossFire = a single 28nm 7870
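
Spelled out as a quick script, using the percentages read off the linked charts:

[code]
# The chain of relative-performance figures above.
single_4870_vs_5870 = 0.63  # single 4870 = 63% of a 5870
x2_4870_vs_5870     = 1.01  # 4870X2 (~4870 CrossFire) = 101% of a 5870
single_6870_vs_cf   = 0.73  # single 6870 = 73% of 6870 CrossFire

# 55nm -> 40nm precedent: one 5870 ~= two 4870s, ~1.59x a single 4870.
print(f"5870 = {1 / single_4870_vs_5870:.2f}x a single 4870")
# For a 7870 to repeat that against 6870 CrossFire, it needs ~1.37x a 6870:
print(f"6870 CF = {1 / single_6870_vs_cf:.2f}x a single 6870")
[/code]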
 
They're not from any documentation; they're just speculative.
The 50% improvement per watt comes from comparing the HD4 series to the HD5 series, specifically the HD4870 to the HD5850.
The HD5850 is about 50% faster than the HD4870, yet draws about the same power. Simple, easy maths.

I'm not even going to comment on the completely incorrect result for HD6870 vs HD6870 CrossFire. It includes games that don't support CrossFire, and resolutions that can't benefit from CrossFire due to CPU limitations. The closest you can get is the 2560x1600 chart at 62%, and even that's not very accurate due to the games that have no/poor CrossFire support.
 
They're not from any documentation; they're just speculative.
The 50% improvement per watt comes from comparing the HD4 series to the HD5 series, specifically the HD4870 to the HD5850.
The HD5850 is about 50% faster than the HD4870, yet draws about the same power. Simple, easy maths.

I'm not even going to comment on the completely incorrect result for HD6870 vs HD6870 CrossFire. It includes games that don't support CrossFire, and resolutions that can't benefit from CrossFire due to CPU limitations. The closest you can get is the 2560x1600 chart at 62%, and even that's not very accurate due to the games that have no/poor CrossFire support.

I am not asking you to document the speed of the 7XXX series, but your claims about the 6XXX series often scaling in excess of 90%, and about process reductions typically providing "roughly 50-60% extra performance per watt". Those numbers don't sound correct at all.

Performance per watt is, as standard, measured in FLOPS. When AMD went from the 55nm 4870 to the 40nm 5870, they increased performance per watt almost 2x. That's more than 50-60%:
[AMD performance-per-watt slide]


The HD 5850 is not the replacement for the 4870, but for the 4850. Why you compare the 5850 against the 4870 is also something you must explain, since it doesn't make sense. :)
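
As a back-of-envelope check of that "almost 2x" figure, using spec-sheet numbers (single-precision GFLOPS and board power, quoted from memory, so approximate):

[code]
# FLOPS-per-watt across the 55nm -> 40nm shrink, from spec-sheet figures.
cards = {
    "HD 4870 (55nm)": (1200, 160),  # SP GFLOPS, board power in watts
    "HD 5870 (40nm)": (2720, 188),
}
eff = {name: gflops / watts for name, (gflops, watts) in cards.items()}
for name, gpw in eff.items():
    print(f"{name}: {gpw:.1f} GFLOPS/W")
print(f"improvement: {eff['HD 5870 (40nm)'] / eff['HD 4870 (55nm)']:.2f}x")
[/code]

That comes out at ~1.93x, which is where the "almost 2x" comes from.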
 
I'm not discussing FLOPS, as that's not really what we're going on about here. 'Speed' in gamers' terms is frame rate, and that does not correlate directly to FLOPS.
I used the HD5850 vs the HD4870 because those cards share the same TDP, so they make a more appropriate comparison on that scale. The HD5870 may have been the 4870's successor in terms of price and market position, but it's the successor to the HD4890 as far as power usage is concerned.
 
I'm not discussing FLOPS, as that's not really what we're going on about here. 'Speed' in gamers' terms is frame rate, and that does not correlate directly to FLOPS.

You are discussing FLOPS if you drag in "performance per watt" between generations. If you want an alternate meaning of the term, you need to provide documentation for the meaning you want to use instead of the industry-standard FLOPS. "Performance per watt" is also a discussion of efficiency increases, not of pure gaming performance. Make up your mind about what you are going on about here.

Speed in gamers' terms is frame rate; this we agree upon, so stop using efficiency/"performance per watt".

You haven't provided any source for your numbers on "performance per watt", nor on performance in games. I have provided both, and as I point out, your numbers don't match your claims, so provide a source for your numbers.


I used the HD5850 vs the HD4870 because those cards share the same TDP, so they make a more appropriate comparison on that scale. The HD5870 may have been the 4870's successor in terms of price and market position, but it's the successor to the HD4890 as far as power usage is concerned.

We are talking about a change of generations. As you can see from AMD's own slides, the 5850 isn't the replacement for the 4870; the 5870 replaces the 4870. It's not a matter of power usage, but of which model replaces which model in which segment.

When it comes to 6XXX to 7XXX, AMD can choose to make a monolithic GPU that drains nuclear power plants dry, and it would still be a generational replacement. I don't give a crap about power usage, since we are speculating about performance increases.

However, based on previous results from process changes (55nm 4870X2 to 40nm 5870) and architectural changes, I expect a single 78XX to be at least equal to 68XX CrossFire in games, within the same price segment, as I showed with the benchmarks in earlier posts. Considering the OP's budget is $650, he could probably get a 7970 or the Kepler equivalent if he is willing to wait for the next gen.

If you still want to contest that, make up your mind whether you are talking about power consumption, efficiency or gaming performance. And please provide some links to show that you don't pull the numbers "out of your ass" (no offence meant) :)
 
OK, OP, after seeing that review I really cannot understand why you wouldn't do CrossFire with another 6870! I mean, it hangs with the GTX 590 most of the time, a card that is basically TWO GTX 580s! So if you think about it, saving the money by getting another 6870 would net you the performance of arguably the most powerful card in the world. Also, if you overclock them a bit, you might get the performance of two GTX 580s. I have a few CFX systems and they have been smooth sailing. I have two 5850s and they are running excellently in CFX.

Buy another 6870. That's the cheapest way to get the best performance right now. Dual 6870s will run circles around a single 580.

http://www.techspot.com/review/386-his-radeon-6850-6870-iceqx/
 
I don't really see the point you're trying to make. Are you saying performance per watt is completely irrelevant to gamers?

http://www.bit-tech.net/hardware/graphics/2009/10/14/ati-radeon-hd-5850-review/5
Increase in minimum frame rate: 40.0% at 30", 21.7% at 24", 56.5% at 20", 35.5% at 17" - average 38.4%
Increase in average frame rate: 35.0% at 30", 32.3% at 24", 36.1% at 20", 37.2% at 17" - average 35.2%
TDP for each card: Approximately identical (150W vs 151W)
Measured power consumption for each card:
http://www.bit-tech.net/hardware/graphics/2009/10/14/ati-radeon-hd-5850-review/9
Power consumption (27W idle for the HD5850 by specification), assuming an 85% efficient PSU: (104*0.85)+27 = 115.4W for the HD5850; (128*0.85)+27 = 135.8W for the HD4870.

TDP vs. in-game performance: 1.384*(150/151) = 1.3748 → a 37.5% efficiency increase
3DMark power consumption vs. in-game performance: 1.384*(135.8/115.4) = 1.6287 → a 62.9% efficiency increase.
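
For anyone who wants to check the arithmetic, here's the same calculation as a few lines of Python (the 85% PSU efficiency and the 27W idle figure are the assumptions stated above):

[code]
# Reproducing the efficiency figures above from the bit-tech numbers.
perf_gain = 1.384               # average minimum-frame-rate increase

# Wall-socket deltas scaled by an assumed 85% PSU efficiency, plus 27W idle:
hd5850_w = 104 * 0.85 + 27      # ~115.4W
hd4870_w = 128 * 0.85 + 27      # ~135.8W

print(perf_gain * (150 / 151))            # ~1.375 -> 37.5% (TDP-based)
print(perf_gain * (hd4870_w / hd5850_w))  # ~1.629 -> 62.9% (measured power)
[/code]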

Not a bad result, but not exactly the results pulled from HD6870 CrossFire:
[chart: Bad Company 2, HD6870 CrossFire scaling]

This is just one of several 95%+ results. I won't spam the thread with them all, but they're pretty easy to find.

I'm sure it's not impossible to find that the HD7 generation achieves a 90% efficiency increase in some games (not considering FLOPS), but on the whole, I don't think the new 150W 28nm card is going to surpass two HD6870s in CrossFire outside of titles where CrossFire does not scale properly.
 
I don't really see the point you're trying to make. Are you saying performance per watt is completely irrelevant to gamers?

In this discussion, yes. It's totally irrelevant. We are talking about performance, not efficiency. Performance per watt is about how efficient the GPU is within a given power envelope, not how well it performs in games. It's totally meaningless for a gamer, unless you have a heat or electricity issue.

Therefore, your calculations are meaningless as well (and they are wrong, since the 4870 was replaced by the 5870, not the 5850).
 
It's far from irrelevant. There is only so far you can go with power consumption before the size (and therefore the case), the power requirement (so the PSU) and the heat become a real problem. This will define what level of access people have to this new, improved performance. And for the most part, gamers don't care about FLOPS; they care about the frame rate and frame stability they get in games, so game fps per watt is still of some worth. More, in fact, than FLOPS per watt, to someone who isn't running GPU-computing applications.
 
It's far from irrelevant. There is only so far you can go with power consumption before the size (and therefore the case), the power requirement (so the PSU) and the heat become a real problem. This will define what level of access people have to this new, improved performance. And for the most part, gamers don't care about FLOPS; they care about the frame rate and frame stability they get in games, so game fps per watt is still of some worth. More, in fact, than FLOPS per watt, to someone who isn't running GPU-computing applications.

It's completely irrelevant. The maximum possible TDP of the new cards will be 300W, since AMD and Nvidia will keep themselves within the PCIe spec. The OP's Corsair 650W PSU will be able to run a single AMD 7XXX or Nvidia 6XX.
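
(That 300W ceiling is just the sum of the PCIe power-delivery limits; a trivial sketch:)

[code]
# Where the 300W ceiling comes from: PCIe power-delivery limits.
SLOT_W      = 75   # PCIe x16 slot
SIX_PIN_W   = 75   # 6-pin PEG connector
EIGHT_PIN_W = 150  # 8-pin PEG connector
print(SLOT_W + SIX_PIN_W + EIGHT_PIN_W)  # 300W for an in-spec card
[/code]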

Even for those that run systems with higher power requirements, "performance per watt" doesn't mean crap. Either they can run the card, or they need to buy a new PSU. For gamers it's not about efficiency, but about performance.

The OP is buying a GPU for 2 years with a budget of $650, which would probably give him the option to buy an AMD 7970 or GTX 680 to replace his 6870. That's definitely worth checking out if he can wait.
 
We seem to have gone off on a bit of a tangent here. The original quote was that you'd see very nearly twice the performance for the same price. That would mean something in the realm of 60% faster than an HD6970 for $180 at launch. Given the pricing of previous releases, does that seem realistic to you? The point I was trying to make is that while the new generation will provide oodles more processing power for the same power draw and heat output, I don't think we'll see such a giant leap forward in performance per dollar.
IIRC, when the HD5870 came out, the single-GPU range-topping HD4890 (ignoring the Nvidia side for a minute) was around £150-£200 in the UK, while the new HD5870, a good 60% faster, was also 50% more expensive, at over £300. That was not due to a lack of available silicon; it was because it was the best card out there by miles, and AMD had every right to charge people a lot for it, especially given how expensive the GTX285 was at the time.
Expecting a card to show up with 60% extra performance, similar to previous new generations, at barely half the price of the current top-end card seems drastically unlikely.
 
We seem to have gone off on a bit of a tangent here. The original quote was that you'd see very nearly twice the performance for the same price. That would mean something in the realm of 60% faster than an HD6970 for $180 at launch. Given the pricing of previous releases, does that seem realistic to you? The point I was trying to make is that while the new generation will provide oodles more processing power for the same power draw and heat output, I don't think we'll see such a giant leap forward in performance per dollar.
IIRC, when the HD5870 came out, the single-GPU range-topping HD4890 (ignoring the Nvidia side for a minute) was around £150-£200 in the UK, while the new HD5870, a good 60% faster, was also 50% more expensive, at over £300. That was not due to a lack of available silicon; it was because it was the best card out there by miles, and AMD had every right to charge people a lot for it, especially given how expensive the GTX285 was at the time.
Expecting a card to show up with 60% extra performance, similar to previous new generations, at barely half the price of the current top-end card seems drastically unlikely.

Before we continue any discussion here, can we agree upon the following:

The 4870 got replaced by the 5870, not the 5850. Look at AMD's own slide if you are still in doubt.

The 4890 wasn't replaced by the 5870.

Performance per watt doesn't mean crap to gamers in general. Nobody asks how a GPU performs for each watt used; they ask how it performs in games (as in FPS) and whether the PSU is sufficient.

If you still have some funny ideas about the 4870 being replaced by the 5850 and similar, please don't reply to my posts. I don't wish to be rude, but I find it pointless spending time on those posts.

The OP has a budget of $650 and wants more performance. He has explained that he prefers a single card and that he is upgrading for the next 2 years. In my opinion, he will get the most out of his budget by waiting for the next-gen AMD and/or Nvidia cards. I have given the reasons why in several posts and included links showing the data.

If you disagree, fine. If you want to argue against my opinion, fine. But, for god sake, keep it relevant. :)
 
He can spend as little as $115 of his $650 budget, get better-than-GTX 580 performance now, and still have enough money left to upgrade to the 7900 series later on if he wants.
 
He can spend as little as $115 of his $650 budget, get better-than-GTX 580 performance now, and still have enough money left to upgrade to the 7900 series later on if he wants.

Seeing AMD's recent price history, you are probably right :)
 
Thanks for all this info. I have to admit the cheap-but-high-performance CF route sounds attractive. The GTX 580 is still nagging away a little, though, and getting another one later could also be an upgrade that lasts. I guess it becomes a question of which is the more reliable multi-card setup.
 
In terms of the product lineup, yes, the HD4870 was replaced by the HD5870, and the HD4890 was not replaced by anything (I wouldn't count the HD6970). However, both in terms of power consumption and, more importantly, launch price, the HD4870's effective successor was still the HD5850. That wasn't the same point in the product lineup, but it was the same position for price and power; the product range just expanded upwards.

CrossFire is still not to everyone's liking, and that's perfectly fair enough. But I would advise against really expensive single-GPU cards like the GTX580 now more so than ever, given the proximity of the HD7-series launch: I don't think there's a doubt in anyone's mind that even the HD7850 (or whatever it's called) will leave the 580 for dust.
 