Am I the only one noticing nVidia's incremental improvements here? I haven't been astounded since the 8800 generation.
My eyes are on AMD.
Zarathustra[H] said: I'm a little bit disappointed - however - that the [H] reviewed this card without overclocking it...
A bit surprised it wasn't compared to the 5970 instead of the 5870, considering the 5970 is AMD's top end and the 580 is Nvidia's.
Anything but a clear performance win for the 580 vs. the 6970 would be a big fail for Nvidia, seeing how the 580 is much larger and will probably consume more power.
The GTX 580 is really not impressive at all. Yes, it is much better than the 480, but it's only 20-30% faster than a card that is one year (!!!) older, waaaaay smaller, consumes much less power, and is 40% cheaper.
A big Thanks! goes out to the [H] Crew for putting out another well laid-out review on such short notice.
The performance numbers fall in line with what I imagined they would be in real-world usage, not with nVidia's hype or what was voiced in the forums. With that said, I've asked my 2 relatives to hold off on purchasing the GTX 580 (to their dismay) and wait for the AMD 5900 series to be released. They both want to go with single-card solutions for the games they play, and waiting for the 5900 series to arrive on the scene is a better suggestion than just pulling the trigger on the latest and greatest now and possibly regretting the purchase later.
Better to be armed with fact-based information than to hop into the fray based on rumor, I always say.
The GTX 580 is on par with a 5870 2GB card, but it's $200 more.. heee, no.
The cheapest GTX 580 is $559 on Newegg; the review says $500. Blegh.
The cheapest HD 5870 2GB is $339.
I disagree in general with the review, but this specifically bothers me.
Will it overclock, or will the chip that downclocks it kick in? Also, if I were to keep my card for a long time, will future stressful games make that chip kick in? Is there a hardware or software indicator that the limiter has been enabled? Sorry if I missed it in the article; I was reading excitedly.
Say I turn AA on in a future title and the fps drops to 20; is it going to downclock? Or a game like Mafia II, where PhysX made some Nvidia cards cry (if I remember correctly; I don't own that title). I know y'all can't look into crystal balls, but I was just wondering if it kicks in on a per-title basis or just on stress in general. Maybe what I'm really trying to ask is: what is Nvidia's definition of stress?
Thx for the great review as always. Now to see which of the nephews wants to buy (2) 460's.
Kyle, is there a chance of a review looking at whether the new vapor chamber cooler reacts adversely when mounted in a 90-degree position, like in an RV02 or FT02? I remember there being a problem with this on some of the earlier ATI cards.
Simply put... a RAW, naked, and true-to-the-bone review.
WYSIWYG.
Amazing review, guys. Can't wait for the updated review with the 5970 in there.
Question: any chance you guys can do a CrossFire review of the 6870 & 6850? Or has that been done?
It seems that, contrary to rumors, nothing was stripped from the GF100 core; it was just reorganized and optimized. Texture units did increase, but only to 64 instead of the 128 most people were hoping for.
Any info on yields? If they are good, the 480 and 470 should die soon. I'm also curious about an eventual GTX 570; would it have the same SPs as the 480 or the 470?
Great review. As always, truly unique compared to what other sites offer.
I love the real gameplay performance.
However, I did feel that something was missing: a performance comparison with morphological AA on.
I know Nvidia doesn't support it, and that's why this comparison seems important to me.
I have an 8800 GTS 512 and I'm in the market for a new GPU. Right now, AMD with its morphological AA seems to be a clear winner for me: it has less performance impact and equal quality. So, when I make a choice between two GPUs, this is an important aspect. I'd like to see a current AMD GPU using MLAA compared to the closest Nvidia has to offer image-quality-wise, in order to see if MLAA provides enough of a performance boost to tip the balance.
I'd like to see how MLAA changes the balance when making a choice in getting a new card.
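For context on why MLAA is so cheap: it's a post-process filter that runs on the finished frame rather than supersampling the scene. Below is a drastically simplified sketch of the idea (real MLAA classifies edge shapes and computes coverage from them; this illustrative version just blends across strong luma steps, and every name and threshold in it is made up):

Code:
import numpy as np

def luma(img):
    # Rec. 601 luma weights
    return img @ np.array([0.299, 0.587, 0.114])

def simple_post_aa(img, threshold=0.1, blend=0.5):
    """Blend pixel pairs that straddle a strong luma edge (toy stand-in for MLAA)."""
    out = img.copy()
    y = luma(img)

    # Horizontal edges: each pixel vs. the pixel directly below it.
    rows, cols = np.nonzero(np.abs(np.diff(y, axis=0)) > threshold)
    avg = (img[rows, cols] + img[rows + 1, cols]) / 2.0
    out[rows, cols] = (1 - blend) * img[rows, cols] + blend * avg
    out[rows + 1, cols] = (1 - blend) * img[rows + 1, cols] + blend * avg

    # Vertical edges: each pixel vs. the pixel directly to its right.
    rows, cols = np.nonzero(np.abs(np.diff(y, axis=1)) > threshold)
    avg = (img[rows, cols] + img[rows, cols + 1]) / 2.0
    out[rows, cols] = (1 - blend) * img[rows, cols] + blend * avg
    out[rows, cols + 1] = (1 - blend) * img[rows, cols + 1] + blend * avg

    return out

# Toy usage: a hard black/white vertical edge gets softened.
frame = np.zeros((4, 4, 3))
frame[:, 2:] = 1.0
smoothed = simple_post_aa(frame)

Since it touches only edge pixels of an already-rendered frame, the cost is nearly constant regardless of scene complexity, which is why its performance impact is so much smaller than MSAA's.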
The 5870 2GB is AMD's top-end SINGLE GPU. For the launch, we felt it best to compare the top-end single GPUs, and at the time this evaluation was put together, the Eyefinity6 card was the price competition to the 580.
I am working on a 5970 vs. 580 followup today.
How do you get that the 580 is on par with the 5870?
It states that in the conclusion, 5 lines from the bottom or so.
MLAA has not been qualified on Radeon HD 5xxx series yet. Therefore, I could not test it. I was not going to use a hacked driver. When MLAA comes to the Radeon HD 5xxx series, we will use it too.
Why exactly does single-GPU card vs. dual-GPU card matter all of a sudden? It wasn't that long ago that the mentality seemed to be to compare their high end to the competition's high end, number of GPUs be damned. Now it seems price and number of GPUs also factor in. A bit of a strange philosophy when you consider AMD's strategy for the high end is multi-GPU cards.
To me, it is, because it is near silent. Use a GTX 480 for a while, then put in a GTX 580 and you get performance 20% faster than the GTX 480, and performance you can't hear screaming at you. It's a big improvement.
These are good questions; I'm not really sure how that is going to work out. I can say that in the games I tested, I never experienced that happening, and I pushed them to insane levels to find the highest playable settings, well beyond what was playable. It is something we will keep a lookout for. What I'm most concerned about is how it works with overclocking: whether overclocking will push the card to its thermal limit, thus kicking in the hardware. But the thermal limit of the chip is 97c, so I'd think it would have to get really high. I kinda look at the monitoring like it is targeted more at OCCT and Furmark specifically, heh. We'll see.
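Purely to illustrate what a power/thermal limiter of this sort might be doing, here is a minimal sketch. To be clear, NVIDIA hasn't published how the GTX 580's monitoring hardware decides to throttle (the Guru3D quote further down suggests it keys off specific applications rather than raw draw), so every name, threshold, and interval below is an assumption:

Code:
import random
import time

# All names and numbers here are assumptions for illustration; NVIDIA has not
# documented how the GTX 580's monitoring hardware actually decides to throttle.
POWER_LIMIT_W = 300        # hypothetical board power ceiling
THERMAL_LIMIT_C = 97       # the chip thermal limit mentioned above
STOCK_CLOCK_MHZ = 772      # GTX 580 reference core clock
THROTTLED_CLOCK_MHZ = 405  # hypothetical reduced clock

def read_power_draw_watts():
    # Stand-in for the monitoring ICs on the 12V inputs; returns fake data.
    return random.uniform(200.0, 320.0)

def read_gpu_temp_c():
    # Stand-in for the on-die thermal sensor; returns fake data.
    return random.uniform(60.0, 99.0)

def set_core_clock(mhz):
    print(f"core clock -> {mhz} MHz")

def monitor_step():
    # Clamp the clock whenever either reading exceeds its limit,
    # and restore the stock clock once readings recover.
    if read_power_draw_watts() > POWER_LIMIT_W or read_gpu_temp_c() >= THERMAL_LIMIT_C:
        set_core_clock(THROTTLED_CLOCK_MHZ)
    else:
        set_core_clock(STOCK_CLOCK_MHZ)

if __name__ == "__main__":
    for _ in range(5):
        monitor_step()
        time.sleep(0.1)  # poll on a fixed interval

Under a scheme like this, a game that merely dips to 20 fps wouldn't trip anything unless it also pushed power or temperature past the trip points, which would fit Brent's observation that no tested game triggered it.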
I think you missed this part from the conclusion. The 5870 held up well, but the 580 is definitely faster by a good margin:
"In all games however, the GeForce GTX 580 surpassed the Radeon HD 5870. We experienced the GeForce GTX 580 to be about 30% better performing than the Radeon HD 5870 on average. Once again, Medal of Honor showed the greatest performance difference, with the GTX 580 being 42% faster than the Radeon HD 5870."
Time factor, I would have loved to as well.
From Guru3D:
The new advanced power monitoring management function is, well... disappointing. If the ICs were there as overprotection against drawing too much power, it would have been fine. But it was designed and implemented to detect specific applications such as Furmark and then throttle down the GPU. We really dislike the fact that ODMs like NVIDIA try to dictate how we as consumers or press should stress the tested hardware. NVIDIA's defense here is that ATI has been doing this on the R5000/6000 as well, yet we think the difference is that ATI does not enable it on stress tests; rather, it is simply a common safety feature when you go way beyond specs. We have not seen ATI cards clock down with Furmark recently, unless we clocked, say, the memory too high, after which it clocked down as a safety feature. No matter how you look at it or try to explain it, this is going to be a sore topic now and in the future. There are, however, many ways to bypass this feature, and I expect that any decent reviewer will do so. Much like any protection, if one application does not work, we'll move on to the next one.
I think Nvidia did a great job. Thumbs up. Can't wait to see if they do a dual-GPU version of the GF110.
If they're pricing these at $500, I'd hate to see the cost of a 580-flavor dual-GPU card.
This is not an issue...
[the Guru3D quote above]
...proven by this:
[Brent's quote above]