ASUS Radeon R9 290 DirectCU II OC Video Card Review @ [H]

FrgMstr

ASUS Radeon R9 290 DirectCU II OC Video Card Review - On our test bench today is a factory overclocked Radeon R9 290 from ASUS sporting the DirectCU II cooling system. We will compare it to the NVIDIA GeForce GTX 780 to determine which card reigns supreme. AMD prices have finally stabilized back to normal, and this puts one card at an extreme disadvantage compared to the other. Find out which one.
 
If I were doing a new build - I would likely go with one of these cards. With a good cooler, these AMD 290 cards are really solid - and an especially good value at their MSRP pricing. It's too bad AMD didn't put out a better reference design.

Just wondering if Nvidia will be responding with a price adjustment.
 
Speaking of the launch reviews, does the Asus R9 290 really pull an entire 55 watts less at load than the reference 290 (and the reference GTX 780 now pulls 19 watts more than last year)? If so, that is absolutely astonishing and it's no wonder that this card runs FAR cooler than a reference board.

http://www.hardocp.com/article/2013/11/04/amd_radeon_r9_290_video_card_review/8#.U3pT9vldWSo

http://www.hardocp.com/article/2014...rectcu_ii_oc_video_card_review/9#.U3pUB_ldWSp



EDIT: I see now that different power supplies were used, different motherboards, and different processor clock speeds (and presumably voltages as well). In the launch review, system idle without GPU was 90W; in this new review, the idle draw without GPU is 62W. This absolutely skews the power draw figures when comparing to older reviews. Wish there was an easy way to get just the GPU wattage. :(
 
Has AMD corrected the performance regression that was present in the 14.x beta drivers in the latest 14.4 WHQL drivers? The launch reviews show a stock R9 290 faster than a stock GTX 780 in Crysis 3, so what's the reason for an R9 290 OC being slower than a stock GTX 780 in Crysis 3?

http://www.hardocp.com/article/2013/11/04/amd_radeon_r9_290_video_card_review/3#.U3pBuHb098E

http://www.hardocp.com/article/2014...rectcu_ii_oc_video_card_review/7#.U3pEfHb098E

Could be driver performance improvements on the NVIDIA side, I suppose.
 
Has AMD corrected the performance regression that was present in the 14.x beta drivers in the latest 14.4 WHQL drivers? The launch reviews show a stock R9 290 faster than a stock GTX 780 in Crysis 3, so what's the reason for an R9 290 OC being slower than a stock GTX 780 in Crysis 3?

http://www.hardocp.com/article/2013/11/04/amd_radeon_r9_290_video_card_review/3#.U3pBuHb098E

http://www.hardocp.com/article/2014...rectcu_ii_oc_video_card_review/7#.U3pEfHb098E

The game has been patched a few times since then, and there have been numerous driver releases from both camps. Without testing on the same rig/same version/etc., I think it would be difficult to state with certainty whether the drivers or the game is responsible for the performance decrease.

The other thing to point out is that the runthrough in the launch article was around 10 minutes long, whereas the runthrough in the one I just completed was closer to 7-8 minutes. That alone could shift the average frame rates up or down for either card. At the end of the day, 2 FPS is not likely to make the difference in whether a game is playable at a particular graphics quality setting - in this case, they were both playable, and in a "Pepsi Challenge" scenario you would not be able to tell which card you're playing on.
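
For what it's worth, here is a quick back-of-the-envelope illustration of how run length alone can move a reported average; the per-minute numbers below are made up for the example, not data captured from either run.

Code:
# Toy numbers only -- not captured from either review's runthrough.
# A run whose later section is heavier (a big firefight, etc.) reports a
# lower average than a shorter run that stops before that section.
minute_averages = [68, 66, 64, 63, 61, 58, 55, 52, 50, 49]  # avg FPS per minute, 10-min run

full_run = sum(minute_averages) / len(minute_averages)            # all 10 minutes
short_run = sum(minute_averages[:7]) / len(minute_averages[:7])   # first 7 minutes only

print(f"10-minute average: {full_run:.1f} FPS")   # 58.6
print(f"7-minute average:  {short_run:.1f} FPS")  # 62.1
# A ~3-4 FPS swing from run length alone, which is on the order of the
# gap being discussed between the two cards.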

EDIT: I see now that different power supplies were used, different motherboards, and different processor clock speeds (and presumably voltages as well). In the launch review, system idle without GPU was 90W; in this new review, the idle draw without GPU is 62W. This absolutely skews the power draw figures when comparing to older reviews. Wish there was an easy way to get just the GPU wattage. :(

There really isn't a great way to get GPU wattage and compare it across reviews. First, Kill A Watt meters are going to vary between units when measuring power draw. Power supplies will have different efficiency at different wattage levels (and could, in theory, change over time even with the same power supply). CPU and other system component usage while gaming, and even how well binned the massively OC'ed CPU is, will also cause the power measurements to vary. I would say that you can compare power draw within one of our articles, and sometimes across articles written by the same reviewer (as many of the equipment variables are accounted for), but different reviewers (or when our rigs get upgraded) can toss the comparisons out the window.
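
To illustrate why, here is a rough sketch of the back-calculation people ask for; every wattage and efficiency figure in it is an assumption for the example, not a measurement from either review. Note that subtracting the idle baseline also rolls in whatever extra the CPU pulls under gaming load, which is exactly part of the problem.

Code:
# Rough sketch (not [H]'s methodology): estimating GPU-only draw from
# wall ("Kill A Watt") readings. All numbers below are illustrative
# assumptions, not figures from either review.

def gpu_watts_estimate(wall_load_w, wall_idle_w, psu_efficiency):
    """Estimate GPU board power from wall readings.

    wall_load_w    -- total system draw at the wall while gaming
    wall_idle_w    -- baseline system draw at the wall
    psu_efficiency -- assumed PSU efficiency at that load point (0-1)
    """
    # Subtract the baseline, then convert wall watts to DC watts.
    # The delta also includes extra CPU/board draw under gaming load.
    return (wall_load_w - wall_idle_w) * psu_efficiency

# The same hypothetical card "measured" on two different test rigs:
rig_a = gpu_watts_estimate(wall_load_w=420, wall_idle_w=90, psu_efficiency=0.87)
rig_b = gpu_watts_estimate(wall_load_w=395, wall_idle_w=62, psu_efficiency=0.91)

print(f"Rig A estimate: {rig_a:.0f} W")   # ~287 W
print(f"Rig B estimate: {rig_b:.0f} W")   # ~303 W
# A ~15 W swing shows up even though the "GPU" is identical, purely from
# the baseline and efficiency assumptions -- which is why wall-power
# numbers only compare cleanly within a single review/test rig.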
 
So you once again missed the opportunity to actually put a custom R9 290 and a custom GTX 780 head-to-head in an OC battle? Comparing the results of your earlier PNY GTX 780 review and this... the 780 looks pretty darn strong.
 
So you once again missed the opportunity to actually put a custom R9 290 and a custom GTX 780 head-to-head in an OC battle? Comparing the results of your earlier PNY GTX 780 review and this... the 780 looks pretty darn strong.

Is it strong enough to justify the $70.00 price difference? Winning by 3-4 FPS is not worth that much of a price difference.
 
Is it strong enough to justify the $70.00 price difference? Winning by 3-4 FPS is not worth that much of a price difference.
That's ultimately up to the buyer, isn't it? It's subjective.

I think the custom 290s at $399 are a pretty stellar deal. I wonder if any of these can still be unlocked to 290Xs?
 
Is it strong enough to justify the $70.00 price difference? Winning by 3-4 FPS is not worth that much of a price difference.

Value propositions are always vague by nature. Performance is not. Comparing a custom 290 to a reference 780, then overclocking the 290 and leaving the 780 to suffer under its very strict temp and power limits... Just saying, that is not the whole picture on the performance front.
 
Value propositions are always vague by nature. Performance is not. Comparing a custom 290 to a reference 780, then overclocking the 290 and leaving the 780 to suffer under its very strict temp and power limits... Just saying, that is not the whole picture on the performance front.

But the GK110 reference cooler is supposed to be the bomb. ;) Supposedly it's only AMD reference that suffers from throttling and reduced performance.
 
But the GK110 reference cooler is supposed to be the bomb. ;) Supposedly it's only AMD reference that suffers from throttling and reduced performance.

Nice try, but failed. The cooler is "the bomb", at least if the best blower type cooler on the market is enough to be called that. The throttling is caused by Nvidia's decision to limit temperature, power usage and fan noise below that of AMD's 290/290X. Give it more headroom and behold.

The reference GK110 cards throttle way more than reference Hawaii. It's not a hardware limitation though, purely software.
 
Nice try, but failed. The cooler is "the bomb", at least if the best blower type cooler on the market is enough to be called that. The throttling is caused by Nvidia's decision to limit temperature, power usage and fan noise below that of AMD's 290/290X. Give it more headroom and behold.

The reference GK110 cards throttle way more than reference Hawaii. It's not a hardware limitation though, purely software.

True that. :D

780tiVS290X.png


In this chart we see out-of-box performance for GK110 (780, 780 Ti, Titan) and what they do if you up the fan speeds (85% IIRC), increase the temp target to 95°C (same as Hawaii), and increase the power target to 106% ("Uber" for GK110 in this chart). Out of the box, Hawaii is actually faster in real-world use (i.e., inside a case and allowed to run until temps stabilize). My only beef is all of the negative press AMD got because the reference cards throttled under all kinds of artificial scenarios, from not setting the cards to Uber to actually pumping heat into the cases like a PSU hotbox, while nVidia got a complete pass from the mainstream press. What this did was allow nVidia to show artificial performance marks as well as inflate the overall power efficiency of their cards. While I don't fault nVidia for this, I do fault the press for overlooking/ignoring/hiding this fact while going to great lengths to emphasize it in AMD's case. I'd love to know why.
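
To put those settings into watt terms: 106% of GK110's 250W board power is roughly 265W, and the 95°C target matches Hawaii's. Below is a deliberately crude toy model of my own (not NVIDIA's actual GPU Boost algorithm; the clock, the 1.05 factor, and the temperature slope are invented numbers) showing how relaxing those two limits removes the clock ceiling.

Code:
# Simplified, illustrative model of boost throttling -- NOT NVIDIA's or
# AMD's actual algorithm. It only shows how the knobs in the chart
# (power target %, temp target) turn into a clock ceiling.

BASE_BOARD_POWER_W = 250  # GK110 reference board power spec

def boost_clock(max_clock_mhz, temp_target_c, power_target_pct,
                ambient_c=30, step_mhz=13):
    """Find the highest clock bin that stays under both limits (toy model)."""
    cap_w = BASE_BOARD_POWER_W * power_target_pct / 100.0
    clock = max_clock_mhz
    while clock > 0:
        # Crude assumptions: draw scales linearly with clock and can exceed
        # nominal board power by ~5% at the top bin; die temp rises about
        # 0.22 C per watt over ambient inside a closed case.
        draw_w = BASE_BOARD_POWER_W * 1.05 * clock / max_clock_mhz
        temp_c = ambient_c + 0.22 * draw_w
        if draw_w <= cap_w and temp_c <= temp_target_c:
            return clock
        clock -= step_mhz
    return 0

# Out-of-box-style limits: 100% power target, ~80 C throttle point.
print(boost_clock(1006, temp_target_c=80, power_target_pct=100))   # 863 MHz
# The chart's "Uber" settings: 95 C target (same as Hawaii), 106% power.
print(boost_clock(1006, temp_target_c=95, power_target_pct=106))   # 1006 MHz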
 
I would love to see them put two custom cards, a 780 and a 290 from the same manufacturer, head-to-head and see what comes of it. Also, +50mV isn't very much on a 290; crank these little buggers. If possible, can you give us VRM temps in the future? I like to see the aftermarket cooler handling everything on the card effectively, not just the core.
 
The last GTX 780 review said it made the 290 look like a poor value.
Now this review says the 290 makes the 780 look like a poor value.

Consistency is key.
 
The last GTX 780 review said it made the 290 look like a poor value.
Now this review says the 290 makes the 780 look like a poor value.

Consistency is key.

Two different authors.

Based on the data in the PNY review, it was clearly a better card than a reference 290. Based on this new review, the Asus card is better than the Nvidia 780 reference card. It doesn't really say much other than the fact that the R9 290 and 780 are very close competitors.

Generally speaking, it looks like this:

R9 290
Pros - cheaper, as fast if not faster, 4GB VRAM
Cons - reference cards are said to be loud and hot (I don't agree)

780
Pros - ShadowPlay (people are raving about it, even though third-party video capture has existed for a long time now), PhysX (in certain games), generally accepted better reference cooling
Cons - less VRAM, more expensive
 
After comparing the two reviews against each other (the PNY 780 XLR8 numbers vs. here), there isn't a clear performance leader between the two. The AMD card is ever so slightly faster in Metro and TR, while the converse is true in Far Cry. Both achieve 60+ FPS in BF4, but there isn't enough of a difference to create a gameplay disparity (60 vs. 67 frames).

The Crysis 3 comparison is tossed out because, according to the testing outline, they used very different patch levels between the two. For some reason, the AMD review tested with basically the shipped version of C3 (1.0.xx), while the PNY card was tested with v1.2. Two major patches to a game can cause dramatic test variances, made very obvious by the fact that the PNY card is about 40-50% faster than the stock 780 in the AMD test, which is flat-out wrong.

Overall, it seems to be a wash performance-wise. That being the case, the PNY card costs 6-15% more, depending on which vendor you buy from (the Newegg listing is still in stock at its price, but the Amazon price on the PNY card has dropped to $469).
 
After comparing the two reviews against each other (the PNY 780 XLR8 numbers vs. here), there isn't a clear performance leader between the two. The AMD card is ever so slightly faster in Metro and TR, while the converse is true in Far Cry. Both achieve 60+ FPS in BF4, but there isn't enough of a difference to create a gameplay disparity (60 vs. 67 frames).

The Crysis 3 comparison is tossed out because, according to the testing outline, they used very different patch levels between the two. For some reason, the AMD review tested with basically the shipped version of C3 (1.0.xx), while the PNY card was tested with v1.2. Two major patches to a game can cause dramatic test variances, made very obvious by the fact that the PNY card is about 40-50% faster than the stock 780 in the AMD test, which is flat-out wrong.

Overall, it seems to be a wash performance-wise. That being the case, the PNY card costs 6-15% more, depending on which vendor you buy from (the Newegg listing is still in stock at its price, but the Amazon price on the PNY card has dropped to $469).

You seem to be reading the Metro results wrong. The PNY highest playable results are with PhysX. In the apples-to-apples results, the PNY 780 OC is clearly in front of the Asus 290 OC.
 
You seem to be reading the Metro results wrong. The PNY highest playable results are with PhysX. In the apples-to-apples results, the PNY 780 OC is clearly in front of the Asus 290 OC.

Ah, true. The frame advantage it shows in the apples-to-apples results is the same kind as in BF4 (no gameplay difference), but it is there.

I haven't personally owned an nVidia card since my 9600GT back in the day, mainly because price/performance just worked too well in AMD's favor: 9600GT to 4850 to 4870X2 to 7870s. It basically comes down to value, and at this juncture, I'd go with the 780 if I were buying right now. PhysX is enough of an improvement that it's very nice to have when it's there. The times when it's not, the playable performance between the two cards is functionally identical.

So the minimalist in me says 290 all the way, that extra $20-$30 could be spent somewhere else. But after having played PhysX games with it enabled, yeah...I'll shell out an extra Jackson just to have it when it's available.
 
The 290 is such a bargain now. I wouldn't touch a 290X, TBH; the price difference just isn't worth it, and you are easily in 780/780 Ti territory by then. But the $400-450 price point is being held down solidly by the 290 for now.
 
In my experience, VRM1 runs hot on this model, on both the 290 and 290X, due to the design. Just under 105°C in my tests.
 
Looking to pick up 2x 290/290Xs for CF in a WC loop. As one of the few custom-PCB cards that actually has a full-cover EK block (albeit only in nickel), this card popped up on my radar. Considering that the fan and noise are going to be a non-issue as a result, does this card really bring much to the table over the reference design strictly on the merits of the PCB and components used? The difference in price over reference is probably ~$100-$150 over 2 cards and 2 water blocks, which is noticeable, but doable if it's worth it.
 
In my experience, VRM1 runs hot on this model, on both the 290 and 290X, due to the design. Just under 105°C in my tests.

This is my concern - and also what shows up on NewEgg (lots of angry users with hot VRMs and glitching). I guess [H]'s card did not exhibit the issues?
 