AMD Radeon R9 Fury X Video Card Review @ [H]

Yeah, a monster case like my HUGE N200!

Hey, if you have the desire to add another AIO into your uATX chassis, then more power to you.

As for me, I'm not going to clutter up the inside of my chassis with a 240-280mm CPU AIO and two 120mm GPU rads just to get inferior GPU performance and specs for the same cost... and I'd venture to guess that a lot of others won't either.
 
I took the time to go past the "no HDMI 2.0" part and read the rest. The numbers are good/competitive.

I do not agree with the conclusion that Fury X suffers from low VRAM. I saw ZERO proof of that inside the article...
Read the article again.
It even suffers at 1440p.
 
You're still not considering that the ROP count is considerably lower than the 980 Ti's - 64 ROPs vs. 96 ROPs. That will hurt AA / AF performance quite a bit at high resolutions.
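A quick back-of-envelope on pixel fillrate shows the size of that gap. This is a rough sketch using approximate reference clocks, not a benchmark:

```python
# Rough pixel-fillrate comparison (ROPs x core clock).
# Clocks are approximate reference specs; real boost behavior differs.
cards = {
    "Fury X": {"rops": 64, "clock_mhz": 1050},
    "980 Ti": {"rops": 96, "clock_mhz": 1000},
}

for name, c in cards.items():
    # Gpixels/s = ROPs * clock in GHz (one pixel per ROP per clock, ideal case)
    fillrate = c["rops"] * c["clock_mhz"] / 1000
    print(f"{name}: {fillrate:.1f} Gpixels/s theoretical")
# Fury X: 67.2 vs 980 Ti: 96.0 -- roughly 30% less raw fillrate
# for the blending work that AA piles on.
```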

I have NO idea how you arrived at that response to my post. My point was that the card had one job, and the implication is that it didn't do it.
 
Why no real world testing at 4k? I know, they'd probably come up empty, but for those on team red, I can imagine the conspiracy theorists in the posts ahead ;)

ALL testing we do is real world gameplay.
 
Hey Kyle or Brent, any update from AMD about unlocked voltage?

That's basically the only thing keeping me from ordering a 980 Ti at this point.
 
I bet that the next iteration of this technology, on a smaller process node and with 8GB or 16GB of HBM2 memory, would be awesome... if AMD can stay afloat until then, that is...

The problem is, I don't think the die shrink will solve their competitiveness issues; they need to do some significant work. Otherwise it will be yet another iteration of GCN, which is proving to be consistently behind Nvidia's own offerings with each generation.
 
The problem is, I don't think the die shrink will solve their competitiveness issues; they need to do some significant work. Otherwise it will be yet another iteration of GCN, which is proving to be consistently behind Nvidia's own offerings with each generation.

They said they are focusing on GCN optimizations for Arctic Islands, aiming at 2x perf/watt.

If they can nail that I think they'll be fine.
 
I just checked the OCN thread and they don't know if voltage is locked or not.

Remember, even though the card is at 60C, that's around the max for liquid. Maybe you could use push/pull fans to get the temp down, but you don't want to hit 95C with an AIO.

Who knows what the VRMs can handle. Someone needs to temp check those before people go nuts. VRMs on my Titan X's are one reason I haven't hard volt modded yet (the other is time).
 
I just checked the OCN thread and they don't know if voltage is locked or not.

Remember, even though the card is at 60C, that's around the max for liquid. Maybe you could use push/pull fans to get the temp down, but you don't want to hit 95C with an AIO.

Who knows what the VRMs can handle. Someone needs to temp check those before people go nuts. VRMs on my Titan X's are one reason I haven't hard volt modded yet (the other is time).

Yeah, the fact is that if the VRMs on that card are getting that hot at stock, we are not going to realize any big overclock.

I have discussed our next steps with Brent and staying on top of Fury X performance as it is now is not a priority.
 
It's not a bad card; performance scaling between 1440p and 4K is pretty strange, though. Have to believe that drivers will push performance higher as time goes on.

AMD will probably not even bother touching the price right now, with the regular Fury being their volume product and priced similarly to a GTX 980.
 
Hey Kyle or Brent, any update from AMD about unlocked voltage?

That's basically the only thing keeping me from ordering a 980 Ti at this point.

If you unlock the voltage on the Fury X you will start a fire.:eek:

You can unlock the voltage on the 980Ti. As soon as I install my waterblocks, that is precisely what I intend to do.
 
I'm not really sure what that means... Does that mean if/when overclocking tools with voltage adjustments become available, [H] isn't going to bother?

Currently we see no reason to spend resources overclocking the Fury X. If that changes in the future, we will look into it.
 
I took the time to go past the "no HDMI 2.0" part and read the rest. The numbers are good/competitive.

I do not agree with the conclusion that Fury X suffers from low VRAM. I saw ZERO proof of that inside the article.

Fiji is not a fail; it is competitive in both price and performance with the 980 Ti. It runs cooler and quieter. Comparing it with Bulldozer is totally unfair.

The big letdown is that neither card can beat SLI 970 at 4K. :(

Now let us wait for the dual-Fiji card. The 450W 4870X2 was the VGA I gamed on the longest; I purchased one at launch and kept it until the 6950 came out. Dual Fiji can repeat that success.
Did you actually read the review? It's slower and costs the same as the 980 Ti. In what way is that "competitive"? If it's faster and costs the same or less, great. Offering worse performance for the same price doesn't equate to value.

The AIO cooler is nice, as is the small size of the card. I don't think it's a bad product. It's just not a good product at $650 and I don't see how anyone can claim that it is.
 
I see some comments here about the VRMs on the Fury X but does anyone have a link? I'm surprised there are temp problems with them being that they are water cooled.
 
I am wondering why the testing was not done with 15.15. I figured the reviews would be based on those drivers instead of 15.5.
Are you guys just not reading the review?

For the new AMD Radeon R9 Fury X video card we are using AMD supplied drivers for use only with the R9 300 and Fury series. This is driver version: "AMD-15.15-Radeon300-Series-Win8.1-Win7-64Bit-June15." It should be noted that this driver will not work or install on AMD Radeon 200 series, so we cannot install it for the R9 290X comparison.
 
"Currently" I agree, I'm specifically asking about if/when the tools become available though.

Tell me specifically when those tools are going to be available and exactly what they will be able to control, and I will give a specific answer.
 
I expect more to be squeezed out of this thing if/when AMD releases more mature drivers. They really need to spend more resources on that aspect now. I am curious as to how their new air cooler is designed to handle the heat. I hope they've cracked it!
 
I expect more to be squeezed out of this thing if/when AMD releases more mature drivers. They really need to spend more resources on that aspect now. I am curious as to how their new air cooler is designed to handle the heat. I hope they've cracked it!
The amount of heat going into the GPU heatsink itself has gone up substantially compared to the 290X, despite power usage going down overall.
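Purely illustrative numbers (assumptions, not measurements) show why that can be true even with similar total board power: HBM sits on the package, so memory heat now flows through the same cold plate instead of being spread across the PCB.

```python
# Illustrative only -- every wattage below is an assumption, not a measurement.
# With GDDR5 (290X-style), memory chips dissipate their heat across the PCB;
# with HBM (Fury X), the stacks sit on the interposer under the cold plate.
board_power_290x = 290        # assumed total board power, W
gddr5_and_vrm_on_pcb = 60     # assumed heat dissipated away from the GPU heatsink, W

board_power_fury_x = 275      # assumed total board power, W
vrm_on_pcb = 25               # assumed heat NOT going through the cold plate, W

heatsink_290x = board_power_290x - gddr5_and_vrm_on_pcb
heatsink_fury = board_power_fury_x - vrm_on_pcb
print(f"290X heatsink load:     ~{heatsink_290x} W")
print(f"Fury X cold plate load: ~{heatsink_fury} W")
# Total power is similar, but more of it concentrates in one heatsink.
```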
 
Hey, if you have the desire to add another AIO into your uATX chassis, then more power to you.

As for me, I'm not going to clutter up the inside of my chassis with a 240-280mm CPU AIO and two 120mm GPU rads just to get inferior GPU performance and specs for the same cost... and I'd venture to guess that a lot of others won't either.

Aww hell no. I'll stick with my 980 Strix. Just sayin' I could easily install a pair of Fury Xs in my wee case along with the H100 I've already got on my CPU, and it would all run just peachy keen.
 
The amount of heat going into the GPU heatsink itself has gone up substantially compared to the 290X, despite power usage going down overall.
I'm not sure I understand what you mean here. If it's pulling a similar amount of power to the 290X, how is it giving off "substantially more" heat? That's not how physics works.
 
I got really lucky and was able to sell my old rig for $2800 maybe 6 weeks ago. The plan then was to sit on the money and build a Skylake CPU / New AMD GPU system ( around this time ) and just use my laptop to get by.

The more I thought about it, the less I wanted to buy into 28nm again. On top of that, I saw those early reviews of Skylake engineering samples and saw that the 5820K was still going to be faster at stock speed.

So I built my current system (see sig).

Looks like I made all the right choices in not waiting and just building with 2 x 970s.

I'm going to upgrade next year to the new Pascal GPUs and 6-core CPUs.
 
Aww hell no. I'll stick with my 980 Strix. Just sayin' I could easily install a pair of Fury Xs in my wee case along with the H100 I've already got on my CPU, and it would all run just peachy keen.

Like me, you may also need a larger PSU to handle the higher power draw of two FuX's if they were to be the chosen pixel pushers, especially when running a modest 24/7 OC on the CPU under water. I'd be extremely hesitant to do so with anything less than 850-1000W. So that would be yet another expense required on top of $1300 in GPUs, without changing/upgrading anything else...
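A rough system-power tally (all wattages assumed for illustration) shows why 850-1000W is the comfortable range for a build like that:

```python
# Rough PSU sizing sketch -- every number here is an assumption.
fury_x_tdp = 275          # W per card (AMD's stated typical board power)
gpu_count = 2
cpu_overclocked = 200     # W, assumed draw for a modest 24/7 OC under water
platform_misc = 75        # W, assumed: board, drives, fans, pumps

load = fury_x_tdp * gpu_count + cpu_overclocked + platform_misc
headroom = load / 0.8     # keep sustained load near ~80% of PSU rating
print(f"Estimated load: {load} W")          # 825 W
print(f"Suggested PSU:  ~{headroom:.0f} W") # ~1031 W
```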
 
OK, now that I am at home and have time to comment, here it is.

First, I have read the article and all the posts here. I am not overly fond of the article, as it lends itself to really unprofessional writing. I have stated before that an article reviewing any piece of equipment should leave personal remarks and words with negative connotations until the CONCLUSION. As some (few) have mentioned earlier, it is somewhat hard to read, as it lacks objectivity from minute one. The section about the R9-300 series and the remarks about VRAM size and resolution for gaming would have been better suited to the conclusion, as they seem more of a slant than anything substantive. That section set the tone for the rest of the article and is what makes it hard not to feel there was an agenda, whether there is one or not.

Then there's the line in the 1440p BF4 benchmark about giving the middle finger. That was quite infantile and, again, better suited to the Conclusion, where opinions belong. I think my point about word usage and placement is made, so on to another point.

For the greater part, the review's findings of fact are in line with most others, so I am not debating them, though I think BF4 needs another look (it was the only test that seemed off). That said, the test suite does seem shallow and invites some scrutiny. It was stated that the latest games were used and that was the reason (for the suite being shallow, as in small in number, not intellect). I get not using Skyrim, though it is still a widely played game, but there are other recent games that don't seem to get used that fit the criteria. Ryse: Son of Rome released October 10, 2014, and is therefore a recent game. It also happened to be a game that played very well on AMD; scrutiny is warranted both for why it is used and why it is not. Then you also have Dragon Age: Inquisition, which didn't seem to show affinity to one side or the other; it released at the end of 2014 and was quite an anticipated release. Again, it fits the criteria. However, the choice of games is limited largely to ones that inherently favor Nvidia. NOT A CONSPIRACY, but a statement of fact. I would rather have them exist in reviews than not. My argument is not against their existence in the [H] bench suite but rather the exclusion of others, or in this case the dismissal of concern by the authors.

And as was stated early on in the first few pages, there was concern over a statement in the review:

We saw up to 5GB of usage when the VRAM capacity was there to support it. This game is bottlenecked on every 4GB video card at 1440p with maximum in-game settings.

But when you look at the 4K bench tests, the same Fury that was behind the 6GB 980 Ti at 1440p is now neck and neck with it. So the concern was with the facts, not the statement, and it was unfortunately blown off by the author and editor. The point being: if 4GB was indeed a bottleneck and a reason for concern at just 1440p, then what was going on at 4K? That was the question, and a good one, seeing that the facts didn't back up the statement. And if 4GB is a concern and a factor in the Fury's results, then what was the concern affecting the 980 Ti?

Anyway, those are a few of my concerns and observations. The sad part is that PCPer has had excellent and objective reviews of the 300 series (the R9 390 in particular) and Fury. It used to be that I couldn't stand their reviews for the same reason I dislike this one: too much opinion throughout with little objectivity. I was fine with the PCPer conclusion, which wasn't a great deal different from most: that the Fury was a bit less than stellar, not living up to the hype. But at least they gave the positives and negatives without the flaming.
 
Kudos to AMD for bringing Fury X to market and taking the risk of gen 1 HBM... however, despite the lead-up promo push, it still looks/feels like a rushed product. Unfortunately, all the 'little' things that could be ignored if performance were stellar (I was one of those fooled into believing >10% over Titan X) will now be picked apart: no HDMI 2.0, no DVI, the 'factory fitted' water cooling, overclocking, etc. Fury? A lower clocked Fury X? Depending on yields? Hopefully AMD can do some magic before the next two versions of this are 'unleashed'!?
 
I see some comments here about the VRMs on the Fury X but does anyone have a link? I'm surprised there are temp problems with them being that they are water cooled.

The water block on the Fury X is not actually a full-cover block in the sense of a one-piece block with water channels throughout.

It looks like a typical die-only AIO block sitting on top of a metal base plate covering the card. It does have a copper tube section running over the plate (presumably contacting it? is it soldered?) where the VRMs would make contact.

This, however, is not the same as an actual one-piece full-cover block with water channels directly over the VRM area.
 
Do you realize that you are merely arguing semantics? You're choosing a chassis with superior cooling capability to defend an inferior-performing product, one that doesn't even have a wide enthusiast market scope due to its mandatory AIO (which negates the need for all that chassis airflow versus using air-cooled GPUs), at the same price tag as competition that kills it in every single metric except operating temps.

I'll agree with you that it's an inferior product. But choosing the case over the GPU is the same as choosing design over function. It's akin to sticking a supercharged 6.2L V8 into a Mini Cooper because it looks cute. But in all honesty, it's a stupid idea.

If design is your goal, then this GPU is not for you.
 
Ok now that I am at home and have time to give comment here it is.

<snip>
I disagree... I come here because they're not afraid to call things as they see them. I'm not interested in how a review "should" be written (in fact, I couldn't care less); I'm interested in the opinions of the reviewers, as they have proven themselves trustworthy over many years. I want to know their opinions, not be left guessing until the end. [H] doesn't follow most sites in the way they review things, and I really appreciate them for that.

I'm not suggesting you're doing this, but people really need to stop crying bias every time a review is put out that they disagree with or that disagrees with their fanboyism.
 
Once I read the in depth article on tech report on the architecture, I knew it would be a bust.

Same number of ROPs, but more shaders and texture units? That screams unbalanced architecture, which would mean very inconsistent performance compared to the 290X.

They also revealed that the die size was limited by the interposer, which is probably why they couldn't fit more ROPs. That, combined with the total fuckup of an 8GB 390 versus a 4GB Fury, makes it look quite stupid.

It doesn't matter how fucking fast your VRAM is if you have to swap over PCIe. Who the fuck approved this architecture? GDDR5 made the cut on the 4870 because its density was finally comparable to GDDR3.

I didn't know any of this. So honestly, it legitimately seems like this was designed around 20nm; they got fucked into using 28nm because of fab issues (for all GPU manufacturers), they also had to answer Maxwell, then the interposer limited die size, so they sacrificed what they could, because they just HAD to go with HBM and not using it wasn't an option. And you end up with a lopsided, fucked-up card that could have screamed with 8GB of GDDR5 on it and a proper number of ROPs.
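To put the "unbalanced" claim in numbers, here is a quick shader-to-ROP ratio comparison using the published unit counts (a sketch, not a performance model):

```python
# Shader-to-ROP ratios from published unit counts.
# A higher ratio means relatively less raw fillrate per unit of shading power.
cards = {
    "R9 290X": {"shaders": 2816, "rops": 64},
    "Fury X":  {"shaders": 4096, "rops": 64},
    "980 Ti":  {"shaders": 2816, "rops": 96},  # CUDA cores aren't directly comparable to GCN SPs
}

for name, c in cards.items():
    print(f"{name}: {c['shaders'] / c['rops']:.0f} shaders per ROP")
# 290X: 44, Fury X: 64, 980 Ti: 29 -- Fury X piles shading power onto the
# same 64 ROPs the 290X had, which fits the inconsistent-scaling picture.
```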
 

I agree. 4GB and the lack of HDMI 2.0 are not future-proof.

I'm praying to God that somehow this will overclock like a monster. But the negative reviews are already out. And it will likely be too little too late.

And before someone says it can't overclock, it can. It's just locked in drivers for now. There's no point in so much cooling power if you can't overclock the thing.
 
I'll agree with you that it's an inferior product. But choosing the case over the GPU is the same as choosing design over function. It's akin to sticking a supercharged 6.2L V8 into a Mini Cooper because it looks cute. But in all honesty, it's a stupid idea.

If design is your goal, then this GPU is not for you.

However, I would happily drop a 5.7 (350) or 7.4 (454) into an S10.
 
Great review as always, I appreciate the time and effort.

Fury X landed right where I thought it might land.
 
AMD just can't dig themselves out...

I really hope they fix the Fury X with 8GB of RAM soon at 980 Ti price points. If they can't, I wonder how long AMD is going to stay around. We need somebody to compete against the Intel/Nvidia combo.
 
Prior to this month, when was the last time AMD brought out a regularly scheduled driver release?

And you're banking on this why?

WHQL - not much, "betas" often...

In the past, both companies have sometimes done driver tweaks that helped a lot, but this time I don't think drivers can save AMD's new cards at all.
 