AMD Radeon R9 Fury X Video Card Review @ [H]

The amount of butthurt from AMD fans in this review thread is hilarious.

This is what I love about [H] reviews, they don't sugarcoat anything. They tell it like it is.
 
It's not really AMD's fault that they can't bring it with a smaller budget and personnel pool.




A lot of people like to bitch at AMD and in turn give NVIDIA their money. Anyone remember the GTX 970 bullshit? If you guys really like AMD, stop expecting a miracle and a graphics card that can walk on water and beat the crap out of NVIDIA. How is AMD supposed to do that when it has no money?


You guys want AMD to be competitive? Buy their shit.

These are businesses, not charities.
 
I don't get the people complaining about the lack of overclocking in the article. Are you sure you want to go down the road of overclocking the Fury X and then overclocking the 980 Ti? The 980 Ti overclocks by about 30%, the Fury X by 5-10%. It will not be pretty for the Fury X.

That's because it's voltage locked.

As it appears now, the backside voltage regulators are baking hot at 100C. This appears to be a design oversight by AMD as there is no way to cool them.

What we do NOT know is how the front side is working. Like motherboards, voltage phase regulators come online (turn on) as they are needed. The front-side regulators may or may not be in use, and those are the ones that are cooled. So it may be possible (albeit unlikely) that with some card BIOS adjustments more power could be shunted to the front-side regulators, where they're cooled. But as it stands now, we are voltage locked, which prevents large overclocking.

It's simply a shortsighted design decision.
 
Kyle, not sure if you'll see this or if it's been answered. Was the 980 Ti boosting? Or were the cards "heated up" before testing to stabilize clocks? Just trying to get a better idea of what's going on.


We ALWAYS run the cards a while before taking framerate data. We never use a "cool" card.

Looking back over the 980 Ti review, I see we did not report actual in-game clocks. Our bad on that, as I know we usually do. Let me see if I can get Brent to ping in on that, as I am sure he has that data. NVIDIA squeezed us on time even more so than usual on that review, so it is not surprising to me that something like that fell through the cracks.
 
That's because it's voltage locked.

As it appears now, the backside voltage regulators are baking hot at 100C. This appears to be a design oversight by AMD as there is no way to cool them.

What we do NOT know is how the front side is working. Like motherboards, voltage phase regulators come online (turn on) as they are needed. The front-side regulators may or may not be in use, and those are the ones that are cooled. So it may be possible (albeit unlikely) that with some card BIOS adjustments more power could be shunted to the front-side regulators, where they're cooled. But as it stands now, we are voltage locked, which prevents large overclocking.

It's simply a shortsighted design decision.



But AMD said the Fury X is an overclocker's dream! ;)
 
So for you guys it DOES NOT MATTER that dev A develops game B...

1. Using proprietary code from nVIDIA.
2. "Sometimes" receiving money from nVIDIA.
3. Optimizing the game for nVIDIA GPUs because the game uses nVIDIA proprietary code.
4. nVIDIA has more time AND KNOWLEDGE to polish their drivers, given they have access to 100% of the game code.
5. AMD has NO access to 100% of the game code.
6. AMD has less time to polish their drivers... because sometimes the proprietary code is added a week or two before release day.
7. ...and... what if that proprietary code... that "secret code"... that "black box"... detects the presence of AMD GPUs and hinders performance (!?)

...so for you guys all this is business as usual... cool! :cool:

How a particular card runs a particular game with the best available drivers is how that card runs that game with the best available drivers. What goes on in the background that leads to that performance is of no consequence to the reviewer. To consider that in the review is to introduce bias.
 
Sorry, did you explain how, in the Dying Light tests, you find that at 1440p it is VRAM-limited, but at 4K it isn't? Or are you saying that it is in both?

Doesn't add up, is all.
 
For me it seems Fury owns NVIDIA, so I am buying it.
I can't rely on the way HardOCP does their testing, as I've never bought stuff based on that.
I look at the way I do my gaming, and for my setup NVIDIA doesn't offer me anything better.
Fury, however, does, which is why I buy AMD; it's superior stuff.

The [H] review gives you results you don't like so you choose to ignore the review? This folks, is the epitome of being a fan boy. If the Fury X was 10% slower across the board you'd probably still say you can't rely on the way [H] does testing. If you don't look at testing, why are you here? The hot dogs?
 
The [H] review gives you results you don't like so you choose to ignore the review? This folks, is the epitome of being a fan boy. If the Fury X was 10% slower across the board you'd probably still say you can't rely on the way [H] does testing. If you don't look at testing, why are you here? The hot dogs?

Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/
 
Oh man. AMD is dropping the ball so hard. Their newest generation is so much slower than NVIDIA's current generation, and the price is the same... RIP
 
The amount of butthurt from AMD fans in this review thread is hilarious.

This is what I love about [H] reviews, they don't sugarcoat anything. They tell it like it is.

I think the other thread that devolved into hot dogs was awesome. Gotta let people vent, but at the same time keep everyone following the rules. Not an easy job lol. I think for the mental state of some people here, AMD has to definitively "win" sometimes.
 
How a particular card runs a particular game with the best available drivers is how that card runs that game with the best available drivers. What goes on in the background that leads to that performance is of no consequence to the reviewer. To consider that in the review is to introduce bias.

For most it isn't the test suite but the totality of it. I mentioned in an earlier post two games that fit the criteria but are not in the test suite. One is more aligned with AMD performance, the other more neutral. The only things they don't have are TWIMTBP or GameWorks involvement. Again, the start of a grand conspiracy, even if devoid of proof, which most good conspiracies are.
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

OK, even I have to admit you are going a bit far there, almost to the same degree as the others you are arguing against. Doesn't really help your stance.
 
So for you guys it DOES NOT MATTER that dev A develops game B...

1. Using proprietary code from nVIDIA.
2. "Sometimes" receiving money from nVIDIA.
3. Optimizing the game for nVIDIA GPUs because the game uses nVIDIA proprietary code.
4. nVIDIA has more time AND KNOWLEDGE to polish their drivers, given they have access to 100% of the game code.
5. AMD has NO access to 100% of the game code.
6. AMD has less time to polish their drivers... because sometimes the proprietary code is added a week or two before release day.
7. ...and... what if that proprietary code... that "secret code"... that "black box"... detects the presence of AMD GPUs and hinders performance (!?)

...so for you guys all this is business as usual... cool! :cool:

Is there an instance where nVidia paid/was given 100% access to game code, but AMD was denied the chance to be given/pay for the same 100% access to game code?
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/


Dude, people were right, lol. They said that if the benchmarks at launch were disappointing and not what you expected, you would start talking about DirectX 12 and drivers. You are getting a little too predictable.
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

Mantle in BF4 sucks. It sucks, sucks, sucks. It sucked on my 7950 and it sucked on my 290X. It is not a smooth gameplay experience. The sudden drops in FPS for no reason are just crap. DX11 in this case is a smoother experience. I use SweetFX, which is DX anyway, so I never bother with Mantle now.
 
The [H] review gives you results you don't like so you choose to ignore the review? This folks, is the epitome of being a fan boy. If the Fury X was 10% slower across the board you'd probably still say you can't rely on the way [H] does testing. If you don't look at testing, why are you here? The hot dogs?

I hate to go there but if you check his posting history and presence on other forums (same name) you can come to your own conclusions.
 
So for you guys it DOES NOT MATTER that dev A develops game B...

1. Using proprietary code from nVIDIA.
2. "Sometimes" receiving money from nVIDIA.
3. Optimizing the game for nVIDIA GPUs because the game uses nVIDIA proprietary code.
4. nVIDIA has more time AND KNOWLEDGE to polish their drivers, given they have access to 100% of the game code.
5. AMD has NO access to 100% of the game code.
6. AMD has less time to polish their drivers... because sometimes the proprietary code is added a week or two before release day.
7. ...and... what if that proprietary code... that "secret code"... that "black box"... detects the presence of AMD GPUs and hinders performance (!?)

...so for you guys all this is business as usual... cool! :cool:

I don't know, but maybe one could grab some older games, run benches, and see what happens?
 
So you are quite childish then; that certainly explains the way the article was written. Well, I see no point in going forward with this line of discussion, as rational debate is absent from any of your responses.

Your thoughts are noted.
 
I think the [H] review is fair. I was concerned a while back when the specs were released that not touching the ROPs one bit was an oversight on AMD's part, especially for a card marketed towards 4K. I don't understand why they shoehorned 4096 GCN cores in, added texture mapping units, and then completely left the ROPs untouched.

I pre-ordered it fully expecting that it wouldn't be any more of a 4K card than a Titan X or 980Ti so results were pretty much what I was expecting, but I want it in a small system and the card fulfills that purpose to some extent. But then again I game at 2560x1440 so the results here were relatively disappointing.

I don't get why people get so caught up in company allegiances, so much so that they have to overreact to (what are perceived as) negative reviews of their products. I'd rather be in the know about it than unpleasantly surprised when I put the product through its paces. I don't associate this product launch with the Bulldozer series of CPUs, because those were, and still are, not even in the ballpark performance-wise, which can't necessarily be said about Fury.

I currently have two custom water-cooled R9 290X 8GB cards, so that fulfills the purpose of a 4K gaming machine, but if I decide to dedicate my main rig to a single card for 4K, I'd just try to (currently) find the fastest factory-overclocked GTX 980 Ti and water cool it to extract more performance.

I would think the next, lower-cost variant of the Fury might end up being the better card for most anyway, but unless it's a good $100-$150 less and provides a similar performance delta to what the R9 290 offers relative to the R9 290X, it might be another product released to similarly lukewarm fanfare.

Good review Brent and Kyle, keep it up.
 
DX12 is NOT going to be the magic sauce that saves AMD or any card. Just like every DirectX version before it, games have to specifically take advantage of DX12's features in order to benefit, and given how DirectX has evolved in the past, unless Microsoft really pushes game developers to use DX12, it may be years before we see DX12's effects on AMD cards, by which point the Fury X will be irrelevant to the market.

Hell, I'd find 16nm to be more magical than DX12, and 16nm may already make the Fury X (or any other similarly performing card) irrelevant long before DX12 takes root.
 
There is one thing bugging me in the review: why is a minimum of 33 fps and a maximum of 47 considered unplayable?

I'd say that's well within the realm of playable. Not for competitive gaming, but for a single-player game, where eye candy is more important than sheer competitiveness, it's more than enough for me. I'd refuse to lower the graphics settings for most games even if it dips into the low 20s sometimes.
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

You're still betting on a 20% boost using Mantle? Did you bother to read this?
Initially, I tested BF4 on the Radeons using the Mantle API, since it was available. Oddly enough, the Fury X's performance was kind of lackluster with Mantle, so I tried switching over to Direct3D for that card. Doing so boosted performance from about 32 FPS to 40 FPS. The results below for the Fury X come from D3D.

Oops, another possible AMD driver issue or game patch needed. Looks like AMD accidented the driver yet again lol.
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

SemiAccurate: how aptly named.

Testing a card designed for 4K at 1080p, against another AMD card from 2013, and then against itself at 1080p.
 
There is one thing bugging me in the review: why is a minimum of 33 fps and a maximum of 47 considered unplayable?

I'd say that's well within the realm of playable. Not for competitive gaming, but for a single-player game, where eye candy is more important than sheer competitiveness, it's more than enough for me. I'd refuse to lower the graphics settings for most games even if it dips into the low 20s sometimes.

That heavily depends on the person.

I hate sudden dips into the 30s (for example with V-Sync on, when one of the frames misses its refresh deadline). I have tried, but I simply cannot tolerate 30 fps, probably not even 45 fps.

This assumes VRR screens; if this is a fixed-refresh screen, my criteria would be even higher (locked 60 fps with V-Sync). I find both tearing and stutter from V-Sync to be much more of a distraction than lower details.
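To put a rough number on why a single missed refresh feels like a dip into the 30s on a fixed-refresh screen, here is a minimal sketch (hypothetical render times, assuming a 60 Hz panel with double-buffered V-Sync): any frame that takes longer than ~16.7 ms to render has to wait for the next refresh, so it sits on screen for ~33.3 ms, i.e. a momentary 30 fps, even if the average frame rate looks fine.

```python
# Minimal sketch (not from the review): how V-Sync on a 60 Hz fixed-refresh panel
# turns one slow frame into a momentary 30 fps hitch. Render times are made up.
import math

REFRESH_MS = 1000.0 / 60.0  # ~16.67 ms between refreshes at 60 Hz

render_times_ms = [15.0, 15.5, 18.0, 15.2, 16.9, 15.1]  # hypothetical per-frame render times

present_time = 0.0  # timestamp of the last buffer flip
for i, render in enumerate(render_times_ms):
    finish = present_time + render                             # when this frame is ready
    next_vsync = math.ceil(finish / REFRESH_MS) * REFRESH_MS   # it can only be shown at a refresh
    interval = next_vsync - present_time                       # how long the previous image stayed up
    print(f"frame {i}: rendered in {render:4.1f} ms -> on-screen interval {interval:4.1f} ms "
          f"(~{1000.0 / interval:.0f} fps for that moment)")
    present_time = next_vsync
```

On a VRR screen the slow 18 ms frame would simply be shown 18 ms later instead of being held a full extra refresh, which is why the same dip is far less noticeable there.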
 
There is one thing bugging me in the review: why is a minimum of 33 fps and a maximum of 47 considered unplayable?

I'd say that's well within the realm of playable. Not for competitive gaming, but for a single-player game, where eye candy is more important than sheer competitiveness, it's more than enough for me. I'd refuse to lower the graphics settings for most games even if it dips into the low 20s sometimes.


Well, this is why we actually play the games and report back to you instead of just running a canned benchmark. We have found over the years that framerates many times have little to do with how well a game "plays." We have seen instances where 45 fps on card A is a better experience than 60 fps on card B. Our testing and subjective feedback on this in our reviews is what spawned NVIDIA's FCAT program. (I posted on that earlier in this thread as well.) And yes, while game X might be fine at 30 fps, we find that game Y is not. This is surely subjective, but we report our opinions on what is "playable." Your opinion might surely differ.
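For readers who want to quantify that "feel" themselves, the usual approach is frame-time analysis rather than average FPS: log per-frame render times and look at the worst percentiles and spike counts. A minimal sketch with made-up numbers (not [H]'s methodology or FCAT itself, just an illustration of why two cards with the same average can play very differently):

```python
# Hypothetical illustration: two frame-time logs (ms) with nearly identical average FPS
# but very different pacing. Averages hide the hitches; percentiles and spike counts don't.
def summarize(name, frame_times_ms):
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    p99 = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1))]  # 99th-percentile frame time
    spikes = sum(t > 33.3 for t in frame_times_ms)                       # frames slower than ~30 fps
    print(f"{name}: avg {avg_fps:.1f} fps, 99th percentile {p99:.1f} ms, {spikes} frames over 33.3 ms")

steady = [17.0] * 95 + [18.0] * 5    # consistent ~58 fps
hitchy = [14.0] * 90 + [45.0] * 10   # similar average, periodic stutter

summarize("card A (steady)", steady)
summarize("card B (hitchy)", hitchy)
```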
 
So for you guys it DOES NOT MATTER that dev A develops game B...

1. Using proprietary code from nVIDIA.
2. "Sometimes" receiving money from nVIDIA.
3. Optimizing the game for nVIDIA GPUs because the game uses nVIDIA proprietary code.
4. nVIDIA has more time AND KNOWLEDGE to polish their drivers, given they have access to 100% of the game code.
5. AMD has NO access to 100% of the game code.
6. AMD has less time to polish their drivers... because sometimes the proprietary code is added a week or two before release day.
7. ...and... what if that proprietary code... that "secret code"... that "black box"... detects the presence of AMD GPUs and hinders performance (!?)

...so for you guys all this is business as usual... cool! :cool:

You mean like AMD Mantle and 3DFX Glide? Proprietary APIs have been around since the inception of PC gaming with 3D accelerators. It is in fact business as usual.

If it's not OpenGL (or it's upcoming successor, Vulkan), then it's proprietary. DirectX is a black box.
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.

We test with Mantle and DX all the time and go with the API that gives us the best performance at that time. Using Mantle in this review with the Fury X would have given us worse results.
 
Any review that doesn't run a Mantle test on an AMD card and compare that to NVIDIA (zero fps in Mantle, btw) doesn't really understand gaming on AMD hardware.
SemiAccurate ran it with Mantle, which just shows that if a reviewer wants to make something look a particular way, they can. The future is AMD and Fury; the old tech NVIDIA sells is EOL and obsolete.

So you have a 20% boost here with Mantle, and soon Windows 10 is coming out using the codebase AMD created with Mantle for their GCN tech, so guess what: it's clobber time for AMD in Win 10 utilizing DX12. :D

http://semiaccurate.com/2015/06/24/amds-radeon-goes-premium-with-the-fury-x/

You want them to include the mantle results? LOL you're not going to like it...
 
1st: Wrong! They are sold out everywhere... the card is at the right price and it comes with water cooling.
2nd: You should wait for new drivers. The 390X ("290X"), an almost 3-year-old card, is currently trading blows with the GTX 980, and the best part is the Fury X trading blows with the Titan X and 980 Ti. What do you think is going to happen when they release better drivers?
3rd: HBM is not just fancy, it's actually new tech; that's why NVIDIA wants to use it...
4th: Stop being a fanboy; they aren't paying you, right?

For the price of this card I can get a GTX 980, overclock the hell out of it, and still have money left for a few games and maybe even a closed-loop GPU cooler to boot. That, or just get a 980 Ti and call it good.

If that doesn't work, how is it that you came to a different conclusion than the overwhelming majority of respondents here, as well as [H]? Did you see the same numbers and conclusion we all did?
 
Is there an instance where nVidia paid/was given 100% access to game code, but AMD was denied the chance to be given/pay for the same 100% access to game code?

Every game that uses GameWorks... :mad:

...and AMD is in the business of GPUs... not games.

To help devs in the development of their games is a choice and should not be an obligation.
If a game uses DX 11, AMD should have the best possible driver for that game....... but it's hard or even impossible to manage that when you don't have 100% access to the game...
 
Honestly? I tend to wonder how well my GTX 980 with an AIO water cooler on it, clocked at 1590/3800, would fare vs. a Fury X.

I think it'd be a pretty fair fight.
 
Every game that uses GameWorks... :mad:

...and AMD is in the business of GPUs... not games.

To help devs in the development of their games is a choice and should not be an obligation.
If a game uses DX 11, AMD should have the best possible driver for that game....... but it's hard or even impossible to manage that when you don't have 100% access to the game...


Please take this discussion to another thread or start your own. This has nothing to do with our review and has already been addressed by HardOCP.
 
Just asking again in case it got missed during all the needless fighting/trolling:

Did you explain how, in the Dying Light tests, you find that at 1440p it is VRAM-limited, but at 4K it isn't? Or are you saying that it is in both?

Doesn't add up, is all.
 
There is one thing bugging me in the review: why is a minimum of 33 fps and a maximum of 47 considered unplayable?

A number of reviews stated that the framerate numbers belie the actual smoothness of the gaming experience in a number of games on Fury X, so that's not terribly surprising.
 
Actually I have not yet posted my opinion about the article, considering I cracked the whip at Brent...

Anyway, here it goes. When the Fury X reviews came out across several websites, I paid attention to the most important detail of GPU reviews: the frame rate graphs. With the exception of THG's review, they all agree on one thing: the Fury X soundly beats its predecessor, the 290X, but does so erratically (depending on the level of tessellation), and does not offer 980 Ti-like performance, despite costing exactly the same.

Thus the [H] review accurately reflected my conclusion from the graphs. Other reviews go on about how much better it is than the 290X and such, but mention little about how it compares to the 980 Ti or Titan X.

I don't think they are sugarcoating it. It is still AMD's best single-GPU card yet, so AMD die-hards will have at least something to look forward to, but when taking NVIDIA's offerings into account, things change dramatically.

Basically, the way I see it, Fury X stands in a very strange position, not a pleasant position in fact.

The Fury X on its own seems able to handle 1440p well enough, but it's certainly struggling in some areas, so a CrossFire setup might be needed to yield an optimal experience at that resolution. Now, the thing is, because it has an AIO cooler, fitting two of these in the same case could pose a challenge (from what I can tell; I have limited experience with radiators, as I actively avoid liquid, especially water, in my computer), so installing a CrossFire setup will be challenging.

At 4K, the Fury X definitely requires CrossFire, just as 4K would most likely require SLI Titan Xs or 980 Tis.

What about 1080p? Well, if you are gaming at 1080p, unless you have a 144 Hz monitor, you probably will not get much more benefit from a Fury X than you would from, say, a 390X, and $650 for a GPU to game at 1080p sounds quite steep.

So basically, my takeaways are:

1. The card is not suited for CrossFire, in general, unless you have a case that can accommodate the radiators. It's a decision one pretty much has to plan in advance; it's not a decision you can make on a whim, certainly not to the same degree as with an air-cooled card.

I somewhat feel that the addition of an AIO cooler on the Fury X is AMD's last-ditch effort to eke out as much performance as it can from the chip.

If I were to get an AIO card, I would probably prefer a dual GPU card.

2. It's currently too expensive to be used as a 1080p card, just enough for 1440p, and certainly not enough for 4K, where the CrossFire setup you'd need is hampered further by the additional AIO radiators.

3. I agree that HBM is wasted on the Fury X to a degree. GDDR5 is tried and true, and VRAM size is what matters more. There is still plenty of potential in HBM, but it is not realized here. A 290X probably would have been a better candidate for an HBM shakedown run before incorporating it fully into their flagship.

Anyway, that's my worthless two cents.
 
Brent, Kyle: Thank you for the review.

Seems to me the Fury is a good card, not a great one. If AMD drops its price, it would be competitive with an equivalently priced Nvidia card. (That's my interpretation of the results of your review.)

Thanks.
 
What good does any of this bickering do? The fact is, the Fury X loses. AMD... loses. NVIDIA wins here, big time. The real questions people should be asking are "What is my budget?" and "How does the fact that flagship GPUs will easily cost $1,000 in the future affect me, as a consumer?"
 