AMD Radeon R9 Fury X Video Card Review @ [H]

A faster fan isn't going to help cool down the PCI-E bracket, which is the point of contention.
It also won't help the back of the PCB.

FX pulls over 430W in synthetic tests according to TPU.

Doesn't need to unless you torture test it...do you play torture tests?
 
So, a bit slower than the 980Ti, less vram at the same price. That seems foolish.

Coupled with AMD's sluggish driver update cadence, I'm not sure why you'd get this.
 
Yes, I've read criticism of [H] and yourself on some sites that was unsubstantiated, just fanboi rubbish not worth repeating. Your mention of FUD leads me to ask: given the amount of testing you all do over time, have you seen evidence of Kepler hobbling/performance degradation? I have been using a Titan (original) and have not noticed the card getting any worse, but I keep reading that the 780/Ti isn't as competitive against the 290/X as it used to be... is it more a matter of AMD drivers improving? It's relevant to the 'Fury X will improve with better drivers' discussion...

Sorry for going a bit OT. I enjoyed the article/review, although I was a bit disappointed...

I honestly cannot give you an answer on that because we do not have the time currently to investigate it. Quite frankly though, if that were the case, I think you would already see hard evidence of it in lots of places. It is not as if the community does not have access to those cards and drivers.
 
I read that HBM1 wasn't actually limited to 4GB; it was just that AMD originally designed the GPU believing/hoping they'd be able to put it on a smaller process when the time came, which would have left room for another 4GB. But because the smaller process didn't happen in the end, they were left cramped for space and could only fit 4GB.

In any case you're probably right. I'm sure when Pascal rolls into town with 8GB and HBM2, it will be a party with cherry ice cream and people taking their pants off.

A few years ago Anandtech had a short blurb stating that nVidia is much more conservative when moving to a new process. AMD is much more likely to push the envelope, but this time they got bit.
 
I feel bad for the guys at AMD and Hynix who dedicated years of their lives to HBM, and it gets unveiled to the world on the disappointing Fury X.

Hopefully HBM will get the praise it deserves next year.
 
You can make up a y factor for anything.

You can't just make up a y factor when using 3 points - in this case 1080p, 1440p and 4k.

I decided to go back in time to cement my case.

It took a lot of digging, but here is a link to VRAM usage in Crysis 1:
http://hardforum.com/showthread.php?t=1456645

defaultluser reports 0.31 GB @ 0.48 MP, 0.36 GB @ 0.79 MP, 0.45 GB @ 1.31 MP,
and 0.575 GB @ 1.9 megapixels.

Using 1.9x + y = 0.575 and 0.48x + y = 0.310
.....and solving by substitution....

x = 0.187. Yep, that's right. Still the same GB per megapixel as today.
y = 0.22 GB, which is much more reasonable. This was with 4xAA, btw.

Checking with 0.79 megapixels, or 1024x768:

0.79x + y = 0.368, compared to the 0.36 GB that they recorded.

VRAMgate revealed, ladies and gentlemen.
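If anyone wants to redo that arithmetic, here's a minimal sketch in plain Python (the data points are copied from defaultluser's numbers above; the variable names are just mine):

# Two of the reported data points: (megapixels, GB of VRAM used)
p1 = (0.48, 0.310)
p2 = (1.90, 0.575)

# Fit usage = x * megapixels + y
# x = GB per megapixel, y = fixed overhead in GB
x = (p2[1] - p1[1]) / (p2[0] - p1[0])
y = p1[1] - x * p1[0]
print(f"x = {x:.3f} GB/MP, y = {y:.3f} GB")            # ~0.187 and ~0.220

# Sanity check against the third point: 0.79 MP, measured at ~0.36 GB
print(f"predicted at 0.79 MP: {x * 0.79 + y:.3f} GB")  # ~0.368 GB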
 
I feel bad for the guys at AMD and Hynix who dedicated years of their lives to HBM, and it gets unveiled to the world on the disappointing Fury X.

Hopefully HBM will get the praise it deserves next year.

I wouldn't blame Hynix. They didn't create the Fiji GPU die; that was all AMD.

HBM is fantastic!!!! It just needed to be on Maxwell and not Fiji.

I still would LOVE to see an APU with 1024 GCN cores and 8GB of HBM. That would be one lil beast of a chip.
 
Yes, it was explained away by one guy with "superfluous memory"

Superfluous memory that went up from 0.22 GB with Crysis 1 using 4xAA (still a good-looking game even by today's standards) all the way to 4.5 GB of "superfluous memory" in the case of Middle-earth: a 20-times increase.

Can't wait until 2021 when we need 90 GB of "superfluous memory"
 
A faster fan isn't going to help cool down the PCI-E bracket, which is the point of contention.
It also won't help the back of the PCB.

FX pulls over 430W in synthetic tests according to TPU.
It uses around 280W while gaming. Why use a torture bench as a reference point for temps?
Well, it sorta hurts the card for folding, depending...
 
Kyle and [H]ard are solid; it is Brent whose proverbial jaded high-school fanboy writings grind on and annoy so many people.
 
I feel bad for the guys at AMD and Hynix who dedicated years of their lives to HBM, and it gets unveiled to the world on the disappointing Fury X.

Hopefully HBM will get the praise it deserves next year.

Hoping the next batch of products after this initial release is a better platform for HBM.
The tech is good, don't get me wrong. It's just that it got included in a flawed platform.
 
So, a bit slower than the 980Ti, less vram at the same price. That seems foolish.

Coupled with AMD's sluggish driver update cadence, I'm not sure why you'd get this.

Not to mention it doesn't OC as well.

Wish I'd bought the 980 Ti Hybrids when they were in stock instead of waiting for this crap. Now I have to wait a few more weeks for the Hybrid to restock.
 
Not to mention it doesn't OC as well.

Wish I'd bought the 980 Ti Hybrids when they were in stock instead of waiting for this crap. Now I have to wait a few more weeks for the Hybrid to restock.

Well that might change. Right now there is no program that can access the voltage or memory for overclocking.

I know Unwinder has commented that he is waiting for MSI to send him a Fury so he can get Afterburner working with it.
 
What is going on here? AMD Fury X is barely 10% faster than the 290X. That just doesn't seem right.

Total lack of driver support?
 
You know how nVidia knew exactly how to position the 980Ti? Because they had made a 4GB HBM card in-house, tested the thing, and knew the 980Ti would best it by this amount. And they also could see the 4K writing on the wall.

Like the review says, this card doesn't make sense strategically at all. I just hope this doesn't mark the beginning of the end for AMD (of course, if they do end up folding/restructuring again/selling, the beginning of the end will be said to have begun over a year ago).

Thanks to the authors for another great review. I basically don't buy hardware until it's reviewed here and at PCPer.
 
You know how nVidia knew exactly how to position the 980Ti? Because they had made a 4GB HBM card in-house, tested the thing, and knew the 980Ti would best it by this amount. And they also could see the 4K writing on the wall.

Interesting theory, but it wasn't *just* performance positioning they were a step ahead of AMD on. It was also the timing, the pricing; it all just came together as if they had ears inside AMD. I'm halfway kidding, since it wasn't exactly hard to predict what AMD's next move was, but Fury X just didn't seem to take NV by surprise at all.
 
Well that might change. Right now there is no program that can access the voltage or memory for overclocking.

I know Unwinder has commented that he is waiting for MSI to send him a Fury so he can get Afterburner working with it.

While this wouldn't be the first time a new flagship launched voltage-locked and then got unlocked later, I get the strange feeling this goes beyond third-party tools needing an update. It's like AMD is keeping voltage locked for some other reason they aren't willing to disclose. Worried people are going to fry the VRMs, perhaps? OC headroom already used up to arrive at a competitive stock clock?

Put another way, if voltage control were a game changer, I would think they would've seen to it that voltage was unlocked on day one for the reviews. I know the infamous line from the E3 presentation, "You'll be able to overclock this thing like no tomorrow," has already been beaten to death, but clearly there's a disconnect here.
 
Interesting theory, but it wasn't *just* performance positioning they were a step ahead of AMD on. It was also the timing, the pricing; it all just came together as if they had ears inside AMD. I'm halfway kidding, since it wasn't exactly hard to predict what AMD's next move was, but Fury X just didn't seem to take NV by surprise at all.

Timing was in AMD's favor IMO. The 980Ti came out, and AMD knew exactly what its performance was and exactly what it was priced at, and they still managed to screw up the execution.
 
I feel bad for the guys at AMD and Hynix who dedicated years of their lives to HBM, and it gets unveiled to the world on the disappointing Fury X.

Hopefully HBM will get the praise it deserves next year.

Haha, this is the 30GB OCZ SSD... the... Vertex? 'Affordable' new tech, you can see the potential, it's really good at some things, but you need to make some compromises to get it... and in a year it'll look like garbage. LOL.
 
Timing was in AMD's favor IMO. The 980Ti came out, and AMD knew exactly what its performance was and exactly what it was priced at, and they still managed to screw up the execution.

Other than pricing... there wasn't really anything they could do.
 
Interesting theory, but it wasn't *just* performance positioning they were a step ahead of AMD on. It was also the timing, the pricing; it all just came together as if they had ears inside AMD. I'm halfway kidding, since it wasn't exactly hard to predict what AMD's next move was, but Fury X just didn't seem to take NV by surprise at all.

It's uncanny how they knew where to price it and when to release it vis-à-vis the Fury X's performance. But of course they can anticipate the approximate vital stats of AMD's release. It may well have been that AMD wanted to charge the rumored $750-850 for the Fury X and on the 980Ti's release realized they were toast at that price. So going to $650 was a necessity and may have been a big disappointment for them.
 
It may well have been that AMD wanted to charge the rumored $750-850 for the Fury X and on the 980Ti's release realized they were toast at that price. So going to $650 was a necessity and may have been a big disappointment for them.

I'm re-watching the E3 presentation now and looking for any signs of defeat in their eyes.
 
It would also explain why they haven't lowered it to the much more palatable $500 which is the price point it should be at to be anywhere near successful. Too bad, I was really rooting for them.
 
Fury X has 64 ROPs, and it shows up quickly in games that lean heavily on ROP throughput. Nvidia was clever to launch their big Maxwells with 96 ROPs.

Memory bandwidth still doesn't count for shit.
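Back-of-envelope, the ROP difference is easy to see on paper. Here's a rough Python calculation, assuming the commonly listed specs (64 ROPs at ~1050 MHz for the Fury X, 96 ROPs at a ~1000 MHz base clock for the 980 Ti); treat the clocks as approximations:

# Peak pixel fill rate ~= ROPs * core clock (GHz), in Gpixels/s
fury_x_fill   = 64 * 1.050   # ~67 Gpixels/s
gtx980ti_fill = 96 * 1.000   # ~96 Gpixels/s at base clock, higher at boost
print(fury_x_fill, gtx980ti_fill)

Rough numbers, but they line up with the ROP-bound behavior described above.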
 
It's misleading the audience.

HardOCP wrote:
"This card is built for 4K gaming, not 1440p…."

Then you run tests at 4K, not 1080p or 1440p.
Anything else just misleads the audience.
For 1080p you buy a 380; for 2560x1440 you buy a 390X.

For 4K you buy a Fury X.

The issue for me, as usual, is what I game on and use my machine for. With Eyefinity, all three cards would show equal performance in my gameplay if I did the review.
Naturally that won't work for a review, since you need to find a difference to, you know, educate the audience about the cards, but for me there won't be a difference in how I game.

Features like Windows 10 support, which is a big thing for me, make a Fury X much more sensible than older tech like the 980Ti, especially for 4K gaming. I plan to keep the card a few years since I don't upgrade often, and I'd rather buy new tech than old tech when I do. So again the Fury X wins me over: Windows 10 is the key criterion I use to choose my card, new tech wins, and Fury it is.
 
It doesn't work that way. Review sites don't review cards based on just their strengths. They take into account all resolutions to effectively rate the card. Fury X is marketed as a high-end GPU, so it will get tested against other high-end GPUs at its price point. It would be misleading if [H] reviewed only 4K, and even then, it doesn't beat the 980Ti. AMD dropped the ball on this. It's disappointing, but it is what it is.
 
It's misleading the audience.

HardOCP wrote:
"This card is built for 4K gaming, not 1440p…."

Then you run tests at 4K, not 1080p or 1440p.
Anything else just misleads the audience.
For 1080p you buy a 380; for 2560x1440 you buy a 390X.

For 4K you buy a Fury X.

The issue for me, as usual, is what I game on and use my machine for. With Eyefinity, all three cards would show equal performance in my gameplay if I did the review.
Naturally that won't work for a review, since you need to find a difference to, you know, educate the audience about the cards, but for me there won't be a difference in how I game.

Features like Windows 10 support, which is a big thing for me, make a Fury X much more sensible than older tech like the 980Ti, especially for 4K gaming. I plan to keep the card a few years since I don't upgrade often, and I'd rather buy new tech than old tech when I do. So again the Fury X wins me over: Windows 10 is the key criterion I use to choose my card, new tech wins, and Fury it is.

[H] isn't making the statement that the Fury X is meant for 4K; they're quoting the narrative that AMD is pushing with their product launch. The 4K tests show that the Fury X is close to the Ti but consistently behind in single-card performance.

As an ardent multi-GPU devotee and a user of 4K since last summer, the fact remains that to achieve 4K with anything close to 60fps at high settings or better, you're going to need more than one card. What's perplexing is that AMD pushed their 4K narrative but locked Fury X multi-GPU out of the gate. It's funny you mention new tech and old tech, because the only 'new tech' thing about the Fury is its memory design. The Ti and Titan X, while utilizing an older, established memory interface, still represent the most cutting-edge shader engine available.
 
I feel bad for the guys at AMD and Hynix who dedicated years of their lives to HBM, and it gets unveiled to the world on the disappointing Fury X.

Hopefully HBM will get the praise it deserves next year.

Well you shouldn't, considering that HBM is a tech that will be useful for many future architectures to come. HBM itself has gotten a successful introduction IMO, as it is working as intended. Fury might not have been fast enough to really allow HBM to shine, but better to start early than late.
 
Companies sometimes like to sell consumers on how things are done, but in the end what primarily matters is what the product does for them.

The HBM issue is that apparently some people wanted to believe (or were led to believe) that it is some kind of secret sauce that would have hidden end-user benefits.

If you look at Fiji's performance scaling relative to the R9 290X in terms of SPs and TDP, along with AMD's claimed perf/watt improvements, it actually lands about where you'd expect.

HBM really addresses an engineering/design issue for AMD (given their constraints) in terms of building Fiji as a 4096-SP chip. It might have been marketed to end users, but it was never going to be an important factor in an actual graphics card purchase; you buy based on what the card delivers, not how it is done.
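To put rough numbers on that scaling point, here's a back-of-envelope sketch in Python, assuming the commonly quoted specs (2816 SPs at ~1.0 GHz for the 290X versus 4096 SPs at ~1.05 GHz for Fiji); treat the figures as approximations:

# Peak FP32 throughput ~= SPs * 2 (FMA) * clock in GHz, giving GFLOPS
r9_290x = 2816 * 2 * 1.00   # ~5.6 TFLOPS
fury_x  = 4096 * 2 * 1.05   # ~8.6 TFLOPS
print(f"{fury_x / r9_290x:.2f}x raw shader throughput")   # ~1.53x

# Real games land below that ratio because they are never purely shader-bound,
# which is why the measured gap over the 290X looks smaller than the specs suggest.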
 
It's misleading the audience.

HardOCP wrote:
"This card is built for 4K gaming, not 1440p…."

Then you run tests at 4K, not 1080p or 1440p.
Anything else just misleads the audience.
For 1080p you buy a 380; for 2560x1440 you buy a 390X.

For 4K you buy a Fury X.

The issue for me, as usual, is what I game on and use my machine for. With Eyefinity, all three cards would show equal performance in my gameplay if I did the review.
Naturally that won't work for a review, since you need to find a difference to, you know, educate the audience about the cards, but for me there won't be a difference in how I game.

Features like Windows 10 support, which is a big thing for me, make a Fury X much more sensible than older tech like the 980Ti, especially for 4K gaming. I plan to keep the card a few years since I don't upgrade often, and I'd rather buy new tech than old tech when I do. So again the Fury X wins me over: Windows 10 is the key criterion I use to choose my card, new tech wins, and Fury it is.

Hypothesis ... Assertions and fan boy orgasms.

Big Fury's new tech got blown away by big Maxwell and it's not even funny. Please go and buy the big Fury, you truly deserve it!
 
All GW features are run in DX11. That is why they can run on AMD hardware. They use standard DX API calls.

With all due respect... how on earth can you make that statement ????

Have you actually seen the GameWorks code ????
 
With all due respect... how on earth can you make that statement ????

Have you actually seen the GameWorks code ????

He is not wrong; the calls are DX11, which is why they can run on AMD hardware. However, the issue all along has been that AMD would have a more difficult time optimizing because of GW being a black box, whereas having access to the code would allow a more direct and timely approach. This is probably why you see so many people posting about slower drivers rather than worse drivers; AMD's drivers are pretty sound.
 
I say AMD can reciprocate. I don't think it would matter an ounce to NVidia. GameWorks started after Mantle, really, if you check the dates. Mantle was never opened up to architectures other than GCN, which forced NVidia to make counter-moves.

Intel and NVidia wanted early access to Mantle, but AMD kept saying they were working on a public release, which never happened. Good thing Microsoft saw all this and took AMD's concepts and moulded them into DX12, so close-to-metal programming works generically with all architectures.
 
I say AMD can reciprocate. I don't think it would matter an ounce to NVidia. GameWorks started after Mantle, really, if you check the dates. Mantle was never opened up to architectures other than GCN, which forced NVidia to make counter-moves.

Intel and NVidia wanted early access to Mantle, but AMD kept saying they were working on a public release, which never happened. Good thing Microsoft saw all this and took AMD's concepts and moulded them into DX12, so close-to-metal programming works generically with all architectures.

Heard about Intel, but never Nvidia. Can you provide a link on it? Thanks.
 
He is not wrong; the calls are DX11, which is why they can run on AMD hardware. However, the issue all along has been that AMD would have a more difficult time optimizing because of GW being a black box, whereas having access to the code would allow a more direct and timely approach. This is probably why you see so many people posting about slower drivers rather than worse drivers; AMD's drivers are pretty sound.


I ask you the same question: have you seen the GameWorks code ????

What if non-DX11 calls are made in the GameWorks code? What if those calls only work on nVIDIA GPUs?
If those non-DX11 calls are made on an AMD GPU, they return an error, and upon that error a generic DX11 call is made... all of this making the AMD GPU lose time!

Have you ever thought about this?.......
 
I think it was more than fair to leave out the Mantle results when it became apparent GCN 1.2 was slower in Mantle mode than D3D11. A lesser site with an actual bias could've thrown that in with a "just sayin" disclaimer. And had you included the Mantle results, the same people would've been screaming that you shouldn't have included them and that you're just trying to cast the card in a negative light.

It was the right call going with results of whichever graphics API showed the best performance for the game. That's the most relevant metric since that's what someone will set the game to when they get the card home.

Our goal is to show hardware in its best possible light in how it would be used by the desktop gamer or hardware enthusiast. So yeah, you get it. :)

I feel bad for the guys at AMD and Hynix who dedicated years of their lives to HBM, and it gets unveiled to the world on the disappointing Fury X.

Hopefully HBM will get the praise it deserves next year.

HBM will come into its own, just not this week.

Kyle and [H]ard are solid; it is Brent whose proverbial jaded high-school fanboy writings grind on and annoy so many people.

Brent is the best GPU reviewer in the business, hands down. If his verbiage annoys you so badly, don't read the text. Just give the graphs a look and come to your own conclusions. We have never asked anyone to agree with our analysis. This is why we share all the data with our readers. Analyze the data yourself and make your own call. :) That is all good with me.

It's misleading the audience.

I would suggest a reading comprehension course at your local community college. Your postings have, however, stepped over into the entertaining category, for me at least. Thanks for the chuckle.

With all due respect... how on earth can you make that statement ????

Have you actually seen the GameWorks code ????

You do realize that these effects you are so concerned with do actually work on DX11 and AMD hardware......right?
 