AMD Radeon R9 Fury X CrossFire at 4K Review @ [H]

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,626
AMD Radeon R9 Fury X CrossFire at 4K Review - The ultimate 4K battle is about to begin, AMD Radeon R9 Fury X CrossFire, NVIDIA GeForce GTX 980 Ti SLI, and NVIDIA GeForce GTX TITAN X SLI will compete for the best gameplay experience at 4K resolution. Find out what $1300 to $2000 worth of GPU backbone will buy you. And find out if Fiji really can 4K.
 
I always liked how you guys go in depth with the settings. It's interesting to see how the Fury X did well by turning off some settings.

It'd be interesting to see a single GPU thrown in too to help gauge scaling and compare minimums. I realize you only have so much time.

Great review!
 
Thanks for going the distance with this series.

What I'm taking from this is that when a) the 4GB limitation is not a constraint and b) CrossFire is not screwing up, the huge memory bandwidth that HBM provides is really helpful at high resolution.
 
From the Power page...

Though temps aren't shown, the big difference between AMD Radeon R9 Fury X CrossFire and the competition cards is that both Fury X cards have closed-loop liquid cooling. This allows the GPUs to remain much cooler than the reference cards we used for the competition. Overall heat dissipation can be directed where you want by moving the radiators. There seemed to be a bubble of hot air around the TITAN X SLI and 980 Ti SLI cards that we could feel with our hands, whereas with Fury X CrossFire this wasn't so.

Having said all that, why couldn't actual temps be shown?
 
The conclusions from the testing are very interesting.

Both card sets in this game suite are quite capable.

I fully agree with the last two sentences in this article.:D
 
My take on this review: Fury X CrossFire, assuming you play only CrossFire-enabled games, would be a better choice for 4K IF you can fit the two liquid-cooling radiators in your case or if you set up a test bench rig.

Also to consider: there is a plethora of 4K FreeSync monitors available, with many more on the way, whereas a comparable NVIDIA setup will be more expensive (though that is concentrated solely in the monitor) and the 4K G-Sync selection is very limited at the moment, with no new 4K G-Sync in sight (G-Sync seems to be concentrating a lot on 1440p and 21:9 panels; 4K is really slow, and there is no news on that 4K G-Sync Asus showed off a while back).

Not sure how I feel about Crossfire support in general and the 2 liquid coolers though.
 
I guess I get to mention the elephant in the room. Fury X overclocks very little. 980ti overclocks significantly. I would guess that the places where Fury X was on par or better in this review would be negated by that fact. Still, good to see Fury X making a solid showing. I hope AMD can update drivers to unleash its true potential.
 
Wonderful review. Glad to see Project Cars running so nicely now on AMD hardware. All those bug fixing patches have really paid off. It's quite enjoyable to play now and worthy of inclusion into the testing suite.

Thanks for taking the time and effort to do this for the [H]ardocp community! :)
 
Nice review. My takeaways are that nvidia needs to improve mgpu scaling and that cfx when it works has improved a lot compared to 7970 era. The driver team at AMD needs to step up their game for me to spend any money on the cards though.
 
Nice review. My takeaways are that nvidia needs to improve mgpu scaling and that cfx when it works has improved a lot compared to 7970 era. The driver team at AMD needs to step up their game for me to spend any money on the cards though.

With that in mind, cfx with 7970s and recent drivers is very good for the most part.

It basically works excellently in the games it works in (Witcher 3 does not stutter as it apparently does with the Fury and Fury X cards).

I really haven't run into any games I'm interested in where cfx doesn't work well.
 
May be off topic, but how many PCIe lanes do these cards use or really need?
 
The Fury X is blown away the moment you overclock those 980 Tis... even if you OC'd those Fury cards.
 
So, pretty much what every reasonable person expected. Fury X CrossFire trades blows with stock 980 Ti SLI in most games. AMD seems to have superior scaling capability over SLI. AMD, however, continues to be at the mercy of very poor CrossFire driver support, making CrossFire simply broken in many games for extended periods of time.

That being said, you can overclock the 980 Ti right this second and get +20% performance. I would have been curious to see that in the review, but again - based on other reviews - the effect of that performance boost makes the choice clear when the price levels are identical.
 
May be off topic, but how many PCIe lanes do these cards use or really need?

Check bus usage with MSI Afterburner, or get someone to do it for you if you don't have the kit.
My observation is that it is very low for a single card.
 
Thanks for the review, [H]!

Impressive showing for the Fury X's.
 
that was clearly a lot of work

thanks guys


it really is sad that AMD seems to have the silicon, but not the code to really compete
 
Great review as always. Love seeing $gobs worth of hardware being pushed to limits :)

HBM2 at 8GB will probably alleviate some performance bottlenecks, and makes me look forward to the games that will push that type of configuration. What new cool shit is just a few years away...

The only thing I think that would add to this review would be an SLI and CFX performance-scaling comparison. I've tried comparing the numbers myself to some past [H] reviews, but since single cards can't reach 4K resolutions at desired performance levels, the data isn't there to compare. I did extrapolate just a bit: 4K resolution is 2.25x the number of pixels of 1440p, with 2x the GPUs. So 1.125x the 1440p single-GPU performance would represent 100% scaled performance in SLI/CFX at 4K. Someone correct me if I am doing the math wrong.

So BF4 single-GPU/1440p performance of 71.4 (Fury X) and 92.1 (980 Ti) comes out to 80.3 / 103.6 respectively. Actual values were 77.4 (Fury X CFX) / 76.5 (980 Ti SLI), so a 96% scaling rate for CFX (pretty decent) and 73.8% for SLI, not bad but not great. I think 85% or better scaling is what they should be aiming for.
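For anyone who wants to re-run the arithmetic, the estimate above can be sketched in a few lines. This is a rough sketch using the post's own assumption that 1.125x the single-GPU 1440p result represents 100% dual-GPU scaling at 4K; the fps figures are the ones quoted above:

```python
# Sketch of the scaling estimate above, using the post's assumption that
# ideal dual-GPU 4K fps = 1.125 x single-GPU 1440p fps
# (4K has 2.25x the pixels of 1440p; the rig has 2x the GPUs).

PIXEL_RATIO = (3840 * 2160) / (2560 * 1440)  # exactly 2.25

def scaling_pct(fps_1440p_single: float, fps_4k_dual: float) -> float:
    """Multi-GPU scaling efficiency at 4K, per the post's method."""
    ideal_4k = fps_1440p_single * 1.125
    return 100 * fps_4k_dual / ideal_4k

# BF4 numbers quoted above:
print(round(scaling_pct(71.4, 77.4), 1))  # Fury X CFX -> 96.4
print(round(scaling_pct(92.1, 76.5), 1))  # 980 Ti SLI -> 73.8
```

The 80.3 and 103.6 "ideal" figures are just 71.4 x 1.125 and 92.1 x 1.125.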

Any chance [H] can give us a few SLI numbers at the prior reviews' resolutions, or single-GPU numbers at the 4K resolution? Just to see if my extrapolated scaling matches the real world. If it does, I can just make the same calculations for a few other games to see what the ranges are.

For the NVIDIA SLI, I am glad it is at least playable and smooth; I would prefer that to a higher scaling % where shortcuts are taken that might result in stuttering or choppiness issues (if that is somehow part of the reason for the differences in scaling).
 
Really good article there. Again, actual VRAM usage figures would have helped.

While I realise that the test configuration is a reasonably common one, I wonder if there was an unintentional bottleneck. Radeon cards now communicate over the PCIe bus, whereas GeForce cards use a dedicated link. The PCIe 3.0 bus of the test system was operating at x8/x8 instead of x16/x16, and I wonder if that made a difference?
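For context on the x8 vs. x16 question, the theoretical per-direction bandwidth of a PCIe 3.0 link can be worked out from the 8 GT/s per-lane rate and its 128b/130b encoding. A back-of-the-envelope sketch; real-world throughput is somewhat lower due to protocol overhead:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth, per direction.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.

def pcie3_bandwidth_gb_s(lanes: int) -> float:
    """Theoretical one-way bandwidth in GB/s for a PCIe 3.0 link."""
    raw_gbit_s = lanes * 8.0                 # 8 GT/s per lane
    payload_gbit_s = raw_gbit_s * 128 / 130  # 128b/130b encoding overhead
    return payload_gbit_s / 8                # bits -> bytes

print(round(pcie3_bandwidth_gb_s(8), 2))   # x8  -> 7.88 GB/s
print(round(pcie3_bandwidth_gb_s(16), 2))  # x16 -> 15.75 GB/s
```

So an x8/x8 split leaves each card roughly 7.9 GB/s each way instead of 15.8 GB/s; whether that matters for bridgeless CrossFire traffic is exactly the open question here.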
 
Great review as always. Love seeing $gobs worth of hardware being pushed to limits :)

HBM2 at 8GB will probably alleviate some performance bottlenecks, and makes me look forward to the games that will push that type of configuration. What new cool shit is just a few years away...

The only thing I think that would add to this review would be an SLI and CFX performance-scaling comparison. I've tried comparing the numbers myself to some past [H] reviews, but since single cards can't reach 4K resolutions at desired performance levels, the data isn't there to compare. I did extrapolate just a bit: 4K resolution is 2.25x the number of pixels of 1440p, with 2x the GPUs. So 1.125x the 1440p single-GPU performance would represent 100% scaled performance in SLI/CFX at 4K. Someone correct me if I am doing the math wrong.

So BF4 single-GPU/1440p performance of 71.4 (Fury X) and 92.1 (980 Ti) comes out to 80.3 / 103.6 respectively. Actual values were 77.4 (Fury X CFX) / 76.5 (980 Ti SLI), so a 96% scaling rate for CFX (pretty decent) and 73.8% for SLI, not bad but not great. I think 85% or better scaling is what they should be aiming for.

Any chance [H] can give us a few SLI numbers at the prior reviews' resolutions, or single-GPU numbers at the 4K resolution? Just to see if my extrapolated scaling matches the real world. If it does, I can just make the same calculations for a few other games to see what the ranges are.

For the NVIDIA SLI, I am glad it is at least playable and smooth; I would prefer that to a higher scaling % where shortcuts are taken that might result in stuttering or choppiness issues (if that is somehow part of the reason for the differences in scaling).

Every review I've seen shows that, when crossfire works correctly, Fury X scales better than SLI. Nvidia needs to work on that.
 
Really good article there. Again, actual VRAM usage figures would have helped.

While I realise that the test configuration is a reasonably common one, I wonder if there was an unintentional bottleneck. Radeon cards now communicate over the PCIe bus, whereas GeForce cards use a dedicated link. The PCIe 3.0 bus of the test system was operating at x8/x8 instead of x16/x16, and I wonder if that made a difference?


My answer is absolutely not; otherwise we would not have run the test on this platform.
 
I don't know what happens every time HardOCP tests Far Cry 4, but I have both GPU pairs, two TITAN Xs and two Fury Xs, and have tested them up to 5K. The CF scaling in this game was flawless here (Win10 64-bit); not only that, it's also the most impressive example of CF in my experience.
Also, the Fury combo was faster, ~65 vs. 48 fps for SLI at 5K, no AA, volumetric fog and SSBC AO, and it's crazy smooth.

I have the idea AMD is still optimizing its memory usage for HBM; I already noticed that extended distance in GTA V kills performance for now, but I can play with everything at max, with only extended distance disabled, up to 5K! (no AA)

As far as PCIe bandwidth is concerned, I don't think it (2 x PCIe Gen 3 x8) will hold back performance up to 4K, but I have a strong feeling it does at 5K and beyond.
 
I don't know what happens every time HardOCP tests Far Cry 4, but I have both GPU pairs, TITAN Xs and Fury Xs, and have tested them up to 5K. The CF scaling in this game was flawless here (Win10 64-bit); not only that, it's also the most impressive example of CF in my experience.

We reached out to AMD weeks ago to get some help, and AMD did not respond with any help. Not much we can do beyond that.
 
Hopefully they'll get back to you (us)! Would be nice to see a solution... or even just a cause of the issue for that matter.
 
From the Power page...



Having said all that, why couldn't actual temps be shown?

They aren't all that interesting or relevant. Since the GeForce cards were reference cards, they hit the 85C cap, and the Fury X cards were at 50C due to liquid cooling; that's it.
 
Good read. Lots of hard work there. It seems FC4 is fixed in the 15.9.1 beta.
 
I guess I get to mention the elephant in the room. Fury X overclocks very little. 980ti overclocks significantly. I would guess that the places where Fury X was on par or better in this review would be negated by that fact. Still, good to see Fury X making a solid showing. I hope AMD can update drivers to unleash its true potential.

Our 980 Tis were running at 1190MHz in SLI, while retail cards are able to hit 1500MHz or higher; yes, that clock speed is going to make a noticeable improvement in performance.
 
Really good article there. Again, actual VRAM usage figures would have helped.

While I realise that the test configuration is a reasonably common one, I wonder if there was an unintentional bottleneck. Radeon cards now communicate over the PCIe bus, whereas GeForce cards use a dedicated link. The PCIe 3.0 bus of the test system was operating at x8/x8 instead of x16/x16, and I wonder if that made a difference?

I noted the VRAM usage in Project Cars and Dying Light in the review; the only other one worth noting would have been GTA V. Yes, time is always a constraint, and I don't have the ability to look at Fury X CrossFire anymore, as we had to give the borrowed card back. Though the number wasn't noted in GTA V, you can clearly see the result of the lack of VRAM.
 
Deja vu all over again with these CrossFire drivers. I remember a few years ago when they were just terrible and we had the stuttering issues that resulted in NVIDIA cards yielding better gameplay despite lower frame rates. But it seemed like they got them figured out, and for the last few years they've been as good as NVIDIA's. Now here we are, back to the stuttering and CrossFire outright broken in games.
 
I noted the VRAM usage in Project Cars and Dying Light in the review,

I must confess I was looking for rather more precise figures than you provided.

That's actually given me an idea for an article you might like to do; I'll start a new thread.
 
Deja vu all over again with these CrossFire drivers. I remember a few years ago when they were just terrible and we had the stuttering issues that resulted in NVIDIA cards yielding better gameplay despite lower frame rates. But it seemed like they got them figured out, and for the last few years they've been as good as NVIDIA's. Now here we are, back to the stuttering and CrossFire outright broken in games.

Definitely a roller coaster on CrossFire support. There was hope back during all the frame-pacing technology discussions. It feels like AMD has backslid and we are back to pre-frame-pacing behavior.

The ultimate answer is that someday multi-GPU acceleration will move away from profile-based operation and implement a solution truly baked into hardware that works without needing driver profile updates. As it is now, a whole freaking lot depends on the software support from AMD and NVIDIA to make each game work well; it is a very involved and expensive endeavor.
 
I must confess I was looking for rather more precise figures than you provided.

That's actually given me an idea for an article you might like to do; I'll start a new thread.

Those figures are precise numbers. I didn't make them up; they are written down from actual GPU-Z readings.
 
Awesome review guys. So much info - it will be a great reference for a long time to come. I must say that I am more confused about VRAM consumption than ever. We were led to think that 4GB was hurting Dying Light at 1440p. Now it is run at 4K with high quality settings and the Fury X in CFX blows it away. However, 4GB in the 980 SLI suffers. What is even stranger is that the Fury becomes MORE competitive when settings are increased. It is clear: THERE IS NO EFFECTIVE WAY OF MEASURING VRAM CONSUMPTION. That being said, I would venture to guess that the 980 Ti will have a much better resale value 2-3 years down the road.

I've tried comparing the numbers myself to some past [H] reviews, but since single cards can't reach 4K resolutions at desired performance levels, the data isn't there to compare. I did extrapolate just a bit: 4K resolution is 2.25x the number of pixels of 1440p, with 2x the GPUs. So 1.125x the 1440p single-GPU performance would represent 100% scaled performance in SLI/CFX at 4K. Someone correct me if I am doing the math wrong.

So BF4 single-GPU/1440p performance of 71.4 (Fury X) and 92.1 (980 Ti) comes out to 80.3 / 103.6 respectively. Actual values were 77.4 (Fury X CFX) / 76.5 (980 Ti SLI), so a 96% scaling rate for CFX (pretty decent) and 73.8% for SLI, not bad but not great. I think 85% or better scaling is what they should be aiming for.

Great work, I am glad you did this. I assume the settings were the same (other than resolution). 96% scaling is amazing; CFX has even shown that with 3 cards in some reviews. Even 85% for the 980 Ti is great. Since you are not buying a 2nd CPU/motherboard/power supply, you are more than doubling the performance/$ of the entire PC in both cases. As good as CFX can be, I would prefer a consistent 85% in most cases instead of a 50/50 shot at perfect or awful scaling.

Our 980 Tis were running at 1190MHz in SLI, while retail cards are able to hit 1500MHz or higher; yes, that clock speed is going to make a noticeable improvement in performance.

Yep, they OC well, and even then power consumption would still be competitive.
 
Awesome review guys. So much info - it will be a great reference for a long time to come. I must say that I am more confused about VRAM consumption than ever. We were led to think that 4GB was hurting Dying Light at 1440p. Now it is run at 4K with high quality settings and the Fury X in CFX blows it away. However, 4GB in the 980 SLI suffers. What is even stranger is that the Fury becomes MORE competitive when settings are increased. It is clear: THERE IS NO EFFECTIVE WAY OF MEASURING VRAM CONSUMPTION.

If there is one important fact I want to get across to people from this review, it is that the question of being VRAM-constrained in games has no definitive answer. It is variable; it varies based on the game. It is completely game-dependent. There is no one sweeping conclusion that fits all.

Some games will suffer with only a 4GB framebuffer at 4K, some games will not. Some games will want and need more VRAM, some games won't care. Some games will perform better on the 6GB 980 Ti SLI solution, some games won't.

Will the 4GB framebuffer on Fury X CF cause games to be slower? The answer is yes and no. Some games yes, some games no.

The only way to find out is to specifically test every single game.
 
He's asking if the bottom card would have enough hose to reach a top-mount rad location.

Obviously that would depend on how far the top of the case is from the card slot it is in. Maybe Brent could tell us how long the hose is on the Fury X? But I would think this to be documented on some AIB's spec page.
 
Thanks guys for getting this review out. Looks like next-gen cards will be the first 4K-capable single-card solutions.
 