AMD Radeon R9 Fury X Video Card Review @ [H]

Dude, you need to get ShopRunner. It's free two-day shipping, and I haven't paid them a penny in the two years I've used it.
(I never even gave them my credit card info; it boggles the mind how that company still exists.)

Newegg doesn't do ShopRunner anymore. You have to PAY for Newegg Premier if you want "free" shipping.
 
Well, if voltage turns out to be a bust, it flies in the face of Joe Macri saying it was an overclocker's dream. I just wonder if the proximity of the RAM to the GPU made them cap the overclocking potential, since HBM has been locked down.

Yes, this is the most disappointing thing about the Fury X... :mad:

I still have these words from AMD CTO Joe Macri echoing in my head:

“You’ll be able to overclock this thing like no tomorrow...”

“This is an overclocker’s dream.”

:(
 
If the VRMs end up being an overclocking bottleneck once we get voltage control, there are some things we can do there to cool those better. Not exactly rocket science, but I do not want to start modding the card for cooling till we find out more about it in its stock form.
 
Yes, this is the most disappointing thing about the Fury X... :mad:

I still have these words from AMD CTO Joe Macri echoing in my head:

“You’ll be able to overclock this thing like no tomorrow...”

“This is an overclocker’s dream.”

:(
We'll see what happens when the voltage gets unlocked, but I don't have high hopes.
Watching recent AMD interviews/presentations (mostly Richard Huddy) reveals a lot of misleading or outright false information.

If someone were listening to AMD with no existing knowledge of what they're talking about, it might sound pretty good. But knowing what we already know, their explanations seem more like excuses and dodging the issue and most people just nod their heads and write it off. See: 300 series rebrands, HDMI 2.0 / adapters, 4 GB VRAM.

Somebody needs to sit down with these people and grill them. I mean, don't be a dick about it, but this is stuff AMD is certainly talking about internally, and I think we deserve to know what's really happening. Tired of marketing spin. AMD prides itself on being an "open" platform with all this nonsense about Nvidia's evil black box, but it seems AMD is trying to hide its own "black boxes".

Frankly at this point it's become quite obvious they are trying to manipulate the hardware community. I don't know why they think they can lie publicly and we won't figure it out... We aren't stupid.
 
So if we get voltage modification, seeing as the GPU is sitting at 37 degrees, there's a lot of thermal headroom; you'll be able to add some voltage! It could still be a crazy OCing card.

As long as you don't bake those nicely isolated VRMs on the back side of the card.
 
Is it normal for VRMs to be that isolated? I saw someone pasting THG's thermal image of the Titan X, and they all seem to be around the GPU core, even though it was registering higher than the Fury X's VRMs.
 
This is the most interesting thing I took from Huddy's statements, and it obviously ties in with what we presented in the review, pertaining to the 4GB VRAM size on the Fury X. As stated above, we will surely be looking more into this and how it compares to other cards. Worth noting: our test system uses 16GB of RAM at 1600MHz.

"If we just shuffle this memory around, the GPU does all its hard work in the working set, which is 4GB, and anything else we can swap in and out. So what we see is 4GB is far from a limit. What happens is that the amount of RAM you have in your PC, that effectively gets added to the frame buffer. The more you have, the faster your system runs, the better it handles those big and bulky games."
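Stripped of marketing, what Huddy describes is treating VRAM as a cache over a larger pool in system RAM: keep the hot working set resident and evict cold resources. A toy sketch of that policy (purely illustrative; the class, sizes, and LRU eviction rule are hypothetical, not AMD's actual driver logic):

```python
from collections import OrderedDict

class VramWorkingSet:
    """Toy LRU model: VRAM holds the hot working set; evicted
    resources fall back to system RAM (hypothetical illustration)."""

    def __init__(self, vram_mb):
        self.vram_mb = vram_mb
        self.resident = OrderedDict()  # resource name -> size in MB
        self.used_mb = 0

    def touch(self, name, size_mb):
        if name in self.resident:            # hit: mark most-recently-used
            self.resident.move_to_end(name)
            return "hit"
        # miss: evict least-recently-used resources until the new one fits
        while self.used_mb + size_mb > self.vram_mb:
            _, evicted_mb = self.resident.popitem(last=False)
            self.used_mb -= evicted_mb
        self.resident[name] = size_mb
        self.used_mb += size_mb
        return "miss"
```

The catch is that every miss means copying a resource back over PCIe, which is slow; the scheme only holds up if the true working set genuinely fits in 4GB.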
 
I just want to take a moment to tip my hat to Kyle and Brent. Thanks for being so active in this thread and really trying to respond to valid questions and ideas people have.

I realize plenty of people don't even understand the idea, but your engagement here really shows respect and earns it from those of us mature enough to appreciate it.

Thanks, guys.
 
I just want to take a moment to tip my hat to Kyle and Brent. Thanks for being so active in this thread and really trying to respond to valid questions and ideas people have.

I realize plenty of people don't even understand the idea, but your engagement here really shows respect and earns it from those of us mature enough to appreciate it.

Thanks, guys.

No problem :p I'm just a gamer with a passion for hardware and the gaming experience. Believe you me, we listen to your feedback. My next review was born out of the feedback and questions from this thread. I think it will answer a lot of questions about the Fury X and 4K; it will be no small feat.
 
What Huddy is saying in that statement could technically be possible, *but* the game would have to be designed for it. And I just don't see developers creating multiple texture-handling approaches for games. The game polls the driver for the amount of VRAM, and that's that; it won't change any time soon.

The other route would be if AMD shims it in their driver: presenting a virtualized VRAM pool to the game (8GB or 12GB) and handling the swapping, abstracted away in the driver. If they were planning to do that, it's another thing they should've had ready at launch.

I think our Dying Light results have already shown us this answer, but let's give AMD the benefit of the doubt and go back in and test for this specifically. Maybe we find a game that is dipping heavily into VRAM, then go back and test with 4GB, 8GB, and 16GB of system memory and see how it all pans out?
 
I think reviewers who play a lot with these cards could add an opinion point on the conclusions page about their experiences over time in long gaming sessions, as I think there's a point where the card just can't swap enough with RAM and the page file and starts to stutter.

This is a good point; gameplay longevity can be affected by VRAM/RAM consumption over time and by clock-speed throttling as the GPU warms up. If time is not given to actually playing games over a long period, results can be skewed. This is another reason why benchmarks, timedemos, or someone not playing for long can get different results than us and not truly show what the gameplay performance is like.
 
Kyle, that sounds like an awesome idea about testing with differing amounts of system RAM. Great review as always, and I really hope AMD pulls some driver magic to help the Fury perform better. The main game I've noticed eating up RAM and page file is Shadow of Mordor, but I don't have Dying Light yet.
 
I didn't catch it in your review or I just missed it, but do you know why the GPU voltage is locked? Also, do you know if it will be unlocked in the future?
 
This is a good point; gameplay longevity can be affected by VRAM/RAM consumption over time and by clock-speed throttling as the GPU warms up. If time is not given to actually playing games over a long period, results can be skewed. This is another reason why benchmarks, timedemos, or someone not playing for long can get different results than us and not truly show what the gameplay performance is like.

That's why I like [H] reviews over any other review site: I know game sessions are long enough to provide better real-world results.
 
No amount of overclocking can fix the lack of HDMI 2.0.

Maybe. It could be that if you overclock it high enough, it will open a portal to another dimension where the card DOES have HDMI 2.0. :p
 
Maybe. It could be that if you overclock it high enough, it will open a portal to another dimension where the card DOES have HDMI 2.0. :p

You've unlocked the ultra-secret AMD Gaming Evolved app's ultimate achievement award; your copy of HL3 is in the mail.
 
Is it normal for VRMs to be that isolated? I saw someone pasting THG's thermal image of the Titan X, and they all seem to be around the GPU core, even though it was registering higher than the Fury X's VRMs.
That image was actually of the VRAM chips on the back of the Titan X getting hot, which makes it even worse as far as heat goes. On the Titan X there are 12 x 4Gb memory chips surrounding the core on both sides of the PCB. They have to be physically close to the core to decrease latency. The VRMs on the Titan X are mounted on the front of the PCB near the inside of the blower fan and are passively cooled.
 
No problem :p I'm just a gamer with a passion for hardware and the gaming experience. Believe you me, we listen to your feedback. My next review was born out of the feedback and questions from this thread. I think it will answer a lot of questions about the Fury X and 4K; it will be no small feat.

Brent, there's been a lot of talk about VRAM usage. In particular, people saying VRAM usage != VRAM needed. The only way I really see around this is to have a card like the 390X in both 4GB and 8GB flavors, and in the cases where a game uses more than 4GB of VRAM, retest that game once with the 4GB 390X and again with the 8GB 390X, documenting the performance difference when the only variable is VRAM. Is this something your next article will be addressing?
 
I'm curious how the Fury X would have fared if the number of ROPs were increased to 96 or 128.
 
Brent, there's been a lot of talk about VRAM usage. In particular, people saying VRAM usage != VRAM needed. The only way I really see around this is to have a card like the 390X in both 4GB and 8GB flavors, and in the cases where a game uses more than 4GB of VRAM, retest that game once with the 4GB 390X and again with the 8GB 390X, documenting the performance difference when the only variable is VRAM. Is this something your next article will be addressing?

You'd need to test frame times to tell the difference.
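Right, because average FPS can hide exactly the stutter that VRAM pressure causes; a 4GB-vs-8GB comparison would need frame-time percentiles. A minimal sketch of that kind of summary (the frame-time logs below are invented for illustration):

```python
def frametime_stats(frametimes_ms):
    """Summarize a frame-time log: average FPS plus the 99th-percentile
    frame time, which is where VRAM-swapping stutter shows up."""
    times = sorted(frametimes_ms)
    avg_fps = 1000 * len(times) / sum(times)
    p99 = times[min(len(times) - 1, int(0.99 * len(times)))]
    return round(avg_fps, 1), p99

# Hypothetical logs: nearly the same average, very different experience
smooth = [16.7] * 99 + [18.0]        # e.g. 8GB card: consistent pacing
stutter = [15.0] * 95 + [50.0] * 5   # e.g. 4GB card: periodic swap spikes
```

Both made-up logs average roughly 60 FPS, yet the 99th-percentile frame times are 18ms versus 50ms; that percentile, not the average, is the number a 4GB-vs-8GB 390X test would need to report.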
 
I'm curious how the Fury X would have fared if the number of ROPs were increased to 96 or 128.

Isn't a greater ROP count needed more for higher resolutions? I don't think it would have helped with the 1080p and 1440p inefficiencies. AA-wise, most benches use shader-based FXAA or SMAA, which isn't ROP-bound.

Improved 4K performance further, most likely, but 4K performance is already pretty good on the card. That said, I imagine 96 or 128 ROPs would have given them a clear win at 4K, since it's already close there.
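For rough intuition on the ROP question: theoretical peak pixel fill rate is just ROP count times core clock. The Fury X's real spec is 64 ROPs at 1050MHz; the 96 and 128 figures are the hypotheticals being discussed:

```python
def pixel_fill_gpix(rops, clock_mhz):
    """Theoretical peak pixel fill rate in Gpixels/s."""
    return rops * clock_mhz / 1000

fury_x = pixel_fill_gpix(64, 1050)     # actual Fury X spec: 67.2 Gpix/s
hypo_96 = pixel_fill_gpix(96, 1050)    # hypothetical 96 ROPs: 1.5x fill rate
hypo_128 = pixel_fill_gpix(128, 1050)  # hypothetical 128 ROPs: 2x fill rate
```

Of course, the extra fill rate only shows up in frames when the workload is actually ROP-bound, which, as noted above, shader-based AA generally is not.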
 
Could the card potentially throttle itself if the VRMs reach a certain temp due to OC or bad case airflow in a warm room? I have a cheap AM3+ board without VRM heatsinks with an FX-8320 for my HTPC, and it throttled when the VRM got past a certain temp when I tried benchmarking it for shits n giggles once.
 
Could the card potentially throttle itself if the VRMs reach a certain temp due to OC or bad case airflow in a warm room? I have a cheap AM3+ board without VRM heatsinks with an FX-8320 for my HTPC, and it throttled when the VRM got past a certain temp when I tried benchmarking it for shits n giggles once.

Yes, it can throttle and even drop to 2D mode; if the temps are still high, it can just shut off the card...
 
Yes, it can throttle and even drop to 2D mode; if the temps are still high, it can just shut off the card...

I've not seen any indication that VRM temps are used as control elements.
 
So you are only taking 4K performance into account? What about the majority of users who are on 1080p, the lesser number on 1440p, and the few who are on 2160p? I guess observing results that make you feel you are getting more for your money is what you like... If you are only looking at this card for 4K, then sure, it's almost 40% faster than the 290X, but then again, it gets beaten by the 980 Ti most of the time for nearly the same price.

At 1080p, the framerate is likely CPU-bottlenecked since the 980 Ti and the Fury are so fast. However, the drivers for Fury are just utterly, terribly bad at the moment, so the CPU overhead is enormous relative to the 980 Ti at lower resolutions. That's why the performance is so bad at 1080p and 1440p; I can't see any other explanation. At 4K the GPU becomes the bottleneck, so it's a better test of pure GPU performance rather than how nicely the drivers play with the rest of the system.

I don't think I would buy Fury either at the moment, especially with its lack of overclockability, but I'm willing to wait and see if performance improves over time.
 
I've not seen any indication that VRM temps are used as control elements.

Run FurMark with any AMD card and you'll see how it throttles ridiculously due to VRM overheating... I can make one of my 280Xs throttle down to 501MHz (2D clocks) while the core is still at 70C (far from thermal throttling), because the card throttles the core to relieve the load on the VRMs; I can do the same with a friend's 290X. The biggest throttle I've seen on any of my NVIDIA cards with FurMark was on my old 660 Ti, down to 1163MHz (still above the advertised boost clock), while in real-world gaming or benchmarks it goes to 1254MHz out of the box.
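The throttle-then-recover behavior described above is essentially a hysteresis loop on VRM temperature: cut clocks hard once a limit is crossed, and only restore them after the temperature falls back below a lower threshold. A sketch of the idea (all thresholds and clocks are invented for illustration; actual firmware limits are not public):

```python
def next_clock(vrm_temp_c, current_mhz, full_mhz=1050, idle_2d_mhz=300,
               hot_c=105, cool_c=90):
    """Hysteresis throttle: drop to 2D clocks above hot_c, stay there
    until the VRMs cool below cool_c (hypothetical thresholds)."""
    if vrm_temp_c >= hot_c:
        return idle_2d_mhz      # over the limit: force 2D clocks
    if vrm_temp_c <= cool_c:
        return full_mhz         # cooled down: restore full clocks
    return current_mhz          # in the hysteresis band: hold state
```

The band between the two thresholds prevents the clock from oscillating every time the temperature crosses a single limit, which matches the "throttles and stays throttled while the core is still cool" behavior described in the post.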
 
This whole launch is just confusing to me.
I kind of see the 300s, because they did get them close to the 980s, but the Fury seems to be a... rush job. Like, the marketing guys said, "we need something to reveal," and the engineers said, "well, it's not ready yet"... and here we are.
 
I find the assertion that they will be able to optimize memory usage in drivers to prevent 4GB from becoming a bottleneck concerning. That is exactly what NVIDIA claimed once it was revealed that the 970 only has 3.5GB of full speed VRAM, and they claimed they had built algorithms/heuristics to allocate the memory intelligently so that only what was truly needed would be in the working set.

Problem is, optimizing on an individual game by game basis means that only the most important AAA titles will see any real work done, and you can see how that works out already with how CrossFire and SLI support is handled. Some games work great, some games don't or take a long time to fix, and less popular titles are ignored.

Not my preferred approach.
 
I find the assertion that they will be able to optimize memory usage in drivers to prevent 4GB from becoming a bottleneck concerning. That is exactly what NVIDIA claimed once it was revealed that the 970 only has 3.5GB of full speed VRAM, and they claimed they had built algorithms/heuristics to allocate the memory intelligently so that only what was truly needed would be in the working set.

Problem is, optimizing on an individual game by game basis means that only the most important AAA titles will see any real work done, and you can see how that works out already with how CrossFire and SLI support is handled. Some games work great, some games don't or take a long time to fix, and less popular titles are ignored.

Not my preferred approach.

But it's possible; in fact, NVIDIA is doing exactly that with every card that has an asymmetric memory configuration. If NVIDIA can do that, I don't see a reason why AMD can't do it to keep the memory pool fresh. A bigger bus and higher bandwidth allow faster swapping of textures, which certainly helps, so it's basically possible. Also, I think not all games are that sensitive to 4GB of VRAM, so working on a game-by-game basis could work too. But we know how slow AMD is with driver development; they don't work as well as NVIDIA in that respect.
 
But it's possible; in fact, NVIDIA is doing exactly that with every card that has an asymmetric memory configuration. If NVIDIA can do that, I don't see a reason why AMD can't do it to keep the memory pool fresh. A bigger bus and higher bandwidth allow faster swapping of textures, which certainly helps, so it's basically possible. Also, I think not all games are that sensitive to 4GB of VRAM, so working on a game-by-game basis could work too. But we know how slow AMD is with driver development; they don't work as well as NVIDIA in that respect.
Obviously it's POSSIBLE, but swapping memory from system memory to VRAM is very slow. NVIDIA's 970 has noticeable issues with swapping memory from the 3.5GB pool to the 0.5GB pool on the card itself, which I believe still has more memory bandwidth than most dual channel DDR3 systems.

It's just not a good solution vs having more RAM on the actual card. It requires their driver team to optimize for any games that encounter problems and I'm not confident that would be done in a timely manner. I'm not blaming AMD here either, I wouldn't buy a 970 for the same reason.
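Some back-of-envelope peak-bandwidth numbers put that comparison in perspective (spec-sheet arithmetic, not measurements):

```python
def peak_gbs(bus_bits, mtps):
    """Peak bandwidth in GB/s: bus width in bytes times mega-transfers
    per second, divided by 1000."""
    return bus_bits / 8 * mtps / 1000

ddr3_dual = peak_gbs(128, 1600)   # dual-channel DDR3-1600: 25.6 GB/s
g970_slow = peak_gbs(32, 7000)    # GTX 970's 0.5GB segment (32-bit @ 7 GT/s): 28 GB/s
fury_hbm = peak_gbs(4096, 1000)   # Fury X HBM (4096-bit @ 1 GT/s): 512 GB/s
```

And anything swapped in from system RAM also has to cross PCIe 3.0 x16 at roughly 15.75 GB/s each way, lower than any of the numbers above, which is why spilling past VRAM hurts no matter how clever the driver is.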
 
Obviously it's POSSIBLE, but swapping memory from system memory to VRAM is very slow. NVIDIA's 970 has noticeable issues with swapping memory from the 3.5GB pool to the 0.5GB pool on the card itself, which I believe still has more memory bandwidth than most dual channel DDR3 systems.

It's just not a good solution vs having more RAM on the actual card. It requires their driver team to optimize for any games that encounter problems and I'm not confident that would be done in a timely manner. I'm not blaming AMD here either, I wouldn't buy a 970 for the same reason.

Exactly the same reason I never recommend a GTX 970 unless the person really wants to stay with the green team and is budget constrained. But that's off topic; in the end, it's possible, but like you, I don't see it working out in AMD's hands.
 
At 1080p, the framerate is likely CPU-bottlenecked since the 980 Ti and the Fury are so fast. However, the drivers for Fury are just utterly, terribly bad at the moment, so the CPU overhead is enormous relative to the 980 Ti at lower resolutions. That's why the performance is so bad at 1080p and 1440p; I can't see any other explanation. At 4K the GPU becomes the bottleneck, so it's a better test of pure GPU performance rather than how nicely the drivers play with the rest of the system.

I don't think I would buy Fury either at the moment, especially with its lack of overclockability, but I'm willing to wait and see if performance improves over time.

Ah, fair enough.
I didn't use FurMark on my 290X because it ages cards too quickly.

It doesn't change that AMD probably locked the voltage because the VRMs would quickly end up being the limiting factor.
Why the hell haven't they given them any cooling? It might well have been an overclocking demon.

Perhaps there are other concerns with the memory interface if the GPU voltage is raised.
There may be an incompatibility or leakage that upsets the memory, not helped by the direct connection.

I imagine some home-brewed cooling solutions and manual voltage mods will emerge, so we should find out its real capabilities in a few weeks/months.
 