AMD Radeon R9 Fury X Video Card Review @ [H]

Good point durquavian, I am starting to have my doubts after doing a bit more research.
Perhaps we will get clarification with Fury X CrossFire vs. 980 Ti SLI running 4K Surround
:p

Tweaktown ran a couple of articles on VRAM consumption, but I am calling B.S. on it. They have several games running over 4 GB with AA at 2.5K (I am calling 1440p "2.5K", think it will catch on?). I think Battlefield 4 showed 7.5 GB when running at 4K. Sorry, not buying it.

What would be nice is a clear-cut designation for needed and used. Hell, if I had 12 GB I would want as much of it populated as possible if it would help gameplay. Honestly, VRAM being a hard limit at reasonable settings won't happen until we are pushing 4K at greater than 60 FPS with ease.
 
The article by tweaktown is sensationalist and rather irresponsible. By the end of it, they basically have uninformed consumers running out to get GPUs with 12+ GB of VRAM "in order to play tomorrow's games" since ME:SoM is shown to use 8.5 GB. This could have easily been validated by showing a large performance drop between the 6 GB 980 Ti and the 12 GB Titan, or even between a 4 GB and an 8 GB 390X. They also did not compare how different architectures and memory speeds affect VRAM consumption, which would be especially important with HBM since it is an entirely new beast.
 
You can think of HBM as memory with higher bandwidth and lower power.

Since when has memory bandwidth ever been a replacement for memory capacity? The answer is never.
 
OK, so now I am home and will go into greater detail in this post (too many people are not getting the point):

The problem here, as in most reviews, is that there is no proof of 4 GB being an issue. None. Every benchmark I have seen at 4K has the Fury equal to the 980 Ti/Titan X: 4 GB against 6/12 GB. That's not to say it isn't an issue, but it alone doesn't explain why the Fury is neck and neck with its NVIDIA competitors. The complaint that 4 GB is the issue for AMD, when 4K is the one tier where it puts up a solid performance, invalidates itself. It doesn't explain why the NVIDIA counterparts struggle against the same card they easily outperformed at lower resolutions. Of course there is bus width and bandwidth, but the issue being raised was 4 GB, which seems unfounded or needs a lot more investigation before making such claims.

This whole discussion started with a poster asking the very question I had, based on the review and what was written. It was alluded that 4 GB would be an issue for a flagship GPU, hindering its performance at 4K, even noting that the 980 Ti with 6 GB was using 5.7 GB of its VRAM. Even using the loose performance numbers at different resolutions stated earlier, the 4K performance does not follow the lower-resolution performance levels. Given the article's concern about the 4 GB VRAM limitation, one would assume the Fury would falter to poor performance levels compared to the already proven strength of the 980 Ti/Titan X at lower resolutions. But the 4K benches do not show this outcome.

So let's look at the point I was making. Say the 4 GB of VRAM is definitely hampering the Fury's performance; then what is hampering the 980 Ti or Titan X? It is already documented in the 1440p benches that the 980 Ti/Titan X are the better performers, and even at 1080p on other review sites. Given they have higher VRAM counts than the Fury, they should not see much if any issue at 4K, per the article's assumption. But in the 4K benches they are virtually tied. Percentages do not adequately reflect reality with low FPS results: most 4K benches land at 20-40 FPS, so 10% is at best 3-4 frames. In any case, the graphs show no real hitches or the limiting spikes one would expect with VRAM issues. Again, it doesn't mean 4 GB is not limiting at all, but rather that more info is needed to explain the result.

None of this points to false benches or biased results; rather, accepting the results as fact leaves more questions than answers from the review. There are numerous reasons for the AMD Fury not being hit as hard in the 4K benches; bandwidth or bus width can easily be advantageous at higher resolutions. Simply put, there is not enough evidence to prove that VRAM was a limiting factor, or in this case the only factor, in 4K gaming.

4GB was slowing me down in Dying Light even at just 1440p /shrug

Let's see what happens with two of these in CrossFire at 4K trying to run high game settings
 
Warning: skip if you hate math

I found it fascinating that 4K does not use 4 times as much memory as 1080p. It's like there is a
'y-factor', where y is memory reserved regardless of resolution. Bear with me...

Take for example Tomb Raider, which uses 1.5 GB of VRAM at 1080p and 3.1 GB at 4K.
Now use 2.077 megapixels for 1080p and 8.1 for 4K.
Let's call 'x' the amount of VRAM needed per megapixel,
and 'y' the VRAM overhead I mentioned earlier that is not affected by resolution.

Starting with 4K:
8.4x + y = 3.1 .... and now 1080p:
2.07x + y = 1.5 .... using substitution,
8.4x + 1.5 - 2.07x = 3.1 ... reduce to x and
x = .253, or in other words .253 GB needed for each megapixel.
Now we can solve for y:
2.07x + y = 1.5
y = 1.5 - 2.07x .... replace x with .253 and
y = .977

Now let's test with 1440p, or 3.7 megapixels, which was said to use 1.94 GB:

3.7x + y = 1.91 GB ---> pretty damn close!!

At a quick glance, Metro: Last Light seems to have a lower y-factor (about 0.8 GB), whereas games like GTA V seem to have an extremely high y-factor of about 4 GB! I am not sure I quite understand the point of some of these extremely high y-factors....
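
If you want to check the algebra, here is a minimal Python sketch of the same two-point fit, using the Tomb Raider numbers from this post (including the 8.4 MP figure from the equations above):

Code:
# Fit vram = x * megapixels + y from two (megapixels, vram_gb) data points.
def fit_vram_model(mp_lo, vram_lo, mp_hi, vram_hi):
    x = (vram_hi - vram_lo) / (mp_hi - mp_lo)  # GB per megapixel
    y = vram_lo - x * mp_lo                    # resolution-independent overhead
    return x, y

# Tomb Raider numbers from this post (1080p and 4K).
x, y = fit_vram_model(2.07, 1.5, 8.4, 3.1)
print(f"x = {x:.3f} GB/MP, y = {y:.3f} GB")          # x ~ 0.253, y ~ 0.977

# Sanity check at 1440p (~3.7 MP); tweaktown reported 1.94 GB.
print(f"predicted 1440p use: {x * 3.7 + y:.2f} GB")  # ~1.91 GB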
 
Warning: skip if you hate math

I found it fascinating that 4K does not use 4 times as much memory as 1080p. It's like there is a
'y-factor'.

I thought everyone knew this? Things don't scale linearly; resolution isn't the only thing that eats up VRAM.
 
Just for fun, I tested again, first using ME:SoM, and I came up with
x = .1 GB and y = 4.56, where again x is GB/megapixel and y is the "bullshit overhead".
I tested at 1440p and got 4.93 GB vs. the 4.97 GB posted by tweaktown.

Then Metro: Last Light gave me:
x = .115 and y = 1.06
Testing the formula at 1440p gave me 1.49 GB vs. the 1.46 GB they recorded.

ARE YOU FRICKIN KIDDING ME!!!

So if it wasn't for this mysterious 'y-factor' that has NO explanation,
we could play all these games with 2 GB or less.

What was really fascinating is that the VRAM hog ME:SoM actually required LESS
VRAM per megapixel than Tomb Raider!!!
 
R.E. Mantle in BF4

At 2560x1440 4X MSAA Ultra on Fury X

D3D11 = 66.5 FPS AVG
Mantle = 55 FPS AVG (ran several runs to confirm)
 
Does anybody else see what a crock of shit this all is?? So in a couple of years, when you
run a game at 1080p, you will need 8.2 GB of VRAM simply because the developer decided
to set this y-factor overhead at 8.0 GB. Good time to invest in Samsung if you ask me.
 
Does anybody else see what a crock of shit this all is?? So in a couple of years, when you
run a game at 1080p, you will need 8.2 GB of VRAM simply because the developer decided
to set this y-factor overhead at 8.0 GB. Good time to invest in Samsung if you ask me.

I wondered why 2 GB used to be perfect for 1080p not so long ago, yet all of a sudden 4 GB is absolutely necessary. Graphical fidelity in games is not that much better than a couple of years ago.
 
R.E. Mantle in BF4

At 2560x1440 4X MSAA Ultra on Fury X

D3D11 = 66.5 FPS AVG
Mantle = 55 FPS AVG (ran several runs to confirm)

Yep. AMD killed Mantle without even updating it for GCN 1.2 before shutting the door and turning the lights out, and BF4 would've needed an update as well.

A lot of people naturally assumed the Fury X would at least beat the 980 Ti in Mantle-based games, but it seems AMD couldn't be bothered. It was low-hanging fruit that could've been used to score a propaganda win in some benches.
 
R.E. Mantle in BF4

At 2560x1440 4X MSAA Ultra on Fury X

D3D11 = 66.5 FPS AVG
Mantle = 55 FPS AVG (ran several runs to confirm)

Thought it was determined that new cards don't work well with Mantle. Mantle was great on the GPUs that could actually work with it.
 
4GB was slowing me down in Dying Light even at just 1440p /shrug

Let's see what happens with two of these in CrossFire at 4K trying to run high game settings

I'm still shocked they want this to be the 4K "killer" without upgrading the ROPs, and with a measly 4 GB of memory stuck on it. This won't affect my buying decision as I don't have any plans to play at 4K (turned into a cheap gamer, got old with kids :D), but the overall performance and price will for sure keep me from buying this.
 
Warning: skip if you hate math

I found it fascinating that 4K does not use 4 times as much memory as 1080p. It's like there is a
'y-factor', where y is memory reserved regardless of resolution. Bear with me...

Take for example Tomb Raider, which uses 1.5 GB of VRAM at 1080p and 3.1 GB at 4K.
Now use 2.077 megapixels for 1080p and 8.1 for 4K.
Let's call 'x' the amount of VRAM needed per megapixel,
and 'y' the VRAM overhead I mentioned earlier that is not affected by resolution.

Starting with 4K:
8.4x + y = 3.1 .... and now 1080p:
2.07x + y = 1.5 .... using substitution,
8.4x + 1.5 - 2.07x = 3.1 ... reduce to x and
x = .253, or in other words .253 GB needed for each megapixel.
Now we can solve for y:
2.07x + y = 1.5
y = 1.5 - 2.07x .... replace x with .253 and
y = .977

Now let's test with 1440p, or 3.7 megapixels, which was said to use 1.94 GB:

3.7x + y = 1.91 GB ---> pretty damn close!!

At a quick glance, Metro: Last Light seems to have a lower y-factor (about 0.8 GB), whereas games like GTA V seem to have an extremely high y-factor of about 4 GB! I am not sure I quite understand the point of some of these extremely high y-factors....

Nitpicking here: you state 8.1 for 4K but type 8.4 in the equation. Also, shouldn't it be 2.08 instead of 2.077?
 
But you see, it DOES scale linearly if you were to ignore this y-factor!!

As does everything that isn't already linear.
Some things need multiple y-factors; same principle though.


In your original post you said:
I found it fascinating that 4K does not use 4 times as much memory as 1080p. It's like there is a
'y-factor', where y is memory reserved regardless of resolution. Bear with me...

There is a large memory offset from the game's graphical assets that are loaded into VRAM, such as textures, shadow maps, etc.
Call this X1; you can subtract it from the amount of memory used by the change in res.
If VRAM fills up, then assets will be streamed in more, so X1 is reduced.

Aero is often left on, which occupies a few hundred MB of offset; call this X2.

Then you have to consider how much AA is used for each mode and the types of AA used.
It's likely that 4K will end up using less AA, because AA eats much-needed GPU power and because it is less needed visually. This will reduce the amount of memory required.
AA is a major hog of graphics memory, so less AA can mean a lot less VRAM used.
AA scales with resolution; it isn't an offset as such, but it can change between resolutions because user preferences affect it.

Buffering, as used by the frame buffer, vsync, Z-buffers, etc., will also take up extra VRAM.
Buffering scales with resolution but is unlikely to change between different resolutions because there isn't much need to change it, unless vsync is turned off to raise the minimum FPS when it was previously on.

...
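
To make the idea above concrete, here is a rough Python sketch; the component sizes are illustrative assumptions, not measurements, but it shows why the measured 'y-factor' is really several separate terms (assets, desktop overhead, buffers, AA) rather than one constant:

Code:
# Toy model of VRAM use as a sum of the components described above.
# All sizes are illustrative assumptions, not measured values.
def vram_estimate_gb(width, height, assets_gb=1.0, desktop_gb=0.3,
                     buffer_count=3, msaa_samples=4, bytes_per_pixel=4):
    pixels = width * height
    # X1 (assets) and X2 (desktop/Aero) are roughly resolution-independent.
    fixed = assets_gb + desktop_gb
    # Front/back/Z buffers scale with resolution.
    buffers = buffer_count * pixels * bytes_per_pixel / 1e9
    # MSAA render targets scale with resolution and sample count.
    aa = msaa_samples * pixels * bytes_per_pixel / 1e9
    return fixed + buffers + aa

for w, h, label in [(1920, 1080, "1080p"), (2560, 1440, "1440p"), (3840, 2160, "4K")]:
    print(f"{label}: ~{vram_estimate_gb(w, h):.2f} GB")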
 
I'm still shocked they want this to be the 4K "killer" without upgrading the ROPs, and with a measly 4 GB of memory stuck on it. This won't affect my buying decision as I don't have any plans to play at 4K (turned into a cheap gamer, got old with kids :D), but the overall performance and price will for sure keep me from buying this.

They wanted to use HBM, and HBM is a technology limited to 4 GB per GPU at this time. Their choice to go with HBM ultimately limited them in that one respect.

I'm sure when 8 GB of HBM is available it will be praised as "perfect for 4K gaming," just you watch.
 
Thought it was determined that new cards don't work well with Mantle. Mantle was great on the GPUs that could actually work with it.

There have been a lot of people spreading FUD, saying our testing was flawed because we did not use Mantle, and we wanted to put the proof on the table to prove those people wrong.
 
They wanted to use HBM, and HBM is a technology limited to 4 GB per GPU at this time. Their choice to go with HBM ultimately limited them in that one respect.

I'm sure when 8 GB of HBM is available it will be praised as "perfect for 4K gaming," just you watch.

I read that HBM1 wasn't actually limited to 4 GB; it was just that AMD originally designed the GPU believing/hoping they'd be able to move it to a smaller process when the time came, which would've left more room for another 4 GB. But because the smaller process never happened, they were left cramped for space and could only fit 4 GB.

In any case you're probably right; I'm sure when Pascal rolls into town with 8 GB and HBM2 it will be a party with cherry ice cream and people taking their pants off.
 
It 'looks' like HBM has higher latency; otherwise the 50% higher bandwidth should have gone some way toward improving minimum FPS figures.
 
Nitpicking here: you state 8.1 for 4K but type 8.4 in the equation. Also, shouldn't it be 2.08 instead of 2.077?

It's wrong up top: 1920x1080 is 2.07 MP, and 4K should really be 8.3 MP, but the math should still be close.
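
For reference, the exact pixel math:
1920 x 1080 = 2,073,600 px = 2.07 MP
2560 x 1440 = 3,686,400 px = 3.69 MP
3840 x 2160 = 8,294,400 px = 8.29 MP
so 4K is exactly 4x the pixels of 1080p.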

As does everything that isn't already linear.
Some things need multiple y-factors; same principle though.


In your original post you said:
I found it fascinating that 4K does not use 4 times as much memory as 1080p. It's like there is a
'y-factor', where y is memory reserved regardless of resolution. Bear with me...

There is a large memory offset from the game's graphical assets that are loaded into VRAM, such as textures, shadow maps, etc.
Call this X1; you can subtract it from the amount of memory used by the change in res.
If VRAM fills up, then assets will be streamed in more, so X1 is reduced.

Aero is often left on, which occupies a few hundred MB of offset; call this X2.

Then you have to consider how much AA is used for each mode and the types of AA used.
It's likely that 4K will end up using less AA, because AA eats much-needed GPU power and because it is less needed visually. This will reduce the amount of memory required.
AA is a major hog of graphics memory, so less AA can mean a lot less VRAM used.
AA scales with resolution; it isn't an offset as such, but it can change between resolutions because user preferences affect it.

Buffering, as used by the frame buffer, vsync, Z-buffers, etc., will also take up extra VRAM.
Buffering scales with resolution but is unlikely to change between different resolutions because there isn't much need to change it, unless vsync is turned off to raise the minimum FPS when it was previously on.

...

Can you please clarify a bit? I am just wondering what is causing these y-factors to increase over time. I would really like to see what the y-factor was 5-6 years ago (zero?). I am guessing the actual GB per megapixel (x) has not really increased much.
 
I read that HBM1 wasn't actually limited to 4 GB; it was just that AMD originally designed the GPU believing/hoping they'd be able to move it to a smaller process when the time came, which would've left more room for another 4 GB. But because the smaller process never happened, they were left cramped for space and could only fit 4 GB.

In any case you're probably right; I'm sure when Pascal rolls into town with 8 GB and HBM2 it will be a party with cherry ice cream and people taking their pants off.


Per Joe Macri, of JEDEC and AMD, they are currently only able to build Gen 1 HBM with 4 GB per GPU. He did say it was possible that we might see Gen 1 HBM further out that would support 8 GB. Gen 2 HBM will support 8 GB for sure, and possibly more as well.
 
Can you please clarify a bit? I am just wondering what is causing these y-factors to increase over time. I would really like to see what the y-factor was 5-6 years ago (zero?). I am guessing the actual GB per megapixel (x) has not really increased much.

More assets are loaded into VRAM over time as they are needed, and often they aren't discarded until VRAM fills up.
So the amount of memory used increases as you play the game.
This can leave unused assets taking up a ton of VRAM.
But also consider that at higher res there is less space left for assets, so the ratio of memory used for resolution vs. assets changes.

The higher the resolution of the assets, the more space they take up.
Memory can be saved by using lower-res assets at lower screen resolutions.
But if the same assets are used at all resolutions, it will look like memory use hasn't increased as much at higher res.
Changes in these kinds of strategies over the years can account for some of the differences.

There is no real y-factor; every game can use different memory-management techniques, and user settings plus game settings greatly affect the VRAM used.
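
As a toy illustration of that "keep it until VRAM fills up" behavior, here is a short Python sketch; the asset names, sizes, and 4 GB budget are made up for the example:

Code:
from collections import OrderedDict

# Toy model: assets stay resident after loading and are only evicted
# (least-recently-used first) once the VRAM budget is exceeded.
class VramAssetCache:
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.resident = OrderedDict()  # asset name -> size in MB

    def use(self, name, size_mb):
        if name in self.resident:
            self.resident.move_to_end(name)  # mark as recently used
        else:
            self.resident[name] = size_mb    # load into VRAM
        # Evict least-recently-used assets only once over budget.
        while sum(self.resident.values()) > self.budget:
            evicted, _ = self.resident.popitem(last=False)
            print(f"evicting {evicted}")

cache = VramAssetCache(budget_mb=4096)
cache.use("level1_textures", 1500)
cache.use("shadow_maps", 800)
cache.use("level2_textures", 2500)  # pushes past 4 GB -> level1 evicted
print(f"resident: {sum(cache.resident.values())} MB")  # 3300 MB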
 
Last one, I promise: Far Cry 4. This time I used 8.3 MP for 4K and 2.07 for 1080p (thanks rumartinez).
x came out to .43 GB per megapixel (highest so far!) with y being 2.17 GB.

Again testing at 1440p:
3.7x + 2.17 = 3.76 GB... wait for it..... tweaktown reported 3.77 GB :)

How can you say there is not a y-factor? That is four examples now.
There should be no reason you need 2 GB, 4 GB, or (in the case of Middle Earth) 4.5 GB
of VRAM and beyond even if you play at 720p. I don't care what Z-buffer, AA, or whatever-the-hell
is being used.
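
For what it's worth, here is the same 1440p check run over all four games in one go; a quick Python sketch using the x/y pairs derived in this thread against tweaktown's reported numbers:

Code:
# (x GB/MP, y GB) pairs derived in this thread, plus tweaktown's reported
# 1440p usage, to see how well vram = x * MP + y holds up per game.
games = {
    "Tomb Raider":      (0.253, 0.977, 1.94),
    "ME:SoM":           (0.100, 4.56,  4.97),
    "Metro Last Light": (0.115, 1.06,  1.46),
    "Far Cry 4":        (0.430, 2.17,  3.77),
}

MP_1440P = 3.7  # megapixels at 2560x1440, as used in the thread

for name, (x, y, reported) in games.items():
    predicted = x * MP_1440P + y
    print(f"{name:18s} predicted {predicted:.2f} GB vs reported {reported:.2f} GB")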
 
I got that impression too... A lot of subjectivity in the review. Was this review NVIDIA-endorsed? :confused:


I also picked up on a certain tone through the article; however, it felt more like disappointment/despair at AMD again hyping a product beyond its capabilities. The guys at PCPer are themselves being accused of the same bias you and others feel [H] have shown; bias of a bias? Although I do not always agree with Kyle and Co., their methodology is sound and referenced by many other websites that are too constrained in one way or another to review shiny new tech.
As before, kudos to AMD for bringing HBM 1 to market... and the form factor isn't bad, sans the WC. However, it's hard to then expect gushing reviews of a product that was over-engineered in one sense without addressing some basic throughput issues, which cannot be glossed over by any amount of HBM... or fairy dust. Hopefully some driver magic can boost this GPU to the point that the lack of back-end engine resources becomes moot. I honestly feel AMD are on the right track and Fury/Nano will be well received, with Fury X being the reference point. Just don't expect any of these, or the NV equivalents, to carry a 3840x2160 ultra-quality signal at over 60 FPS consistently. We are not in 4K land yet....
 
Here it is again. Twice in 5 minutes from 2 different people.
Can you prove that?

As soon as you show me a reference air-cooled design for the Fury X...

Oh. TH:

[Image: Back-Side-Torture_w_600.jpg]


90C on the motherboard slot and nearly 100C on the VRM pins?

Hmm.

If it looks like a duck, and walks like a duck, and quacks like a duck, and if you catch it, kill it and cook it and it tastes like duck?

Probably a duck.

But hey! Screw Occam!

Continue the Fanboy Flail!

Removed - Kyle
 
It's a torture test; I believe that's the point.
Tom's didn't torture their 980 Ti, so there is no comparison.

Maybe they did it on the TX; I didn't check.

edit: Nope, no torture test on the 980 Ti or the Titan X. Good job, Tom's. 100% FUD.

The gaming test puts the same spot on the 980 Ti in the mid-60s, slightly below the Fury X, which makes sense.
Although... I'm using common sense and facts. Clearly a waste of time.

If you don't know what a torture test is, please let me know; I will gladly explain it to you, just so we're all on the same level here.
 
Last one, I promise: Far Cry 4. This time I used 8.3 MP for 4K and 2.07 for 1080p (thanks rumartinez).
x came out to .43 GB per megapixel (highest so far!) with y being 2.17 GB.

Again testing at 1440p:
3.7x + 2.17 = 3.76 GB... wait for it..... tweaktown reported 3.77 GB :)

How can you say there is not a y-factor? That is four examples now.
There should be no reason you need 2 GB, 4 GB, or (in the case of Middle Earth) 4.5 GB
of VRAM and beyond even if you play at 720p. I don't care what Z-buffer, AA, or whatever-the-hell
is being used.
You can make up a y-factor for anything.

You are trying to compare memory used per pixel at each res without understanding explicitly how each game uses memory.
To start with, you need to subtract the memory used by assets, which won't be known unless you can debug the code or know the game dev.

You aren't considering the settings used at each res, which can make a huge difference.
I explained that a lot of assets in memory are not necessarily needed, but are put there in case they are needed or are left over after previously being loaded for another purpose.
Some memory use is superfluous and could perhaps be further optimized, but there is only so much time devs get to work on a project.

The amount of memory used isn't something that is arbitrarily made up.
It is used for a reason in most cases.
 
It's a torture test; I believe that's the point.
Tom's didn't torture their 980 Ti, so there is no comparison.

Maybe they did it on the TX; I didn't check.

edit: Nope, no torture test on the 980 Ti or the Titan X. Good job, Tom's. 100% FUD.

Correct, and they even stated that if the AIO allowed the fan to spin faster, it could cool even better.

The normal gaming-temperature pictures show much cooler temps.

Picking and choosing to suit your argument never works out. :rolleyes:
 
Correct, and they even stated that if the AIO allowed the fan to spin faster, it could cool even better.

A faster fan isn't going to help cool down the PCI-E bracket, which is the point of contention.
It also won't help the back of the PCB.

The FX pulls over 430W in synthetic tests according to TPU.
It uses around 280W while gaming. Why use a torture bench as a reference point for temps?
 
They wanted to use HBM, and HBM is a technology limited to 4 GB per GPU at this time. Their choice to go with HBM ultimately limited them in that one respect.

I'm sure when 8 GB of HBM is available it will be praised as "perfect for 4K gaming," just you watch.

I don't disagree with them pushing HBM for this product; it ultimately got them what they wanted: a lower profile and lower power usage. They could have added double the stacks to the core for 8 GB, but I imagine that would have proven crazy expensive.

Either way, 6 or 8 GB won't make it better than the competition at 4K. It would alleviate the problem, but AMD has more issues with the logic, it would seem.
 