AMD Radeon R9 Fury X Video Card Review @ [H]

Lots of people like cool-running cards. Look at all the people that skipped the R9 200 series for the Nvidia 700 series strictly because the 700 series ran cooler. Personally I couldn't care less, but it seems that for a lot of consumers this is a valid concern.

Again, that is one of those metrics that overrode all others for those particular consumers. For instance:

I don't care about noise: I have an air-conditioned cabinet that houses both my wife's computer and mine, so noise is not an issue. I have a Delta MEGA FAST fan in mine, and trust me, no fan is louder. I do run it at 5V because at 12V you can hear it through the case and outside my house. Damn good benching fan, but not for the average user.

I don't care about power usage: I pay my own bills, and as long as I can, there is no need to concern myself with the little difference a change would make.

Money: Unfortunately it limits me more than I wish. Actually, instead of money I should say WIFE. She is what keeps me from buying some of what I want. FEAR THE WRATH. LOL

Again, some people are the polar opposite of my metrics, and that doesn't make them stupid, wrong, or a fanboy.
 
Interesting that a performance increase was reported when AMD says the clocks did not actually increase.
Someone has foot-in-mouth disease.
 
Interesting that a performance increase was reported when AMD says the clocks did not actually increase.
Someone has foot-in-mouth disease.

I might be mistaken... but every report of a performance improvement with "HBM overclock" I have seen also had the GPU core overclocked... so it's obvious where the performance improvement comes from... :rolleyes:
 
I might be mistaken... but every report of a performance improvement with "HBM overclock" I have seen also had the GPU core overclocked... so it's obvious where the performance improvement comes from... :rolleyes:

On the one I saw, it was 9% on the core and 20% on the HBM, resulting in a 20% performance gain.
 
I might be mistaken... but every report of a performance improvement with "HBM overclock" I have seen also had the GPU core overclocked... so it's obvious where the performance improvement comes from... :rolleyes:

lol, if that's true, it's even worse.
Care to provide proof that it was always accompanied by an unannounced core overclock?

Why haven't you trolled the people who made the claims?

PS:
How is it obvious, with a rolleyes, when you "might be mistaken"?
Fanboy not sure of himself, nicely exposed.
 
lol, if that's true, it's even worse.
Care to provide proof that it was always accompanied by an unannounced core overclock?

Why haven't you trolled the people who made the claims?

PS:
How is it obvious, with a rolleyes, when you "might be mistaken"?
Fanboy not sure of himself, nicely exposed.


I never said it was "unannounced"... if it was, how would I know the GPU core was overclocked? :confused:

I did not troll... because I'm no troll.

PS: WHAT!? You are calling me an AMD fanboy? LOL Guess what... I AM! LOL Good job exposing me... LOL
 
I never said it was "unannounced"... if it was, how would I know the GPU core was overclocked? :confused:
Yes, you are confused.
If the overclock was announced, your point would have no grounds because the overclock was already considered.
So only unannounced overclocks give your comment any merit.

I did not troll... because I'm no troll.
Says the troll who trolled me, lol.
You stated you might be wrong, but then said it's obvious and gave me a rolleyes.

PS: WHAT!? You are calling me an AMD fanboy? LOL Guess what... I AM! LOL Good job exposing me... LOL
Yes.
I have demonstrated that you trolled me for no good reason.
If you can give me another reason why you trolled me, I will consider it.
:p
 
Yes, you are confused.
If the overclock was announced, your point would have no grounds because the overclock was already considered.
So only unannounced overclocks give your comment any merit.


Says the troll who trolled me, lol.
You stated you might be wrong, but then said it's obvious and gave me a rolleyes.


Yes.
I have demonstrated that you trolled me for no good reason.
If you can give me another reason why you trolled me, I will consider it.
:p


You can stop that game of who trolled who because I'm not into that.

That being said... I'm not following your logic...
The thing is reports have been showing up stating that:
1. HBM can be overclocked
2. performance increases when HBM is overclocked

...I added that I was almost sure that in all those reports I have seen the GPU core was also overclocked...

Then we have this: http://wccftech.com/amd-radeon-r9-fury-memory-oveclocked-20/ (Check the UPDATE in the middle of the article.)

If what Robert Hallock (AMD technical PR) said is true, then the performance increase can ONLY be because of the GPU core overclock...

That was what I was speculating... that's why I used the "rolleyes".

Finally... I was not trolling you when I said I'm an "AMD fanboy"... I am, in the sense that I will NEVER buy an nVIDIA GPU.
 
You can stop that game of who trolled who because I'm not into that.

That being said... I'm not following your logic...
The thing is reports have been showing up stating that:
1. HBM can be overclocked
2. performance increases when HBM is overclocked

...I added that I was almost sure that in all those reports I have seen the GPU core was also overclocked...
That's not all you said.
You started with maybe... Then you said obviously and put a rolleyes on the end.
A direct troll.

Then we have this: http://wccftech.com/amd-radeon-r9-fury-memory-oveclocked-20/ (Check the UPDATE in the middle of the article.)

If what Robert Hallock (AMD technical PR) said is true, then the performance increase can ONLY be because of the GPU core overclock...
You are making an assumption yet saw fit to troll me over it.

That was what I was speculating... that's why I used the "rolleyes".
BS, saying "obviously" and then a rolleyes is not speculating; you stated what you think is fact, otherwise it cannot be obvious.
Not only did you troll, but you don't know what you are talking about.

Finally... I was not trolling you when I said I'm an "AMD fanboy"... I am, in the sense that I will NEVER buy an nVIDIA GPU.
You are confused on 2 counts.
I agreed that you are an AMD fanboy. It looks like that is why you trolled me; I was confirming it.
And you did troll.
 
Enough already, kids. Can you conduct your spat offline?

I swear I've never seen so many people act like jerks over something that is ultimately designed for entertainment. This shit is supposed to be fun.
 
That's not all you said.
You started with maybe... Then you said obviously and put a rolleyes on the end.
A direct troll.


You are making an assumption yet saw fit to troll me over it.


BS, saying "obviously" and then a rolleyes is not speculating; you stated what you think is fact, otherwise it cannot be obvious.
Not only did you troll, but you don't know what you are talking about.


You are confused on 2 counts.
I agreed that you are an AMD fanboy. It looks like that is why you trolled me; I was confirming it.
And you did troll.


You are obviously a PRO at this trolling thing... I am not.

Maybe due to bad wording on my part (English is not my first language), I was trolling without meaning to or noticing.

If you think I was trolling YOU, I apologize.

I would never troll you because I am an AMD fanboy... simply because I'm not offended if you call me an AMD fanboy.
 
Uhhh... who cares where the performance comes from if it can be OCed for more performance?

Also, doesn't that just mean the memory isn't a bottleneck if it provides no perf increase when OCed?

What's wrong with a 20% perf boost from 9% on the core?
 
I'm sorry for poking the hornet's nest further, but has anyone seen this?
[attached image: VRAM bandwidth benchmark screenshots for the Fury X]


Supposedly the Fury X is taking the piss after 3.5GB of VRAM allocation, never mind the fact that bandwidth is far off from the claimed 512 GB/s at stock speed. Would be funny if this turned out to be true.

I ran this same test on a 6GB 980 Ti; it shows the same behavior for the last 500MB of the RAM. Likely it is reserved by Windows, used for page swapping, or something like that.

What would be interesting is someone running this test on a 970 to see if the last 500MB, with its supposedly slower access, is just behaving the same as these, or if it's an additional 500MB, i.e. the slower access appearing at the 3GB threshold of the test. (Assuming this thing is a reliable benchmark.)

One thing to note: NVIDIA has said that to get the full 12GB of the Titan X available, you need about 24GB of system RAM, due to the way that Windows pages the memory. I have 24GB of system memory, purposely upgraded from 12GB when I got this 980 Ti for this very reason. Running this bench with a Titan X in a system with only 12GB of system RAM would be telling, to see how much system RAM has to be set aside for these high-end video cards to function optimally.

Someone who has a Titan X, do some testing and show us the results :)

Too bad about the Fury X HBM overclocking, but it's also interesting that overclocking the GPU increases the VRAM bench scores.
 
I ran this same test on a 6GB 980 Ti; it shows the same behavior for the last 500MB of the RAM. Likely it is reserved by Windows, used for page swapping, or something like that.
On Win7 the last chunk of memory will be in use by DWM (Aero), which those linked screenshots clearly show was active. If you are using Win7 and disable DWM (and all other GPU applications), you should have full bandwidth for all 6GB on your 980 Ti. You can check for this with NVIDIA Inspector by double-clicking the NVIDIA icon and saving a dump file. You'll need to kill anything shown under "GPU Apps" in the txt file.

What this does show, though, is that there is nothing magical (as expected) about Fury X HBM. Just like all other GPUs, when it exceeds available physical VRAM, it needs to swap over the slow PCI-E bus.
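For anyone curious how these chunked VRAM tests work in principle, here is a minimal sketch of the idea (assuming Python with PyTorch on a CUDA-capable GPU; this is not the benchmark from the screenshots, just an illustration of allocating VRAM in fixed-size chunks and timing transfers into each one to see where throughput falls off):

```python
import torch

CHUNK_MB = 128
ELEMS = CHUNK_MB * 1024 * 1024 // 4      # float32 elements per chunk

src = torch.empty(ELEMS, dtype=torch.float32, device="cuda")
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
chunks = []

while True:
    try:
        dst = torch.empty(ELEMS, dtype=torch.float32, device="cuda")
    except RuntimeError:                 # out of VRAM -> stop allocating
        break
    chunks.append(dst)

    torch.cuda.synchronize()
    start.record()
    dst.copy_(src)                       # device-to-device copy into this chunk
    end.record()
    torch.cuda.synchronize()

    ms = start.elapsed_time(end)
    gbps = (2 * CHUNK_MB / 1024) / (ms / 1000.0)   # read + write traffic
    print(f"{len(chunks) * CHUNK_MB:6d} MB allocated: {gbps:7.1f} GB/s")
```

On a card where the last chunks are reserved by something like DWM or have been paged out, those final entries should report a fraction of the normal on-card bandwidth, which is the pattern people are reading into the screenshots.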
 
I don't think anyone is calling HBM magical, but it is the future and what NVIDIA will be using going forward as well.
 
I don't think anyone is calling HBM magical, but it is the future and what NVIDIA will be using going forward as well.

AMD themselves sort of have in their marketing PR spin when they've been addressing the 4GB limitation. Though in truth, they are just saying to make sure you have enough system RAM so you at least get full PCI-E 3.0 x16 bus speed when a swap is needed, since it will be all the more painful if swapping from the pagefile on an SSD/HDD vs. RAM. Either way though, the PCI-E 3.0 x16 bus isn't really fast enough to avoid performance issues in out-of-memory situations.

Stacked memory such as HBM is the future; it just doesn't make VRAM capacity or PCI-E bus limitations disappear. The bandwidth limitations of the PCI-E bus are the primary reason NVIDIA is creating NVLink for the supercomputer market on their HBM GPUs, even though we'll likely never see NVLink as consumers.
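To put rough numbers on why an out-of-VRAM swap hurts so much (a back-of-the-envelope comparison only, assuming the usual ~15.75 GB/s of usable PCI-E 3.0 x16 bandwidth and the Fury X's advertised 512 GB/s HBM figure):

```python
pcie3_x16_gbps = 16 * 8 * (128 / 130) / 8   # 16 lanes * 8 GT/s * 128b/130b encoding, in GB/s
hbm_gbps = 512.0                             # Fury X advertised HBM bandwidth

print(f"PCI-E 3.0 x16: ~{pcie3_x16_gbps:.2f} GB/s")
print(f"HBM vs PCI-E ratio: ~{hbm_gbps / pcie3_x16_gbps:.0f}x")
# Roughly 15.75 GB/s vs 512 GB/s: anything that has to come over the bus arrives
# at about 1/30th the speed of local VRAM, which is why spilling is so painful.
```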
 
Not sure which thread I posted the quote in, but it was basically saying that VRAM usage now is so inefficient that if we knew how much, we would be thoroughly disgusted. AMD, with HBM, is making an effort in the driver to clean it up and not rely upon game devs to do so, seeing how they haven't thus far, well, most of them.
 
Not sure which thread I posted the quote in, but it was basically saying that VRAM usage now is so inefficient that if we knew how much, we would be thoroughly disgusted. AMD, with HBM, is making an effort in the driver to clean it up and not rely upon game devs to do so, seeing how they haven't thus far, well, most of them.

I can't speak to the quote, or what AMD may or may not be doing, but I've been thinking about this quite a bit lately. Some REALLY good looking games have come out in recent years that don't require huge amounts of VRAM. Then all of a sudden (seems like overnight or maybe just post Watch Dogs fiasco) all anyone is talking about is more more more VRAM, VRAM this, VRAM that!

I understand requirements rising over time, but we're talking a steep curve here: 1GB, 1GB, 1GB, 2GB, 2GB, 3GB, 4GB, 6GB, 8GB... At some point one needs to ask: are devs just lazy, not doing clears when they should, just letting their engines eat memory, hold onto it, etc.? I'd love to hear the opinion of someone who's done some serious graphics coding. Not just heavy use, but tight, efficient code. Like a Carmack, maybe a demo-coder, or someone who really understands squeezing performance out of less.

More is generally better, but not if it's just to compensate for sloppy work.

I get 4K requirements too, but there are games right now using more than 4GB at lower standard resolutions, and that's puzzling to me. If it's really putting it to full use in some tangible way, great, but I tend to wonder...

I don't want to halt progress with more and better memory configs on cards, but it would be a shame if all that's doing is making it easier for devs to be sloppy, and not translating into potential improvements.
 
More is generally better, but not if it's just to compensate for sloppy work.

I agree with this. And just look at a game like Arkham Knight; do you really think management is going to give a f*** about inefficient GPU RAM usage unless it completely tanks the game? Further, it's such a relatively obscure issue for the general public, and even for sophisticated reviewers like [H], how are they actually supposed to quantitatively determine the difference?
 
At some point one needs to ask: are devs just lazy, not doing clears when they should, just letting their engines eat memory, hold onto it, etc.? I'd love to hear the opinion of someone who's done some serious graphics coding. Not just heavy use, but tight, efficient code. Like a Carmack, maybe a demo-coder, or someone who really understands squeezing performance out of less.

More is generally better, but not if it's just to compensate for sloppy work.

I get 4K requirements too, but there are games right now using more than 4GB at lower standard resolutions, and that's puzzling to me. If it's really putting it to full use in some tangible way, great, but I tend to wonder...

I brought this up about 20 pages ago and only got chastised for it. The fact that we went from 220 MB for the game engine in Crysis to 4500 MB in Middle-earth, all while using the same amount of VRAM per megapixel, is suspect at best.
 
I can't speak to the quote, or what AMD may or may not be doing, but I've been thinking about this quite a bit lately. Some REALLY good looking games have come out in recent years that don't require huge amounts of VRAM. Then all of a sudden (seems like overnight or maybe just post Watch Dogs fiasco) all anyone is talking about is more more more VRAM, VRAM this, VRAM that!

I understand requirements rising over time, but we're talking a steep curve here: 1GB, 1GB, 1GB, 2GB, 2GB, 3GB, 4GB, 6GB, 8GB... At some point one needs to ask: are devs just lazy, not doing clears when they should, just letting their engines eat memory, hold onto it, etc.? I'd love to hear the opinion of someone who's done some serious graphics coding. Not just heavy use, but tight, efficient code. Like a Carmack, maybe a demo-coder, or someone who really understands squeezing performance out of less.

More is generally better, but not if it's just to compensate for sloppy work.

I get 4K requirements too, but there are games right now using more than 4GB at lower standard resolutions, and that's puzzling to me. If it's really putting it to full use in some tangible way, great, but I tend to wonder...

I don't want to halt progress with more and better memory configs on cards, but it would be a shame if all that's doing is making it easier for devs to be sloppy, and not translating into potential improvements.


Do you realize that artwork is what takes a huge chunk of time in game development once an engine is done?

Do you know that rendering out an 8K texture set takes around half a day, just to render out, mind you, and after that probably another 2 weeks to do the final touches? That is just one texture set.

Increasing texture sizes is necessary to get all those beautiful levels you see, and 8K is going to be a standard for next-gen games.


If you think that is sloppy or lazy, tell your favorite game development houses to tone down the artwork just for you; better yet, don't buy those types of games because you think they are sloppy and lazy.

The problem is they aren't lazy or sloppy, as you so happily put it, at least most of them. Going up one texture size means 4x the necessary memory, and creating those textures now takes 4 times the horsepower from a PC standpoint. That's why we don't see the necessity to go up in VRAM quickly, but at some point a jump is necessary, when the processing power and the time to create assets are there.

And most of the memory is used for textures, at least before we start looking into AA and AF modes; resolution increases the buffer sizes some, but comparatively nothing takes up more memory than the textures.
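For a rough sense of the 4x-per-step texture math (an illustrative calculation only, assuming uncompressed RGBA8 at 4 bytes per texel and ~33% extra for a full mip chain; real games use block compression, which cuts these numbers by 4-8x):

```python
def texture_mb(width, height, bytes_per_texel=4, mips=True):
    """Approximate VRAM footprint of a single texture in MB."""
    base = width * height * bytes_per_texel
    total = base * 4 / 3 if mips else base   # full mip chain adds roughly 1/3
    return total / (1024 * 1024)

for size in (2048, 4096, 8192):
    print(f"{size}x{size}: {texture_mb(size, size):6.1f} MB")
# 2K ~ 21 MB, 4K ~ 85 MB, 8K ~ 341 MB per texture: each doubling of resolution
# quadruples the memory, so a few hundred high-res texture sets fill multiple GB fast.
```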
 
I brought this up about 20 pages ago and only got chastised for it. The fact that we went from 220 MB for the game engine in Crysis to 4500 MB in Middle-earth, all while using the same amount of VRAM per megapixel, is suspect at best.


Crysis the game takes a lot more than 220 MB at the highest settings. The engine alone with the shaders takes up that much space without anything else going on.

Just open up the CryEngine editor with nothing loaded and run the profiler and see.
 
This, I think, sums up AMD fans here. It's part pity, part cool factor. Rational individuals would buy the more performant Nvidia card, but the intangible is how much the cool factor of the factory all-in-one cooler overrides performance.

In most neutral games there is no discernible difference in performance. A few percent one way or the other in DX11 isn't going to be noticeable. Most people realize that liquid cooling is added value, and then there's the better performance in non-gaming situations. There are lots of good reasons to buy Fury.

I prefer AMD and feel no pity whatsoever. Even discounting the complete compute domination, anyone who bought the 7970 over the 680/770 ended up with a better card. Anyone who bought a 290/X over the 780/Ti/Titan ended up with a better card. Especially considering that the same chip in the 390/X is still keeping up with the 970/980. In almost every case they saved money too. Don't be surprised if, with the more forward-thinking tech in Fury, it ends up better in the end than the 980 Ti/Titan X. History has a strange way of repeating itself. Especially with a company like nVidia, who couldn't care less about you once they've got your money, and AMD cards being aimed at open-source technologies.
 
In most neutral games there is no discernible difference in performance. A few percent one way or the other in DX11 isn't going to be noticeable. Most people realize that liquid cooling is added value, and then there's the better performance in non-gaming situations. There are lots of good reasons to buy Fury.

I prefer AMD and feel no pity whatsoever. Even discounting the complete compute domination, anyone who bought the 7970 over the 680/770 ended up with a better card. Anyone who bought a 290/X over the 780/Ti/Titan ended up with a better card. Especially considering that the same chip in the 390/X is still keeping up with the 970/980. In almost every case they saved money too. Don't be surprised if, with the more forward-thinking tech in Fury, it ends up better in the end than the 980 Ti/Titan X. History has a strange way of repeating itself. Especially with a company like nVidia, who couldn't care less about you once they've got your money, and AMD cards being aimed at open-source technologies.

No, it isn't added value. It's part of the price of the card, nothing more and nothing less. It's not like people have the option of a CLC or not. AMD didn't add it out of the goodness of their hearts or to add more value to the card; it's there because AMD felt it was necessary. Hopefully future drivers do improve the card; something to put pressure on Nvidia in the high end would be great for everyone.
 
In most neutral games there is no discernible difference in performance. A few percent one way or the other in DX11 isn't going to be noticeable. Most people realize that liquid cooling is added value, and then there's the better performance in non-gaming situations. There are lots of good reasons to buy Fury.

I prefer AMD and feel no pity whatsoever. Even discounting the complete compute domination, anyone who bought the 7970 over the 680/770 ended up with a better card. Anyone who bought a 290/X over the 780/Ti/Titan ended up with a better card. Especially considering that the same chip in the 390/X is still keeping up with the 970/980. In almost every case they saved money too. Don't be surprised if, with the more forward-thinking tech in Fury, it ends up better in the end than the 980 Ti/Titan X. History has a strange way of repeating itself. Especially with a company like nVidia, who couldn't care less about you once they've got your money, and AMD cards being aimed at open-source technologies.

Considering people are running 1000MHz base-clock 980 Tis at 1500MHz on AIR, the idea of a maxed-from-the-factory Fury X ever performing better than it is a little misinformed.

I'm also unaware that AMD cares about "open source". FreeSync? Nope. Mantle? Nope. DX12? Nope. Vulkan? Not yet. When Mantle didn't catch on and they punted, it wasn't for any altruistic reasons; they gave it to MS & Khronos in hopes it would eventually benefit their struggling APUs. So it was just a forced extension of the self-serving long game they were trying to play with Mantle, not because they care about "open source" or its community.
 
Considering people are running 1000MHz base-clock 980 Tis at 1500MHz on AIR, the idea of a maxed-from-the-factory Fury X ever performing better is a little delusional...

I'm also unaware that AMD cares about "open source". FreeSync? Nope. Mantle? Nope. DX12? Nope. Vulkan? Not yet. When Mantle didn't catch on and they punted, it wasn't for any altruistic reasons; they gave it to MS & Khronos in hopes it would eventually benefit their struggling APUs. So it was just a forced extension of the self-serving long game they were trying to play with Mantle, not because they care about "open source" or its community.

Sorry, but this is blatant BS. It sounds more like propaganda than fact. Mantle wasn't punted. AFTER DX12 was announced, AMD recommended devs move to DX12 rather than Mantle; basically they both have equivalent functions, so it would be less resource-intensive for AMD. DX12 is better industry-wide, and APUs as well as a lot of lower-end CPUs from all manufacturers will benefit greatly.
 
Considering people are running 1000MHz base-clock 980 Tis at 1500MHz on AIR, the idea of a maxed-from-the-factory Fury X ever performing better is a little delusional...

I'm also unaware that AMD cares about "open source". FreeSync? Nope. Mantle? Nope. DX12? Nope. Vulkan? Not yet. When Mantle didn't catch on and they punted, it wasn't for any altruistic reasons; they gave it to MS & Khronos in hopes it would eventually benefit their struggling APUs. So it was just a forced extension of the self-serving long game they were trying to play with Mantle, not because they care about "open source" or its community.

You mean running a 1500MHz BOOST clock?
 
It's really like going from 1175MHz to 1500MHz on air (a 28% OC) for nVidia, which is still impressive. :)


Oooo, VRAM nonsense for AMD. Nice. Reminds me of that GTA V video where it was stuttering at 4K; it would get to around 3.5GB, then stutter and drop to 2.5GB.

Sorry, but this is blatant BS. It sounds more like propaganda than fact. Mantle wasn't punted. AFTER DX12 was announced, AMD recommended devs move to DX12 rather than Mantle; basically they both have equivalent functions, so it would be less resource-intensive for AMD. DX12 is better industry-wide, and APUs as well as a lot of lower-end CPUs from all manufacturers will benefit greatly.

Mantle was punted. It was a disaster from day one. All I ever read about was, "I have an issue with xx game." "Oh, did you try running DX instead of Mantle?" "That worked, thanks!" Games like BF4, their main game pushing Mantle, were a shit storm.

IMO both nVidia and AMD should stay the hell away from things like Mantle, GameWorks, etc.
 
Sorry but this is blatant BS. Sounds more propaganda than fact. Mantle wasn't punted. AFTER DX12 was announced AMD recommended devs move to DX12 rather than Mantle basically they both have equivalent functions and would therefore be less resource intensive for AMD. DX12 is better industry wide and APUs as well as a lot of lower end CPUs from all manufacturers will benefit greatly.

You really think AMD spent millions developing Mantle, then millions more writing bribe checks to EA & friends to get Mantle implemented in some high-profile games to create hype and perception, all so they could give it away to Microsoft in the end? Nobody can be that naive. Obviously it will benefit their APUs - that's not shining a light on anything previously unknown. But the original intention was to create AMD lock-in for the benefit of their GPUs and secondarily APUs - and rightly so - but as time slipped by and developers continued to ignore it, since it was too costly to implement with AMD at only 20% of the GPU market, AMD cut their losses.
 
You really think AMD spent millions developing Mantle, then millions more writing bribe checks to EA & friends to get Mantle implemented in some high-profile games to create hype and perception, all so they could give it away to Microsoft in the end? Nobody can be that naive. Obviously it will benefit their APUs - that's not shining a light on anything previously unknown. But the original intention was to create AMD lock-in for the benefit of their GPUs and secondarily APUs - and rightly so - but as time slipped by and developers continued to ignore it, since it was too costly to implement with AMD at only 20% of the GPU market, AMD cut their losses.

You have to look at the timeline: AMD didn't initiate Mantle, DICE did. That throws the whole argument to the wind. Add to that that Mantle and DX12 are virtually identical. A lot of what you are posting doesn't follow the timeline of facts or the eventual situation now.
 
You have to look at the timeline: AMD didn't initiate Mantle, DICE did. That throws the whole argument to the wind. Add to that that Mantle and DX12 are virtually identical. A lot of what you are posting doesn't follow the timeline of facts or the eventual situation now.


Well, that's the thing: we still don't know if DX12 was already being worked on at the point DICE was making the specs for Mantle; all we know is that many developers were saying they wanted certain things for DX12.

If I remember correctly, Mantle was first really talked about to partners and the general public in Nov 2013 by DICE's TD at a conference, and DX12 was first shown off in a game at GDC in March 2014.

So Mantle would have had to be given to MS, and in 6 months Microsoft was able to create DX12 and then have a developer modify their game, all while drivers were made in the same time frame too; it doesn't sound right.

All this stuff doesn't matter anymore; Mantle as we currently know it is pretty much gone.
 