FuryX aging tremendously bad in 2017

so yeah, all of your logic is absolutely wrong..
Actually, I'm in the chip-making and semiconductor industry, and I know how long R&D takes... You apparently don't. If you honestly think a company can counter a competing card in a matter of a few months, you're greatly mistaken and need to read up on the process. The Fury X was in R&D for probably a year or more, with sample silicon to test around 6 months later. JUST the tools needed to build the NEW kind of memory were in their own R&D for probably a year or two before that. They have to figure out how to make the shit, lol, seeing how it's a BRAND NEW process of stacking memory. Which, BTW, AMD doesn't do this R&D; third-party tool makers do (where I work...). So, to say a company brings something out to compete with something released within the same month, or a few months after, is kind of a hard argument. Most of the time they are mainly trying to beat their competition's previous flagship, and if they can get close to that company's next flagship, that's all just gravy.

If you knew what I know about the semiconductor industry, your head would explode... Let me give you a small insight: both companies are sandbagging. All the way to the bank. lol... Just come along for the ride.
 
I think all of the cards should be evaluated on their MSRP. With that said adding a water block to a card doesn't make it become The Flash. It is the same card that just runs cooler than the regular version. A water cooler is nothing more than a boutique solution. Yes, it can lower power draw, noise and maybe heat; but it is the same card as the air cooled version.

This thread is proof of how bad AMD's marketing was at that time. The Fury X should have been marketed as a collaboration with Cooler Master to create a limited edition Fury with a water block. The Nano should have been marketed as a baby Fury for the SFF audience.

The regular Fury and Nano should have been the same price. The Fury X should have been higher priced just enough to cover the price of the AIO cooler on it. Most importantly the Fury and Fury X should have been identical except for the water cooling parts.

The Fury was tested against the GTX 980 as that was its competitor at that price point. The 390X had double the memory and was marketed against the same GTX 980. This made absolutely no sense to me so I kept using my R9 290 4GB; overclocked and water cooled it. Sapphire had 290X 8GB variants which muddied the waters even more.

The really sad marketing was pitching the Fury X as a 4K-only card. Who dreamed that up? The card should have been a limited edition run for collectors, like the water-cooled Vega 64 LE.


Whoever wrote the script for the Computex show last night hit a home run. They should stick with that style of campaign: This is what we have. This is how we think you will use it. This is how it performs running that task.
 
Actually, I'm in the chip-making and semiconductor industry, and I know how long R&D takes... You apparently don't. If you honestly think a company can counter a competing card in a matter of a few months, you're greatly mistaken and need to read up on the process. The Fury X was in R&D for probably a year or more, with sample silicon to test around 6 months later. JUST the tools needed to build the NEW kind of memory were in their own R&D for probably a year or two before that. They have to figure out how to make the shit, lol, seeing how it's a BRAND NEW process of stacking memory. Which, BTW, AMD doesn't do this R&D; third-party tool makers do (where I work...). So, to say a company brings something out to compete with something released within the same month, or a few months after, is kind of a hard argument. Most of the time they are mainly trying to beat their competition's previous flagship, and if they can get close to that company's next flagship, that's all just gravy.

If you knew what I know about the semiconductor industry, your head would explode... Let me give you a small insight: both companies are sandbagging. All the way to the bank. lol... Just come along for the ride.

You are then refuting your own statement that FuryX was going against the 980. The 980 didn't even exist when AMD was postulating the FuryX.

What really happened is about three years before the Fury line from AMD and the 9xx series from nvidia were released, both AMD and nVidia targeted a performance bracket that their cards would compete in. AMD targeted three Fury performance brackets and nVidia targeted seven (or so).

AMD thought they had a winner with the FuryX. They saw a gap between the 980 and the Maxwell Titan and thought they could exploit it with a Fury chip pushed to its absolute limit of stability, and charge a premium to boot. The release of the 980 Ti foiled AMD's dream of exploiting that pricing gap. AMD had no choice but to release the FuryX at the same price as the 980 Ti, as it performed about the same.

So, yeah, I played a little fast and loose with the 'targeted at' portion of my previous post, but AMD's performance target was nowhere near the 980.
 
You do know the 980 Ti came out 9 months after the 980, right? And the Fury X shortly after that. But by the time the chips got to the board makers, the boxes were made, and shipping was being prepared, blah blah, it was probably a month before the 980 Ti launched. How could they have any idea about it (other than inside information, which is frequently the case... but anyway)? They were at best hoping to be close to the 980 Ti, but surely beat the 980, which they had plenty of time to tweak a few things for before launch. Things happen very slowly in the semiconductor industry.


Anyway, all I'm saying is that the 980 Ti isn't the best comparison other than price. We can all agree the Fury X was on the spendy side, and I think they just wanted to recoup the extra R&D costs, which AMD basically footed entirely for HBM; other manufacturers now use it, and probably AMD's patents, if any.
 
Upon release, the FuryX and the 980 Ti, for the most part, traded blows. There was the detail of the FuryX sporting an old HDMI standard, and some people were upset that the card came with its own AIO water cooling solution. The FuryX had only 4 GB of VRAM versus the 980 Ti's 6 GB, and the FuryX had legions of fans defending that 4 GB of VRAM as 'faster memory' that could page swap faster than the 6 GB on the 980 Ti. You've been here since 2004 and have over 13,000 posts. You're not new here, but you seem to have extremely selective memory loss when it comes to a card you own or used to own.

Absolutely no one with two working synapses would have paid the FuryX premium if it weren't in the same league as the card it competed with, price for price.
 
You guys do realize that the architects who design the chips are only targeting a *theoretical* performance level. Real-world performance at release could be better or worse due to the many unknowns in production and a live environment. And sometimes the architects are just plain wrong. The huge problem was the 4GB HBM1 limitation. Imagine the joy of letting marketing explain to consumers why a 4GB Fury-X was the flagship compared to the (comparatively) lower-end 8GB R9-390X. IMO, Fury-X was probably some architect's pet science project for HBM that should never have been released (release a Fury-X with a GDDR5 controller... hmmm). I would not be surprised if Nvidia had an unreleased GeForce running HBM1, with the learnings applied to their HBM2 Volta.

Also keep in mind that GCN (Tahiti/Hawaii) was doing okay against Kepler in terms of perf and perf/watt. Maxwell, despite not being a die shrink, was the real stunner: huge perf/watt gains, which allowed for much higher absolute performance. It's not surprising that AMD targeted the kind of huge performance gains usually attributed to die shrinks.

Upon release, the FuryX and the 980 Ti, for the most part, traded blows.

Absolutely no one with two working synapses would have paid the FuryX premium if it weren't in the same league as the card it competed with, price for price.

Uh no. FuryX was measurably slower than 980Ti at 1080p and 1440p. 4K was even, but frankly, who ran 4K then?

https://www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html
 
1. AMD's marketing moonies were stupid at the time the Fury X was released.
2. OP cherry-picked the fuck out of the Fury X benchmarks in order to shill for Nvidia.

These two points are not mutually exclusive.

Also, I hate the fine wine arguments about tech, no matter who is saying it. Tech ages like dog shit and slightly stinkier, nastier dog shit.
 

Quotes from the article that are salient:
It appears that AMD may no longer be prioritizing vRAM optimization for Fury X

Generally both cards are still very capable at 1920×1080 and even at 2560×1440 if some detail settings are lowered.

Conclusion from your linked article:
"There were a few driver issues with both cards but mostly they performed well in modern games."

Our follow-up evaluation on Monday will pit the GTX 980 Ti against the GTX 1070 to see if NVIDIA has neglected Maxwell in favor of Pascal.

So while this thread is clearly your personal witch hunt against the Fury X, it's not like you can't have a decent experience gaming on it.

The follow up evaluation comparing the 980 Ti vs the 1070 should be much more interesting personally.
 

“The GTX 980 Ti is now even faster than the Fury X for the majority of our games. Out of 105 individual benches, the Fury X only wins 26 and ties two. Two years ago, AMD won 29 out of 108 benches, but in many cases its performance was much closer to the GTX 980 Ti’s.”

B-b-b-but fine wine!

It sure as shit is relevant because I still hear that fine wine bullshit thrown around.
 
AMD is a solid innovator, but I question their HBM decision.
How can you risk your entire GPU product line on a new memory type when it was unproven and likely had fewer suppliers available (making for even more expensive memory)?
 
AMD is a solid innovator, but I question their HBM decision.
How can you risk your entire GPU product line on a new memory type when it was unproven and likely had fewer suppliers available (making for even more expensive memory)?

It definitely didn't help, that's for sure. They were already going up against a big competitor. What they needed was cheaper memory. Their hopes that the bandwidth would carry them were misplaced.
 
AMD is a solid innovator, but I question their HBM decision.
How can you risk your entire GPU product line on a new memory type when it was unproven and likely had fewer suppliers available (making for even more expensive memory)?

They needed all the possible bandwidth to feed that amount of shaders, and they needed to save every bit of power, and still they made a 275W - 320W GPU. A 512-bit bus (the minimum needed) with GDDR5 would have sent the TDP through the roof, which is also the reason it was only available with water cooling, to keep thermals and power consumption down.
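
Rough back-of-the-envelope math for that bandwidth argument (my own sketch, assuming typical 2015-era memory clocks rather than anything official):

```python
# Theoretical peak bandwidth in GB/s = (bus width in bits / 8) * effective data rate per pin in Gbps
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Fury X: four HBM1 stacks, 1024 bits each, ~1 Gbps effective per pin
print(peak_bandwidth_gbs(4096, 1.0))   # 512.0 GB/s
# Hypothetical 512-bit GDDR5 alternative at 6 Gbps (390X-class memory clocks)
print(peak_bandwidth_gbs(512, 6.0))    # 384.0 GB/s
```

So on paper HBM1 delivers more bandwidth than even a 512-bit GDDR5 bus, and at lower memory power, which is the TDP point above.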
 
They needed all the possible bandwidth to feed that amount of shaders, and they needed to save every bit of power, and still they made a 275W - 320W GPU. A 512-bit bus (the minimum needed) with GDDR5 would have sent the TDP through the roof, which is also the reason it was only available with water cooling, to keep thermals and power consumption down.

Yet the non-X Fury did not have any thermal issues to speak of. (They did have good fans and heatsinks on them, though.) In fact, when I was using the 2 x Furies, Crossfire worked extremely well, especially with DirectX 12, when it was supported.
 
They needed all the possible bandwidth to feed that amount of shaders, and they needed to save every bit of power, and still they made a 275W - 320W GPU. A 512-bit bus (the minimum needed) with GDDR5 would have sent the TDP through the roof, which is also the reason it was only available with water cooling, to keep thermals and power consumption down.

I find it hard to believe they needed the bandwidth. I am pretty sure I tested it myself. My Fury X is long dead or I’d try it now.

HBM was a piss poor choice from a DFM standpoint.

I've worked in corporations long enough to know how these shitty ideas take hold. Someone high up says we need to innovate like company X did to compete! (Like Apple with the original iPhone.) A game changer! Someone mentions HBM, nobody really vets it out, and it gets passed up the chain as a huge opportunity. Once the higher-ups tell upper management something, they never go back on it. They'll ride it into the ground even if proven wrong. That's my experience.

No properly done DFM would have ever let this design go forward. Half of [H] saw this shit show coming.
 
The kicker was that they were limited to 4GB of HBM, which meant driver devs had to tweak the drivers for high-performing games that exceeded approximately 3.5GB.
The downside is the card losing performance over time as those tweaks are left out of newer drivers, as demonstrated.

Another bad choice that killed any chance of a recommendation was the lack of HDMI 2.0.
UHD wasn't on their radar until next gen, yet it was starting to take off.
AMD missed a great chance to capitalise; they gave it all to NVidia until the DisplayPort to HDMI 2.0 adapters came to market.
 
I find it hard to believe they needed the bandwidth. I am pretty sure I tested it myself. My Fury X is long dead or I’d try it now.

HBM was a piss poor choice from a DFM standpoint.

I've worked in corporations long enough to know how these shitty ideas take hold. Someone high up says we need to innovate like company X did to compete! (Like Apple with the original iPhone.) A game changer! Someone mentions HBM, nobody really vets it out, and it gets passed up the chain as a huge opportunity. Once the higher-ups tell upper management something, they never go back on it. They'll ride it into the ground even if proven wrong. That's my experience.

No properly done DFM would have ever let this design go forward. Half of [H] saw this shit show coming.
HBM should have given a great advantage. The problem was the GCN architecture they were using; for whatever reason it held, and still holds, HBM back, even now. No arguing the tech isn't great; hell, their competition is even using it. No one has fully explained why the structure of Fury/Vega hasn't been able to fully capitalize on HBM's potential.
If you have solid tech details, and not upper-management theories, please share.
 
HBM should have given a great advantage. The problem was the GCN architecture they were using; for whatever reason it held, and still holds, HBM back, even now. No arguing the tech isn't great; hell, their competition is even using it. No one has fully explained why the structure of Fury/Vega hasn't been able to fully capitalize on HBM's potential.
If you have solid tech details, and not upper-management theories, please share.

lol, because it simply doesn't need it. You could have figured that out from a 290X.

And regarding the previous post I was replying to: using HBM moves the memory heat onto the GPU package, rather than onto (generally) passively cooled GDDR5, which is at least cooled in a manner that doesn't affect die temp.

There's no defending HBM. It's a shitty choice from the standpoints of technical performance, manufacturing (DFM), supply line, pricing, etc. And armchair assholes on [H] had this foresight...
 
lol, because it simply doesn't need it. You could have figured that out from a 290X.

And regarding the previous post I was replying to: using HBM moves the memory heat onto the GPU package, rather than onto (generally) passively cooled GDDR5, which is at least cooled in a manner that doesn't affect die temp.

There's no defending HBM. It's a shitty choice from the standpoints of technical performance, manufacturing (DFM), supply line, pricing, etc. And armchair assholes on [H] had this foresight...
And yet Nvidia is now using it. Why?
 
Years later, on a much more powerful GPU, and in a completely different application. Seriously?
There's no defending HBM. It's a shitty choice from the standpoints of technical performance, manufacturing (DFM), supply line, pricing, etc.
Yes, seriously. That was your quote I was responding to.
So NOW it is good. Got it.
 
lol, because it simply doesn't need it. You could have figured that out from a 290X.

And regarding the previous post I was replying to: using HBM moves the memory heat onto the GPU package, rather than onto (generally) passively cooled GDDR5, which is at least cooled in a manner that doesn't affect die temp.

There's no defending HBM. It's a shitty choice from the standpoints of technical performance, manufacturing (DFM), supply line, pricing, etc. And armchair assholes on [H] had this foresight...

It did need it. The Fury X had a flaw with the memory controllers being unable to fully use the maximum bandwidth, which is the reason why, despite theoretically being capable of 512GB/s, it reached WAY less than that in the real world (shown below). GCN as an architecture is bandwidth hungry and bandwidth starved in most cases; it was the main reason the R9 280X/HD7970 was typically able to outperform the newer Tonga R9 380X with ease in bandwidth-intensive games. It's also the reason Vega, which truly is able to overclock its HBM, receives a big boost in performance from doing so, unlike the Fury X. We're talking numbers of around 15%-20% better performance, which speaks for itself about how starved those 4096 cores are for bandwidth...

[Image: b3d-bandwidth.gif (measured memory bandwidth chart)]
 
And yet Nvidia is now using it. Why?

For consumer GPUs, they're not.

It still doesn't make sense for gaming workloads, at least with memory-efficient architectures. The only place it does make some sense is in, say, Kaby-G, but we've yet to see that platform realized in a 'killer' product.
 
My Fury X still holds up for my uses at 1200p, outside of some alpha/beta games that haven't reached the optimization stage yet. Reviewers like to use settings that exaggerate VRAM limits, which I honestly can't tell apart outside of side-by-side comparisons. The day I can't hold a solid 60+ FPS in War Thunder at beautiful settings is the day I know I need to upgrade. Of the games in the review above I only own Ashes of the Singularity, Wolfenstein, and Doom. I never got very into Ashes; it was interesting for a few weeks, but I got burned out playing it early on when there was only one faction and way less variation from one skirmish to the next. I also hated how the single-player campaign was designed so you HAD to play the super fast rush style vs the AI or you always lost once you got to the harder missions.

As for solo-player stuff like Doom/Wolfenstein, I've always been willing to sacrifice framerate for eye candy in single-player FPS titles, but when it comes to the "NIGHTMARE" or whatever texture sizes not fitting within Fiji's VRAM, would you even be able to tell them apart during real-world gameplay at my resolution? Looking over the old reviews, it was only at those settings that the Fury X wasn't faster than the 390 8GB. Some day, after I have a newer monitor and a newer GPU, I'll revisit these titles with everything cranked, but by then I imagine they will still look a tad dated compared to the brand new games we get in 2019.

Now when it comes to upcoming titles like the aforementioned alphas and betas I'm in, I don't expect to have all the detail sliders maxed in a higher-budget game that's coming out over 3 years after my GPU was born. Likewise, if a 980 Ti owner was complaining over in the NVidia forum about how it couldn't run many of those titles well at 1440p 144Hz, would anyone take them seriously? In a lot of cases where the Fury X is struggling around the 60 FPS mark at 1440p, the 980 Ti is still just comfortably above 60, but not by enough to warrant, say, the tiny bump from a 60Hz monitor to a 144Hz monitor on its own; it would be entirely about upgrading from a plain jane 60Hz panel to a G-Sync panel.

The first results in that BabelTech benchmark show the Fury performing vastly better at Fire Strike and Time Spy now, which further proves that whole series of Futuremark tech demos should never have been taken seriously. BabelTech should move that crud to the bottom of the list and clearly mark it as different with a big ol' ?WTF?
 
This is what’s known as the long troll. Like the ‘long con’ only much more pathetic.

It's NEVER a troll when it's confirmed by several different media and test outlets:

Dozens of 2017 games where the FuryX is barely faster than an RX580, and way behind the 980Ti

https://hardforum.com/threads/furyx-aging-tremendously-bad-in-2017.1948025/

[GameGPU] 980Ti is 30% faster than FuryX in 2017 games

http://gamegpu.com/test-video-cards/podvedenie-itogov-po-graficheskim-resheniyam-2017-goda

[ComputerBase] GTX 1070 (~980Ti) is considerably ahead of the Fury X

https://www.computerbase.de/2017-12...marks_in_1920__1080_2560__1440_und_3840__2160

[BabelTech] AMD’s “fine wine” theory has apparently not turned out so well for the Fury X.

https://babeltechreviews.com/amds-fine-wine-revisited-the-fury-x-vs-the-gtx-980-ti/3/

[BabelTech] The GTX 980 Ti is now even faster than the Fury X for the majority of our games than when we benchmarked it about 2 years ago

https://babeltechreviews.com/the-gtx-1070-versus-the-gtx-980-ti/3/

2018 Update, more advantages to the 980Ti

[HardwareUnboxed] GTX 980Ti is Aging Considerably Better Than FuryX!

 
^ Man, seriously, you are trying way too hard to bash a GPU that is already over three years old. We get it already; you don't need to provide any more proof, and props to you for providing said proof up to this point.
While I can appreciate your enthusiasm for this, and while you might be right about it being limited at 2K and 4K resolutions (not surprising for a 4GB GPU to be limited at those resolutions in modern games), for 1080p and below the Fury X is still very capable.

Most of the other GPUs from around the same era and processing-power class are aging better because they have more VRAM to support higher textures/settings and higher resolutions.
If the Fury X had been equipped with 6GB or 8GB of VRAM, I'm sure it would be very similar to a GTX 980 or GTX 980 Ti in overall performance.

Why keep hammering the issue?
 
Why keep hammering the issue?
You just have to see the dozens of ignorant comments made by specific members casting doubt on said proof even after being given literally HUNDREDS of links. They don't even try to analyze the evidence; they just chime in to repeatedly say you're wrong!
 
Can we at least agree that 2k IS 1080p? It's 1080p, 1440p, and 2160p OR 2k, 2.5k (despite my best efforts, this has not caught on), and 4k.
Not sure what you are getting at... 1080p, 1200p, 1440p, 1600p, etc. all have their own pixel counts and thus cannot be lumped together like that.
 
I am just saying 1920x1080p is 2k since 3840x2160 is 4k. As for 2560x1440, well that makes the most sense to be called 2.5k since the pattern is using the horizontal pixel count.

2560×1600p gets a little bonus. Maybe 2.6k.

Ultrawides like a 2560×1080p are even trickier - let's go with 2.2k.

But one thing is for sure: Having 3x 4k monitors is NOT 12k gaming :p
 
I am just saying 1920x1080p is 2k since 3840x2160 is 4k. As for 2560x1440, well that makes the most sense to be called 2.5k since the pattern is using the horizontal pixel count.

2560×1600p gets a little bonus. Maybe 2.6k.

Ultrawides like a 2560×1080p are even trickier - let's go with 2.2k.

But one thing is for sure: Having 3x 4k monitors is NOT 12k gaming :p
Here’s the logic. The 4K name doesn’t have anything to do with horizontal resolution.

1080p
1920 pixels x 1080 pixels = 2,073,600 pixels

4K
3840 pixels x 2160 pixels = 8,294,400
Or
4096 pixels x 2160 pixels = 8,847,360


4K is roughly 4x the pixels of the decade-plus-long standard of 1080p


12k (three 4K displays) is roughly 12x the pixel count of 1080p
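
A quick sketch of that pixel-count logic (just illustrating the arithmetic in this post, nothing official about the naming):

```python
# Pixel counts and their ratio to 1080p, per the naming logic above
resolutions = {
    "1080p":  (1920, 1080),
    "1440p":  (2560, 1440),
    "4K UHD": (3840, 2160),
    "4K DCI": (4096, 2160),
}
base = 1920 * 1080  # 2,073,600 pixels
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px:,} px = {px / base:.2f}x 1080p")
# 4K UHD works out to exactly 4.00x 1080p, so three 4K displays are ~12x, hence the "12k" quip
```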
 
Here’s the logic. The 4K name doesn’t have anything to do with horizontal resolution.

1080p
1920 pixels x 1080 pixels = 2,073,600 pixels

4K
3840 pixels x 2160 pixels = 8,294,400
Or
4096 pixels x 2160 pixels = 8,847,360


4K is roughly 4x the pixels of the decade-plus-long standard of 1080p


12k (three 4K displays) is roughly 12x the pixel count of 1080p

By that logic, 8k displays should really be called 16k, since it is 16x the pixels of 1080p.

Math problem for somebody smarter than me atm: If you rearranged 3x 4k screens into one large screen, what would the pixel dimensions be, given the same ratio as a 3840×2160 screen?
 
By that logic, 8k displays should really be called 16k, since it is 16x the pixels of 1080p.

Math problem for somebody smarter than me atm: If you rearranged 3x 4k screens into one large screen, what would the pixel dimensions be, given the same ratio as a 3840×2160 screen?

Ah got it: 6651x3741 or "6.6k"
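
For anyone checking that math, a tiny sketch (assuming a 16:9 target aspect ratio, same as a single 3840×2160 screen):

```python
import math

# Three 4K UHD panels' worth of pixels rearranged into a single 16:9 panel
total_px = 3 * 3840 * 2160             # 24,883,200 pixels
width = math.sqrt(total_px * 16 / 9)   # solve w*h = total_px with w/h = 16/9
height = width * 9 / 16
print(round(width), round(height))     # ~6651 x 3741, i.e. roughly "6.6k"
```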
 
Surely after 3 years the newest games will have shown how far the FuryX has fallen behind.
Let's use a single source for consistency over time and compare it to the 980 Ti, shall we:

At launch: https://www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html
4k - 2% slower
1440p - 8% slower
1080p - 12% slower

2 years ago: https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/24.html
4k- 3% faster
1440p - equal
1080p - 7% slower

Today: https://www.techpowerup.com/reviews/ASRock/RX_580_Phantom_Gaming_X/31.html
4k- 4% faster
1440p - 2% faster
1080p - 3% slower

Perhaps that meager 4GB in 2015 wasn't such a disaster after all.
 
You just have to see the dozens of ignorant comments made by specific members casting doubt on said proof even after being given literally HUNDREDS of links. They don't even try to analyze the evidence; they just chime in to repeatedly say you're wrong!

The real question is why you are even bothering to post hundreds of links. You are trying to prove a point that nobody is disagreeing with because you have some bug up your ass about the Fury card. I've never seen someone make it a personal vendetta to prove which three-year-old card is better.
 