FuryX completely abandoned now, barely matching the 1060 or 580!

This is very true, if you bought this card thinking you were going to be gaming @ 4k with max settings, you were only lying to yourself.

And whatever happened to AMD's magic frame buffer efficiency that was supposed to make 4GB HBR as good as 8GB DDR4?

Don't ever believe anyone who tells you a lower framebuffer is as good as a higher one. With every GPU coupled with a deficient amount of VRAM, AMD or nV (and their army of shills) claim they have some sort of voodoo that nullifies the issue -- and it never does. EVERY. SINGLE. TIME. The same thing happened with the 4870 256MB, 7800 GT, 780 Ti 3GB, GTX 970 3.5GB... Don't believe it.
 
And whatever happened to AMD's magic frame buffer efficiency that was supposed to make 4GB HBR as good as 8GB DDR4?

Don't ever believe anyone who tells you a lower framebuffer is as good as a higher one. With every GPU coupled with a deficient amount of VRAM, AMD or nV (and their army of shills) claim they have some sort of voodoo that nullifies the issue -- and it never does. EVERY. SINGLE. TIME. The same thing happened with the 4870 256MB, 7800 GT, 780 Ti 3GB, GTX 970 3.5GB... Don't believe it.
I don't believe it, hence I'm not the one upset ;). I bought my Fury Tri-X for $200... It's still as fast as an RX 580 or 1060... which are selling for as much as I paid years ago. I'm not upset one bit about it, and the Fury Nano in my ITX case is a great card for small builds. I got a great deal on it and I know what it can and can't do.
 
Is the Fury X really -50% in Assassin's Creed and -90% in Wolfenstein? I don't care about a 10% drift, but -50% and -90% would be alarming.
That 90% hit, if true, would be because the limits of the 4GB HBM were hit. If you lower one setting like AA, it would be back in line. From my own experience with the Fury, when you hit that HBM limit it's a steep nosedive in performance, but lowering a single setting to reduce VRAM usage usually put it well back into the mix at a single-monitor resolution. For instance, at my 7680x1440 resolution with 2015 Star Wars Battlefront, I could run medium settings or a mostly-high mixture in the 70 FPS range. If I toggled to all high, I would drop to 10-15 FPS.
 
That 90% hit, if true, would be because the limits of the 4GB HBM were hit. If you lower one setting like AA, it would be back in line. From my own experience with the Fury, when you hit that HBM limit it's a steep nosedive in performance, but lowering a single setting to reduce VRAM usage usually put it well back into the mix at a single-monitor resolution. For instance, at my 7680x1440 resolution with 2015 Star Wars Battlefront, I could run medium settings or a mostly-high mixture in the 70 FPS range. If I toggled to all high, I would drop to 10-15 FPS.
Run it at a setting that keeps it within limits and it's fine. Once it has to use system RAM... all bets are off.
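The cliff being described can be put in back-of-the-envelope terms: once the working set exceeds VRAM, the overflow traffic has to cross PCIe, which is an order of magnitude slower than HBM. A toy sketch of that effect (all bandwidth and per-frame traffic numbers here are illustrative assumptions, not measurements of any real game):

```python
# Toy model of the VRAM "cliff": overflow past the 4GB of HBM is assumed
# to be streamed over PCIe at a fraction of HBM bandwidth.
VRAM_GB = 4.0      # Fury X capacity
HBM_GBPS = 512.0   # Fury X HBM bandwidth (GB/s)
PCIE_GBPS = 16.0   # assumed effective PCIe 3.0 x16 bandwidth (GB/s)

def frame_time_ms(working_set_gb, traffic_per_frame_gb=2.0):
    """Estimate frame time when part of the per-frame traffic misses VRAM."""
    overflow = max(0.0, working_set_gb - VRAM_GB)
    # Crude proportionality: fraction of traffic that spills to system RAM.
    miss_fraction = overflow / working_set_gb
    fast = traffic_per_frame_gb * (1 - miss_fraction) / HBM_GBPS
    slow = traffic_per_frame_gb * miss_fraction / PCIE_GBPS
    return (fast + slow) * 1000.0

for ws in (3.5, 4.0, 4.5, 5.0, 6.0):
    print(f"{ws:.1f} GB working set -> {frame_time_ms(ws):5.1f} ms/frame")
```

Even with these made-up numbers, a working set only 25% over capacity multiplies the frame time several times over, which is why dropping a single setting can pull the card right back into line.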
 
That 90% hit, if true, would be because the limits of the 4GB HBM were hit. If you lower one setting like AA, it would be back in line. From my own experience with the Fury, when you hit that HBM limit it's a steep nosedive in performance, but lowering a single setting to reduce VRAM usage usually put it well back into the mix at a single-monitor resolution. For instance, at my 7680x1440 resolution with 2015 Star Wars Battlefront, I could run medium settings or a mostly-high mixture in the 70 FPS range. If I toggled to all high, I would drop to 10-15 FPS.

Yeah, I figured it was something like that with VRAM. I clicked on the link but my phone doesn’t translate it.

This thread should be named the Roy/Raja era of lies or something.

It highlights that you shouldn't trust corporations (nVidia or AMD), which is what the OP reminds me of. The Fury X VRAM underplaying ("optimized drivers" or what have you), pump issues, intentionally inaccurate benchmarks, etc. nVidia has their own fair share.

On the bright side I don’t think we’ve seen any nonsense since AMD cleaned house. I respect that.
 
And whatever happened to AMD's magic frame buffer efficiency that was supposed to make 4GB HBR as good as 8GB DDR4?

Don't ever believe anyone who tells you a lower framebuffer is as good as a higher one. With every GPU coupled with a deficient amount of VRAM, AMD or nV (and their army of shills) claim they have some sort of voodoo that nullifies the issue -- and it never does. EVERY. SINGLE. TIME. The same thing happened with the 4870 256MB, 7800 GT, 780 Ti 3GB, GTX 970 3.5GB... Don't believe it.
It is called HBCC. Also, what? Why do you compare DDR4 to HBR? What HBR? You mean HBM.
 
The regular GTX 980 has only 4GB yet it's faster in those titles as well, so VRAM alone doesn't explain it.

come on man.

you know better than this and are just trolling.

It's QUITE obvious that the 4GB of video RAM is a problem with any of the 4GB cards - including the 980. Unless you are seriously suggesting a 980ti is almost twice as fast as a 980.



[attached chart: upload_2019-9-17_14-8-7.png]



In fact when you throw your link into google translator you learn the following:

При разрешении 1920х1080 потребление ОЗУ у системы видеокартой с 4-мя гигабайтами 9100 мегабайт, с 6-ю гигабайтами 7000 мегабайт, с 8-ю гигабайит 7400 мегабайт, с 11-ю гигабайтами 7700 мегабайт и с 16-ю гигабайтами 7300 мегабайт.

translates to:

At 1920×1080 resolution, system RAM consumption is 9100 MB with a 4 GB video card, 7000 MB with a 6 GB card, 7400 MB with 8 GB, 7700 MB with 11 GB, and 7300 MB with 16 GB.


Which correlates to this chart. You can see that 4GB graphics cards use 9GB of system RAM (more than any other VRAM size or resolution requires, with the exception of 4GB at 1440p or 4K), which clearly indicates a massive amount of caching from the video card into system RAM - which on the Fury X required a very particular driver instruction set to be efficient. You can argue that's a frustration with the engineering of the card, but you can't argue that the 980 at 4GB performs significantly better. Both are hit hard by their 4GB VRAM limitation in this particular game.

[attached chart: upload_2019-9-17_14-12-24.png]
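To make the pattern in those quoted figures explicit, here is a trivial tabulation of the spillover relative to the cards that fit comfortably in VRAM (the numbers are the ones from the translated gamegpu quote; the 7000 MB baseline is an assumption read off the best-case card, not a published figure):

```python
# System RAM use at 1080p per VRAM size, from the translated gamegpu quote.
system_ram_mb = {4: 9100, 6: 7000, 8: 7400, 11: 7700, 16: 7300}

BASELINE_MB = 7000  # assumed system RAM use when VRAM is sufficient

for vram_gb, ram_mb in sorted(system_ram_mb.items()):
    spill = ram_mb - BASELINE_MB
    flag = "  <-- likely VRAM spillover" if spill > 1000 else ""
    print(f"{vram_gb:2d} GB card: {ram_mb} MB system RAM (+{spill} MB vs baseline){flag}")
```

Only the 4 GB cards sit roughly 2 GB above the baseline, which is consistent with the overflow-into-system-RAM reading above; the 6-16 GB cards all cluster within a few hundred MB of each other.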
 
It's QUITE obvious that the 4GB of video RAM is a problem with any of the 4GB cards - including the 980. Unless you are suggesting it is expected that a 980 is 50% the speed of a 980 Ti.
Maybe in Wolfenstein, but other games don't exhibit the same issue: Ace Combat, Assassin's Creed, Assetto Corsa, etc.
 
This 290X is only a collector's item to me now, but a rare look at an untouched card: never been taken apart or flashed, just gamed on and never mined with. Its limit, the 4GB memory buffer of Hawaii, seems not to matter with a Ryzen 3600 pushing it on driver 19.9.2.

 
This 290X is only a collector's item to me now, but a rare look at an untouched card: never been taken apart or flashed, just gamed on and never mined with. Its limit, the 4GB memory buffer of Hawaii, seems not to matter with a Ryzen 3600 pushing it on driver 19.9.2.



Strange Brigade isn't exactly a demanding game. You only need 2-3GB at 1080p... might as well run Minesweeper and proclaim it's doing great.
 
Strange Brigade isn't exactly a demanding game. You only need 2-3GB at 1080p... might as well run Minesweeper and proclaim it's doing great.

Maybe because the game is not built to run like trash... but if you want demanding, then start at Ultra 1080p and take that thing called resolution scaling up to, say, 120% or more, as it will make your video card feel it.
 
Borderlands 3: barely at the level of 1060/580 @1080p, 980Ti is 20% faster!
https://gamegpu.com/action-/-fps-/-tps/borderlands-3-test-gpu-cpu

Assassin's Creed Odyssey: slower than a 1060 @1080p, 980Ti is 50% faster!
https://gamegpu.com/action-/-fps-/-tps/assassin-s-creed-odyssey-test-gpu-cpu-2018

Ghost Recon Breakpoint: slower than a 1060 @1080p, 980Ti is 30% faster!
https://gamegpu.com/mmorpg-/-онлайн-игры/ghost-recon-breakpoint-beta-test-gpu-cpu

Metro Exodus: barely at the level of 1060 @1080p at Ultra settings, 980Ti is 25% faster!
https://gamegpu.com/action-/-fps-/-tps/metro-exodus-v-1-0-1-1

Wolfenstein: Youngblood: massively slower than a 1060 @1080p, 980Ti is 90% faster!
https://gamegpu.com/action-/-fps-/-tps/wolfenstein-youngblood-test-gpu-cpu

Crackdown 3: barely faster than 1060 @1080p, 980Ti is 20% faster!
https://gamegpu.com/action-/-fps-/-tps/crackdown-3-test-gpu-cpu

Ace Combat 7: slower than a 1060 @1080p, 980Ti is 40% faster!
https://gamegpu.com/racing-simulators-/-гонки/ace-combat-7-skies-unknown-test-gpu-cpu

Resident Evil 2: barely faster than 1060 @1080p, 980Ti is 70% faster!
https://gamegpu.com/action-/-fps-/-tps/resident-evil-2-test-gpu-cpu

Beyond Two Souls: slower than a 1060 @1080p, 980Ti is 50% faster!
https://gamegpu.com/action-/-fps-/-tps/beyond-two-souls-test-gpu-cpu

The Sinking City: barely faster than 1060 @1080p, 980Ti is 22% faster!
https://gamegpu.com/action-/-fps-/-tps/the-sinking-city-staging-test-gpu-cpu

Draugen: slower than 1060 @1080p, 980Ti is 32% faster!
https://gamegpu.com/rpg/ролевые/draugen-test-gpu-cpu

Total War THREE KINGDOMS: barely faster than 1060 @1080p, 980Ti is 25% faster!
https://gamegpu.com/rts-/-стратегии/total-war-three-kingdoms-test-gpu-cpu

A Plague Tale Innocence: barely faster than 1060 @1080p, 980Ti is 30% faster!
https://gamegpu.com/rpg/ролевые/a-plague-tale-innocence-test-gpu-cpu

Fade to Silence: slightly faster than 1060 @1080p, 980Ti is 28% faster!
https://gamegpu.com/rpg/ролевые/fade-to-silence-test-gpu-cpu

Anno 1800: barely faster than 1060 @1080p, 980Ti is 30% faster!
https://gamegpu.com/rts-/-стратегии/anno1800-test-gpu-cpu

Tropico 6: barely faster than 1060 @1080p, 980Ti is 35% faster!
https://gamegpu.com/rts-/-стратегии/tropico-6-test-gpu-cpu-2

Generation Zero: barely faster than 1060 @1080p, 980Ti is 25% faster!
https://gamegpu.com/mmorpg-/-онлайн-игры/generation-zero-test-gpu-cpu

WRC 8 FIA World Rally Championship: barely faster than a 1060 @1080p, 980Ti is 25% faster!
https://gamegpu.com/racing-simulators-/-гонки/wrc-8-fia-world-rally-championship-test-gpu-cpu

Age of Wonders Planetfall: slower than a 1060 @1080p, 980Ti is 22% faster!
https://gamegpu.com/rts-/-стратегии/age-of-wonders-planetfall-test-gpu-cpu

Ancestors: The Humankind Odyssey: barely faster than a 1060 @1080p, 980Ti is 22% faster!
https://gamegpu.com/rpg/ролевые/ancestors-the-humankind-odyssey

Dirt Rally 2.0: barely faster than a 480 @1080p, 980Ti is 25% faster!
https://gamegpu.com/racing-simulators-/-гонки/dirt-rally-2-0-test-gpu-cpu

Assetto Corsa Competizione: slower than a 1060 @1080p, 980Ti is 60% faster!
https://gamegpu.com/racing-simulators-/-гонки/assetto-corsa-competizione-0-5-2-test-gpu-cpu
Wow, you really love that site. You wouldn't have any others you bothered to compare with? The Fury hasn't suddenly become bad, nor the 980 Ti suddenly good. Is AMD prioritizing drivers for the latest and greatest on this card? Nope.
 
That's sweet and all, but I have to say I do own several AMD GPUs currently active, and even one all-AMD rig. I am an AMD owner, but a neutral one, and I can be really critical. I criticized the Fury X A LOT and I still consider it a massive failure (hey, I loved the Fury Nano... it was the only good part of the Fury line). And I still think the 7970 (280X) is actually the best GPU ever made, followed closely by the 290X.

However, my criticism is not toward AMD alone but toward their annoying fans and loyalists who think everything made by AMD is perfect and always will be, no matter what...



Negative, Ghost Rider. 9700 Pro owns that spot.
 
The point of this thread is how everyone claimed AMD cards get better later on, while Nvidia was supposedly known to regress over time. The truth is neither did that.
Yes, why not compare a four-year-old architecture whose weakness was geometry processing against 2019 games, when those 2019 games need a lot more VRAM than 2015 titles, where the R9 Fury (X) was doing just fine.

AMD made GCN with compute workloads and/or light geometry workloads in mind: https://wccftech.com/fallout-4-nvidia-gameworks/
 
This is what I love about hypocrisy, especially on this forum: when Nvidia releases a new line of GPUs and focuses optimizations on that architecture, Nvidia is evil because it abandons its loyal customers, stops optimizing older generations on purpose, or even gimps them to make current GPUs more appealing.

This Fury scenario is the same thing that happened with Nvidia's GTX 700 series. The GTX 780 and 780 Ti once competed against the R9 290 and 290X; years later we had people everywhere talking about AMD "fine wine" and how the 290X that once rivaled the 780 Ti was now competing with the GTX 980. But nobody expected Maxwell to age so well, destroying the FineWine narrative: the GTX 980 and 970 still compete against the 290X and 290 respectively, while the GTX 980 Ti now outperforms the Fury X in every possible scenario. So: praise AMD for keeping support for older GPUs, shame on Nvidia for focusing on current tech... and now we should praise AMD for abandoning certain GPU lines in order to focus on more viable and profitable series? lol, I love that kind of hypocrisy, because yes, we know what we are buying today, and no matter the future, what matters is today's performance; after all, we buy for what they are worth at the time of purchase... right?

The truth is, AMD actually had no reason to stop supporting the Fury line, as Fiji is essentially Polaris in the same way Polaris is essentially Vega without the enlarged pipeline for higher clocks (as reviews have shown). For the same reason, AMD has no reason to stop optimizing the Vega series (including the VII) just because of Navi...
 
Negative, Ghost Rider. 9700 Pro owns that spot.

Wrong... The 9700 Pro, OK, was a big jump in performance over the 8500, and it typically performed 20% above Nvidia's GeForce4 Ti 4600; in some very specific scenarios with high AA and AF levels it was even able to perform 50% faster than the GeForce4 Ti. It also didn't help that Nvidia's FX line was complete crap... However, the life of the 9700 Pro (or even the refreshed 9800 Pro) was about 3 to 4 years. The HD 7970 launched between December 2011 and January 2012; we are speaking of over 7 years of amazing life, and the 7970 is still a capable GPU at 1080p. Nothing else has lasted that long as a playable GPU in modern titles; I think only Hawaii will be able to touch that crown.
 
This is what I love about hypocrisy, especially on this forum: when Nvidia releases a new line of GPUs and focuses optimizations on that architecture, Nvidia is evil because it abandons its loyal customers, stops optimizing older generations on purpose, or even gimps them to make current GPUs more appealing.

This Fury scenario is the same thing that happened with Nvidia's GTX 700 series. The GTX 780 and 780 Ti once competed against the R9 290 and 290X; years later we had people everywhere talking about AMD "fine wine" and how the 290X that once rivaled the 780 Ti was now competing with the GTX 980. But nobody expected Maxwell to age so well, destroying the FineWine narrative: the GTX 980 and 970 still compete against the 290X and 290 respectively, while the GTX 980 Ti now outperforms the Fury X in every possible scenario. So: praise AMD for keeping support for older GPUs, shame on Nvidia for focusing on current tech... and now we should praise AMD for abandoning certain GPU lines in order to focus on more viable and profitable series? lol, I love that kind of hypocrisy, because yes, we know what we are buying today, and no matter the future, what matters is today's performance; after all, we buy for what they are worth at the time of purchase... right?

The truth is, AMD actually had no reason to stop supporting the Fury line, as Fiji is essentially Polaris in the same way Polaris is essentially Vega without the enlarged pipeline for higher clocks (as reviews have shown). For the same reason, AMD has no reason to stop optimizing the Vega series (including the VII) just because of Navi...
The Fury X was set to best the GTX 980, also with 4GB. And it largely did. Nvidia dropped the GTX 980 Ti to counter it, and it did its job well.
 
The Fury X was set to best the GTX 980, also with 4GB. And it largely did. Nvidia dropped the GTX 980 Ti to counter it, and it did its job well.
So... the $650 Fury X was launched in 2015 to best the $499 GTX 980? When the 290X had already been competing with it since 2014, and the Titan X was already on the market? Hardly believable. If everyone remembers correctly, the Fury X was meant to rival the Titan X as the second Titan killer (the first being the 290X rivaling the vanilla GTX Titan before the 780 Ti launched).

The GTX 980 wasn't a 4K GPU; the Fury X was marketed as the best 4K GPU. In fact, at the Fury X launch event AMD also launched the Hawaii refresh (R9 390 and 390X) to compete better against the GTX 970 and 980, marketing those GPUs for the 4K segment as well. Which was also weird, as the whole R9 390 series had 8GB of VRAM while Fiji only had 4GB. Yeah, at that time the misleading marketing was at full blast, telling everyone how 4GB of HBM was different from GDDR5 sizes, as if it were some kind of NASA cache space technology lol haha... anyone remember?

Note that HBM and GDDR5 memory sized can’t be directly compared. Think of it like comparing an SSD’s capacity to a mechanical hard drive’s capacity. As long as both capacities are sufficient to hold local data sets, much higher performance can be achieved with HBM, and AMD is hand tuning games to ensure that 4GB will not hold back Fiji’s performance. Note that the graphics driver controls memory allocation, so its incorrect to assume that Game X needs Memory Y. Memory compression, buffer allocations, and caching architectures all impact a game’s memory footprint, and we are tuning to ensure 4GB will always be sufficient for 4K gaming. Main point being that HBM can be thought of as a giant embedded cache, and is not directly comparable to GDDR5 sizes.

Again, nope: if you think the Fury X was meant to compete with the GTX 980 while the 12GB Titan X was on the market, you are wrong. Remember, the Fury X was meant to be an $850 Titan X killer. The only reason the Fury X launched at $649 was that Nvidia launched the 980 Ti first at $649 and wrecked all of AMD's plans.
 
So... the $650 Fury X was launched in 2015 to best the $499 GTX 980? When the 290X had already been competing with it since 2014, and the Titan X was already on the market? Hardly believable. If everyone remembers correctly, the Fury X was meant to rival the Titan X as the second Titan killer (the first being the 290X rivaling the vanilla GTX Titan before the 780 Ti launched).

The GTX 980 wasn't a 4K GPU; the Fury X was marketed as the best 4K GPU. In fact, at the Fury X launch event AMD also launched the Hawaii refresh (R9 390 and 390X) to compete better against the GTX 970 and 980, marketing those GPUs for the 4K segment as well. Which was also weird, as the whole R9 390 series had 8GB of VRAM while Fiji only had 4GB. Yeah, at that time the misleading marketing was at full blast, telling everyone how 4GB of HBM was different from GDDR5 sizes, as if it were some kind of NASA cache space technology lol haha... anyone remember?



Again, nope: if you think the Fury X was meant to compete with the GTX 980 while the 12GB Titan X was on the market, you are wrong. Remember, the Fury X was meant to be an $850 Titan X killer. The only reason the Fury X launched at $649 was that Nvidia launched the 980 Ti first at $649 and wrecked all of AMD's plans.
Ya, HBM missed the boat. Are you freaking out over the marketing at the time, or the reality of the situation? The Fury X was designed to compete with the GTX 980.
Yes, AMD "fine wine" had the 290X/390X competing later on, but that wasn't the reality at the time. AMD was stuck in their marketing at the time, caught in the tech rift between GDDR5 and being an HBM early adopter.
At the time the [H] review showed that 4GB wasn't too limiting at 4K. It did quite well surprisingly. It all comes down to driver optimization, and apparently 4K on Fury in new games isn't where they place resources.
Fury X as a Titan killer? Don't know about that, LOL! Not even the hottest-headed AMD fanboi pushed that shit.
Weren't you right?

PS - sorry for saying "at the time" too many times.
 
If the Fury X had been equipped with more VRAM, it would have held up longer, but 4GB in 2015 was pretty decent at the time (early 4K).
Alas, it does not have more than 4GB of VRAM, thus limiting it in just about everything now - less a limitation of the GPU and more a limit of the VRAM itself.

The GPU itself would still be limited now, though, since technology marches on, and to no one's surprise, a high-end GPU from 2015 barely competes with a mid-range GPU from 2019.
Same could be said with comparing GPUs from 2011 to GPUs from 2015.

And whatever happened to AMD's magic frame buffer efficiency that was supposed to make 4GB HBR as good as 8GB DDR4?
Due to the higher memory data-transfer rates of HBM, it was a bit more efficient than GDDR5 (at the time) and could push textures and data slightly better, making it more similar to a 5GB GDDR5-equipped GPU.
It was still VRAM-limited in games like DOOM 2016 in Nightmare mode, which required at least 5GB of VRAM, which the Fury X did not have and thus could not push, at least not natively without digging into shared/system RAM.

Hot damn, AMD did state that 4GB HBM was just as good as 8GB GDDR5.
Thanks for the video, Araxie.

Also, AMD never said that, and I don't remember seeing anyone say that it was as efficient as 8GB VRAM (assuming you meant GDDR5 and not DDR4).
That's more than a bit of an embellishment.
 
At the time the [H] review showed that 4GB wasn't too limiting at 4K. It did quite well surprisingly.

The [H] review shows the totally opposite conclusion to yours.

https://www.hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/11

Fury X as a Titan killer? Don't know about that, LOL! Not even the hottest-headed AMD fanboi pushed that shit.

If you like to read, there are two good threads about the Fury X as a Titan killer (which includes being a 980 Ti killer):

https://hardforum.com/threads/amd-r...ntly-faster-than-the-gtx-980ti-at-4k.1865880/

https://hardforum.com/threads/fury-x-benchmarked-4k-via-amd.1865813/page-2
 
Due to the higher memory data-transfer rates of HBM, it was a bit more efficient than GDDR5 (at the time) and could push textures and data slightly better, making it more similar to a 5GB GDDR5-equipped GPU.
It was still VRAM-limited in games like DOOM 2016 in Nightmare mode, which required at least 5GB of VRAM, which the Fury X did not have and thus could not push, at least not natively without digging into shared/system RAM.

Also, AMD never said that, and I don't remember seeing anyone say that it was as efficient as 8GB VRAM (assuming you meant GDDR5 and not DDR4).
That's more than a bit of an embellishment.

oh...

 
WTF do you expect from a card that is 5+ years old? Come on... still a decent card though. This is a troll post.
 
Hence why we don't see Huddy talking anymore since Ryzen/Vega came about.

Well, he was sort of right, in the short term. However, the maintenance and support required to sustain that ability was definitely not worth it, and eventually, in the long term, it would be limiting. That said, more is always better anyway.
 
Benches are a bit old (Nov 2017), but FWIW, here's the TPU performance summary of 18 games @ 4K. I don't think the Fury X or 980 Ti show up in any recent TPU reviews.

I often reference TPU since they bench more games than other reviews and give a summary of performance across all resolutions. Seems the Fury X is doing OK with 4GB @ 4K. Of course, the 980 Ti has far greater OC potential.
Review link: https://www.techpowerup.com/review/nvidia-geforce-gtx-1070-ti/30.html


[attached chart: perfrel_3840_2160.png]
 
I often reference TPU since they bench more games than other reviews and give a summary of performance across all resolutions. Seems the Fury X is doing OK with 4GB @ 4K. Of course, the 980 Ti has far greater OC potential.
Review link: https://www.techpowerup.com/review/nvidia-geforce-gtx-1070-ti/30.html
Old review, testing 2016 games, and not retesting games or cards... the situation has changed considerably since 2017; the 980 Ti went full blast ahead...



[BabelTech] AMD’s “fine wine” theory has apparently not turned out so well for the Fury X.

https://babeltechreviews.com/amds-fine-wine-revisited-the-fury-x-vs-the-gtx-980-ti/3/

[BabelTech] The GTX 980 Ti is now even faster than the Fury X for the majority of our games than when we benchmarked it about 2 years ago

https://babeltechreviews.com/the-gtx-1070-versus-the-gtx-980-ti/3/
 
Old review, testing 2016 games... the situation has changed considerably since 2017; the 980 Ti went full blast ahead...



[BabelTech] AMD’s “fine wine” theory has apparently not turned out so well for the Fury X.

https://babeltechreviews.com/amds-fine-wine-revisited-the-fury-x-vs-the-gtx-980-ti/3/

[BabelTech] The GTX 980 Ti is now even faster than the Fury X for the majority of our games than when we benchmarked it about 2 years ago

https://babeltechreviews.com/the-gtx-1070-versus-the-gtx-980-ti/3/

Pretty much NO ONE here expected the Fury X to age super well... I skipped right past it to an 8GB frame buffer and never looked back. lol 7970 -> RX 580 -> Vega 64
 
Negative, Ghost Rider. 9700 Pro owns that spot.
Sorry, the 7970 is king. A 40% OC on a blower for a modern card, amazing compute (even to this day for consumer full precision), and it topped Nvidia's high end for the second-to-last time; grandfather of the almost-as-legendary Hawaii. It was also around when you could still mine BTC on a GPU... the beginning of the mining days.
 
Your [H] link won't load. Your thread links are fanboi BS that no one in hindsight should give a shit about. But good job on pulling that garbage up. The community thanks you.
 