DarkLegacy ([H]ard|Gawd; joined Dec 26, 2004; 1,097 messages)
Looks amazing on paper, ends up a bust in benches. Not surprised at all by this point.
Dude, you need to get shop-runner. It's free two-day shipping, and I haven't paid them a penny in the 2 years I've used it.
(I never even gave them my credit card info; it boggles the mind how that company still exists.)
I don't think we need HBM overclocking, bandwidth isn't holding back performance, engine/GPU clock/performance is.
Well, if voltage turns out to be a bust, it flies in the face of Joe Macri saying it was an overclocker's dream. I just wonder if the proximity of the RAM to the GPU has made them cap the overclocking potential, since HBM has been locked down.
We'll see what happens when the voltage gets unlocked, but I don't have high hopes.

Yes, this is the most disappointing thing about the Fury X...
I still have these words from AMD CTO Joe Macri echoing in my head:
“You’ll be able to overclock this thing like no tomorrow...”
“This is an overclocker’s dream.”
Watching recent AMD interviews/presentations (mostly Richard Huddy) reveals a lot of misleading or out-right false information.
Richard Huddy

Links to these?
So if we get voltage modification, seeing as the GPU is sitting at 37 degrees, there is a lot of headroom with temperature and you'll be able to add some voltage. It could still be a crazy OCing card.
I have updated the Conclusion of the review with some preliminary overclocking testing.
http://www.hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/11
"If we just shuffle this memory around, the GPU does all its hard work in the working set, which is 4GB, and anything else we can swap in an out. So what we see is 4GB is far from a limit. What happens is that the amount of RAM you have in your PC, that effectively gets added to the frame buffer. The more you have, the faster your system runs, the better it handles those big and bulky games."
I just want to take a moment to tip my hat to Kyle and Brent. Thanks for being so active in this thread and really trying to respond to valid questions and ideas people have.
I realize plenty of people don't even understand the idea, but your engagement here really shows respect and earns it from those of us mature enough to appreciate it.
Thanks, guys.
What Huddy is saying in that statement could technically be possible, *but* the game would have to be designed for it, and I just don't see developers creating multiple texture-handling approaches for their games. The game polls the driver for the amount of VRAM and that's that; this won't change any time soon.
The other route would be if AMD shims it in their driver - presenting the virtualized VRAM pool to the game (8GB or 12GB) and then handling the swapping abstracted in driver. If they were planning to do that it's another thing they should've had ready at launch.
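A driver-side shim like that could, in its crudest form, look something like the toy model below. Everything here is an illustrative assumption (the class name, the LRU policy, the MB accounting) and not anything AMD has described; it just shows the mechanics of advertising a virtual pool while keeping only the working set resident:

```python
from collections import OrderedDict

class VirtualVram:
    """Toy model of a hypothetical driver shim that advertises a larger
    VRAM pool than physically exists and pages least-recently-used
    allocations out to system RAM."""

    def __init__(self, physical_mb, advertised_mb):
        self.physical_mb = physical_mb
        self.advertised_mb = advertised_mb
        self.resident = OrderedDict()  # alloc_id -> size in MB, LRU order
        self.evicted = {}              # alloc_id -> size in MB, in system RAM

    def reported_pool(self):
        # What the game would see when it polls the driver for VRAM size.
        return self.advertised_mb

    def touch(self, alloc_id, size_mb=0):
        """Access (or create) an allocation, evicting LRU entries to
        system RAM until the working set fits in physical VRAM."""
        if alloc_id in self.evicted:
            size_mb = self.evicted.pop(alloc_id)   # page back in
        elif alloc_id in self.resident:
            size_mb = self.resident.pop(alloc_id)  # refresh LRU position
        while sum(self.resident.values()) + size_mb > self.physical_mb:
            victim, victim_size = self.resident.popitem(last=False)
            self.evicted[victim] = victim_size     # swap out over PCIe
        self.resident[alloc_id] = size_mb
```

A game allocating 6GB of textures against an advertised 8GB pool would keep only the hottest 4GB resident; every `touch` of an evicted allocation stands in for a slow PCIe transfer, which is exactly where the stutter risk lives.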
I think reviewers who spend a lot of time with these cards could add an opinion point to the conclusions page about their experience over long gaming sessions, as I think there's a point where the card just can't swap enough with RAM and the page file and starts to stutter.
This is a good point: gameplay longevity can be affected by VRAM/RAM consumption over time and by clock speed throttling when the GPU warms up. If time is not given to actually playing games over a long period, results can be skewed. This is another reason why benchmarks, timedemos, or someone not playing for long can show different results than ours and not truly reflect what the gameplay experience is like.
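Long-session stutter like this tends to show up in high-percentile frametimes rather than average FPS. A minimal sketch with made-up numbers, showing two captures with identical averages where only the percentile exposes the hitch:

```python
def frametime_percentile(frametimes_ms, pct):
    """Return the pct-th percentile frametime (nearest-rank method)."""
    ordered = sorted(frametimes_ms)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def avg(frametimes_ms):
    return sum(frametimes_ms) / len(frametimes_ms)

# Two hypothetical 10-frame captures with identical 20 ms averages
# (50 FPS); the second has a single 74 ms hitch that average FPS
# alone would completely hide.
smooth = [20.0] * 10
stuttery = [14.0] * 9 + [74.0]
```

Both captures report 50 FPS on average, but the 99th-percentile frametime jumps from 20 ms to 74 ms for the stuttery one, which is what the player actually feels.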
No amount of overclocking can fix the lack of HDMI 2.0.
Maybe. It could be that if you overclock it high enough, it will open a portal to another dimension where the card DOES have HDMI 2.0.
I didn't catch it in your review or I just missed it, but do you know why the GPU voltage is locked? Also, do you know if it will be unlocked in the future?
Is it normal for VRMs to be that isolated? I saw someone pasting THG's thermal image of the Titan X, and they all seem to be around the GPU core, despite registering higher temperatures than the Fury X's VRMs.

That image was actually of the VRAM chips on the back of the Titan X getting hot, which makes it even worse as far as heat goes. On the Titan X there are 12 x 4Gb memory chips surrounding the core on both sides of the PCB. They have to be physically close to the core to decrease latency. The VRMs on the Titan X are mounted on the front of the PCB near the inside of the blower fan and are passively cooled.
No problem, I'm just a gamer with a passion for hardware and the gaming experience. Believe you me, we listen to your feedback. My next review was born out of the feedback and questions from this thread; I think it will answer a lot of questions about the Fury X and 4K. It will be no small feat.
Brent, there's been a lot of talk about VRAM usage, in particular people saying VRAM usage != VRAM needed. The only way I really see around this is to have a card like the 390X in both 4GB and 8GB flavors, and in cases where a game uses more than 4GB of VRAM, retest that game once with the 4GB 390X and once with the 8GB 390X, documenting the performance difference when the only variable is VRAM. Is this something your next article will be addressing?
I'm curious how the Fury X would have fared if the number of ROPs were increased to 96 or 128.
Could the card potentially throttle itself if the VRMs reach a certain temperature due to overclocking or bad case airflow in a warm room? I have a cheap AM3+ board without VRM heatsinks and an FX-8320 in my HTPC, and it throttled once the VRMs got past a certain temperature when I tried benchmarking it for shits and giggles once.
Yes, it can throttle and even drop to 2D mode; if the temps stay high after that, it can just shut off the card...
So you are only taking 4K performance into account? What about the majority of users on 1080p, the smaller group on 1440p, and the few on 2160p? I guess observing results that make you feel you are getting more for your money is what you like... If you are only looking at this card for 4K, then sure, it's almost 40% faster than the 290X, but then again, it gets beaten by the 980 Ti most of the time for nearly the same price.
I've not seen any indication that VRM temps are used as control elements.
I find the assertion that they will be able to optimize memory usage in drivers to prevent 4GB from becoming a bottleneck concerning. That is exactly what NVIDIA claimed once it was revealed that the 970 only has 3.5GB of full speed VRAM, and they claimed they had built algorithms/heuristics to allocate the memory intelligently so that only what was truly needed would be in the working set.
Problem is, optimizing on an individual game by game basis means that only the most important AAA titles will see any real work done, and you can see how that works out already with how CrossFire and SLI support is handled. Some games work great, some games don't or take a long time to fix, and less popular titles are ignored.
Not my preferred approach.
Obviously it's POSSIBLE, but swapping memory from system memory to VRAM is very slow. NVIDIA's 970 has noticeable issues with swapping memory from the 3.5GB pool to the 0.5GB pool on the card itself, which I believe still has more memory bandwidth than most dual-channel DDR3 systems.

But it is possible; in fact, NVIDIA is making that move with every card that has an asynchronous memory controller and an asymmetrical memory configuration. If NVIDIA can do that, I don't see a reason why AMD can't do it to keep the memory pool fresh. A bigger bus and faster bandwidth allow faster swapping of textures, which certainly can help, so basically it is possible. Also, I think not all games are that sensitive to 4GB of VRAM, so working on a game-by-game basis could also work. But we know how slow AMD is with its driver development; it doesn't work as well as NVIDIA does in that aspect.
Obviously it's POSSIBLE, but swapping memory from system memory to VRAM is very slow. NVIDIA's 970 has noticeable issues with swapping memory from the 3.5GB pool to the 0.5GB pool on the card itself, which I believe still has more memory bandwidth than most dual channel DDR3 systems.
It's just not a good solution vs having more RAM on the actual card. It requires their driver team to optimize for any games that encounter problems and I'm not confident that would be done in a timely manner. I'm not blaming AMD here either, I wouldn't buy a 970 for the same reason.
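To put the swap cost in rough numbers, here is a back-of-the-envelope comparison using nominal peak rates (PCIe 3.0 x16 at about 15.75 GB/s, dual-channel DDR3-1600 at 25.6 GB/s, the Fury X's HBM at 512 GB/s); real transfers come in below these peaks:

```python
# Nominal peak transfer rates in GB/s; real-world transfers are slower.
RATES_GBS = {
    "PCIe 3.0 x16 (card <-> system RAM)": 15.75,
    "dual-channel DDR3-1600": 25.6,
    "Fury X HBM": 512.0,
}

def transfer_ms(gigabytes, rate_gbs):
    """Time in milliseconds to move `gigabytes` at a given peak rate."""
    return gigabytes / rate_gbs * 1000.0

for name, rate in RATES_GBS.items():
    print(f"1 GB over {name}: {transfer_ms(1, rate):.1f} ms")
```

Even at the theoretical peak, moving 1GB over PCIe 3.0 x16 takes about 63 ms, nearly four whole frames at 60 FPS (16.7 ms each), while HBM moves the same data in under 2 ms. That gap is why mid-game swapping shows up as stutter rather than a gentle slowdown.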
At 1080p, the framerate is likely CPU bottlenecked since the 980 Ti and the Fury are so fast. However, the drivers for Fury are just utterly, terribly bad at the moment, so the CPU overhead is enormous relative to the 980 Ti at lower resolutions, which is why the performance is so bad at 1080p and 1440p. I can't see any other explanation. At 4K the GPU becomes the bottleneck, so it's a better test of pure GPU performance rather than how nicely the drivers play with the rest of the system.
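That bottleneck argument can be sketched with a toy frame-time model where CPU and GPU work overlap, so whichever side is slower sets the frame rate; all the millisecond figures below are made up for illustration, not measured:

```python
def fps(cpu_ms, gpu_ms):
    """Toy model: CPU and GPU work overlap per frame, so frame time
    is set by whichever side takes longer."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical numbers: 4 ms of extra driver overhead on the CPU side
# is devastating when the GPU is fast ("1080p") and invisible when the
# GPU is the bottleneck ("4K").
print(fps(8, 6), fps(8 + 4, 6))    # "1080p": overhead costs a third of the FPS
print(fps(8, 25), fps(8 + 4, 25))  # "4K": overhead costs nothing
```

In this sketch the same 4 ms of driver overhead drops the "1080p" case from 125 FPS to about 83, while the "4K" case stays pinned at 40 FPS because the GPU's 25 ms frame dominates either way.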
I don't think I would buy fury either at the moment, especially with its lack of overclockability, but I'm willing to wait and see if performance improves over time.