AMD Radeon R9 Fury X Video Card Review @ [H]

You mean Vulkan? They pretty much handed Mantle over to the Khronos Group (the OpenGL people) to carry forward, and they will be using the current Mantle as their testing API for new features.


https://community.amd.com/community/gaming/blog/2015/05/12/on-apis-and-the-future-of-mantle


Vulkan is using parts of Mantle, just parts of its spec, and I'm sure there will be differences between Vulkan, once it's ready for developers to use, and what Mantle was.

OpenGL will not use Mantle for testing; AMD will use Mantle for testing new features. That's quite a difference in statements, don't you think?

Once you have a fork in software development at a fundamental spec point, it is generally not advisable to go back to the older version and start making changes there, because that just undermines what is being done for advancement.

http://www.pcworld.com/article/2894...ises-from-the-ashes-as-opengls-successor.html

But now we know why. AMD's Robert Hallock confirmed in a blog post that Mantle had, for the most part, been turned into the Khronos Group's Vulkan API, which will supersede OpenGL.
“The cross-vendor Khronos Group has chosen the best and brightest parts of Mantle to serve as the foundation for 'Vulkan,' the exciting next version of the storied OpenGL API,” Hallock wrote. “Vulkan combines and extensively iterates on (Mantle’s) characteristics as one new and uniquely powerful graphics API. And as the product of an incredible collaboration between many industry hardware and software vendors, Vulkan paves the way for a renaissance in cross-platform and cross-vendor PC games with exceptional performance, image quality and features.”

What I take from this statement is that there will be large differences from Mantle.
 

You took what I said completely wrong.

I said that AMD gave the OpenGL team Mantle to use as its base, and that AMD will continue to use Mantle as their test API.

Did you read the link? They even advise to not use Mantle 1.0 for development if you are just starting.
 


Sorry, I misunderstood what you were saying. Yes, I agree with all of that.
 
Do you realize that artwork is what takes up a huge chunk of time in game development once an engine is done?

Do you know that rendering out an 8k texture set takes around half a day, just to render it out, mind you, and after that probably another two weeks of final touches? That is just one texture set.

Increasing texture sizes is necessary to get all those beautiful levels you see, and 8k is going to be a standard for next-gen games.


If you think that is sloppy or lazy, tell your favorite game development houses to tone down the artwork just for you; better yet, don't buy those types of games, since you think they are sloppy and lazy.

The problem is that they aren't lazy or sloppy, as you so happily put it, at least most of them. Going up one texture size means 4x the necessary memory, and creating those textures now takes 4x the horsepower from a PC standpoint. That's why we don't see the need to jump in VRAM quickly, but at some point a jump becomes necessary, once the processing power and the time to create the assets are there.

And most of the memory is used for textures, at least before we start looking at AA and AF modes. Resolution increases the buffer sizes some, but comparatively nothing takes up more memory than the textures.
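As a rough sketch of the arithmetic behind that point (all numbers assumed: uncompressed 32-bit pixels and texels plus a full mip chain; real engines use block-compressed textures, so the absolute figures shrink, but the ratios hold):

```python
# Back-of-envelope VRAM arithmetic under the assumptions stated above.
def framebuffer_mib(width, height, bytes_per_pixel=4, buffers=3):
    """Approximate size of the render targets (e.g. two color buffers + depth) in MiB."""
    return width * height * bytes_per_pixel * buffers / 2**20

def texture_mib(size, bytes_per_texel=4, mip_overhead=4 / 3):
    """Approximate size of one square texture with a full mip chain in MiB."""
    return size * size * bytes_per_texel * mip_overhead / 2**20

print(f"4K framebuffers: ~{framebuffer_mib(3840, 2160):.0f} MiB")  # ~95 MiB
print(f"one 4k texture:  ~{texture_mib(4096):.0f} MiB")            # ~85 MiB
print(f"one 8k texture:  ~{texture_mib(8192):.0f} MiB")            # ~341 MiB, 4x the 4k figure
```

A handful of 8k material sets dwarfs the framebuffers, which is the point being made above.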

Can I just put in a big, fat, DUH here? Maybe you could try reading my post, comprehending what I said, and giving a response that actually matches the content. You know, like where I say "I understand increasing requirements, but..." And then go on to talk about games with seemingly similar graphical fidelity having a huge delta in VRAM usage. Not only that, but at pretty typical display resolutions. I also seem to recall, and correct me if I'm wrong, that I mentioned not wanting to halt progress, or cripple anything. I'm simply stating that for one AAA game that doesn't look much better than another AAA game to use significantly more VRAM at similar settings, maybe the use of that RAM should be re-evaluated. Seems to me that the higher-tier devs used to learn new tricks, methods, etc. to do more with less. Now that we have more, period, why not optimize for it, and take it even further? Obviously there are some things that just plain take more resources, without much that can be done about it. Bus speed limitations when transferring from system RAM, for example, segmented memory on a card, and yes, even just VRAM size itself, etc.

I'm sure some devs do. Can you seriously tell me that all of the "AAA" devs out there are using things to the maximum level of efficiency, optimization and leveraging the best skill sets the industry has to offer? You're fucking crazy if you do.

I see pleasant surprises out of left field every so often that tell me this ISN'T the case. Some game will just pop up out of Eastern Europe somewhere from a dev that isn't hugely known, have a pretty advanced engine that looks great, and run on very modest hardware. I guess I'm saying smart design over pure brute force is something that I think would go a long way with some devs.

If a game REALLY just has a huge set of assets, pushes an engine to its limits, while great care was taken to get the most out of the hardware it's running on then great! I applaud the effort. How often is that really the case though?
 

Alright, you seem to bring up some interesting points worth talking about. But before that, I think you need a bit more experience actually making something in a game engine. It doesn't matter what, because I don't think you understand how things work in software development.

Why don't you try making mods on game engines and you will see what I mean; then take a look at the difference from a world-class engine that you can actually market and sell to other devs. There is a big difference in the quality of the tools. How many of those Eastern European companies are selling their engines or actually competing successfully in the engine market?

Taking great care to get the most out of your hardware: OK, you want to talk about optimizations?

Tell me the process you would use to optimize, let's say, an uber shader for multiple objects and different shaders that use the same uber shader. I want the process, not the math, since there could be many different examples. Since you are talking about smart design, let's talk about that instead of whimsical feelings.

Weird, the last part got cut off.

For the uber shader, let's use a simple example: say you want to make a multiplayer part of the game, and you want the same skins but different team colors for the players. Variations can be metalness, diffuse, specular, roughness, etc. How would you go about optimizing VRAM consumption?
 

I know quite a few game devs, and actually used to do hardware compatibility testing at MGS and a few other places. I'm not a dev though, so I leave THAT part to the experts in their field. Still from a subjective, end-user point of view, it's not difficult to tell an optimized game from a sloppy game. As far as what can actually be done in software, that's not up to me. (I design analog circuits for audio synthesis for fun, and software doesn't really click with me from a design perspective.) I also have no interest in actually developing a mod. I'm very arm-chair here, and make no apologies for it. Having worked in the field, and having played games, watched scene-demos, dabbled in a little ML on the C64, etc. I do appreciate the process though, and can easily make some educated guesses, and can clearly see the result of what a dev has done.

I can think of a few examples here that may be relevant though. Look at a studio like the former Starbreeze. (I think the main group is now Machine Games.) I'm thinking of Enclave and Riddick specifically. These games came out, what, a decade ago, and still look pretty decent today. If I have my history right (it's difficult to look up game-related content here at work), they were formerly the demoscene group Triton. These guys could seriously code. They squeezed a LOT out of hardware, and it shows. Their games had incredible texture work, great effects, ran well on a wide range of hardware, had patches to take advantage of new technology, and supported features that weren't quite mainstream yet, so their games could grow a bit with new advances (and still ran great on existing mainstream to enthusiast-level hardware).

I'm sorry, but I'm not seeing that level of work in many UE-based games, as another example. Not that great games, and even some visually impressive games, aren't happening on UE, and that's not to ignore the heavily modified versions of engines (BioShock comes to mind, creating basically UE 2.5 at the time), etc. There are good examples of off-the-shelf (for lack of a better term) engines. Some devs can push them to their limits, and even beyond their original specs. Great.

I still contend that many (maybe even most) modern devs aren't really doing that though. It takes someone like the dev that made Hard Reset, coming out of left field to sit back and say "Wow, that looks great, and I never even heard of these guys." Not that the game was terribly in-depth, but it looked nice (if a bit "samey" over the course of the game...)

Anyway, getting a bit off-topic here, and I don't claim to be 100% correct in my assumptions, but there is definitely a difference between games lately and games a generation or two back as far as a huge leap in memory usage, and it doesn't seem to correlate to the historical curve. There seems to be a huge leap in usage, and what I'm saying is that it's not necessarily translating into hugely better-looking games. Just marginally better. I ask again: could this be trimmed back a bit, made more efficient use of, tuned better, or is there a bit of slop happening where "hey, the memory is there, so why bother?" I'm sure in reality there is some mixture of these. Plenty of shades there, not black and white. It's just something I've been thinking about a bit lately, and with 4GB of memory all of a sudden being a problem right now, when even six months ago it didn't seem to be, it just doesn't follow my perceived historical curve.

Edit: Also, there are technologies (4K displays) becoming more common place right now coincidentally. That's why I made sure to call out that I wasn't referring to that. There are some changes happening, and requirements will go up because of them. I was mainly comparing the lower, more commonly used display resolutions 1080, 1440, etc. as that's where I think these comparisons are most interesting currently. With some games pushing VRAM usage over 4GB at these comparatively low resolutions, I'm getting curious about how it's used.

Edit Edit: Also, I realize my examples were pinpointed to make my point, and that there are other engines out there pushing more boundaries.
 


Well, that's the thing: the game is only going to be as good as the tools the engine has. UE4 and CryEngine 3 or 4 (not sure about Frostbite) were made so that multiple games in different genres can be built on them, unlike previous versions of those engines, so development of the tools takes more time. Developers always have limitations based on the engine and its technologies; how they circumvent those limitations is what can make a game appear the same on one engine as on another, even though one of those engines has a limitation.

UE3 was a good engine, but compared to the tools that UE4 has, there is a big difference.

The resolution doesn't matter much when it comes to VRAM (buffer sizes don't take up much of the VRAM compared to the textures) unless you are using AA and AF.

But the trade-off is that if you are using a lower resolution, you don't need the highest-detail texture settings either. At 4K you will need 8k textures because you can actually see those extra pixels, so it will look better. With the same FOV at a lower resolution, you won't see all those pixels.
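A quick texel-density sketch of that claim (assumed numbers, aiming for roughly one texel per screen pixel):

```python
# If a surface can fill the screen, and the player can get close enough that only a
# fraction of its texture spans the screen width, the texture needs to be at least
# screen_width / fraction texels wide to stay near one texel per pixel.
def required_texture_width(screen_width_px, fraction_of_texture_visible):
    return screen_width_px / fraction_of_texture_visible

print(required_texture_width(1920, 1.0))  # 1080p, whole texture across the screen -> ~2k
print(required_texture_width(3840, 1.0))  # 4K, whole texture across the screen    -> ~4k
print(required_texture_width(3840, 0.5))  # 4K, close enough to see half of it     -> ~8k
```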

Well, to answer my own optimization question: I would use masks for all texture channels to change the color on the fly, and use procedurally generated textures based off those masks as well.
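A minimal sketch of what that mask-based approach could look like (the blend would normally live in the shader; the names and numbers here are purely illustrative):

```python
import numpy as np

def apply_team_color(albedo, mask, team_color):
    """Tint the masked regions of a shared albedo map with a per-team color.

    albedo: HxWx3 float array, mask: HxW float array in [0, 1], team_color: (r, g, b).
    """
    tint = albedo * np.asarray(team_color, dtype=np.float32)
    m = mask[..., None]
    return albedo * (1.0 - m) + tint * m  # lerp between original and tinted

# Toy usage: a 2x2 "texture" whose mask marks the team-colored texels.
albedo = np.full((2, 2, 3), 0.8, dtype=np.float32)
mask = np.array([[1.0, 0.0], [0.0, 1.0]], dtype=np.float32)
red_team = apply_team_color(albedo, mask, (1.0, 0.2, 0.2))
blue_team = apply_team_color(albedo, mask, (0.2, 0.2, 1.0))

# Rough savings at 4k, uncompressed: a 6-channel material set (diffuse RGB plus
# metalness, specular and roughness) is ~96 MiB, so four separate team sets would
# be ~384 MiB, while one shared set plus a single-channel mask is ~112 MiB.
```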
 
Looks like a pretty solid choice for Crossfire and higher resolutions. Low minimums in some games is strange.
 
Well, that's the thing: the game is only going to be as good as the tools the engine has. UE4 and CryEngine 3 or 4 (not sure about Frostbite) were made so that multiple games in different genres can be built on them, unlike previous versions of those engines, so development of the tools takes more time. Developers always have limitations based on the engine and its technologies; how they circumvent those limitations is what can make a game appear the same on one engine as on another, even though one of those engines has a limitation.

UE3 was a good engine, but compared to the tools that UE4 has, there is a big difference.

The resolution doesn't matter much when it comes to VRAM (buffer sizes don't take up much of the VRAM compared to the textures) unless you are using AA and AF.

But the trade-off is that if you are using a lower resolution, you don't need the highest-detail texture settings either. At 4K you will need 8k textures because you can actually see those extra pixels, so it will look better. With the same FOV at a lower resolution, you won't see all those pixels.

Well, to answer my own optimization question: I would use masks for all texture channels to change the color on the fly, and use procedurally generated textures based off those masks as well.

Yes, I agree that the frame buffer itself isn't the issue with higher resolutions; it's the assets required to make it look good. Either way, it will lead to the same limitations as a byproduct.

Makes sense on the texture optimization. Q3A: Team Arena comes to mind. :) Or Red vs. Blue. :D

I can't directly speak to the tools aspect. I mean, obviously better tools for an engine lead to an easier time for a dev outside the engine dev. I think that's also where my example of Starbreeze wouldn't hold up, as it was their engine, and their games on top of it. The need for friendly tools diminishes if you do everything yourself. The point about them being very competent devs still stands, though. Maybe more studios need their own engine / developers in house. It would force them to know their product from the foundation up. (Not realistic, of course, but it would make things interesting.)

If a video card I want has two options, one with more memory (and it isn't ridiculously priced, and the GPU is powerful enough to take advantage of it), then of course I'll pick the one with more memory and hope that the game I play will make use of it. I'd still prefer they made "good use" of it, though, and not "lazy use". And I still question the rapid spike in perceived VRAM requirements, 4K and its required assets notwithstanding.
 
Different engines will probably reflect different memory requirements, but the way the textures are used will also impact VRAM usage. For example: one game with all sorts of repeating patterns and textures re-used over and over, versus a game where more unique textures are used and there's less repeat use. A game like the latter will need more VRAM.

One example that comes to mind is Wolfenstein: The New Order. That game looks great, and the hideout looks really cool; I'm not sure there is a single repeated texture in it. The Rage engine uses texture streaming, which works much better these days compared to when Rage the game came out. Streaming the textures probably keeps the VRAM usage better managed.
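A toy sketch of the budgeting idea behind texture streaming (purely illustrative; not how id Tech 5 or any particular engine actually implements it):

```python
from collections import OrderedDict

class TextureStreamer:
    """Keep only recently used textures resident, evicting the least recently used over budget."""

    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()  # texture id -> size in MB, kept in LRU order

    def request(self, tex_id, size_mb):
        """Called whenever a texture is needed for the current frame."""
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)  # mark as most recently used
        else:
            self.resident[tex_id] = size_mb    # pretend to upload it to VRAM
        while sum(self.resident.values()) > self.budget_mb:
            evicted, freed = self.resident.popitem(last=False)  # evict the LRU texture
            print(f"evicting {evicted} ({freed} MB)")

streamer = TextureStreamer(budget_mb=3000)  # e.g. leave headroom on a 4GB card
streamer.request("hideout_wall", 341)
streamer.request("hideout_floor", 341)
```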

Flat out saying the game devs are lazy is a somewhat asinine thing to say, given the multitude of things that affect VRAM usage, some of which are out of the devs' hands (such as engine choice).
 

I wasn't implying that all game devs are lazy. Even the ones that make a sloppier port, or a sloppy game in the first place, may not be lazy exactly. However, let's look at Wolfenstein specifically. It's one of my favorite games in recent times. It's absolutely gorgeous. They use a completely different method of texturing, so your point stands and makes perfect sense. There are other differences to the engine, and many tradeoffs involved (especially where lighting is concerned, from everything I've read). It's got nearly 40GB of textures. The engine is fast as hell even on medium hardware and looks great (some of it may be down to taste, as I know many people prefer the look of other games/engines). However, Machine Games (there's those Starbreeze guys again that I mentioned ;) ) really made good use of the engine and made it shine. The game looks nearly the same on my 2GB 660 Ti and 4GB 970. I can increase some settings on the 970, obviously, but they still made a beautiful game that runs with lower VRAM requirements.

We can argue all day about subjective aspects, texture decisions, artwork, etc. but the fact is the game looks competitive with just about everything out there give or take, and isn't pushing that VRAM requirement up. Apples and Oranges maybe, but obviously it can be done. Now if these same guys made a new game that pushed RAM requirements way up, I would be reasonably sure that they were making good use of it.

I'm not on a crusade against all game developers, but I do think some are better than others when it comes to this sort of thing. There are tons of factors at play, but the overall picture I'm getting is that we're in a period where people are relying on cards having more memory on them, instead of making better use of that memory. Maybe I'm not 100% right, but I am noticing a pattern, and a spike in what people perceive as good enough for current hardware. Obviously there's a slight skew here just because of the nature of this site. Not faulting that either. I love hardware, love advancements in hardware, love to see people push the envelope, etc. I just want to see it pushed in all areas and for the right reasons. Maybe it is, and my perspective is off. I'm perfectly willing to acknowledge that this might be the case. But it also might not be.
 
Looks like a pretty solid choice for Crossfire and higher resolutions. Low minimums in some games is strange.

I'm thinking it's due to driver overhead, as AMD has had many issues with that in the past. I'm trying to get some people to test CPU usage with it, but no responses so far.
 

Maybe it is because people are desperate to find any reason why their favorite product isn't meeting expectations. So far I've heard:

  1. it is nVidia's fault, for whatever reason you feel like
  2. they were rushed to market
  3. it isn't their fault people took the marketing out of context
  4. it is MS's fault for forcing such high driver overhead
  5. it is Intel's fault because their high-end CPUs cost so much
  6. it is the consumer's fault for not being willing to realize that the HBM is worth the performance hit this time around
  7. it is the game developers' fault for not spending more time with their hardware on their own dime
  8. it is the reviewers' fault for not choosing older games which show the product in its best light
  9. it is the consumers' fault for not using them in Crossfire, which shows near 100% gains
  10. the card is worth $650; nVidia should be selling the 980 Ti at a higher price
  11. people shouldn't consider overclocking gains when choosing a video card
  12. people shouldn't consider power draw when choosing a video card
  13. water is the only correct way to cool a video card, and thus vendors using air are not serving the consumer fairly
  14. the water cooler is why it costs $650

The list probably goes on and on and on.
 
It performs on par with the 980 TI - losing sometimes, winning against the Titan in others.

It performs much better @ 4K vs lower resolutions - which means something is hindering its performance at lower resolutions (CPU overhead? AMD has been known to have worse drivers).

It does run cooler and much quieter than a 980 Ti - it performs similarly and is priced the same, it just came out a few weeks later, and so it's a failure somehow...

Ignore marketing, focus on reality.
 


The lower performance at lower resolutions is because of a geometry bottleneck.

How about total power consumption? The "cooler" running doesn't matter in the real world; it's the power consumption that does, because it has a 1:1 relationship with the heat generated.

Running cooler benefits AMD, since that allows them to use less power due to less leakage, which in this case they needed pretty badly; otherwise it wouldn't have been anywhere near competitive on the power front, and even now it's still higher.

Yes, this is focusing on reality :)
 
Ignore marketing, focus on reality.
When Nvidia does something, people look for all the ways they can praise it.
When AMD does something, people look for all the ways they can criticize it.

Even if we're talking about the same thing. Slap the Fury X's reference watercooler on the reference 980 Ti and the fanboys would go wild.

At this point it's subconscious. Most people don't even realize they're doing it. I can't blame them -- we've been programmed to hate AMD for the last decade. Always a winner; always a loser. Someone has to be the punching bag.

AMD is currently fighting against 80% marketshare. It's like a Republican going to the DNC and trying to win votes. Just isn't going to happen.
 

The thing is, the 980ti doesn't need water cooling to match the Fury X, not to mention it can actually overclock considerably with air cooling. We'll know more once the voltages are unlocked for the Fury X.
 
It performs on par with the 980 TI - losing sometimes, winning against the Titan in others. Please elaborate: in which games that people are actually playing in reasonable numbers?

It performs much better @ 4K vs lower resolutions - which means something is hindering its performance at lower resolutions (CPU overhead? AMD has been known to have worse drivers). So what? Its 4K performance still isn't good enough by any stretch of the imagination. AMD promised us the Fury X would provide great 4K gaming! It did not deliver. That is reality.

It does run cooler and much quieter than a 980 Ti - it performs similarly and is priced the same, it just came out a few weeks later, and so it's a failure somehow... Die temp does not correlate to power draw. I could cool the die to -50 deg C and still turn your room into a sauna with the exhaust from the heat exchanger. Furthermore, why isn't there an air version of the Fury X? Do you really think AMD told its suppliers "we are only going to do liquid cooled because we want to ensure it only serves a small market of enthusiasts with the cases that can support it"?

Ignore marketing, focus on reality. I am focusing on our current reality, not an alternate reality. That was the point of my post... and you pretty much added to my list. The fact that the PR from AMD has pretty much gone silent should weigh in on your reality.

The heart of the matter is what was posted in the summary from Anand (probably the only site that really, really tries to find the best in everything): the Fury X is a nice tech demo that comes up just short. Coming up "just short" on every reasonable benchmark other than acoustics isn't helpful. We'll have to see what happens on July 14th. There could be some vindication... I can only hope.
 
the thing is the 980ti doesn't need water cooling to match the Fury X
FX doesn't need watercooling either.
You literally just did the exact thing I posted.

When Nvidia does something, people look for all the ways they can praise it.
When AMD does something, people look for all the ways they can criticize it.

At which point did a watercooler become a BAD THING?
 

Stupid EVGA, pushing their stupid watercooling AIO.

Oh wait, people are eating that shit up even though it's an extra $100 for their 980-series cards.
 
FX doesn't need watercooling either.
You literally just did the exact thing I posted.



At which point did a watercooler become a BAD THING?


Without water cooling, the chip temps would be, I think, around 30 degrees higher. There is a roughly linear effect of temperature on leakage, and therefore on power usage, which means around 30 more watts would have to be used at the same frequency. Guess what: that would make Fiji look even worse than it already does on performance per watt. Right now, with water cooling, it is at a 20% to 35% deficit in that metric when looking at different resolutions; that would put it at a 50% to 65% deficit. That would not have been good for reviews, would it?

I did post three different articles in another thread that explain how this occurs. And this is something that engineers look into when designing chips.

It's not a bad thing that it has an AIO, but it was a design necessity; otherwise it would have been like what happened with the GTX 480. Actually a bit worse than that, because at least the GTX 480 was faster than anything else out there, it just used a ton more power.
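A rough sketch of the linear temperature-to-leakage relationship described above, with every number assumed purely for illustration (the ~1 W per degree C coefficient is only what the 30-degree / 30-watt figures imply, not a measured value):

```python
def total_power(base_watts, temp_c, ref_temp_c=50.0, leakage_w_per_c=1.0):
    """Assumed model: board power at ref_temp_c plus extra leakage per degree above it."""
    return base_watts + leakage_w_per_c * (temp_c - ref_temp_c)

water = total_power(275.0, temp_c=50.0)  # ~275 W with the AIO keeping the die cool
air = total_power(275.0, temp_c=80.0)    # ~305 W if an air cooler ran ~30 C hotter

print(f"water cooled: {water:.0f} W, air cooled: {air:.0f} W, "
      f"{(air / water - 1) * 100:.0f}% more power at the same performance")
```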
 
Stupid EVGA, pushing their stupid watercooling AIO.

Oh wait, people are eating that shit up even though it's an extra $100 for their 980-series cards.

Yep I ordered 2x Ti hybrids straight after I saw Fury X reviews when it was released. :)
 
When Nvidia does something, people look for all the ways they can praise it.
When AMD does something, people look for all the ways they can criticize it.

That's absolute horseshit and we all know it. How many returned their 970s after the 3.5GB segmented VRAM debacle? How many were disappointed with the 500 series? The 400 series? The 6000 series? The 5000 series (specifically calling out the 5700 Ultra :rolleyes: )? Certain driver releases blowing shit up? Hell, how many are disappointed that the 980 Ti is yet another way-overpriced 28nm 250W offering? Etc, etc, etc...

Even if we're talking about the same thing. Slap the Fury X's reference watercooler on the reference 980 Ti and the fanboys would go wild.

Nope. WC'd GPUs are too much of a niche market within a niche market. Those who are happy about the FuX having a mandatory WC are in the same minuscule category of the overall enthusiast segment that would be happy if the 980 Ti had a mandatory WC.

At this point it's subconscious. Most people don't even realize they're doing it. I can't blame them -- we've been programmed to hate AMD for the last decade. Always a winner; always a loser. Someone has to be the punching bag.

Most people started to bag on AMD "consciously" when Bulldozer came out and disappointed, well, 99.9% of everyone.
As for their GPUs, they release something that can eke out a win against their price-segment competitors' products, only to get an answer back from those competitors in relatively short order. Then it takes AMD a very long time to release a successor. Rinse, repeat.

AMD is currently fighting against 80% marketshare. It's like a Republican going to the DNC and trying to win votes. Just isn't going to happen.

They put (and have kept) themselves in that position by not having worthy-performing products since Intel released their post-Netburst processors, save for some awesome Phenom X6s and recent APUs. But too little, too late, unfortunately.


FX doesn't need watercooling either.

Until there are reviews published of air-cooled FuX samples and retail stock available for purchase, there is no absolute proof of a non-WC'd FuX at this time. It's vaporware right now.

At which point did a watercooler become a BAD THING?

The microscopic size of the WC segment compared to the entirety of the enthusiast market.


Stupid EVGA, pushing their stupid watercooling AIO.

Oh wait, people are eating that shit up even though it's an extra $100 for their 980-series cards.

The 980 Ti is primarily available with air cooling because it doesn't require WC and can still be overclocked like crazy without it, unlike the FuX, which is only available with WC and can't OC for shit (this may change with future updates, but I don't think anyone sane is holding their breath).
 
Why are we arguing performance per watt on a flagship GPU? No one who buys a $650 GPU cares about power consumption unless it's going in an SFF box, and AMD will have another card for that very soon.

They could have air-cooled Fiji. They chose not to. I'm sure part of it was the thermal requirements of the Fiji GPU, but part of it was also the massive negative consumer response to the 290X stock cooler. They fixed that problem brilliantly. The fact that the card comes stock with water cooling is not a negative for the target audience. Hell, my mITX box has a free 120mm fan port.

There are plenty of valid complaints about Fiji. The water cooling is subjective, to be sure, but I think most people will view it as a positive rather than a negative.
 
The microscopic size of the WC segment compared to the entirety of the enthusiast market.
I don't think it's as microscopic as you think. Closed loop AIO coolers have been selling well for years now and all of my enthusiast friends are using them on their processors. Except for very budget builds, most enthusiast builds I see on Reddit are using some form of AIO cooler now as well. Custom loops are a much smaller part of the market, but the Fury X is not using a custom loop. It's easy and painless to install, which is the big detractor for custom loops.

Within the market segment willing to spend $650 on a GPU, there are a lot who are willing to use an AIO water cooling system.
 

You would be wrong. The proof is in the pudding: NVidia is the dominant brand by sales volume right now and has placed a lot of focus on increasing performance per watt, which negates the need for a mandatory WC of any fashion and keeps installation easy and painless.

CPUs, yes, the segment is growing for AIOs. If it weren't, there wouldn't be so many available from a variety of brands.

The problem for a GPU with a mandatory AIO and high power consumption is that the small segment it caters to (the "knowledgeable folks") knows exactly what the install requirements are and how to install it properly, and that in itself is the limiting factor: it is not appealing to a much larger audience, which AMD absolutely needs right now.

Other problems are convincing a large portion of that small group of knowledgeable folks to take on the (albeit small) risk of a WC failure or leak, make room in the chassis for the radiator, maybe even change their chassis to accommodate the radiator, and maybe spend even more money on an adequate PSU if they don't have one with enough power output... the same problems as with CPU AIOs and CLs.
 
The problem for a GPU with a mandatory AIO and high power consumption is that the small segment it caters to (the "knowledgeable folks") knows exactly what the install requirements are and how to install it properly, and that in itself is the limiting factor: it is not appealing to a much larger audience, which AMD absolutely needs right now.
That's kind of my point. This isn't a mass market product. This is AMD's Titan X. NVIDIA controls most of the GPU market, yes, but the volume of sales even to enthusiasts is primarily in the $250-350 range with the 960 and 970. At the $650 price point, I don't think water cooling is a problem.
 
I wouldn't call water cooling a problem exactly, but I personally would never require it across a whole product like AMD just did. I personally like a balance of performance, power, temperatures, etc., but I do it all within the air-cooling realm. I salute those with crazy tweaked-out water-cooled machines, but personally I view it as a headache that I just don't need. One more thing to think about. It may not be THAT big of a deal, but one less item on my list of things is good for me. I simply wouldn't buy this card because of that cooler. I'm a knowledgeable person who's been building PCs since the 4.77MHz XT processors, and it's just not something I have any interest in doing. I get the most I can from air cooling and call it a day. I imagine that there are a lot of people like me out there. People who want high performance, but are willing to sacrifice a little bit for simplicity's sake.

I'm sure there is plenty of market for this card, but I would agree with DejaWiz that it's a lot smaller than the market for traditional air-cooled cards.

I am also of the opinion that they didn't put that cooler on there as a value-add. Seriously. It may have its benefits, but you can be sure they'd have stuck with air if it was an option, or at least made liquid the optional SKU. It wasn't for their health, the broadening of the industry, or the elevation of human understanding. It was because they plain old had to.

That doesn't make it a bad card, but be reasonable about why they did it.

I wouldn't try to talk anyone out of getting one. By all means, if it appeals to you buy it, enjoy it, lie with it, cuddle with it, I don't care. It seems pretty solid overall. There do seem to be some pretty glaring puzzles to the design though.
 
the $650 price tag isn't a deal breaker for me but the clc is. i'm the buyer who typically picks up two at the same time. i'm also the buyer who swaps the cards in and out of several builds. the clc makes that process a bigger nuisance than i'd like it to be. so until i see a fan-cooled fury x, amd isn't getting any money out of me.

i actually liked the reference coolers for the hawaii cards. you needed to upgrade the tim to help with throttling. but i appreciated those coolers the longer i owned the cards. i appreciate those blowers more now that my only available cooling option for a high end amd card is the clc.
 
That's kind of my point. This isn't a mass market product. This is AMD's Titan X. NVIDIA controls most of the GPU market, yes, but the volume of sales even to enthusiasts is primarily in the $250-350 range with the 960 and 970. At the $650 price point, I don't think water cooling is a problem.

It is a big problem when that $650 GPU compared to the competitor's $650 GPU...
1. Requires the AIO, vastly narrowing the market appeal.
2. Has less VRAM.
3. Has less performance pretty much across the board.
4. Draws more power.
5. Has almost no OC capability.
6. Has a lack of features, like DisplayPort 2.0

If this is AMD's halo "Titan X", then it's an even poorer showing, considering everything above wasn't even comparing it to the real Titan X.

AMD can't afford to cater to small-volume niche areas right now.
 

So don't buy it. Drivers will get better performance and OC seems to be OK once voltage is unlocked per a couple other sites. Not everybody wants to game at 4K with the card. People still want the card. This shit is getting old.
 
Facts get old? Umm, wow.
 
You act like this card doesn't perform well. Sorry to say it, but it does. It's AMD's fastest video card. What you think of as value compared to the competition means jack shit to some people. Some want it for the compact size, some want to try the AIO watercooling, some want to try the new HBM, and some just want to buy something other than Nvidia. If that bothers you (and it seems to), then I guess you'll just have to get over it at some point.

PS: Who says it doesn't OC well? We don't even know yet, except for a couple of sites that say it does with voltage unlocking.
 
So don't buy it. Drivers will get better performance and OC seems to be OK once voltage is unlocked per a couple other sites. Not everybody wants to game at 4K with the card. People still want the card. This shit is getting old.

Don't be blindly stubborn. First, GCN is old and already mature, and it already has mature drivers.

Second, the card only works for gaming at 4K, so if you are going to buy this card, it's because you will be gaming at 4K; if not, then you have to be an astonishing fanboy to buy it over a 980 Ti.
 
AMD can't afford to cater to small-volume niche areas right now.
Sure they can; the Fury X is being produced in small-volume niche quantities.
It's sold out everywhere. As far as AMD is concerned, the Fury X is wildly successful.

It's still an abysmal card, though. But when they can only make so few of them (presumably due to HBM), it doesn't really matter; enough people will buy it anyway.
 
You act like this card doesn't perform well. Sorry to say it, but it does. It's AMD's fastest video card. What you think of as value compared to the competition means jack shit to some people. Some want it for the compact size, some want to try the AIO watercooling, some want to try the new HBM, and some just want to buy something other than Nvidia. If that bothers you (and it seems to), then I guess you'll just have to get over it at some point.

My problem isn't with the people buying them, it's with AMD providing another lacking narrow-scoped product to market when they need as much market share as possible right now. You need to understand that I want AMD to succeed as much as possible, but this FuX isn't going to do it for them.
 
Don't be blindly stubborn. First, GCN is old and already mature, and it already has mature drivers.

Second, the card only works for gaming at 4K, so if you are going to buy this card, it's because you will be gaming at 4K; if not, then you have to be an astonishing fanboy to buy it over a 980 Ti.

I own an Nvidia card. If it works great at 4K, then clearly driver updates will fix those performance issues at lower resolutions. Talk about being blind.
 
My problem isn't with the people buying them, it's with AMD providing another lacking narrow-scoped product to market when they need as much market share as possible right now. You need to understand that I want AMD to succeed as much as possible, but this FuX isn't going to do it for them.

That I can agree with! :)
 