AMD Radeon R9 Fury X Video Card Review @ [H]

My problem isn't with the people buying them, it's with AMD bringing yet another lackluster, narrow-scoped product to market when they need as much market share as possible right now. You need to understand that I want AMD to succeed as much as possible, but this FuX isn't going to do it for them.
They could price Fury X at $100 and it wouldn't do anything for their marketshare because they only shipped 1,000 of them to the US.

Where do they really stand to gain market share? In the sane pricing tiers, obviously... $200, $300 mostly. And they have their shitty rebrands there. Apparently the R9 390 is the new "performance per dollar king," but it's still just a rebrand. And being 'better' isn't enough for AMD to gain market share; they need to perform a miracle.

I don't see Jesus ascending from the heavens just yet, so -- 20% marketshare it is! Maybe 16nm will change things. Probably not.
People just need to accept the fact that AMD will never be competitive enough to appease the community. There's not enough R&D money in the world to make some fanboys change teams.

I could spend the rest of my life drinking protein shakes and pumping iron, but I will never be Mr. Universe. I was born small and weak, and I will forever be that way. It's better if we just learn to accept ourselves for who we are. AMD is a budget brand; that's who they are.
 
I own an Nvidia card. If the card works great at 4K, then clearly driver updates will fix those performance issues at lower resolutions. Talk about being blind?

No, not always. You have to understand how the architecture works first; this has been a thing with AMD since GCN 1.0. When the 7970 launched it performed great at 2560x1440/2560x1600 and had a big advantage over the GTX 680 at those resolutions, but at 1920x1080/1920x1200 it lagged behind the 680. By the time the 7970 could game fine at 1920x1080/1920x1200, the next generation of cards was already on the market.

History repeated itself with the 290X and the GTX 780/Titan/780 Ti: those cards performed much better than the 290X at 1920x1080/1920x1200 and 2560x1440/2560x1600, yet the 290X was able to outperform them at 4K. Kepler was simply bad at 4K.

The same history is repeating right now with the Fury X, and we can be sure of one thing: by the time the Fury X performs equal to a 980 Ti at 2560x1440/2560x1600, it will be because it no longer has enough power for 4K in newer games (yes, just like the 290X and HD 7970), and by then Pascal will already be on the market crushing the Fury X.

Anyone willing to spend $650 on a Fury X to play at 2560x1440 has to be called an absolute fanboy.
 
It is a big problem when that $650 GPU, compared to the competitor's $650 GPU...
1. Requires the AIO, vastly narrowing the market appeal.
2. Has less VRAM.
3. Has less performance pretty much across the board.
4. Draws more power.
5. Has almost no OC capability.
6. Lacks features such as HDMI 2.0.

If this is AMD's halo "Titan X", then it's an even poorer showing considering everything above wasn't even comparing it to the real Titan X.

AMD can't afford to cater to small-volume niche areas right now.
I'm not arguing any of that :) I converted to a 980 Ti from my 290X CF setup because I think it's the best card at $650. I just don't think point #1 is nearly as much of a problem as points #2-6.

If the Fury X had come out and blown the doors off of the Titan X, no one would care that it's water cooled. In fact, I would bet money most would praise AMD for doing so. It's just that the card doesn't beat the 980 Ti or the Titan X, so it seems like a waste instead of a benefit.
 
Second, the card only makes sense for gaming at 4K, so if you are going to buy this card it's because you will be gaming at 4K; if not, then you have to be an astonishing fanboy to buy it over a 980 Ti.

Yeah, well, I guess that would be me. Other than the fact that I don't much care for getting involved in stupid pissing contests based on branding.

If it wasn't for AMD and the K6-2 I would not have been able to have a computer when I was in the military as an E-nothing making 100 and nothing a month.

That's why AMD will always be my preferred hardware supplier as long as they remain competitive.

The FuryX is competitive.
 
No, not always. You have to understand how the architecture works first; this has been a thing with AMD since GCN 1.0. When the 7970 launched it performed great at 2560x1440/2560x1600 and had a big advantage over the GTX 680 at those resolutions, but at 1920x1080/1920x1200 it lagged behind the 680. By the time the 7970 could game fine at 1920x1080/1920x1200, the next generation of cards was already on the market.

History repeated itself with the 290X and the GTX 780/Titan/780 Ti: those cards performed much better than the 290X at 1920x1080/1920x1200 and 2560x1440/2560x1600, yet the 290X was able to outperform them at 4K. Kepler was simply bad at 4K.

The same history is repeating right now with the Fury X, and we can be sure of one thing: by the time the Fury X performs equal to a 980 Ti at 2560x1440/2560x1600, it will be because it no longer has enough power for 4K in newer games (yes, just like the 290X and HD 7970), and by then Pascal will already be on the market crushing the Fury X.

Anyone willing to spend $650 on a Fury X to play at 2560x1440 has to be called an absolute fanboy.

Sorry, but the Fury has new features that drivers will improve upon, and AMD is still learning how to better optimize for HBM itself. Look at HardOCP's own pic of updated specs to prove my point. Stop with the fanboy crap and forum engineering.

http://www.hardocp.com/image.html?image=MTQzNTEwODU5MTlTMEhPT1prR0FfMV82X2wuZ2lm
 
Buying a product because of what it might do down the road instead of what it is capable of right now is really stupid and likely a waste of money.
 
Sorry, but the Fury has new features that drivers will improve upon, and AMD is still learning how to better optimize for HBM itself. Look at HardOCP's own pic of updated specs to prove my point. Stop with the fanboy crap and forum engineering.

http://www.hardocp.com/image.html?image=MTQzNTEwODU5MTlTMEhPT1prR0FfMV82X2wuZ2lm

And those features will not be improved via drivers. The most important features of Fiji are its tessellation performance and lossless delta color compression, and those have already been matured and experimented with on Tonga (the R9 285). Again, GCN is old and already has very mature drivers. I'm open to discussing the card's engineering if you prove you have enough knowledge about it =).

Buying a product because of what it might do down the road instead of what it is capable of right now is really stupid and likely a waste of money.

+1.
 
And those features will not be improved via drivers. The most important features of Fiji are its tessellation performance and lossless delta color compression, and those have already been matured and experimented with on Tonga (the R9 285). Again, GCN is old and already has very mature drivers. I'm open to discussing the card's engineering if you prove you have enough knowledge about it =).

There's always room for improvement, but please, let's talk about your experience writing video card driver software. I'd like to hear your expertise in that area. I don't know a lot about it, but I'll try to keep up with an expert like yourself. And as far as future improvements go, people have purchased equipment with a lot less faith in the outcome, so that is nothing new.
 
And as far as future improvements go, people have purchased equipment with a lot less faith in the outcome, so that is nothing new.
People do lots of really stupid things. How is that an argument? How often does that actually work out well?

Even if AMD comes out with drivers that improve performance - and I have no doubt that they will, over time - it's not like NVIDIA's software engineers are sitting around doing nothing. :p

Don't buy something because it "might" be better later.
 
Sure they can, Fury X is being produced in small-volume niche quantities.
It's sold out everywhere. As far as AMD is concerned Fury X is wildly successful.

It's still an abysmal card though. But when they can only make so few of them (presumably due to HBM) then it doesn't really matter, enough people will buy it anyway.

The Apple Watch was sold out everywhere for a while, and now Apple is not releasing the sales numbers when they usually take great pride in saying "look at how much of our stuff people are buying." If AMD only sold 1,000 of these, it would be considered an epic and monumental disaster. That $650k is probably no more than $200k by the time it hits their coffers (i.e. what they get for the chip). $200k barely pays for a mid-grade engineer for a year. Secondarily, if that is their premier product meant to get people excited about lower-level mainstream offerings... that isn't helping either. Maybe the Fury X's little brother will help, but I'm not holding my breath.
 
People do lots of really stupid things. How is that an argument? How often does that actually work out well?

Even if AMD comes out with drivers that improve performance - and I have no doubt that they will, over time - it's not like NVIDIA's software engineers are sitting around doing nothing. :p

Don't buy something because it "might" be better later.

I don't, but people do... sucks, but it happens. Also, who the hell said anything about Nvidia drivers not improving? I'm talking about the "out of your ass" bullshit claims some people are spouting. Perfect example, aka drivers: AMD can't do it but Nvidia can? The hate has gone completely overboard. Step back and look at it for what it is. It's a good card that is overpriced.
 
I don't, but people do... sucks, but it happens. Also, who the hell said anything about Nvidia drivers not improving? I'm talking about the "out of your ass" bullshit claims some people are spouting. Perfect example, aka drivers: AMD can't do it but Nvidia can? The hate has gone completely overboard. Step back and look at it for what it is. It's a good card that is overpriced.
I think you should re-read what I wrote. Both companies can and regularly do introduce performance improvements with their drivers. I'm not praising one vendor over the other. My point was that while AMD will likely boost Fury X performance with upcoming drivers, NVIDIA will likely do the same for their cards and in the end the performance gap may not close any. So, to that end, you should buy products based on their current performance, not gamble on what it might be 4-6 months from now.
 
If the Fury X had come out and blown the doors off of the Titan X, no one would care that it's water cooled. In fact, I would bet money most would praise AMD for doing so.

Price tag aside, the AIO sure didn't help the 295X2 sell worth a shit when it beat anything NVidia had out at the time, including the original Titan and Titan Black (both air cooled, btw). NVidia's answer? Delay their ultimate halo product for a couple/few weeks after the 295X2's release, then drop the dual-GPU Titan Z (oh look, no narrow-scoped AIO...) complete with an even more asinine price tag than the 780, 780 Ti, Titan, or Titan Black. ...Not that an AIO would have made any difference in market share for the Titan Z because, well, the price alone was enough to dissuade almost everyone. It was a purely classless "fuck you, AMD" move on NVidia's part. Even after all that market drama, the AIO still didn't help the 295X2 sell like mad once fire-sale pricing kicked in... there were still some available (probably still are) even after the Titan Z released not long ago.

That aside, the FuX is not a halo product at this price point. It is priced smack-dab at the upper range of the high-end/flagship segment. Even if its performance were 5% better than the 980 Ti in real-world game testing, it would still be limited by its underwhelming specs compared to its $650 competitor. If it were a halo product, then that leaves the rebranded 290X, in the form of the 390X, as the flagship, which makes it nothing but a big joke and a slap in the face to everyone shopping in the flagship GPU realm.
 
Buying a product because of what it might do down the road instead of what it is capable of right now is really stupid and likely a waste of money.

Tell that to those who bought their 7970s. I couldn't afford one when I wanted it, and honestly couldn't even find one in stock to not afford. But man, it is impressive that 7970 owners have been able to ride them out this far.
 
Tell that to those who bought their 7970s. I couldn't afford one when I wanted it, and honestly couldn't even find one in stock to not afford. But man, it is impressive that 7970 owners have been able to ride them out this far.
The 7970 was an excellent product at release, that's why everyone bought it.
 
That aside, the FuX is not a halo product at this price point. It is priced smack-dab at the upper range of the high-end/flagship segment. Even if its performance were 5% better than the 980 Ti in real-world game testing, it would still be limited by its underwhelming specs compared to its $650 competitor. If it were a halo product, then that leaves the rebranded 290X, in the form of the 390X, as the flagship, which makes it nothing but a big joke and a slap in the face to everyone shopping in the flagship GPU realm.
Okay, so now we are arguing semantics. It's AMD's flagship GPU showcase and the most expensive single-GPU card they've ever released - that's why I said "halo card." I meant it's the top-tier card that AMD sells. I am aware that NVIDIA has cards priced the same and even higher as well. It's still a very small market, and dual-GPU users even more so. At this price range, I really don't think people willing to drop $650 or more are going to be dissuaded by an AIO water cooler, assuming the performance is there to match. The problem right now is that it's just not priced right versus the competition, but as has been pointed out, they're selling through inventory anyway, so that's not going to change.
 
^ I misunderstood. I was not trying to argue semantics.

I tend to categorize GPU tier offerings as entry level, low end, midrange, mid-upper, upper range, and flagship for the mainstream parts, then halo for the product(s) that are there purely for epeen stretching.
 
That's absolute horseshit and we all know it. How many returned their 970s after the 3.5GB segmented VRAM debacle? How many were disappointed with the 500 series? The 400 series? The 6000 series? The 5000 series (specifically calling out the 5700 Ultra :rolleyes: )? Certain driver releases blowing shit up? Hell, how many are disappointed that the 980ti is yet another way-overpriced 28nm 250W offering? Etc, etc, etc...
KitGuru: While retailers and add-in-card vendors do not want to share a lot of details about the number of customers wishing to return their graphics cards following the scandal with incorrect specifications and the inability to use more than 3.5GB of memory, they also confirmed that the number of returned GeForce GTX 970 cards is very low. According to some estimates, the return rate of the GTX 970 because of the aforementioned issues is between one and two percent.

The 480 was six months late. Even then, AMD (then ATI) only managed to barely pull ahead of nVidia in discrete market share. As soon as the 480 released, that trend began to reverse. By the time the 580 launched, nVidia had taken back any market share it had lost.

No horseshit. People just love their nVidia. No matter how poor the product or attitude towards the market place from them.



Nope. WC'd GPUs are too much of a niche market within a niche market. Those who are happy about the FuX having a mandatory WC are in the same minuscule category of the overall enthusiast segment that would be happy if the 980ti had a mandatory WC.
They're just new. There was a time, not too long ago, when all motorcycles were air cooled; these days air cooling is considered a budget feature. Liquid cooling is simply superior. Now that AMD has released a reference product with it, it'll build momentum.



Most people started to bag on AMD "consciously" when Bulldozer came out and disappointed, well, 99.9% of everyone.
As for their GPUs, they release something that can eke out a win against their price-segment competitor's products, only to get an answer back from those competitors in relatively short order. Then it takes AMD a very long time to release a successor. Rinse, repeat.
It was before Bulldozer. Bulldozer is just the rallying cry for AMD haters now. Anytime someone wants to kick sand at AMD they just bring up Bulldozer. Even if it has nothing at all to do with the current topic.



They put (and have kept) themselves in that position by not having competitive-performing products since Intel released their post-Netburst processors, save for some awesome Phenom X6s and recent APUs. But too little, too late, unfortunately.
We're talking AMD vs. nVidia. I know that nVidia fans love to compare nVidia to Intel, but they aren't Intel. Shift AMD's or nVidia's tech to Intel's manufacturing process and the other wouldn't last a year.




Until there are reviews published about air cooled FuX samples and retail stock available for purchase, there is no absolute proof of a non-WC'd FuX at this time. It's vaporware right now.
We're not likely to see an air cooled Fury X. From what we've been told it's going to be reference designs only. We are going to see an air cooled Fury (non X) though. Those are right around the corner. That will stop all this "Fury needs liquid cooling" FUD finally. I don't know how anyone can post anything like that and not realize people are going to see them for the shills/fanbois they are.



The microscopic size of the WC segment compared to the entirety of the enthusiast market.
Again, it's new, not niche.




The 980ti is primarily available with air cooling because it doesn't require WC and can still be overclocked like crazy without it, unlike the FuX, which is only available with WC and can't OC for shit (that may change down the road, but I don't think anyone sane is holding their breath).
Fury's O/C limits have nothing to do with the cooling. How can you even try to say that? The card is running at less than 60°C while reference Titan X/980 Ti cards run in the mid-80s.
 
So don't buy it. Drivers will bring better performance, and OC seems to be OK once the voltage is unlocked, per a couple of other sites. Not everybody wants to game at 4K with the card. People still want the card. This shit is getting old.

Amen. It's the same stuff repeated ad nauseum. They just hope people will get tired of disputing it and it'll become "fact".

Facts get old? Umm, wow.

Rhetoric gets old.
 
The 480 was six months late. Even then, AMD (then ATI) only managed to barely pull ahead of nVidia in discrete market share. As soon as the 480 released, that trend began to reverse. By the time the 580 launched, nVidia had taken back any market share it had lost.

That is incorrect; the GTX 480 series lost market share. It wasn't until the mid-range Fermi cards released that they recovered somewhat. Please see the link below.

No horseshit. People just love their nVidia. No matter how poor the product or attitude towards the market place from them.
hmm nope

http://www.game-debate.com/blog/images/_id1429521203_343178.jpg


They're just new. There was a time, not too long ago, when all motorcycles were air cooled; these days air cooling is considered a budget feature. Liquid cooling is simply superior. Now that AMD has released a reference product with it, it'll build momentum.
It depends on the design of the silicon whether water cooling is a necessity or not. No company gives something away for free (or at no cost to themselves); it burns their margins.


We're not likely to see an air cooled Fury X. From what we've been told it's going to be reference designs only. We are going to see an air cooled Fury (non X) though. Those are right around the corner. That will stop all this "Fury needs liquid cooling" FUD finally. I don't know how anyone can post anything like that and not realize people are going to see them for the shills/fanbois they are.
Yes, and why is that? Is it because AMD likes to cut their margins by giving something away?

Again, it's new, not niche.
OK, it's not a niche; it's a necessity to remain competitive on power usage and performance per watt.

Fury's O/C limits have nothing to do with the cooling. How can you even try to say that? The card is running at less than 60°C while reference Titan X/980 Ti cards run in the mid-80s.
Well, it seems you really don't understand what temperatures do to power usage, or how voltage works either.
 
I agree with you Razor1.
He lacks the basic understanding to comprehend the situation.
 
Why is this still a thread? The review came out; the card falls short of usurping the 980 Ti. End of story.
 
Well, that's the thing: we still don't know whether DX12 was already being worked on at the point DICE was drawing up the specs for Mantle. All we know is that many developers were saying they wanted certain things for DX12.

If I remember correctly, Mantle was first really talked about to partners and the general public in November 2013 by DICE's technical director at a conference, and DX12 was first shown off in a game at GDC in March 2014.

So either Mantle was given to MS, and in six months Microsoft was able to create DX12 and then have a developer modify their game, all while drivers were written in the same time frame... it doesn't sound right.

All this stuff doesn't matter anymore; Mantle as we currently know it is pretty much gone.


And this is what I was talking about: Mantle as we know it now is gone.

http://www.pcgamer.com/amd-halts-mantle-optimizations-for-current-and-future-graphics-cards/
 
And this is what I was talking about: Mantle as we know it now is gone.

http://www.pcgamer.com/amd-halts-mantle-optimizations-for-current-and-future-graphics-cards/

Really back on Mantle?

https://community.amd.com/community/gaming/blog/2015/05/12/on-apis-and-the-future-of-mantle

They gave Mantle to Khronos as the base for Vulkan and are working with Microsoft on DX12.

Did you know that the Khronos Group has selected Mantle to serve as the foundation for Vulkan, a low-overhead PC graphics API that works on multiple OSes and hardware vendors?

...

The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle "1.0" functionality, we suggest that you focus your attention on DirectX® 12 or GLnext.

They've been saying for months now that they aren't working on it much except as an API for testing new features. Nvidia didn't want to jump on board with it, and with DX12 having similar features, why should they spend time developing it? So they gave it to Khronos, since OpenGL was going through a rewrite anyway, and now they don't have to spend extra time (money) developing it.
 
Yep, I like and prefer AMD. However, this is just another promised technology that died on the vine. And before someone tries to come in here and play Nostradamus: no, you did not accurately predict that this would happen, whoever you are. :p:rolleyes:

Mantle lives on in spirit. Khronos saw the writing on the wall and jumped - Vulkan is their gift to us. Microsoft saw the writing on the wall and jumped - DX12 is their gift to us. The development and introduction of Mantle single-handedly steered the future of these APIs, and AMD is to be praised for it.
 
Well, it seems you really don't understand what temperatures do to power usage, or how voltage works either.

Seems like you either love to move goal posts or suffer badly in reading comprehension. I have better things to do than have to post 6 responses to your replies that have little or nothing to do with what I posted. Enjoy! :)
 
Seems like you either love to move goal posts or suffer badly in reading comprehension. I have better things to do than have to post 6 responses to your replies that have little or nothing to do with what I posted. Enjoy! :)


I didn't move it at all; you just don't know how these things work. Oops, did you see how the Fury, with parts disabled and a lower frequency, uses the same power as the Fury X? It just shows that if the Fury X didn't have water cooling it would use much more power than the 275 W TBP, probably north of 300 watts.

Being blind to some things while swallowing all the marketing BS that AMD spewed out is just doing yourself a disservice. I have stated since the launch of the Fury X that water cooling on it was a necessity for it to be competitive in the market it's in, against the 980 Ti, and everything thus far has proven that. NO COMPANY would cut margins to give away things like a more expensive cooling solution for free. And if we look at AMD and the trouble they are currently in, they would have tried everything to cut manufacturing costs and increase margins if they could have.

Overclocking is not just about temperature; if it were, anyone could be the next Kingpin. It's about temperature, voltages, power usage, amperage, and how the parts of the board and GPU react to all of these variables.

And don't forget that AMD was the one that started the whole performance-per-watt thing with the 4xxx and 5xxx series of cards, where they were dominant; now the tables have turned. It goes to show which company has pushed the engineering envelope on many fronts, not just performance and features. If you have better things to do than reply, don't post at all; that would make the thread a lot cleaner without inaccurate posts.
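To put some rough numbers on that temperature/voltage/clock point, here is a back-of-the-envelope sketch (Python; every figure in it is made up for illustration and is not an actual Fiji/Fury X measurement) using the usual dynamic-power relation P ≈ C·V²·f plus a leakage term that grows with die temperature:

```python
# Rough illustration of why OC headroom depends on more than temperature.
# All numbers are hypothetical; they are not Fiji/Fury X measurements.

def gpu_power(dynamic_w, leakage_w, v_ratio, f_ratio, temp_c, base_temp_c=60.0):
    """Estimate board power after an overclock.

    Dynamic power scales roughly with V^2 * f; leakage is modeled as growing
    ~2% per degree C above the baseline temperature (a hand-wavy placeholder).
    """
    dynamic = dynamic_w * (v_ratio ** 2) * f_ratio
    leakage = leakage_w * (1.0 + 0.02 * (temp_c - base_temp_c))
    return dynamic + leakage

# Stock-ish card: 220 W dynamic + 55 W leakage at 60 C (hypothetical split of a 275 W TBP).
print(gpu_power(220, 55, v_ratio=1.00, f_ratio=1.00, temp_c=60))  # 275.0
# +10% clocks with +6% voltage, die held at 65 C by the AIO:
print(gpu_power(220, 55, v_ratio=1.06, f_ratio=1.10, temp_c=65))  # ~332
# Same overclock on a hotter, air-cooled die at 85 C:
print(gpu_power(220, 55, v_ratio=1.06, f_ratio=1.10, temp_c=85))  # ~354
```

Same clocks and voltage, just a hotter die, and leakage alone adds roughly another 20 W in this toy model; the cooler matters, but voltage and clocks do most of the damage.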
 
Really back on Mantle?

https://community.amd.com/community/gaming/blog/2015/05/12/on-apis-and-the-future-of-mantle

They gave Mantle to Khronos as the base for Vulkan and are working with Microsoft on DX12.



They've been saying for months now that they aren't working on it much except as an API for testing new features. Nvidia didn't want to jump on board with it, and with DX12 having similar features, why should they spend time developing it? So they gave it to Khronos, since OpenGL was going through a rewrite anyway, and now they don't have to spend extra time (money) developing it.

Not my intention to rehash it, it was just the first publication of AMD not pushing Mantle 1.0 or any iterations of Mantle anymore to general public for PC development.
 
Tell that to those who bought their 7970s. I couldn't afford one when I wanted it, and honestly couldn't even find one in stock to not afford. But man, it is impressive that 7970 owners have been able to ride them out this far.

Indeed, I am surprised at just how comfortable my 7970 Lightning is with GTA5 @ 5760x1200.
Probably around 30 fps, and pretty steady at that.
 
I didn't move it at all; you just don't know how these things work. Oops, did you see how the Fury, with parts disabled and a lower frequency, uses the same power as the Fury X? It just shows that if the Fury X didn't have water cooling it would use much more power than the 275 W TBP, probably north of 300 watts.

This is an incorrect assumption. Look at Intel chips: i3, i5, and i7 parts at similar clock speeds yield similar TDPs. The fact is the Fury X and the Fury are the same chip with execution units disabled, similar to how i3, i5, and i7 parts come from a similar die with execution units disabled. But they contain a similar number of circuits overall, even if those circuits aren't executing, and that is what eats your power (when they are running full tilt).

I will grant you power efficiency drops the warmer the GPU/Memory gets.
 
This is an incorrect assumption. Look at Intel chips: i3, i5, and i7 parts at similar clock speeds yield similar TDPs. The fact is the Fury X and the Fury are the same chip with execution units disabled, similar to how i3, i5, and i7 parts come from a similar die with execution units disabled. But they contain a similar number of circuits overall, even if those circuits aren't executing, and that is what eats your power (when they are running full tilt).

I will grant you power efficiency drops the warmer the GPU/Memory gets.

Hmm, do you want to write out the thermal envelopes and specs for each of those Intel chips, or should I?

All Gen 4 i3s are from 15 watts to 54 watts, depending on frequency and number of cores:
http://ark.intel.com/compare/?ids=8...608,76609,75110,75989,75107,75988,77480,77769

All i5s: 25 watts to 84 watts
http://ark.intel.com/compare/?ids=7...608,76609,75110,75989,75107,75988,77480,77769

All i7s: 35 to 115 watts
http://ark.intel.com/compare/?ids=7...608,76609,75110,75989,75107,75988,77480,77769

All of these chips vary in frequency and core count, and their wattage varies based on both.

They are not the same. The lower-power CPUs are clocked much lower and have fewer cores. I'm using Gen 4 because you mentioned i3 to i7.

Granted, the dead silicon does use up some power, but it's negligible compared to the amount of power saved by shutting off those parts.

Now, die size:

They are all about the same, roughly 180 mm².

Also, the temperature of the CPUs changes based on the amount of power they draw.
 
Thought I would bring this back. I predicted we would quickly need 8 GB of VRAM just to play games at 1080p. Well, I guess that will come even sooner. TPU shows Black Ops 3 using 8.4 GB of VRAM at 1440p. Since our GB-per-megapixel figure is about .4 GB max, that means 1080p will still need nearly 8 GB of VRAM.
http://www.techpowerup.com/217308/b...-and-gtx-980-ti-not-enough.html?cp=3#comments

.

Take for example Tomb Raider, which uses 1.5 GB of VRAM at 1080p and 3.1 GB at 4K.
Now use 2.07 megapixels for 1080p and 8.4 for 4K.
Let's call 'x' the amount of VRAM needed per megapixel
and 'y' the VRAM overhead I mentioned earlier that is not affected by resolution.

Starting with 4K:
8.4x + y = 3.1 .... and now 1080p:
2.07x + y = 1.5 .... using substitution,
8.4x + 1.5 - 2.07x = 3.1 ... reduce to x and
x = .253, or in other words .253 GB needed for each megapixel.
Now we can solve for y:
2.07x + y = 1.5
y = 1.5 - 2.07x .... replace x with .253 and
y = .977

Now let's test with 1440p, or 3.7 megapixels, which was said to use 1.94 GB:

3.7x + y = 1.91 ---> pretty damn close!!

Just for fun, I tested again, first using ME:SoM, and I came up with
x = .1 GB and y = 4.56, where again x is GB/megapixel and y is the "bullshit overhead".
I tested at 1440p and got 4.93 GB vs. the 4.97 GB posted by tweaktown.

Then Metro: Last Light gave me:
x = .115 and y = 1.06.
Testing the formula at 1440p gave me 1.49 GB vs. the 1.46 GB that they recorded.

What was really fascinating is that the VRAM hog ME:SoM actually required LESS
VRAM/megapixel than Tomb Raider!!!

Does anybody else see what a crock of shit this all is?? So in a couple of years, when you
run a game at 1080p, you will need 8.2 GB of VRAM simply because the developer decided
to set this y-factor overhead at 8.0 GB.
Good time to invest in Samsung, if you ask me.
.

Last one, I promise: Far Cry 4. This time I used 8.3 MP for 4K and 2.07 for 1080p (thanks rumartinez).
x came out to .43 GB/megapixel (the highest so far!) with y being 2.17 GB.

Again testing on 1440p:
3.7x + 2.17 = 3.76 GB wait for it..... tweaktown reported 3.77 GB :)

You can't just make up a y factor when using 3 points - in this case 1080p, 1440p and 4k.

I decided to go back in time to cement my case.

It took a lot of digging, but here is a link to VRAM usage in Crysis 1:
http://hardforum.com/showthread.php?t=1456645

defaultluser reports .31 GB @ .48 MP, .36 GB @ .79 MP, .45 GB @ 1.31 MP,
and .575 GB @ 1.9 megapixels.

using 1.9x + y = .575 and .48x + y = .310
.....using substitution....

x = .187. Yep, that's right: still roughly the same GB per megapixel as today.
y = .22 GB, which is much more reasonable. This was with 4xAA, btw.
Checking with .79 megapixels or 1024x768 .....
.79x + y = .368 compared to .36 GB that they recorded
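
If anyone wants to check the arithmetic, here's a quick sketch (Python; the data points are the ones quoted above, and the linear VRAM ≈ x·megapixels + y model is just my simplification) that solves for x and y from two samples and then predicts a third resolution:

```python
# Quick check of the "VRAM = x * megapixels + y" fits from the posts above.
# The data points are the ones quoted in this thread; the linear model itself
# is just a simplification, not anything taken from the game engines.

def fit_xy(mp1, vram1, mp2, vram2):
    """Solve vram = x * mp + y from two (megapixels, GB) samples."""
    x = (vram2 - vram1) / (mp2 - mp1)  # GB per megapixel
    y = vram1 - x * mp1                # resolution-independent overhead
    return x, y

def predict(x, y, mp):
    return x * mp + y

# Tomb Raider: 1.5 GB @ 2.07 MP (1080p), 3.1 GB @ 8.4 MP (4K)
x, y = fit_xy(2.07, 1.5, 8.4, 3.1)
print(round(x, 3), round(y, 3))        # ~0.253 GB/MP, ~0.977 GB overhead
print(round(predict(x, y, 3.7), 2))    # ~1.91 GB at 1440p (reported: 1.94)

# Crysis 1 (defaultluser's numbers): 0.31 GB @ 0.48 MP, 0.575 GB @ 1.9 MP
x, y = fit_xy(0.48, 0.31, 1.9, 0.575)
print(round(x, 3), round(y, 3))        # ~0.187 GB/MP, ~0.22 GB overhead
print(round(predict(x, y, 0.79), 3))   # ~0.368 GB at 1024x768 (reported: 0.36)
```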
 
I think you missed the last sentence.

TPU said:
What's even more interesting is its video memory behavior. The GTX 980 Ti, with its 6 GB video memory, was developing a noticeable stutter. This stutter disappeared on the GTX TITAN X, with its 12 GB video memory, in which memory load shot up from maxed out 6 GB on the GTX 980 Ti, to 8.4 GB on the video memory. What's more, system memory usage dropped with the GTX TITAN X, down to 8.3 GB.

On Steam Forums, users report performance issues that don't necessarily point at low FPS (frames per second), but stuttering, especially at high settings. Perhaps the game needs better memory management. Once we installed 16 GB RAM in the system, the game ran buttery-smooth with our GTX 980 Ti.

So yet another shitty COD game that has memory management issues. Who knew?
 