HardOCP looking into the 970 3.5GB issue?

IMO, if Nvidia and reviewers had disclosed all of this at launch, I bet hardly anyone would have cared or passed on it. It was a damn nice card for 330 bucks at launch, and that is what most people looked at. Hell, if it had been advertised as 3.5GB from the beginning, people would be saying how stupid it is to worry about having the full 4GB of the 980.

Yeah, if they had been honest at launch instead of lying, I am sure people would not have cared.
I mean, being honest would only have given consumers the information they ACTUALLY needed to make an educated purchase of expensive graphics hardware. It's not like inflating the numbers on the amount of VRAM would ever matter to people buying a GPU, right?

Will MSI Afterburner (or similar programs) only report a max of 3.5GB VRAM usage? Or will that separate allocation of 512MB show up in monitoring programs?
I doubt it; that would have made this entire situation easier to catch from the get-go. It reports what Nvidia led everyone to believe was a true 4GB.
 
But they didn't. They instead lied about the specifications for months. And what you are saying is that most people who bought it wouldn't mind knowing they were lied to. I find that difficult to believe, but clearly you've established that I'm an outlier.
You seem to have trouble understanding a hypothetical situation. Of course Nvidia was shady by not having all this info out there.

AGAIN though, if it had been made clear at launch, I don't think it would have stopped many people from buying the card.

Not a single review of the 970 or 970 SLI had any issues, so people would have just looked at it like the 470/570, where the card gave up some memory compared to the flagship.
 
More like thinking you bought a V8: you look under the hood and it looks like a V8, Chevy told you it was a V8, but when you're doing 90 miles an hour, two cylinders stop firing.

No, that's not accurate either. The GPU isn't slowing down or disabling units dynamically, and it's not based on speed... It's based on how much of the memory capacity is in use.

It's more like a washer where, if you fill it up beyond the top 1/8 of the tub, the RPMs on one cycle slow down to 1/7 speed, but your clothes come out only 5% less clean. And the product you got was made significantly cheaper by the use of the tech that causes the problem.

Oh the horror :rolleyes:

You don't care about washer RPMs or how the sausage gets made; what you care about is that the washer cleans as you expected based on reviews, or that the sausage tastes as good as it should and comes in at the right price.
 
AGAIN though, if it had been made clear at launch, I don't think it would have stopped many people from buying the card.

Several people on this forum have already contested that, and I am going to join them. If you look at my post history, you will notice I was considering getting a GTX 970. My XFX 7970 broke and XFX support hung me out to dry with no help whatsoever. The 7970 is a 3GB card and I was looking to upgrade to a 4GB card at minimum. Had I known the GTX 970 was effectively a 3.5GB card, it would never even have been a consideration.

I do not know how you can go around parroting that having less memory than advertised would never have stopped consumers from making the purchase. It absolutely would have made people at least reconsider this purchase.
 
Several people on this forum have already contested that, and I am going to join them. If you look at my post history, you will notice I was considering getting a GTX 970. My XFX 7970 broke and XFX support hung me out to dry with no help whatsoever. The 7970 is a 3GB card and I was looking to upgrade to a 4GB card at minimum. Had I known the GTX 970 was effectively a 3.5GB card, it would never even have been a consideration.

I do not know how you can go around parroting that having less memory than advertised would never have stopped consumers from making the purchase. It absolutely would have made people at least reconsider this purchase.

Because we're the 1%, apparently.
 
It's more like a washer where, if you fill it up beyond the top 1/8 of the tub, the overall RPMs slow down by 5%.

What is the speed of the first 3.5GB of memory on the GPU, in GB/s?
What is the speed of the last 0.5GB of memory on the GPU, in GB/s?
Can you really say it's a simple 5% decrease when it's dropping from around 200GB/s to around 30GB/s?
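For anyone who wants to check that math: peak GDDR5 bandwidth is just bus width times data rate divided by eight. A quick back-of-the-envelope sketch in Python, assuming the 970's 7Gbps GDDR5 and 32-bit memory controllers (which is what the published specs say):

Code:
# Back-of-the-envelope GDDR5 bandwidth: bus_width_bits * data_rate_gbps / 8
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# 3.5GB segment: 7 of 8 32-bit controllers = 224-bit effective bus
print(bandwidth_gb_s(224, 7.0))  # 196.0 GB/s
# 0.5GB segment: a single 32-bit controller
print(bandwidth_gb_s(32, 7.0))   # 28.0 GB/s
# The advertised figure assumed the full 256-bit bus
print(bandwidth_gb_s(256, 7.0))  # 224.0 GB/s

So no, it's nowhere near a flat 5%: the slow segment has roughly 1/7 the bandwidth of the fast one.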
 
Will MSI Afterburner (or similar programs) only report a max of 3.5GB VRAM usage? Or will that separate allocation of 512MB show up in monitoring programs?

From the PCPer report: "Some applications are only aware of the first 'pool' of memory and may only ever show up to 3.5GB in use for a game. Other applications, including MSI Afterburner as an example, do properly report total memory usage of up to 4GB. Because of the unique allocation of memory in the system, the OS and driver and monitoring application may not always be on the same page."
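If you'd rather check for yourself than trust an overlay, here's a minimal sketch that reads VRAM usage through NVML using the pynvml package (an assumption on my part that it's installed; per the quote above, whether any given tool sees past 3.5GB depends on how it asks the driver):

Code:
# Minimal sketch: read VRAM usage through NVML via pynvml (assumed installed).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)  # values are in bytes
print("total: %d MiB" % (info.total // 2**20))
print("used:  %d MiB" % (info.used // 2**20))
print("free:  %d MiB" % (info.free // 2**20))
pynvml.nvmlShutdown()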
 
Several people on this forum have already contested that, and I am going to join them. If you look at my post history, you will notice I was considering getting a GTX 970. My XFX 7970 broke and XFX support hung me out to dry with no help whatsoever. The 7970 is a 3GB card and I was looking to upgrade to a 4GB card at minimum. Had I known the GTX 970 was effectively a 3.5GB card, it would never even have been a consideration.

I do not know how you can go around parroting that having less memory than advertised would never have stopped consumers from making the purchase. It absolutely would have made people at least reconsider this purchase.

It doesn't have less memory than advertised. Wow, some people live in an alternate reality.
 
What is the speed of the first 3.5GB of memory on the GPU, in GB/s?
What is the speed of the last 0.5GB of memory on the GPU, in GB/s?
Can you really say it's a simple 5% decrease when it's dropping from around 200GB/s to around 30GB/s?

You don't buy a GPU for clocks, you buy it for performance, which is thoroughly documented across countless reviews. It's like freaking out because NVIDIA doesn't run their GPUs at full clocks all the time and instead scales them based on usage. You paid for full clocks but you're not getting them!!!! Panic!!!!!

The overall output from a washer's capacity is clean clothes, so let's say in that case your clothes are up to 5% less clean.

And yes, I should've said that if you load up the top 1/8 of the tub, RPMs drop to 1/7 speed in one cycle, but your clothes are only 5% less clean at the end. Oh, and you paid less for the washer than you otherwise would've, because the technology that causes the problem enabled them to make a cheaper product.
 
Several people on this forum have already contested that, and I am going to join them. If you look at my post history, you will notice I was considering getting a GTX 970. My XFX 7970 broke and XFX support hung me out to dry with no help whatsoever. The 7970 is a 3GB card and I was looking to upgrade to a 4GB card at minimum. Had I known the GTX 970 was effectively a 3.5GB card, it would never even have been a consideration.

I do not know how you can go around parroting that having less memory than advertised would never have stopped consumers from making the purchase. It absolutely would have made people at least reconsider this purchase.
How does that make any sense? I said if it had been advertised as a 3.5GB card.

I think some of you are just full of crap, acting like you would not have bought the 970 if it was listed as 3.5GB at launch. Again, people bought the crap out of 470/570 cards. If those had been advertised as 1.5GB cards and you found out later they were really 1.25GB, some would be making the excuse that if they had known it was only 1.25GB they would not have bought, which is mostly nonsense.

MOST people buy cards based on how they perform for the price. The 970 is still a fast card and MUCH better value than the 980. Knowing up front that the card really only had 56 ROPs and 3.5GB of VRAM would have changed none of that.
 
It doesn't have less memory than advertised. Wow, some people live in an alternate reality.

Why don't you get back to this reality and answer the question I asked you earlier?

Enlighten me as to why anyone should be OK with having two separate memory speeds for a single 4GB of memory on their GPU.

Inform me as to when this has ever been OK in the GPU industry.

Preach to me the benefits of having slower VRAM on a GPU, and how it's a benefit to the consumer trying to build a 4K-ready or future-proof gaming rig.

Explain to me why Nvidia wouldn't correct the misinformation for 4 months.

Teach me about the benefits of having a 3.5GB GPU versus a 4GB GPU.
 
You don't buy a GPU for clocks, you buy it for performance, which is thoroughly documented across countless reviews. It's like freaking out because NVIDIA doesn't run their GPUs at full clocks all the time and instead scales them based on usage. You paid for full clocks but you're not getting them!!!! Panic!!!!!

The overall output from a washer's capacity is clean clothes, so let's say in that case your clothes are up to 5% less clean.

And yes, I should've said that if you load up the top 1/8 of the tub, RPMs drop to 1/7 speed in one cycle, but your clothes are only 5% less clean at the end. Oh, and you paid less for the washer than you otherwise would've, because the technology that causes the problem enabled them to make a cheaper product.

Your washer analogy is flawed, because once the 970 dips into the slow 500MB segment the games hitch and stutter and are no longer playable. So if you wanted to make an accurate analogy, it would be more like once you load up to the top 1/8 of the tub, your clothes are 95% clean, but there's this 5% unclean dirt spot in the most visible place on your shirt, so you either have to wash again or not wear that shirt because it'd just look ridiculous.

How does that make any sense? I said if it had been advertised as a 3.5GB card.

I think some of you are just full of crap, acting like you would not have bought the 970 if it was listed as 3.5GB at launch. Again, people bought the crap out of 470/570 cards. If those had been advertised as 1.5GB cards and you found out later they were really 1.25GB, some would be making the excuse that if they had known it was only 1.25GB they would not have bought, which is mostly nonsense.

MOST people buy cards based on how they perform for the price. The 970 is still a fast card and MUCH better value than the 980. Knowing up front that the card really only had 56 ROPs and 3.5GB of VRAM would have changed none of that.

Actually, given how PC game optimization is getting shittier by the day and VRAM requirements are bloating like crazy, the amount of VRAM would absolutely have made me think twice before jumping on the 970.

Also, just because we have a different opinion from yours doesn't make us "full of crap". We're arguing opinions here, ffs :rolleyes:
 
Your washer analogy is flawed, because once the 970 dips into the slow 500MB segment the games hitch and stutter and are no longer playable. So if you wanted to make an accurate analogy, it would be more like once you load up to the top 1/8 of the tub, your clothes are 95% clean, but there's this 5% unclean dirt spot in the most visible place on your shirt, so you either have to wash again or not wear that shirt because it'd just look ridiculous.

No it's not; there is zero proof of this other than a bunch of people experiencing a placebo effect. Where was this hitching and stuttering in the reviews? Where was it for the countless users over the last 4 months?
 
And yes, I should've said that if you load up the top 1/8 of the tub, RPMs drop to 1/7 speed in one cycle, but your clothes are only 5% less clean at the end.

No, it's as if the first 7/8 of the tub spun at full RPM and the top 1/8 spun at 1/7 the speed. This analogy sucks because I don't get how a video card is like a clothes washer.
 
No it's not; there is zero proof of this other than a bunch of people experiencing a placebo effect. Where was this hitching and stuttering in the reviews? Where was it for the countless users over the last 4 months?

It doesn't matter that the card performed well in some reviews. Most reviews only publish average frame rates and don't take into account things like stuttering, which can break immersion without dramatically lowering FPS (something not easily quantified on a bar graph). And reviews from when the GTX 970 came out didn't include newer games like Far Cry 4 or Shadow of Mordor, the games people are now seeing game-breaking stuttering in with the GTX 970.

Nvidia falsely advertised the specs of the GTX 970 for months. The card cannot access all of the advertised VRAM at the advertised bus speed, and people are not getting the advertised memory bandwidth.

You can try to spin that however you want, but you're wrong.
 
No it's not; there is zero proof of this other than a bunch of people experiencing a placebo effect. Where was this hitching and stuttering in the reviews? Where was it for the countless users over the last 4 months?

Do you seriously not understand the part where THE GAME HAS TO ACTUALLY NEED >3.5GB of VRAM before this hitching occurs from hitting the VRAM wall?

If the games that reviews tested did not meet that requirement, obviously they won't hitch and stutter, at least not because of VRAM issues. Does that make it clear?

Also, average FPS means fuck all when it comes to stuttering; you need frametimes to quantify it beyond a vague "it doesn't feel smooth" statement. Far Cry 4 is a perfect example of this.
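To make that concrete, here's a toy Python sketch; the frametimes are invented, standing in for a FRAPS/FCAT-style capture log, but they show how average FPS can look healthy while the 99th-percentile frametime screams stutter:

Code:
import statistics

# 99 smooth ~60 FPS frames plus one 120 ms hitch (made-up numbers)
frametimes_ms = [16.7] * 99 + [120.0]

avg_fps = 1000 / statistics.mean(frametimes_ms)
p99_ms = sorted(frametimes_ms)[int(len(frametimes_ms) * 0.99)]

print("average FPS: %.1f" % avg_fps)                   # ~56.4, looks fine on a bar graph
print("99th percentile frametime: %.0f ms" % p99_ms)   # 120, the visible hitch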
 
You don't buy a GPU for clocks, you buy it for performance. It's like freaking out because NVIDIA doesn't run their GPUs at full clocks all the time and instead scales them based on usage. You paid for full clocks but you're not getting them!!!! Panic!!!!!

The performance in reviews in 2014 is not indicative of the performance of the GPU in the long run. Some of us try to make purchases that will give us the most longevity out of our hard-earned money. When I purchased my 7970, its 3GB of VRAM was good enough for 2012, but it's 2015 now and high-end GPU buyers expect 4GB minimum.

How does that make any sense? I said if it had been advertised as a 3.5GB card.

I think some of you are just full of crap, acting like you would not have bought the 970 if it was listed as 3.5GB at launch. Again, people bought the crap out of 470/570 cards. If those had been advertised as 1.5GB cards and you found out later they were really 1.25GB, some would be making the excuse that if they had known it was only 1.25GB they would not have bought, which is mostly nonsense.

MOST people buy cards based on how they perform for the price. The 970 is still a fast card and MUCH better value than the 980. Knowing up front that the card really only had 56 ROPs and 3.5GB of VRAM would have changed none of that.

It makes sense because I need 4GB, and knowing that a card has less than 4GB means it's not even a consideration for me. Being honest and telling people it was a 3.5GB card would have saved their reputation, because they wouldn't have lied. People would have purchased the card knowing EXACTLY what they were getting.

I am not full of crap for not wanting to buy this GPU now that it's been confirmed that it cannot effectively utilize its full memory pool. Are you blind to 4K resolutions and multi-monitor setups? Are you blind to the fact that games are going to require more VRAM each year as resolutions and texture sizes increase?

People make a decision based on how cards perform in both the short and long term. The amount of video RAM is an indicator of the longevity of the card. More video RAM gives a GPU more headroom for future games or monitor upgrades.


 
Wow, there is a lot of repetition in this thread...

Summary: yes, it was not good that Nvidia did that, but it is still a good card for the money.
 
Enlighten me as to why anyone should be OK with having two separate memory speeds for a single 4GB of memory on their GPU.

Inform me as to when this has ever been OK in the GPU industry.

Two separate memory speeds was okay in the 2GB GTX 660 Ti, one of the more beloved Nvidia video cards for its price/performance ratio.
 
The L2 cache discrepancy is really the only one that would have any impact, since the ROP difference doesn't mean jack in this case. You can get the L2 cache value by querying the card's information, so that info was available to reviewers and consumers from day one and wasn't being hidden.

Technically, yes, if you have the correct tools.
But...
Ryan Smith said:
The kicker is that even if you know what registers to poke, all 4 master ROP/MC partitions are still online. So if that's what you're reading it would appear to be a fully enabled GPU.
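For what it's worth, the L2 value mentioned above is exposed through CUDA's device attributes. A hedged sketch with pycuda (assuming it's installed, and assuming the attribute reflects the real hardware configuration rather than those registers):

Code:
# Query the reported L2 cache size via CUDA device attributes.
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)
l2_bytes = dev.get_attribute(cuda.device_attribute.L2_CACHE_SIZE)
# A fully enabled GM204 (GTX 980) has 2048 KB of L2; the GTX 970 has 1792 KB.
print("L2 cache: %d KB" % (l2_bytes // 1024))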
 
Anyone who hasn't read the AnandTech article on the issue should really do so immediately... really well written... explains the impact and architecture perfectly... I feel totally confident in my 970 after reading it...

http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation

"The use of heuristics to determine which resources to allocate to which memory segment, though the correct solution in this case, means that the real world performance impact is going to vary on a game-by-game basis...If NVIDIA’s heuristics and driver team do their job correctly, then the performance impact versus a theoretical single-segment 4GB card should only be a few percent...Even in cases where the entire 4GB space is filled with in-use resources, picking resources that don’t need to be accessed frequently can sufficiently hide the lack of bandwidth from the 512MB segment

the GTX 970 should still be considered as great a card now as it was at launch. In which case what has ultimately changed today is not the GTX 970, but rather our perception of it
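To illustrate what that heuristic roughly amounts to, here's a toy model; this is NOT NVIDIA's actual driver logic, and all the resource names and numbers are invented:

Code:
# Toy segmented-allocation model: hottest resources go to the fast 3.5GB
# pool, cold ones spill to the slow 512MB pool to hide its lack of bandwidth.
FAST_MIB = 3.5 * 1024
SLOW_MIB = 0.5 * 1024

def place(resources):
    """resources: (name, size_mib, access_freq) tuples."""
    fast, slow = [], []
    fast_used = slow_used = 0.0
    for name, size, freq in sorted(resources, key=lambda r: -r[2]):
        if fast_used + size <= FAST_MIB:
            fast.append(name)
            fast_used += size
        elif slow_used + size <= SLOW_MIB:
            slow.append(name)
            slow_used += size
    return fast, slow

fast, slow = place([("framebuffer", 64, 1000), ("hot_textures", 3200, 900),
                    ("shadow_maps", 256, 500), ("streaming_pool", 400, 10)])
print("fast 3.5GB segment:", fast)  # touched every frame
print("slow 0.5GB segment:", slow)  # rarely touched, eats the penalty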
 
Do you seriously not understand the part where THE GAME HAS TO ACTUALLY NEED >3.5GB of VRAM before this hitching occurs from hitting the VRAM wall?

If the games that reviews tested did not meet that requirement, obviously they won't hitch and stutter, at least not because of VRAM issues. Does that make it clear?

Also, average FPS means fuck all when it comes to stuttering; you need frametimes to quantify it beyond a vague "it doesn't feel smooth" statement. Far Cry 4 is a perfect example of this.

Obviously I understand this. But I know enough about the system architecture to tell you it simply doesn't happen. There is no frame-to-frame variance caused by having data in slower memory. How could you possibly think that? It will read that data the same way every frame, with the same performance penalty. It's a consistent penalty, not stutter-inducing.

Think about how textures are read per frame and you might start to understand my point.

Also, even on a normal memory configuration, the application has to *need* 3.5GB to have that much allocated. In Windows and DX, creating a resource doesn't actually stick anything in video memory. Once you have used it, it will be in video memory.

Lastly, you mistakenly believe that reviews and users at launch were incapable of hitting >3.5GB. People have been running 4K and everything below, texture packs, high MSAA, and DSR. You'd have to be high to think that a general stutter problem coming from this would just be showing up now.
 
Wrong. You can access all 4GB just like on any other GPU. Plain and simple. You're rejecting reality. AnandTech and PCPer confirmed it.

Not at the expected and advertised performance and speed.
And while possibly a red herring, there are users stating that their max VRAM allocation is 3.5GB.
 
Obviously I understand this. But I know enough about the system architecture to tell you it simply doesn't happen. There is no frame-to-frame variance caused by having data in slower memory. How could you possibly think that? It will read that data the same way every frame, with the same performance penalty. It's a consistent penalty, not stutter-inducing.

Think about how textures are read per frame and you might start to understand my point.

Also, even on a normal memory configuration, the application has to *need* 3.5GB to have that much allocated. In Windows and DX, creating a resource doesn't actually stick anything in video memory. Once you have used it, it will be in video memory.

Lastly, you mistakenly believe that reviews and users at launch were incapable of hitting >3.5GB. People have been running 4K and everything below, texture packs, high MSAA, and DSR. You'd have to be high to think that a general stutter problem coming from this would just be showing up now.

I think misterbobby would beg to differ on this.
 
Two separate memory speeds was okay in the 2GB GTX 660 Ti, one of the more beloved Nvidia video cards for its price/performance ratio.

OK to whom? I do not believe anyone was cheering Nvidia for that decision; in fact, the only reason it was okay is because it was divulged and they didn't lie about it.
Had Nvidia come clean about the GTX 970, perhaps it could have been a non-issue like it was for the GTX 660 Ti.

AnandTech:
The best case scenario is always going to be that the entire 192bit bus is in use by interleaving a memory operation across all 3 controllers, giving the card 144GB/sec of memory bandwidth (192bit * 6GHz / 8). But that can only be done at up to 1.5GB of memory; the final 512MB of memory is attached to a single memory controller. This invokes the worst case scenario, where only 1 64-bit memory controller is in use and thereby reducing memory bandwidth to a much more modest 48GB/sec.

Tom's Hardware:
Our first comparison benchmark doesn't employ any MSAA, but instead is run with FXAA enabled. The finishing order goes: GeForce GTX 670, Radeon HD 7950, GeForce GTX 660 Ti, and Radeon HD 7870.
[chart: 1920x1080, FXAA]

The results start changing as soon as we apply 2x MSAA. The GeForce GTX 670 and Radeon HD 7950 are still on top, but the Radeon HD 7870 now beats Nvidia's GeForce GTX 660 Ti.
[chart: 1920x1080, 4x AA]

 
And I thought only Kepler users were f*cked (780 ~= 7970GE/280X in recent AAA titles o_O )
Welcome to the club, 970 Maxwellers :)
 
I've been thinking about the 970 on and off since launch. Not anymore. I'm just gonna have to wait on AMD.
Oh well, I'd like to try FreeSync out anyway. #maybenexttime
 
Now I feel misled by Nvidia. Damn, I spent $400 on a Gigabyte G1 GTX 970, a GPU that can only use 3.5GB of VRAM?!


Gundamit people, you still HAVE 4 gigs of VRAM, it DID NOT just disappear from your 970s.

The "problem" is that Nvidia "forgot/lied" about specs indicating that accessing the last 500megs is technically "slower" then the first 3.5gigs vs the 980 which has all SMM pathways enabled as a result of a "higher quality" die from the manufacturing process :rolleyes:

Or did everyone conveniently "forget" that all 970 dies were "flawed/imperfect" 980 dies and were rebadged for 970 model usage?

Up until last week, not ONE OF YOU was bitching about a sudden "slowdown" in your games with the 970. Now it's the placebo effect in full action.
 
That slower section of RAM is dramatically slower, to the point of crippling this into a 3.5GB card for all intents and purposes.
GPU dies being flawed and rebadged is normal, but typically they explain the architecture in full and give out the correct specifications for the card as well.
Segmented memory is a nice way of saying you have one functional section of RAM and one section that's so freaking slow it's better to just not use it.
 
The "problem" is that Nvidia "forgot/lied" about specs indicating that accessing the last 500megs is technically "slower" then the first 3.5gigs vs the 980 which has all SMM pathways enabled as a result of a "higher quality" die from the manufacturing process :rolleyes:

Or did everyone convinently "forget" that all 970 dies were "flawed/imperfect" 980 dies and were rebadged for 970 model usage.

That's not 100% correct. It isn't the SMM pathways... It is the L2/MC pathway to/from the Crossbar.

Just because a "GTX 970" has disabled units doesn't mean they were "flawed/imperfect." Depending on yields and bins (power/speed characteristics), there could certainly be fully functional GPUs that remain uncrippled until Nvidia disables the units.
There have also been times of supply constraint when higher, fully functional bins were cut down and sold as lower bins.
 
That slower section of RAM is dramatically slower, to the point of crippling this into a 3.5GB card for all intents and purposes.
GPU dies being flawed and rebadged is normal, but typically they explain the architecture in full and give out the correct specifications for the card as well.
Segmented memory is a nice way of saying you have one functional section of RAM and one section that's so freaking slow it's better to just not use it.
The worst thing is, even the 3.5GB segment is slower than previously advertised: 196GB/s, when every review at release told me it was 224GB/s. If I had known this back then, I might have waited and gone for the 980. Well, Nvidia got my money now, so mission accomplished, I guess.

I'm still happy with my 970, it's a fast card. But my suspicion is the resale value has gone down considerably. I will no longer be able to sell it as a 4GB card. :mad:
 
Looks like the 970 SLI plans I had have gone out the window. Now I have to spend more for 980 SLI.
 
Gundamit people, you still HAVE 4 gigs of VRAM, it DID NOT just disappear from your 970s.

The "problem" is that Nvidia "forgot/lied" about specs indicating that accessing the last 500 megs is technically "slower" than the first 3.5 gigs, versus the 980, which has all SMM pathways enabled as a result of a "higher quality" die from the manufacturing process :rolleyes:

Or did everyone conveniently "forget" that all 970 dies were "flawed/imperfect" 980 dies and were rebadged for 970 model usage?

Up until last week, not ONE OF YOU was bitching about a sudden "slowdown" in your games with the 970. Now it's the placebo effect in full action.

I was playing Shadow of Mordor the other week with my Gigabyte GTX 970 G1 and an i5 4690K, all at stock clocks (except for the tweaked-out-of-the-box Gigabyte 970). Settings were all full ultra, but I noticed the mountains and distant shapes and objects were a bit delayed in being rendered. In FC4 on NVIDIA settings (no DSR), framerates were only in the mid-40s to 50 FPS (my approximation). Could this be a real-world sample of that slow 500MB of VRAM out of the 4GB?
And got dang it, I thought this Corsair AX760 was reliable; I experienced two abrupt shutdowns and decided to RMA this beach! :mad:

Speculating here LOL :D
 
The worst thing is, even the 3.5GB segment is slower than previously advertised: 196GB/s, when every review at release told me it was 224GB/s. If I had known this back then, I might have waited and gone for the 980. Well, Nvidia got my money now, so mission accomplished, I guess.

I'm still happy with my 970, it's a fast card. But my suspicion is the resale value has gone down considerably. I will no longer be able to sell it as a 4GB card. :mad:

Yeah, I could definitely see that resale value taking a hit. Most people are gonna raise an eyebrow at this whole thing and look at these 9xx cards as neutered beasts. Compound all of that with the high baseline prices of these cards and you're looking at a really rotten deal for the original owners who want to swap their GPUs for something newer.

Perhaps the silver lining in all of this is that Nvidia will learn a serious lesson here and disclose more details about their architectures in the future.
 
Y'all are hilarious. This is good for a laugh every once in a while. Now, back to gaming! (On my neutered, horrible V6 of a card, haha.)
 
No one is saying the card is horrible. We are saying that it was sold on some lies, and that's pretty much fact at this point.
 