GTX 970 flaw

I feel that AMD would be roasted for something like this, so I'm glad some folks are getting pissed and shouting about it.
No matter WHAT they say, it is MARKETED as a 4GB card, and enthusiasts look at that very number to gauge whether it will have enough ram to run at high res/textures without issues...

That number isn't arbitrary; nV and AMD know exactly what the deal is when they state stuff like that, and IMHO nV misled everyone, because it's really a 3.5GB card at full speed (and that's all that matters: memory that's 10X slower counts for nothing but stutters).
It should have been listed/marketed as a 3.5GB card, and all would be perfectly fine... but their marketing team/engineers know that PROSPECTIVE BUYERS are looking for 4GB of RAM, and I just feel they deceived everyone with this BS.

Folks can excuse them all they want, I've been reading all about this around the 'net, and the excuses and garbage the nV Shills and Ultra-Fanboys have been shoveling is just bullshit.

Is it MARKETED/ SOLD as a 4GB card, or a 3.5GB card?
How much ram can you actually use before it becomes a stutterfest? 4GB, or 3.2-3.5gb?

Facts are much better than excuses.
 
I feel that AMD would be roasted for something like this,
....snip......

No doubt. nVidia went as far as releasing its own FCAT tools when it was apparent that AMD had a stuttering problem in CrossFire.

How much ram can you actually use before it becomes a stutterfest? 4GB, or 3.2-3.5gb?

That's really what it all comes down to. We all know that exceeding our VRAM will turn a game into a slideshow, so we need to know exactly where that line is.


Something interesting from nVidia's press release today:
When a game needs less than 3.5GB of video memory per draw command, it will only access the first partition, and 3rd-party applications that measure memory usage will report 3.5GB of memory in use on GTX 970, but may report more for GTX 980 if there is more memory used by other commands. When a game requires more than 3.5GB of memory, we use both segments.

What I find strange here is people were comparing the 970/980 at identical settings and the 980 was still using more. So is nVidia doing something sneaky to keep the 970 in bounds when right at the edge of exceeding the threshold?
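A toy model of the two-segment behavior that press release describes (capacities in MiB; the allocation logic here is my guess for illustration, not the actual driver's):

```python
# Hypothetical model of the two-segment VRAM described in the press release:
# a 3.5 GiB fast segment and a 0.5 GiB slow segment. Allocations prefer the
# fast segment and only touch the slow one once the fast one is full.

FAST_CAP = 3584  # MiB, fast partition
SLOW_CAP = 512   # MiB, slow partition

class TwoSegmentVram:
    def __init__(self):
        self.fast_used = 0
        self.slow_used = 0

    def alloc(self, size_mib):
        """Return which segment(s) a request lands in."""
        placed = []
        take = min(size_mib, FAST_CAP - self.fast_used)
        if take > 0:
            self.fast_used += take
            placed.append(("fast", take))
            size_mib -= take
        if size_mib > 0:  # spill into the slow partition
            take = min(size_mib, SLOW_CAP - self.slow_used)
            self.slow_used += take
            placed.append(("slow", take))
        return placed

vram = TwoSegmentVram()
print(vram.alloc(3000))  # [('fast', 3000)] -- fits entirely in the fast segment
print(vram.alloc(1000))  # [('fast', 584), ('slow', 416)] -- straddles the split
```

This would also explain why tools only report 3.5GB "in use" most of the time: the second segment simply isn't touched until the first is exhausted.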
 
I see excuses being made. That rep was probably stating facts, and it probably doesn't matter a whole lot to 98% of the people buying this card, but what about the other 2%?
How many buyers did they steal with false advertising?

Is it able to use ~4GB of ram before real issues or 3.2-3.5?

Don't think marketing 3.5GB vs 4GB makes a difference?

Think again. All ya have to do is peruse enthusiast forums like this and you will see a lot of discussion regarding RAM on a card... prospective BUYERS mostly. Forget about Joe Blow on the street who knows nothing; he just sees 4GB on one card and 3.5GB on the other, and snags the 4GB card almost invariably.

Bah
 
Apparently people think the last 512MB of VRAM being slower than the first 3.5GB is class action worthy :p

I like how you don't think this is a big issue. If this was AMD, everyone would be raising hell with their pitchforks ready to burn them at the stake.

Dem double standards

Then again, I shouldn't be surprised. Reading through that Nvidia thread and the amount of people unwilling to believe it despite the fact that Nvidia themselves have admitted to it, some of you overzealous maniacal fanboys are ridiculous.
 
Apparently people think the last 512MB of VRAM being slower than the first 3.5GB is class action worthy :p

Whether they win or lose, they will more than likely get the class action lawsuit. A lot of lawyers out there want to make a name and smell money, and they are the only ones that make money off these types of class action lawsuits.

And win or lose, it's going to cost nV a lot and the lawyers little; that's just the way it works.

But anyway, so far my two 980 Poseidons and two 290X DCIIs are working fine :p. Glad I'm planning on going from 1600p to 4K soon, so I got the 980s.
 
One guy on the 'net (PCPer, I think) came up with a scenario about compromises made during engineering, time-to-market and power-usage decisions... so is the fact that it can't really use 4GB, and uses LESS power because of these types of decisions, another sneaky thing?

Do folks in forums like this(and Joe blow too) look at Vram on the card when making a buying decision? Absolutely
Do folks in forums like this look at power draw when making buying decisions? Absolutely

Probably nothing on the power, but I don't know......
 
I like how you don't think this is a big issue. If this was AMD, everyone would be raising hell with their pitchforks ready to burn them at the stake.

Dem double standards

Then again, I shouldn't be surprised. Reading through that Nvidia thread and the amount of people unwilling to believe it despite the fact that Nvidia themselves have admitted to it, some of you overzealous maniacal fanboys are ridiculous.

I may be an overzealous maniac myself, but I really do not yet see how the outrage over this is proportionate to the problem. In fact, just trying to get a grasp of the practical issue from those who are most outraged seems rather difficult.

I would expect people to say "look at how bad performance is on game x under condition y" and have some reasonable proof to go on here. If it's like 1-3% in edge cases it doesn't seem worthy of even a footnote. If it's a universal problem making games unplayable then yeah, it's a big issue.

But just me personally, I'm going to need to see a whole lot more than Nai's benchmark to get riled up.
 
Going to just quote myself from the other thread... it is absolutely a major issue resulting in large and obvious motion disruption. On my 4k monitor I was seeing this regularly on demanding games, and I'm able to replicate it by pushing my 1440p screen up to that level of usage even though the framerate is showing 65-70fps via Afterburner/etc.

Wow... I actually literally set up an RMA on my Acer B326HK 4K monitor, for a refund, because I thought something was just wrong with it due to motion issues and seeming frame-skipping; I couldn't figure out why it was happening and thought it was a defect. I hadn't gotten a ton of time to play games on it since I got it back at the end of November, until around a week ago (maybe ~15-20 hours total, and I was playing much less demanding games that weren't pushing past ~2-2.5GB of VRAM allocation, let alone usage).

I am really wondering if this whole 3.5GB issue is why I was seeing problems lately when I tried dialing settings up to high VRAM levels on my overclocked GTX 970 SLI. While the average framerate isn't necessarily impacted an extreme amount, it seems to be having a VERY big impact on frametimes, and it's very irritating to play with. It was driving me nuts trying to figure out why I was seeing these problems, but it seems consistent with what we're hearing now. I actually tossed my X-Star DP2710 back on the desk the other day because I was chalking it up to the monitor... it seems it may actually be this issue instead :eek: :(. My RMA window is only going to allow me about a week from today to get it out in the mail if I'm not keeping the B326HK, which I'd really like to keep otherwise.

Watching news on this like a hawk now...

EDIT: Honestly though, it would be risky to keep it, because it's unclear if they intend to fix this, whether they can fix it via software at all, or whether they would replace the cards if a hardware fix is needed. Even at 1440p 100Hz using DSR, I'm going to want the extra memory as advertised and to be able to use it without major hitching/stuttering. (I actually just tried pumping settings up to consume 3.7GB in BF4 on the 1440p panel and got the exact same problem: judders, obviously not smooth at all, despite the framerate average still reading 65-70!) Nvidia needs to make this right, and it's already going to be a huge stain on their reputation.

I'm ticked. As further games come out over the next year or more, I'm going to want to use more than the ~3.2GB of VRAM people are quoting as where the issue starts, and I bought cards advertised to provide a full 4GB of usable VRAM over a proper 256-bit interface.

Nvidia needs to fix this yesterday, and they need to get GTX 970 buyers replacement hardware if it cannot be fixed via a BIOS update or driver update in full. These cards were sold, advertised, and bought under the premise that they were 256-bit & 4GB, not "?-bit and ~3.2gb-3.5gb before it has a major hardware flaw causing it to become essentially unusable despite having enough framerate to power it".
 
The Geforce.com guys are shredding this, stating that they now KNOW why they have stuttering around 3.5GB.

"Buy ME, I'm 4GB, I have enough fast vram to keep you stutter free!"

ChaChing!
 
My upgrade plan was a 970 but now I'll be holding off awhile. Glad all this came out now and hopefully they will make it right for the people like golden
 
Even I was looking into a GTX 970 to replace this faulty and unsupported XFX 7970, but now I will be waiting for AMD's new card or just picking up an R9 290X.
 
GTX 970 is a 1080p card. They told us from the beginning. We were silly heads for thinking we could cheat 980s and run them @ 4K. I pretty much figured they would try to sweep it under the rug. Double-talk and propaganda. woot!
 
I thought the GTX 960 was the 1080p card....
Also, I do not think it's silly to believe that your GPU can access all of its VRAM at the same speed...
 
Going to just quote myself from the other thread... it is absolutely a major issue resulting in large and obvious motion disruption. On my 4k monitor I was seeing this regularly on demanding games, and I'm able to replicate it by pushing my 1440p screen up to that level of usage even though the framerate is showing 65-70fps via Afterburner/etc.



Nvidia needs to fix this yesterday, and they need to get GTX 970 buyers replacement hardware if it cannot be fixed via a BIOS update or driver update in full. These cards were sold, advertised, and bought under the premise that they were 256-bit & 4GB, not "?-bit and ~3.2gb-3.5gb before it has a major hardware flaw causing it to become essentially unusable despite having enough framerate to power it".

Honestly, after reading Nvidia's response, it sounds like a redesign may be needed. Unless a BIOS tweak can somehow change the way the card's cut down? Pretty disappointing, really. My co-worker just bought one because I told him of the great performance-per-dollar this card has. Now I feel a little guilty for suggesting it to him. Ugh... guess I'm going to have to point him to the articles... :(
 
Sorry guys that I am jumping into this thread just like that...

I recently got an MSI 980 GAMING; is this not the case with it?

What would be the best way to test it?

Would EVGA OC Scanner X do the job? Which settings (monitor is 27", 2K resolution)?

Big thanks
 
Pasting my post from the other thread here:

Copied from another forum: http://www.overclock.net/t/1537725/...-gtx-970-3-5gb-memory-issue/140#post_23453148

It's not a 3.5GB GPU. If it were, your performance would be worse, because for any single command buffer that references an amount over 3.5GB, you'd be texturing out of system memory, which is way slower than the last 512MB.

The driver has smarts to avoid using the last 512MB unless it has to, and it is intelligent about putting the least important resources in there.

So anyone seeing just 3.5GB available, that's just a reporting issue. In truth, if a single command buffer of a normal game workload (mixed resources) references more than 3.5GB of resources, then there is no choice but to make some of that data resident in the last 512MB. If a game creates more than 3.5GB of resources over time but doesn't reference them all in a single command buffer, the OS would rather evict something to system memory to make room for newly created resources in the first 3.5GB than place a potentially important resource into the last 512MB.

In short, when necessary, the memory will be used, and it's much better than coming from system memory.

And in any real world game, when you have enough resources referenced in a single command buffer to start to want to use the last 512MB, there is always some subset of those resources that you can put there that will ensure a fairly insignificant overall performance impact by having them there.

The performance seen in all reviews is valid, so that's what you should buy your GPU by. Going forward, though, it might be useful to look for review settings in new games that place usage between 3.5GB and 4GB, to confirm performance stays good there.
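The placement policy the quote describes could be sketched roughly like this (resource names, sizes, and priorities are made up for illustration; this is not Nvidia's actual driver code):

```python
# Hypothetical illustration of the policy described above: when a frame's
# working set exceeds the fast 3.5 GiB segment, push the lowest-priority
# resources into the slow 512 MiB segment so the overall hit stays small,
# and only spill to system RAM (over PCIe) as a last resort.

def place_resources(resources, fast_cap=3584, slow_cap=512):
    """resources: list of (name, size_mib, priority); higher = more important.
    Returns (fast, slow, system) lists of resource names."""
    fast, slow, system = [], [], []
    fast_used = slow_used = 0
    # Most important resources claim the fast segment first.
    for name, size, prio in sorted(resources, key=lambda r: -r[2]):
        if fast_used + size <= fast_cap:
            fast.append(name); fast_used += size
        elif slow_used + size <= slow_cap:
            slow.append(name); slow_used += size
        else:
            system.append(name)  # worst case: spill over PCIe to system RAM
    return fast, slow, system

res = [("framebuffer", 64, 10), ("main_textures", 3400, 9),
       ("shadow_maps", 300, 5), ("streaming_pool", 400, 1)]
fast, slow, system = place_resources(res)
print(fast)    # ['framebuffer', 'main_textures']
print(slow)    # ['shadow_maps']
print(system)  # ['streaming_pool']
```

The point of the quoted argument is that with a sensible priority ordering, whatever lands in the slow segment matters least to frame time.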
 
Also copying this:

The CUDA benchmark seen on all the forums showing just a few GB/s is not reflective of the performance of the 512MB section. The test was flawed, given the memory architecture of 970. Above 3.5GB, it was measuring bandwidth to system RAM, which is bottlenecked by PCIE. Real games use the local 512MB of extra memory for less important resources, which is much faster than the system memory result shown in the CUDA benchmark.

i.e., game performance isn't really affected by this, and it's very likely that CUDA will be updated to allocate out of that other memory section before spilling to system memory.

The alternative could've been a $300 3.5GB GPU, so it could be said that you got an extra 512MB of GPU memory for free, and it's still way better than spilling out to system memory.

Another way to see it is that structuring the memory this way got you a cheaper GPU. So it's not like you paid for something but didn't get it... the price was determined by its cost to produce and its capability.

You got the performance that you see in reviews.
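As a rough sanity check on the "bottlenecked by PCIe" claim above, here are the theoretical maxima (assuming a PCIe 3.0 x16 link, which is what the 970 uses):

```python
# Back-of-envelope bandwidth comparison. 224 GB/s is the GTX 970 spec figure;
# the PCIe number is the theoretical max for a 3.0 x16 link, i.e. roughly what
# a benchmark measures once its accesses spill to system memory.

vram_bw = 7.0 * 256 / 8             # 7.0 Gbps/pin * 256-bit bus -> GB/s
pcie_bw = 8.0 * 16 * (128 / 130) / 8  # 8 GT/s * 16 lanes, 128b/130b encoding

print(f"VRAM spec:      {vram_bw:.0f} GB/s")   # 224 GB/s
print(f"PCIe 3.0 x16:   {pcie_bw:.2f} GB/s")   # ~15.75 GB/s
print(f"ratio:          {vram_bw / pcie_bw:.1f}x")  # ~14x slower over the bus
```

So a test that unknowingly measures system-memory traffic would report numbers an order of magnitude below the VRAM spec, consistent with what the quote says about the flawed CUDA benchmark.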
 
The driver has smarts to avoid using the last 512MB unless it has to, and it is intelligent about putting the least important resources in there.

My issue with this is that the entire 4GB is important. I do not want a GPU that markets 2GB, 4GB, 6GB, etc. but has a small section of VRAM that runs below the speeds advertised. I do not want a GPU with a good section of resources and a "least important" section of resources.
 
My issue with this is that the entire 4GB is important. I do not want a GPU that markets 2GB, 4GB, 6GB, etc. but has a small section of VRAM that runs below the speeds advertised. I do not want a GPU with a good section of resources and a "least important" section of resources.

It's fine to prefer that, but it doesn't really affect real-world performance in games, and it very likely allowed the 99.9999999% use case to be significantly cheaper for you to buy. So you have to start asking tough questions about what is more important to you.

The 512MB could've been excluded altogether, with graphics resources spilling to system memory instead. The CUDA benchmark (unintentionally) showed how slow that would be :)
 
Then why do folks get stuttering right when they get to ~3.5GB on the 970?
I guess it's because the game is faulty.......
 
I agree with GoldenTiger. This was advertised and sold to me as a 4GB card, meaning all 4GB of GDDR5 can be used at full speed. There was no fine print saying only 3.5GB was usable without a serious penalty in speed. I gave them perfectly good money that is valued at the exact dollar figure I paid, not money that is defective and worth 12.5% less. Whether or not it makes a difference performance-wise is irrelevant; they still gave me a product that is 12.5% defective by design, without any indication prior to sale. In almost any other product you couldn't get away with crap like this. This is an absolutely recall-worthy defect if it cannot be fixed via a BIOS update. Otherwise it is absolutely a clear case of fraudulent advertising on Nvidia's part.
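For what it's worth, the 12.5% figure here is just the slow segment's share of the total capacity:

```python
# The slow segment's share of the card's total memory.
slow_mib = 512    # the slower partition
total_mib = 4096  # the advertised 4 GB

print(slow_mib / total_mib)  # 0.125, i.e. the 12.5% being cited
```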
 
Sorry guys that I am jumping into this thread just like that...

I recently got an MSI 980 GAMING; is this not the case with it?

What would be the best way to test it?

Would EVGA OC Scanner X do the job? Which settings (monitor is 27", 2K resolution)?

Big thanks

This is not an issue with the 980, only the 970. To keep it very simple, the 970 is a cut-down 980, and the issue is caused by one of the disabled parts.
 
Why don't they run FCAT and post their own results?

Everyone here knows why, even the Shills and Fanboys!
If it were AMD, they'd be barking up the tree, supplying everyone with FCAT and telling them to test it... bah
 
the 4GB VRAM was the top reason why I purchased the card to begin with...so if this issue is real I want a replacement card
 
the 4GB VRAM was the top reason why I purchased the card to begin with...so if this issue is real I want a replacement card

This is one of the reasons I previously stated: folks look at VRAM when making purchasing decisions (OF COURSE!), and they know it.

False advertising at its finest.
 
the 4GB VRAM was the top reason why I purchased the card to begin with...so if this issue is real I want a replacement card

To be fair, 4GB is 4GB. It was advertised as 4GB and it has 4GB, even if some of it is slower. Now, the part that may be false advertising is that they did not state what speed the last 512MB runs at. We were under the assumption that it would all run at the advertised memory clock (assuming that determines the speed of the VRAM :p).

Does this issue affect all games under normal circumstances? Are there real-world frame rate drops as a result of this? Obviously hitting a VRAM limit will cause some stuttering. But if this stuttering starts cropping up at 3.5GB rather than 4GB, then there might be a real issue here.

Personally, I don't care about benchmarks. I am interested in knowing if it affects real games or programs people actually use.
 
To be fair, 4GB is 4GB. It was advertised as 4GB and it has 4GB, even if some of it is slower. Now, the part that may be false advertising is that they did not state what speed the last 512MB runs at. We were under the assumption that it would all run at the advertised memory clock (assuming that determines the speed of the VRAM :p).

Does this issue affect all games under normal circumstances? Are there real-world frame rate drops as a result of this? Obviously hitting a VRAM limit will cause some stuttering. But if this stuttering starts cropping up at 3.5GB rather than 4GB, then there might be a real issue here.

Personally, I don't care about benchmarks. I am interested in knowing if it affects real games or programs people actually use.

Well, I have run AC Unity at 1440p DSR with 3.8GB of VRAM used, with no stuttering at 30FPS, running on my TV using a wireless 360 controller. Could it be game specific? I am very sensitive to stutter/tearing, etc., and I haven't noticed any except in FC4 before it was patched.

What would be a good game to run that uses over 3.5GB of VRAM without killing FPS? Naturally, if you run games at 8xMSAA or 4K on a single 970 it will affect FPS a lot. Is there any way to simply overload the VRAM without making the game more demanding overall? Modded Skyrim with 8K textures would probably do it, but Skyrim really isn't the best test scenario because it is naturally prone to stutter.
 
To be fair, 4GB is 4GB. It was advertised as 4GB and it has 4GB, even if some of it is slower. Now, the part that may be false advertising is that they did not state what speed the last 512MB runs at. We were under the assumption that it would all run at the advertised memory clock (assuming that determines the speed of the VRAM :p).

Does this issue affect all games under normal circumstances? Are there real-world frame rate drops as a result of this? Obviously hitting a VRAM limit will cause some stuttering. But if this stuttering starts cropping up at 3.5GB rather than 4GB, then there might be a real issue here.

Personally, I don't care about benchmarks. I am interested in knowing if it affects real games or programs people actually use.

Slower RAM is OK, huh? LMFAO, what a load of shit.

Here ya go, and there are many folks on many different forums saying the same thing.

Enjoy:

http://www.overclock.net/t/1535502/gtx-970s-can-only-use-3-5gb-of-4gb-vram-issue/500
 
It's kind of sad after this, every future Nvidia card will have to undergo rigorous VRAM testing during reviews to ensure the card actually has full use of the advertised amount. Obviously that means AMD cards will have to pass the same tests since this just gave them some ideas, too.

Oh who am I kidding, the "GTX 1070" will have 8 GB VRAM, with 5 GB usable. The additional 3 GB will be provided by way of External USB flash drive.
 
IANAL, but the card actually has 4 GB on it therefore you probably can't sue.
Nobody ever said it all had to run at the same speed.
 
Yea, I don't see a class action lawsuit. They said 4GB on the card, and that's what it has.

What I do see is a HUGE dent in Nvidia's reputation with some people. This is worse than AMD cheating in drivers WAY back in the day when [H] caught them, or the horrible reference cooler on the 290/290X, or the horrible drivers AMD used to have (Nvidia's are just as bad now).

Anyway, my $0.02.
 
I don't see any huge backlash. Once the weekend is over and tech sites can run more comprehensive FCAT benchmarks, and the real-world performance impact is revealed to be minimal, only the same few with stuttering due to unrelated issues (i.e. the "I turned all the settings up to their highest possible level and it's not performing well, it must be due to this thing I read on a forum!" crowd) will continue to moan about it.
 
Not a fanboi either way, but this is an issue. (I own a GTX970.) Now, have I noticed it in my gaming? Well, I don't know. I've got one indie game (Combat Mission) which has been hard-locking my machine since I got the 970. (Not reliably, and only in one particular user-made battle which is far larger than any other ever attempted.)

I have been getting TDR "saves" intermittently during some gaming.

Could it be the partitioned memory is causing this? Shrug.

What I -do- know is that the reaction to this issue is interesting to me. I wonder if the "it's no big deal" crowd would be as nonchalant had this been uncovered on AMD's 290/290X series? Not that I care. (Yes, I also run AMD cards.)

I'm interested in seeing what NVidia will do next.
 
IANAL, but the card actually has 4 GB on it therefore you probably can't sue.
Nobody ever said it all had to run at the same speed.

The official specs state the bandwidth that is to be expected. But that seems to drop by 75% when using the last 512MB.

Nvidia GTX 970 Official Specs

GTX 970 Memory Specs:
7.0 Gbps Memory Clock
4 GB Standard Memory Config
GDDR5 Memory Interface
256-bit Memory Interface Width
224 GB/sec Memory Bandwidth
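A quick check of how those spec numbers relate, using the widely reported assumption that the slow 512MB segment sits behind a single 32-bit memory controller while the other seven serve the fast 3.5GB:

```python
# Derive the spec bandwidth from memory clock and bus width, then split it
# per 32-bit controller. The 7-fast/1-slow controller split is the commonly
# reported explanation for the 970's partitioning, assumed here.

gbps_per_pin = 7.0
bus_bits = 256

total_bw = gbps_per_pin * bus_bits / 8  # aggregate GB/s, matches the 224 spec
per_ctrl = gbps_per_pin * 32 / 8        # one 32-bit controller's share

print(total_bw)      # 224.0 GB/s aggregate
print(per_ctrl * 7)  # 196.0 GB/s peak for the 3.5GB segment
print(per_ctrl)      # 28.0 GB/s peak for the 512MB segment
```

On those assumed numbers, the last 512MB peaks at 1/8 of the aggregate figure; the exact penalty in practice depends on access patterns.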
 
Yeah, my HDDs have "expected speeds" as well, but that doesn't mean I always get them.
They also advertise capacity in base 10, which means I lose a few hundred GB once Windows formats and reports the drive.

We oughta sue those bastards.
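The decimal-vs-binary shortfall mentioned above is easy to put in numbers (the "4 TB" drive size is just an example):

```python
# A "4 TB" drive is sold as 4 * 10^12 bytes, but Windows reports capacity in
# binary units (GiB/TiB), so the displayed number comes out smaller.

advertised = 4 * 10**12      # decimal bytes, as marketed
gib = advertised / 2**30     # what the OS shows in "GB" (really GiB)
tib = advertised / 2**40

print(round(gib))    # 3725 -- a few hundred "GB" short of 4000
print(round(tib, 2)) # 3.64
```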
 