GTX 970 flaw

So, not that many of us are going to buy one, but doesn't the 960 have the same memory setup as the 970? What is it limited to (if it is...), 1.5GB? 1.75?
 
Oh, you're right! I forgot that companies like EVGA, MSI and ASUS just slap some components on a board mated with this GPU and don't do any testing. Why would they bother making sure the thing actually runs up to spec? Nope. Ain't nobody got no time fo dat! Boots? Yup! Slap it in a box and ship it out!

It's not just the GPU, but a combination of components that make up a video card and its capabilities.

Y'all are delusional if you think those companies didn't know. :p
You should really stop talking. Pretty much every comment you have made in this thread has been nonsense and shows that you are truly ignorant on this topic.
 
So, not that many of us are going to buy one, but doesn't the 960 have the same memory setup as the 970? What is it limited to (if it is...), 1.5GB? 1.75?
If you have been following the issue here you will see the problems with the 970 are because it's a cut down GPU. As far as we know there is nothing cut down on GM206.
 
If you have been following the issue here you will see the problems with the 970 are because it's a cut down GPU. As far as we know there is nothing cut down on GM206.

I believe NVIDIA has stated that GM206 is a fully functional part.
 
I believe NVIDIA has stated that GM206 is a fully functional part.

They also stated the GTX 970 had a 256-bit bus and 224GB/s memory bandwidth. :p

I kid, but in all seriousness, this whole thing is pretty shady. I realize the performance hasn't changed between yesterday and today from what the reviews show, and neither has the performance of my 970. Nothing has changed and I'm happy with the performance. But then again, I'm not hitting 3.5GB+ of memory usage either. Even though I game at 1440p, I've had limited time to fully run the card through the wringer, and I cringe at future titles that will make use of it all.

I do find it hard to believe someone at nVidia didn't realize the specs reviewers were quoting were incorrect. This whole thing could have been averted if they had been proactive. Then again, it's like any corporation: until someone makes a stink about something, they refuse to bother and go on their merry way.

At this point, I'm debating whether or not to even try to get a refund. Though I think I'd be happier with some sort of "step-up" on the cheap to a 980, the chances of that happening seem slim.
 

One game and "almost 3.6 GB" RAM usage. Hardly conclusive for this issue, other than maybe showing how much the card is "fighting" to go over 3.5 GB. It would be interesting to see how much RAM the 980 uses under the same settings.

We need more 3.5-4 GB usage results, with more games and both single and SLI configurations, to get a proper picture.
 
Companies WOULD learn if consumers reacted en masse, which Capitalism pretty much requires to run smoothly. Even if we only had two companies, both of which are problematic in this way, if each time we bought a card, we bought from whichever company had recently been more moral, they would start learning that being and STAYING moral actually pays off for the company. But when you get a bunch of idiots that make excuses and refuse to use the power of their wallet to punish a company, the companies really will never care.

If, every time Nvidia did this, consumers swayed to AMD for the next year or two until AMD screws them over, it would make Nvidia think twice about continuing to screw customers because it would lose them market share, even if only temporarily. Same way with AMD - if they realized that screwing customers over lost them market share, even if temporary, they would also make an effort to do it as little as possible.

The world would be a better place without these apologists. They think they are some sort of voice of moderation, but in actuality they are the voice of apathy (and thus indirectly the voice of corruption). They refuse to care about anything. Corporate immorality. Political corruption. Whatever it is, they are there to not only not care about it, but to make you look like some sort of demon for caring about it.

Agree 100%. Voting with your wallet is now a fundamental civic duty.
 
One game and "almost 3.6 GB" RAM usage.

Yeah, they really weren't pushing the issue with <3.6GB. They didn't think that through very well at all. Then they go along and justify it...

Remember, Ultra HD = 4x 1080P. Let me quote myself from my GTX 970 conclusions “it is a little beast for Full HD and WQHD gaming combined with the best image quality settings”, and within that context I really think it is valid to stick to a maximum of 2560x1440, as 1080P and 1440P are the real domain for these cards. Face it, if you planned to game at Ultra HD, you would not buy a GeForce GTX 970.

Good idea Guru3D, don't bother running that at 4K to see how it really compares against a GTX 980...
 
I called the shop today, because according to my country's law, selling something with the wrong specs is against the law. I can't return the card, because I bought it in September, but according to the law, if the "flaw is meaningful, the sale can be cancelled".

It seems, however, that shops do realise the problem with NVIDIA, because I got an email saying they would be grateful if I could wait a few days, as they are in talks with Asus and NVIDIA about how to resolve the whole situation. So I take it the same is happening in the US and retailers are having words with NV. I can't blame the shop, because they didn't know what NV had in store, so I'll update you guys on the solution on our side of the pond.

But with much better consumer protection laws in the EU, NVIDIA will get in serious trouble for this kind of shit.
 
One game and "almost 3.6 GB" RAM usage. Hardly conclusive for this issue, other than maybe showing how much the card is "fighting" to go over 3.5 GB. It would be interesting to see how much RAM the 980 uses under the same settings.

We need more 3.5-4 GB usage results, with more games and both single and SLI configurations, to get a proper picture.

We need to see any evidence at all that this is a real issue. And I mean games that are SEVERELY affected by it.

I haven't seen anything conclusive other than CUDA tests.
 
Yeah, they really weren't pushing the issue with <3.6GB. They didn't think that through very well at all. Then they go along and justify it...

Remember, Ultra HD = 4x 1080P. Let me quote myself from my GTX 970 conclusions “it is a little beast for Full HD and WQHD gaming combined with the best image quality settings”, and within that context I really think it is valid to stick to a maximum of 2560x1440, as 1080P and 1440P are the real domain for these cards. Face it, if you planned to game at Ultra HD, you would not buy a GeForce GTX 970.

Good idea Guru3D, don't bother running that at 4K to see how it really compares against a GTX 980...

Wow! The "you are using it wrong" adage.
 
The Shadow of Mordor stress test results:
http://www.guru3d.com/news-story/middle-earth-shadow-of-mordor-geforce-gtx-970-vram-stress-test.html

"Utilizing graphics memory after 3.5 GB can result into performance issues as the card needs to manage some really weird stuff in memory, it's nearly load-balancing. But fact remains it seems to be handling that well, it&#8217;s hard to detect and replicate oddities. If you unequivocally refuse to accept the situation at hand, you really should return your card and pick a Radeon R9 290X or GeForce GTX 980. However, if you decide to upgrade to a GTX 980, you will be spending more money and thus rewarding Nvidia for it. Until further notice our recommendation on the GeForce GTX 970 stands as it was, for the money it is an excellent performer. But it should have been called a 3.5 GB card with a 512MB L3 GDDR5 cache buffer.

The solution Nvidia pursued is complex and not rather graceful, IMHO. Nvidia needed to slow down the performance of the GeForce GTX 970, and the root cause of all this discussion was disabling that one L2 cluster with it's ROPs. Nvidia also could have opted other solutions:
&#8226;Release a 3GB card and disable the entire ROP/L2 and two 32-bit memory controller block. You'd have have a very nice 3GB card and people would have known what they actually purchased.
&#8226;Even better, to divert the L2 cache issue, leave it enabled, leave the ROPS intact and if you need your product to perform worse to say the GTX 980, disable an extra cluster of shader processors, twelve instead of thirteen.
&#8226;Simply enable twelve or thirteen shader clusters, lower voltages, and core/boost clock frequencies. Set a cap on voltage to limit overlclocking. Good for power efficiency as well.

We do hope to never ever see a graphics card being configured like this ever again as it would get toasted by the media, for what Nvidia did here. It&#8217;s simply not the right thing to do. Last note, right now Nvidia is in full damage control mode. We submitted questions on this topic early in the week towards Nvidia US, in specific Jonah Alben SVP of GPU Engineering. On Monday Nvidia suggested a phonecall with him, however due to appointments we asked for a QA session over email. To date he or anyone from the US HQ has not responded to these questions for Guru3D.com specifically. Really, to date we have yet to receive even a single word of information from Nvidia on this topic.

We slowly wonder though why certain US press is always so much prioritized and is cherry picked &#8230; Nvidia ?"

And yes, it looks like the articles at Tom's or Anandtech did not go into the depth Guru3D did. That's proper tech journalism, and how it should be done on all proper INDEPENDENT tech sites.
 
The Shadow of Mordor stress test results:
http://www.guru3d.com/news-story/middle-earth-shadow-of-mordor-geforce-gtx-970-vram-stress-test.html

...snip...

And yes, it looks like the articles at Tom's or Anandtech did not go into the depth Guru3D did. That's proper tech journalism, and how it should be done on all proper INDEPENDENT tech sites.

Do you realize we have already been discussing that article right here in this very thread? Because Guru3D pretty much did the shittiest job of any site right now.
 
We need to see any evidence at all that this is a real issue. And I mean games that are SEVERELY affected by it.

I haven't seen anything conclusive other than CUDA tests.

I agree that we need evidence to show whether there is a real issue or not. There have been gaming tests done by review sites as well as multiple user-conducted ones. We will know for sure when sites run (proper) extensive frametime tests. Guru3D did a poor job; I hope they don't plan on leaving it at that.
 
"PeterS@NVIDIA
84 total posts"

Poor guy, I suspect he never thought his inbox would be as popular as it is right now :p Stinks when someone makes themselves THE contact guy for (potentially) thousands of people.

The 84 posts is his number of public posts to their forums since he joined in 2007. Doesn't seem like that many, but I bet the number of PMs is through the roof.
 
It's rather amazing, ain't it. AMD or Nvidia, ANY company that sells products that turn out to be technically different than advertised is liable.

you have been trained well

Nope, I use both sides of the aisle, red and green. Not loyal to any brand!
 
Did a little testing of my own using Afterburner's frametime readings and other monitoring tools... it's not FCAT, but it's very accurate regardless. Here's what I got...

[frametime graph]


So yeah, using SLI GTX 970s to drive high-res, high-settings gaming will result in massive, massive frametime issues, even if the framerate over a given second remains reasonable. It is basically an unplayable mess at that point when using 3.7-4.0GB of VRAM. If you can stay around/below 3.5GB of actual usage, which it does its best to do, frametimes are consistent and tight, as you would expect. The framerate averaged around 38, meaning in a perfect world the frametimes would be right around 26.3ms for each frame.

As an interesting aside, when finding my settings to test with I noticed it would literally, over the course of several seconds, try to work its way back down to below 3.5GB of usage if it went over, until I set things high enough that it couldn't and it would just stick at 3.7-3.8GB+ the whole time. Otherwise it would fight and keep ping-ponging from ~3.4GB directly to ~3.7GB and back repeatedly before finally settling at ~3.4GB. That's probably the drivers at work, there.
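For anyone wanting to sanity-check their own logs the same way, here's a minimal sketch (not the poster's actual script) that summarizes frame times once they've been exported to a simple CSV with a hypothetical "frametime_ms" column; Afterburner's native log format differs, so the parsing step would need to be adapted.

import csv

# Minimal sketch, assuming a CSV export with a hypothetical "frametime_ms"
# column; MSI Afterburner's native log format differs, so adapt the parsing.
def summarize_log(path, spike_ms=50.0):
    with open(path, newline="") as f:
        times = [float(row["frametime_ms"]) for row in csv.DictReader(f)]
    avg_fps = 1000 * len(times) / sum(times)
    spikes = sum(1 for t in times if t > spike_ms)
    print("avg %.1f fps over %d frames, %d frames slower than %.0f ms"
          % (avg_fps, len(times), spikes, spike_ms))

# e.g. summarize_log("mordor_4k_sli.csv")  # hypothetical file name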
 
Hmm, that's interesting.

Curious: (supposing) if the drivers are working hard swapping data in and out of main VRAM to compensate, is this using any additional CPU resources? Or is there too much fluctuation already to measure?
 
More news from the NV forums:

NVGareth said:
dbb4eva said:
Newegg told me they are taking returns but will charge 15% restocking! Ouch!

PM me the contact info / case number of whoever you're talking to. I can't promise anything, but will certainly send them a message explaining the error in the technical specs and requesting that they help you out as much as possible.

That goes for anyone else pursuing a return and running into difficulties, too. We stand behind the GTX 970 as a fantastic card and the best performance to be found in its price range. But at the same time, I understand the frustration over the error, and if anyone would rather have something else, I'll lend whatever weight I can as an NVIDIA employee to request the retailer make an exception to their normal policies.
 
Hmm, that's interesting.

Curious: (supposing) if the drivers are working hard swapping data in and out of main VRAM to compensate, is this using any additional CPU resources? Or is there too much fluctuation already to measure?

I couldn't say, honestly, but my CPU usage isn't more than ~70% average across the cores during gameplay typically, so presumably there's plenty of headroom on that front.

Just to elaborate a little (copying another forum post I wrote): even with a similar framerate, frametimes get completely torpedoed once you pass the 3.5GB threshold. For example, that graph was ~38fps, but if you get below the 3.5GB mark outright with your settings, ~50fps gameplay has consistent frametimes with little variance, bouncing between ~15-25ms of render time as you'd expect, sometimes a little more or less.

The ~38fps run that passes the 3.5GB VRAM mark, however, ends up with render times constantly swinging between ~35ms and 150ms per frame, with many spikes over 200ms.
 
Hmm, that's interesting.

Curious: (supposing) if the drivers are working hard swapping data in and out of main VRAM to compensate, is this using any additional CPU resources? Or is there too much fluctuation already to measure?

Don't think CPU is involved (much), probably some sort of DMA transfer going on.
 
I checked some of the logs I had from when I was using the 4K monitor (I unfortunately didn't have frametime logging enabled at that point) and couldn't find sessions where I had topped the 3.5GB mark during gameplay with my normal settings to hold 60fps+ in BF4. So, at least as far as the issues that had been annoying me with general motion there, I doubt it was due to the cards but rather the small input lag and 60Hz refresh, etc. (in comparison to my X-Star DP2710 OC'd). Those annoyances, at least, I am pretty certain weren't due to this whole 970 segmentation problem... a bit of a comfort, all things considered, that I didn't dump the monitor for reasons that had nothing to do with it :).
 
I couldn't say, honestly, but my CPU usage isn't more than ~70% average across the cores during gameplay typically, so presumably there's plenty of headroom on that front.

Just to elaborate a little (copying another forum post I wrote): even with a similar framerate, frametimes get completely torpedoed once you pass the 3.5GB threshold. For example, that graph was ~38fps, but if you get below the 3.5GB mark outright with your settings, ~50fps gameplay has consistent frametimes with little variance, bouncing between ~15-25ms of render time as you'd expect, sometimes a little more or less.

The ~38fps run that passes the 3.5GB VRAM mark, however, ends up with render times constantly swinging between ~35ms and 150ms per frame, with many spikes over 200ms.

Hmm, any extra CPU use from the driver would get hidden in the jumble unless it was considerable, I guess.

So a very, very rough calculation: with the secondary memory segment running at 1/7 speed, the ~25ms variance times a 7x speed penalty comes out at roughly +/- 175ms, which is approximately what you are seeing. Looks spot on.
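For anyone wondering where that 1/7 figure comes from: assuming the 7+1 memory-controller split Nvidia described (seven 32-bit controllers behind the 3.5GB segment, one behind the 0.5GB segment, 7Gbps GDDR5), the back-of-the-envelope arithmetic looks like this:

# Rough arithmetic behind the "1/7 speed" figure, assuming the 7+1
# memory-controller split Nvidia described for the GTX 970.
GDDR5_RATE_GBPS = 7.0        # effective data rate per pin, in Gbit/s
CONTROLLER_WIDTH_BITS = 32   # width of one memory controller

def segment_bandwidth(controllers):
    # Peak bandwidth of a memory segment in GB/s.
    return controllers * CONTROLLER_WIDTH_BITS * GDDR5_RATE_GBPS / 8

fast = segment_bandwidth(7)      # 3.5GB segment -> 196.0 GB/s
slow = segment_bandwidth(1)      # 0.5GB segment -> 28.0 GB/s
print(fast, slow, fast / slow)   # 196.0 28.0 7.0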
 
Don't think CPU is involved (much), probably some sort of DMA transfer going on.

Curious where the heuristics that determine what gets swapped out of main memory get calculated, or whether it's just the oldest information.
 
Curious where the heuristics that determine what gets swapped out of main memory get calculated, or whether it's just the oldest information.

That could possibly take some cycles, but I think the issue there is more about getting it right for each specific scenario, and it practically reduces 1/8 of the GPU RAM to a form of L3 cache instead of truly usable VRAM.
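Purely as an illustration of the "just the oldest information" idea (nobody outside Nvidia knows what the driver actually does; its placement heuristics are undocumented), an oldest-first/LRU policy for that slow 512MB segment could be sketched like this:

from collections import OrderedDict

# Toy illustration only of an LRU ("evict the oldest") policy, NOT what
# Nvidia's driver actually does -- its placement heuristics are undocumented.
class SlowSegment:
    def __init__(self, capacity_mb=512):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.entries = OrderedDict()   # allocation name -> size in MB

    def touch(self, name):
        # Re-accessing an allocation marks it as recently used.
        if name in self.entries:
            self.entries.move_to_end(name)

    def demote(self, name, size_mb):
        # Park an allocation in the slow segment, pushing out the stalest
        # entries (e.g. back to system RAM) when there is no room left.
        while self.used_mb + size_mb > self.capacity_mb and self.entries:
            _, evicted_size = self.entries.popitem(last=False)  # oldest first
            self.used_mb -= evicted_size
        self.entries[name] = size_mb
        self.used_mb += size_mb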


I checked some of the logs I had from when I was using the 4K monitor (I unfortunately didn't have frametime logging enabled at that point) and couldn't find sessions where I had topped the 3.5GB mark during gameplay with my normal settings to hold 60fps+ in BF4. So, at least as far as the issues that had been annoying me with general motion there, I doubt it was due to the cards but rather the small input lag and 60Hz refresh, etc. (in comparison to my X-Star DP2710 OC'd). Those annoyances, at least, I am pretty certain weren't due to this whole 970 segmentation problem... a bit of a comfort, all things considered, that I didn't dump the monitor for reasons that had nothing to do with it :).

Silver linings :)
 
Did a little testing of my own using Afterburner's frametime readings and other monitoring tools... it's not FCAT, but it's very accurate regardless. Here's what I got...

So yeah, using SLI GTX 970s to drive high-res, high-settings gaming will result in massive, massive frametime issues, even if the framerate over a given second remains reasonable. It is basically an unplayable mess at that point when using 3.7-4.0GB of VRAM. If you can stay around/below 3.5GB of actual usage, which it does its best to do, frametimes are consistent and tight, as you would expect. The framerate averaged around 38, meaning in a perfect world the frametimes would be right around 26.3ms for each frame.

As an interesting aside, when finding my settings to test with I noticed it would literally, over the course of several seconds, try to work its way back down to below 3.5GB of usage if it went over, until I set things high enough that it couldn't and it would just stick at 3.7-3.8GB+ the whole time. Otherwise it would fight and keep ping-ponging from ~3.4GB directly to ~3.7GB and back repeatedly before finally settling at ~3.4GB. That's probably the drivers at work, there.

Wouldn't running at 30fps naturally cause stutters and un-smooth gameplay? I mean, it has for me on any card I have ever used. What I mean is, aren't we just saying "hey guys, if you crank this game up to 4K and use over-the-top amounts of AA, it will really get laggy and lower your FPS!"? Or is it something entirely different?
 
Wouldn't running at 30fps naturally cause stutters and un-smooth gameplay? I mean, it has for me on any card I have ever used. What I mean is, aren't we just saying "hey guys, if you crank this game up to 4K and use over-the-top amounts of AA, it will really get laggy and lower your FPS!"? Or is it something entirely different?

30FPS will cause stuttering because there is no "perfect world" (per the quote you yourself quoted) where frame times are all the same average value.

Average framerate is nearly useless in determining this. We should care about the worst-case scenarios, the individual frames where frame rendering time is highest.
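To make that concrete with a toy example (the numbers below are made up, not measurements): two runs can both average ~60fps over a second and still feel completely different once you look at the individual frame times.

import math

# Toy, made-up numbers: two runs that both average ~60 FPS over one second.
smooth = [16.7] * 60               # frame times in milliseconds
spiky  = [10.0] * 59 + [410.0]     # 59 fast frames plus one huge hitch

def summarize(name, frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    p99 = sorted(frame_times_ms)[math.ceil(0.99 * len(frame_times_ms)) - 1]
    print("%s: avg %.0f fps, 99th percentile %.0f ms, worst %.0f ms"
          % (name, avg_fps, p99, max(frame_times_ms)))

summarize("smooth", smooth)   # avg 60 fps, 99th percentile 17 ms, worst 17 ms
summarize("spiky", spiky)     # avg 60 fps, 99th percentile 410 ms, worst 410 ms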
 
Hey guys... I got the MSI GTX 970 4GB Golden Edition last week along with a brand new 4790K CPU + 16GB RAM. The entire system cost me around $1600... I was testing this rig running 3DMark and noticed that during the Fire Strike scene (during the fight) my PC kind of slows down and becomes sluggish... Granted, I don't have a new monitor, since it's only a 23-inch Dell 1080p LCD panel... But could that slowness be caused by the card addressing the other portion of the memory? I'm just wondering about that since I just read this post. Thanks.

Also have horrible coil whine in the Ice Storm scene and some in Cloud Gate... The others are fine.
 
Did a little testing of my own using Afterburner's frametime readings and other monitoring tools... it's not FCAT, but it's very accurate regardless. Here's what I got...

[frametime graph]

So yeah, using SLI GTX 970s to drive high-res, high-settings gaming will result in massive, massive frametime issues, even if the framerate over a given second remains reasonable. It is basically an unplayable mess at that point when using 3.7-4.0GB of VRAM. If you can stay around/below 3.5GB of actual usage, which it does its best to do, frametimes are consistent and tight, as you would expect. The framerate averaged around 38, meaning in a perfect world the frametimes would be right around 26.3ms for each frame.

As an interesting aside, when finding my settings to test with I noticed it would literally, over the course of several seconds, try to work its way back down to below 3.5GB of usage if it went over, until I set things high enough that it couldn't and it would just stick at 3.7-3.8GB+ the whole time. Otherwise it would fight and keep ping-ponging from ~3.4GB directly to ~3.7GB and back repeatedly before finally settling at ~3.4GB. That's probably the drivers at work, there.

Do not test in SLI as MultiGPU frametimes are inconsistent due to developer coding or SLI/Crossfire support. This is already a known issue with many titles and SLI/Crossfire. Test single GPU and see what you get for frametimes.
 
Do not test in SLI as MultiGPU frametimes are inconsistent due to developer coding or SLI/Crossfire support. This is already a known issue with many titles and SLI/Crossfire. Test single GPU and see what you get for frametimes.

It's an issue well-known to some but IMO it is worth bringing up now and then because there will always be someone considering SLI who doesn't know about this.

The vast majority of users shouldn't even be considering SLI and should instead be going for a GTX 980 or whatever. Too many people don't realize how complex SLI/CF are (meaning there are issues like these) and think it is a "perfect" way to spread out your GPU purchase by buying one mediocre card now and another later instead of just buying the card that suits your needs now. Even if they can't afford a GTX 980 now, they should instead buy what they can afford now, SELL that card when they can afford better, and buy a better card at that time. 10 years ago, if someone came on here and told us they were having CPU performance issues, we'd have told them to upgrade their PC, not run some Beowulf cluster.
 
Do not test in SLI as MultiGPU frametimes are inconsistent due to developer coding or SLI/Crossfire support. This is already a known issue with many titles and SLI/Crossfire. Test single GPU and see what you get for frametimes.

But if it were only an SLI issue, there would be no such difference between <3.5GB and >3.5GB.
 