HardOCP looking into the 970 3.5GB issue?

Referring to the GTX 970 customers as "teh innernetz" is annoying, and comparing this to your GTX 570 issue is not the same thing. This has blown up because Nvidia rushed these cards out and tried to patch the issue with a quick fix.

They released the GTX 970 with dual 6-pin power. TONS of customers were complaining about the cards downclocking under stress because the GPU wasn't getting the power it needed, especially the customers who spent extra on the factory-overclocked versions. So they released a new version of the GTX 970 with an 8-pin and a 6-pin connector to allow more power phases for the GPU and alleviate the throttling. Those cards are now pretty much "out of stock" at most retailers.

Now we have people who can't fully utilize the 4GB of memory without it stuttering. It could be something to do with the new power phases... It could be something with quite a few batches of VRAM chips... or it could be something with the inductor coil whine that we are hearing so much about. That coil whine could be a sign of cheap inductors or some other poor-quality component. I really don't know... but I do know one thing: I hate having to troubleshoot my hardware and tweak things just to get it to work as intended when I spent a considerable amount of cash to play my video games and have fun... not to be stressed out wondering if my hardware is going to get a firmware, driver or replacement fix.

Looks like the red team just dropped prices again across the board. You can now get a Sapphire Tri-X or MSI Lightning R9 290X for $350. They are definitely taking advantage of this situation. My Sapphire R9 290X Tri-X pretty much stays at the same temps as the GTX 970s I had... and the performance is solid and so are the drivers. I'm voting with my wallet now. Nvidia may be in my future... later on, but for now, they are off my spending list.

People complain about everything. There hasn't been a single video card released by NVIDIA or AMD in the past 300 years that somebody hasn't found something to complain about.

I don't fault anybody for returning their 970s and going AMD. I just think we're barely looking at a molehill here and people are getting spun up. But if a 290X works better for your needs, then by all means go ahead. I just think you shouldn't try to convince people who are otherwise satisfied that they're sitting on defective hardware unless you have some kind of reasonable proof.


In this forum -- here, on H -- we have heard people calling for class action lawsuits, product recalls (recalls are for like fatal baby cribs, guys), and congressional investigations.... all because NVIDIA made a hardware decision to segment GPU memory, even though at this time we have yet to ascertain if there is any actual relevant impact from this design.

Don't you think there could be an over-reaction here? And isn't there a concerted effort out there to fan these flames to try to turn a spark into a raging fire?
 
all because NVIDIA made a hardware decision to segment GPU memory, even though at this time we have yet to ascertain if there is any actual relevant impact from this design.

Don't you think there could be an over-reaction here? And isn't there a concerted effort out there to fan these flames to try to turn a spark into a raging fire?

if Nvidia made a hardware design decision then they needed to let customers know the impact prior to release...otherwise it is implied that the product works the same as all previous cards...you cannot put the gas pedal where the brake pedal is and not tell anyone
 
They released the GTX 970 with dual 6-pin power. TONS of customers were complaining about the cards downclocking under stress because the GPU wasn't getting the power it needed, especially the customers who spent extra on the factory-overclocked versions. So they released a new version of the GTX 970 with an 8-pin and a 6-pin connector to allow more power phases for the GPU and alleviate the throttling.
This is utter nonsense. The only cards available at launch were using OEM PCB designs (mostly PCBs reused from the 770). The only 'reference' designs available are manufactured by Manli on 980 PCBs. Those also use dual 6-pin power connectors.
I really don't know
To be blunt: speculating on the potential impact of a hardware design decision without knowing what you're talking about is not helpful, and neither is repeating incorrect information you have not verified yourself.


I suspect there's going to be a lot of surprise when people realise how many GPU cores have an uneven distribution of component functional blocks, and just how little impact this has on real-world performance.
 
People complain about everything. There hasn't been a single video card released by NVIDIA or AMD in the past 300 years that somebody hasn't found something to complain about.

I don't fault anybody for returning their 970s and going AMD. I just think we're barely looking at a molehill here and people are getting spun up. But if a 290X works better for your needs, then by all means go ahead. I just think you shouldn't try to convince people who are otherwise satisfied that they're sitting on defective hardware unless you have some kind of reasonable proof.


In this forum -- here, on H -- we have heard people calling for class action lawsuits, product recalls (recalls are for like fatal baby cribs, guys), and congressional investigations.... all because NVIDIA made a hardware decision to segment GPU memory, even though at this time we have yet to ascertain if there is any actual relevant impact from this design.

Don't you think there could be an over-reaction here? And isn't there a concerted effort out there to fan these flames to try to turn a spark into a raging fire?

It's only an over-reaction if we find out the end-user result doesn't suffer because of this, and I am not convinced of that yet. There is plenty of conflicting info out there. I think Nvidia never mentioning this before is very bad, and they deserve to be hit for it. As for how the 970 itself is, let's see a good, trusted review showing frame skips / FCAT and see what the result really is in a real-world scenario.
 
Absolutely wrong dude, sorry.

Sorry, but I'm absolutely correct.

They're showing that the 980, which doesn't have this "special" type of memory config,

I'm not so sure about that.

drops performance by the same amount as the 970 when settings are turned up to use above 3.5GB

Indeed, but they're also showing that the performance of the 970 drops and that is on what I was commenting. I could have put the link in a 980 thread with much the same comment.

Both cards have problems with > 3.5 GB, but this thread is about the 970, not the 980.
 
People complain about everything. There hasn't been a single video card released by NVIDIA or AMD in the past 300 years that somebody hasn't found something to complain about.

I don't fault anybody for returning their 970s and going AMD. I just think we're barely looking at a molehill here and people are getting spun up. But if a 290X works better for your needs, then by all means go ahead. I just think you shouldn't try to convince people who are otherwise satisfied that they're sitting on defective hardware unless you have some kind of reasonable proof.


In this forum -- here, on H -- we have heard people calling for class action lawsuits, product recalls (recalls are for like fatal baby cribs, guys), and congressional investigations.... all because NVIDIA made a hardware decision to segment GPU memory, even though at this time we have yet to ascertain if there is any actual relevant impact from this design.

Don't you think there could be an over-reaction here? And isn't there a concerted effort out there to fan these flames to try to turn a spark into a raging fire?

I guess the lawsuit that Red Bull lost over "giving you wings" was a mystical story in wonderland, and they never had to actually pay $13 million to its consumers... You may think it's ridiculous, but things like that are happening because marketing is getting out of control with its claims. False advertising does get punished.
http://www.nbcnews.com/business/consumer/red-bull-drinkers-can-claim-10-over-gives-you-wings-n221901


This is utter nonsense. The only cards available at launch were using OEM PCB designs (mostly PCBs reused from the 770). The only 'reference' designs available are manufactured by Manli on 980 PCBs. Those also use dual 6-pin power connectors.
To be blunt: speculating on the potential impact of a hardware design decision without knowing what you're talking about is not helpful, and neither is repeating incorrect information you have not verified yourself.


I suspect there's going to be a lot of surprise when people realise how many GPU cores have an uneven distribution of component functional blocks, and just how little impact this has on real-world performance.

Oh really now? I guess this isn't a dual 6 pin card :rolleyes:
http://www.evga.com/Products/Specs/GPU.aspx?pn=d8328514-f9bc-44aa-ae85-d50c9f433297

EVGA isn't the only company with those original dual 6 pin connectors on their aftermarket designs either.

If you want to play the fact game, start providing sources with links.

My speculation has no effect from an engineering standpoint; it's just ideas I'm tossing out there. I never said I was a PCB designer, or claimed to be. I just don't see why people are taking it personally, like Nvidia is part of their family... defending them like someone just insulted their mom.
 
if Nvidia made a hardware design decision then they needed to let customers know the impact prior to release...otherwise it is implied that the product works the same as all previous cards...you cannot put the gas pedal where the brake pedal is and not tell anyone

Huh? "Needed to?" By that logic, NVIDIA needs to divulge exactly how they improve architectural efficiency year after year at a low transistor level, opening the door to their intellectual property that they spent millions developing and testing to their competitor for free.

Just like Coca-Cola isn't obligated to tell you anything if they change their recipe slightly under the hood. In fact doing so can lead to placebo effect largely killing off sales unfairly.

NVIDIA really doesn't need to say anything. The impact you're concerned about would be captured in reviews. That is THE ENTIRE POINT of a review community existing... Find any and all flaws and dig deep with questions. No one caught it because it doesn't change the performance story.

Let's be honest here. What we get for our money is performance seen in the benchmarks made by reviewers. Any architectural details or implementation details or driver information beyond that is a courtesy provided by NVIDIA/AMD/Intel. It's often provided to show the value their engineers are capable of providing to the world in a general way.

It is not up to us as customers to decide the best path forward. If we have problems, sure, return the thing or sell it or whatever.

The gas pedal analogy is bogus... GPU video memory is not a manual, human-operated control that directly affects real-world safety. Video memory is driver- and OS-controlled/managed and always has been.



Example of a similar situation with a more reasonable conclusion:
I recently wanted to buy a new Dyson vacuum. I wanted the top of the line, which they just released a new version of at CES. Turns out the new one actually has 25% less suction power due to a less powerful motor than my old one. That spec was buried somewhere on a spec sheet and not mentioned in marketing materials. If I were impulsive I would've sworn them off and looked elsewhere. Instead I did research and applied common sense.

The price was more than what I paid for my old one, and I found a thread with someone voicing a similar concern about the thing having less suction power. Dyson pointed out that a new efficiency improvement removes the need for a pre-filter, and since a lot of suction is lost trying to suck air through a pre-filter, this filterless model can have more effective suction with less initial suction at the source. It ends up with net better cleaning power than the old version, despite less suction at the source.

I'd be an idiot to second guess this.... It makes perfect sense. They stated it has better cleaning power, and I am happy.
 
PCPer reports that Nvidia have responded.

[Nvidia's statement, with a benchmark table comparing GTX 970 and GTX 980 frame rates below and above 3.5GB of VRAM usage]


Basically, the 970 suffers a major performance hit when using more than 3.5 GB RAM.

the chart shows that the 970 with the separate memory configuration suffers from pretty much the same performance hit as the 980 which doesn't have the separate 512MB segment...but why do both the 970 and 980 suffer from major fps drops when accessing more than 3.5GB of VRAM?
 
Sorry, but I'm absolutely correct.



I'm not so sure about that.



Indeed, but they're also showing that the performance of the 970 drops and that is on what I was commenting. I could have put the link in a 980 thread with much the same comment.

Both cards have problems with > 3.5 GB, but this thread is about the 970, not the 980.

Wow... Incredibly thick statement here man. Please be smarter than this.

You're now saying that the 980 is affected, when nobody is seeing the same memory results in the CUDA benchmark, and no one is complaining about stuttering or anything wrong with the 980?

This is just plain wrong. When you turn up settings for graphical quality your framerate drops. That's what they changed to achieve the memory usage amounts above and below the 3.5GB threshold in question.

The GPUs are doing more work in the lower performing cases... Hence the lower performance for both 970 and 980. But adding this extra work *also* increased the memory load, which was the key thing they wanted to test. If 970 lost significantly more performance than the 980, it would be indicative of a problem with this memory configuration.

Since the 970 *didn't* drop any more when settings were turned up, there is no impact on 970 performance caused by its different memory configuration.

Do you understand now?
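To put hypothetical numbers on it (these are illustrative, not from any review): say the heavier settings take a 980 from 60 fps down to 45 fps, a 25% drop, and take a 970 from 50 fps down to 37.5 fps, also 25%. Matched relative drops mean the extra GPU work explains the loss. If the segmented memory were the culprit, the 970 would lose disproportionately more the moment it crossed 3.5GB, and that's not what the numbers show.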
 
but why do both the 970 and 980 suffer from major fps drops when accessing more than 3.5GB of VRAM?

Increased resolution and/or settings = lower performance.

Isn't that why you buy new GPUs?

The key point is that the 970, which is in question, didn't drop any more performance than 980 when work was increased and the memory above 3.5GB was used.
 
I keep seeing that Official nV table, and can't help but think that BOTH the 70 AND 80 have issues with RAM >3500, and I have to laugh when folks that love nV say "See, there's the official answer from nV, nothing's wrong"....as if they are without question being 100% truthful.

You can bet they have been under a fire drill over this fiasco since it broke, meetings/conference calls/etc...this little response is nothing, just a way to try and stem the tide while they figure out where they put that damn fire extinguisher.
 
I keep seeing that Official nV table, and can't help but think that BOTH the 70 AND 80 have issues with RAM >3500, and I have to laugh when folks that love nV say "See, there's the official answer from nV, nothing's wrong"....as if they are without question being 100% truthful.

You can bet they have been under a fire drill over this fiasco since it broke, meetings/conference calls/etc...this little response is nothing, just a way to try and stem the tide while they figure out where they put that damn fire extinguisher.

So far only the 970 seems to have any issues. Unless I missed something and the 980 has issues too? If not - I would ask that you stop spinning this.

My guess is that there will be FCAT tests that show added frametime variance when using the last 500 MB of VRAM. I'm actually pretty disappointed that Nvidia, in their explanation, whipped out the average frame rate as proof there's nothing wrong with the cards, when a couple of years ago they were the ones touting that frame rates don't tell the whole story as they exposed AMD's Crossfire frame time issues.

My other guess is that if there is something wrong with the cards, then it's something that cannot be fixed with a BIOS update or a driver. Who knows? Maybe they'll enable yet another SMX in a second revision of the 970? I really hope [H] and other review sites do some serious FCAT testing to uncover what's going on here.
 
the chart shows that the 970 with the separate memory configuration suffers from pretty much the same performance hit as the 980 which doesn't have the separate 512MB segment...but why do both the 970 and 980 suffer from major fps drops when accessing more than 3.5GB of VRAM?

I'm guessing it's because Super Sampling is enabled. Even in 2015, enabling Super Sampling is no joke.
 
So far only the 970 seems to have any issues. Unless I missed something and the 980 has issues too? If not - I would ask that you stop spinning this.

My guess is that there will be FCAT tests that show added frametime variance when using the last 500 MB of VRAM. I'm actually pretty disappointed that Nvidia, in their explanation, whipped out the average frame rate as proof there's nothing wrong with the cards, when a couple of years ago they were the ones touting that frame rates don't tell the whole story as they exposed AMD's Crossfire frame time issues.

My other guess is that if there is something wrong with the cards, then it's something that cannot be fixed with a BIOS update or a driver. Who knows? Maybe they'll enable yet another SMX in a second revision of the 970? I really hope [H] and other review sites do some serious FCAT testing to uncover what's going on here.

Folks have tested both cards using Nai's test, and I've seen 980 tests that show similar behavior after 3500. I was under the impression that ONLY the 970 had an issue and that the 980 didn't have any issue accessing ALL 4GB with the re-allocation/swap/whatever nonsense they are doing with the 970.
If for some reason the 980 does something similar I don't quite understand why it would.

Either way, I'm waiting for the FCAT, and I'm betting $ that top review sites already know the answer....they are waiting for nV to speak up 1st(or get explicit permission/wording/tables from nV), so they don't bite the hand that feeds them, obviously making it more difficult to do business.
I'm in management, I understand the process when something goes wrong....it's a damn fire drill.
 
so now that Nvidia has responded and explained things in a way that makes sense, is this much ado about nothing or is there something more?...what happens in the future when games start to hit the 4GB VRAM threshold without turning up every advanced graphics setting?
 
Folks have tested both cards using Nai's test, and I've seen 980 tests that show similar behavior after 3500. I was under the impression that ONLY the 970 had an issue and that the 980 didn't have any issue accessing ALL 4GB with the re-allocation/swap/whatever nonsense they are doing with the 970.
If for some reason the 980 does something similar I don't quite understand why it would.

Either way, I'm waiting for the FCAT, and I'm betting $ that top review sites already know the answer....they are waiting for nV to speak up 1st(or get explicit permission/wording/tables from nV), so they don't bite the hand that feeds them, obviously making it more difficult to do business.
I'm in management, I understand the process when something goes wrong....it's a damn fire drill.

Reread some of the posts because some folks here have explained and linked to other forums where people talk about the benchmark in question. Apparently (this is my non-technical stab at an explanation), CUDA is to blame for the benchmark results. Agreed on the FCAT results.

Does anyone else on these boards have access to such software, or is it stupid expensive to get?

EDIT: The software I'm asking about is FCAT. Does anyone on this board have a copy of FCAT and a 970, and some time on their hands? :)
 
so now that Nvidia has responded and explained things in a way that makes sense, is this much ado about nothing or is there something more?...what happens in the future when games start to hit the 4GB VRAM threshold without turning up every advanced graphics setting?

The same thing that always happens when games use more RAM than cards have. You turn down the settings or upgrade.
 
so now that Nvidia has responded and explained things in a way that makes sense, is this much ado about nothing or is there something more?...what happens in the future when games start to hit the 4GB VRAM threshold without turning up every advanced graphics setting?

They have not explained things completely. All they showed was a chart showing that both the 970 and 980 have similar performance loss when the settings are cranked up, as measured by frame rates. But as they (Nvidia) have touted in the past - frame rates don't tell the whole story. Just like AMD Crossfire showed good frame rates but still appeared to be stuttering, the same may be true of the 970 when pushed to extreme settings.

And honestly, it's in this "explanation" that things don't smell right to me. Because I'll emphasize again - Nvidia used FCAT to expose Crossfire's deficiency in frame times and used it as a way to sell the superiority of their SLI solution. In all fairness, SLI at the time was definitely smoother than Crossfire. And again - they kept repeating that frame rates don't always tell the whole story.

Now, when users are claiming the same kind of stuttering (more or less) occurs on the 970, Nvidia just whips up a few frame rate numbers and says "See? It's working right!" When we all know that you can have shitty frame time variance while showing high frame rates.
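To make the frame-rate-vs-frame-time point concrete, here's a minimal C++ sketch with made-up frame times (nothing measured from a 970) showing how a run can post a healthy-looking average FPS while still hitching hard. This per-frame detail is exactly what FCAT surfaces and an average hides:

// Minimal sketch: why average FPS can hide stutter (made-up frame times).
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

int main() {
    // Mostly 15 ms frames with a 60 ms spike every 10th frame -- the kind
    // of per-frame pattern FCAT would show, and an average would bury.
    std::vector<double> ft;
    for (int i = 0; i < 60; ++i)
        ft.push_back(i % 10 == 9 ? 60.0 : 15.0);

    const double total = std::accumulate(ft.begin(), ft.end(), 0.0);
    const double avgFps = 1000.0 * ft.size() / total;

    std::sort(ft.begin(), ft.end());
    const double p99 = ft[static_cast<size_t>(ft.size() * 0.99)];

    // Prints roughly: avg fps 51.3 -- looks fine -- but the 99th-percentile
    // frame time is 60 ms, i.e. visible hitching several times a second.
    std::printf("avg fps: %.1f, 99th percentile frame time: %.1f ms\n", avgFps, p99);
    return 0;
}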
 
so now that Nvidia has responded and explained things in a way that makes sense is this much ado about nothing or is there something more?...what happens in the future when games start to hit the 4GB VRAM threshold without turning up every advanced graphics setting?

Games are hitting 3.5GB of memory usage en masse now with 4K screens.

Simple fix: Turn the resolution down to 640x480. Problem solved. :D

NV will have a software fix soon though. I wonder if there will be a [H] investigation of this?
 
The same thing that always happens when games use more RAM than cards have. You turn down the settings or upgrade.

You're not supposed to experience micro-stuttering when you're still within the physical limit of the on-board RAM.
 
EDIT: The software I'm asking about is FCAT. Does anyone on this board have a copy of FCAT and a 970, and some time on their hands? :)

I thought FCAT was open source? But doesn't it require some kind of special hardware?

Either way, PCPer says they're going to be doing an FCAT analysis and will be working on it tomorrow.
 
Reread some of the posts because some folks here have explained and linked to other forums where people talk about the benchmark in question. Apparently (this is my non-technical stab at an explanation), CUDA is to blame for the benchmark results. Agreed on the FCAT results.

Does anyone else on these boards have access to such software, or is it stupid expensive to get?

I've read them, probably 80% of what's going on in the top forums; that's how I saw the 980 results.....if it's a weird quirk then fine, but this being brought to light has made many folks test vigorously, and they're coming to conclusions on WHY they were (already) getting awful stuttering.
As I said, maybe the intent/bench was flawed after 3500, but it showed a flaw that folks weren't aware of, so it was actually a revealer of sorts in the end.
Guys were ALREADY having issues with their cards, yet most folks said "Turn down settings" blah blah blah....for what? When there is RAM left and the GPU still has enough horsepower, there shouldn't be a stutterfest.

That's what it seems to me at least, and I eagerly await the results that nV is going to produce/allow others to produce.

You can bet that AMD is paying attention and running tests, I'm wondering when they will dip their toe in and tell reviewers to use "This" to test like nV did to them regarding the Frametime issue with XF.
Maybe they won't, but they should, IF there really is an issue.

I love shit like this, this stuff has been going on for years, and most of the time it's pretty amusing watching fanboys scramble to eradicate the folks that DARE to say anything UNWORTHY of their GOD!

I bashed the fuck outta AMD for the Brilinear garbage many years ago, and the Bulldozer/8150(and to a lesser extent their PD chips- which I won't buy).... shit was entertaining as hell. HAHAHAHA
 
I'd be an idiot to second guess this.... It makes perfect sense. They stated it has better cleaning power, and I am happy.

No, you would not be an idiot to second guess it. Many things that 'make perfect sense' are not true. After using the new vacuum cleaner, does it INDEED clean better or not? THAT is what is important.
 
I love shit like this, this stuff has been going on for years, and most of the time it's pretty amusing watching fanboys scramble to eradicate the folks that DARE to say anything UNWORTHY of their GOD!

I bashed the fuck outta AMD for the Brilinear garbage many years ago, and the Bulldozer/8150(and to a lesser extent their PD chips- which I won't buy).... shit was entertaining as hell. HAHAHAHA

Yes, you definitely strike me as somebody who loves to bash things out of ignorance. But I'm not sure why you're bragging about it and why you're dismissing people who choose to actually engage their brains as mere "fanboys".

To reiterate, we do not yet know what the performance impact of this segmented memory is. Even though there are people running around out there with their hair on fire insisting that memory access to the 0.5 GB portion is slower than PCIe. Just think about it for a moment... are we really to believe that it is faster to make a request across PCIe, to the CPU and then to the RAM and then transfer that data all the way back, than it is to just access the 7000 MHz memory that's a few centimeters away on the PCB? Does that make any sense at all?

I do find it believable that that smaller memory portion could be slower due to the design... maybe 5, 10 or 15% slower than the rest. But if it's that little, I would expect a negligible difference in performance.

But I don't know. Nobody knows. You don't know. We need to find facts. But you seem to be interested in racking up points against nvidia, for whatever reason.
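For rough scale, using spec-sheet arithmetic rather than measurements: GDDR5 at 7 Gbps effective on a 256-bit bus works out to 256/8 x 7 = 224 GB/s, while PCIe 3.0 x16 tops out around 16 GB/s theoretical. A round trip over PCIe to system RAM and back being faster than on-board memory makes no sense; the only open question is how much slower that 0.5 GB segment is than the rest of the VRAM.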
 
The all new Nvidia GTX 970 4GB. Now with only 3.5GB at the 4GB price.
 
Yes, you definitely strike me as somebody who loves to bash things out of ignorance. But I'm not sure why you're bragging about it and why you're dismissing people who choose to actually engage their brains as mere "fanboys".

To reiterate, we do not yet know what the performance impact of this segmented memory is. Even though there are people running around out there with their hair on fire insisting that memory access to the 0.5 GB portion is slower than PCIe. Just think about it for a moment... are we really to believe that it is faster to make a request across PCIe, to the CPU and then to the RAM and then transfer that data all the way back, than it is to just access the 7000 MHz memory that's a few centimeters away on the PCB? Does that make any sense at all?

I do find it believable that that smaller memory portion could be slower due to the design... maybe 5, 10 or 15% slower than the rest. But if it's that little, I would expect a negligible difference in performance.

But I don't know. Nobody knows. You don't know. We need to find facts. But you seem to be interested in racking up points against nvidia, for whatever reason.

LMFAO- FANBOY ALERT FANBOY ALERT!

Listen man, I'm here having fun posting shit that is ALL over the web, if you don't like it TOO FKN BAD, don't bash me because you have constipation over this.
There are MANY people having issues with their 970s who were complaining vigorously BEFORE this testing came to the front, only to be told the old standby "Turn the settings down", which is bullshit when there is RAM AND GPU power left.

I just told you I bashed AMD when they screwed the pooch, isn't that good enough for you, or do you need my credit card # so you can order a Titan to relieve your constipation?

You crack me up man.....Booofknhoooooooooooo :p
 
I do find it believable that that smaller memory portion could be slower due to the design... maybe 5, 10 or 15% slower than the rest. But if it's that little, I would expect a negligible difference in performance.

I don't think it's just that simple though. If both blocks of RAM are running at the same clock speed and the second block is still slower than the first block, then something else on the hardware level is causing the second block to be slower. This means that the two blocks will always be out of sync with one another, and it could make the last 500 MB useless.

Nvidia's explanation about the driver is very telling. The driver intelligently puts everything that it can in the first 3.5 gigabytes of RAM. Now... Why would it do that? Unless there's a reason... A reason they're not telling us.

I'm no systems engineer. But can someone answer this question:

If the situation was different. If this was a 4GB video card with 2GB of fast RAM and 2GB of slow RAM - what would need to happen so that I don't see any micro stuttering? Would the fast RAM need to be clocked down to match the slow RAM, so that there's no timing issues?
 
I don't think it's just that simple though. If both blocks of RAM are running at the same clock speed and the second block is still slower than the first block, then something else on the hardware level is causing the second block to be slower. This means that the two blocks will always be out of sync with one another, and it could make the last 500 MB useless.

Nvidia's explanation about the driver is very telling. The driver intelligently puts everything that it can in the first 3.5 gigabytes of RAM. Now... Why would it do that? Unless there's a reason... A reason they're not telling us.

I'm no systems engineer. But can someone answer this question:

If the situation was different. If this was a 4GB video card with 2GB of fast RAM and 2GB of slow RAM - what would need to happen so that I don't see any micro stuttering? Would the fast RAM need to be clocked down to match the slow RAM, so that there's no timing issues?

I think Nvidia either used slower memory or simply doesn't have the bandwidth capability to fully support 4GB of VRAM on the 970. However, from what Nvidia has said and what some intelligent users have found, it seems that the extra 512MB is being used as a cache of sorts. Just like with your CPU, L1->L2->L3->RAM, the access times get slower as you go down the hierarchy in the memory subsystem.

What is really appalling though is the sheer level of ignorance and idiocy being posted on the "top" online forums. Too many people have no clue of what is going on and are completely oblivious to the real "issue".

For one, the CUDA benchmark does not function as intended, as stated by the creator himself (I speak German; that's what his post says, no translation errors). There seems to be a fault in the way CUDA allocates the last 512MB, always using system memory (RAM) instead of the onboard VRAM.

Second, even though the 970/980 are powerful cards, if you crank up the settings they will still struggle - duh. Personally, I still think the industry is pairing too much vRAM with GPUs which are just not powerful enough to provide playable framerate in any situation that would warrant such quantities of vRAM (outside of SLI/crossfire). A good example of a GPU with this problem is the HD2900, massive memory bandwidth (anyone else recall those reviews? Insane memory bandwidth for the time); however, the GPU was too slow for that memory config to provide any benefit whatsoever.

moar vRAM != instant higher framerate across the board.
Basically, if you do not have enough VRAM, the game will run at ~1 FPS, or at whatever rate the latency of the GPU swapping blocks back and forth from system memory allows.

-- As for your question: no, there's nothing you could do, short of making the memory allocation algorithm more efficient or behave differently.
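For those who haven't looked at what Nai's benchmark actually does, conceptually it's just a VRAM stepper. Below is a stripped-down sketch of the same idea; to be clear, this is NOT Nai's code, and the kernel, chunk size and launch configuration are simplified assumptions of mine. It also makes the caveat above concrete: the code never chooses where a chunk physically lands, the CUDA allocator does, which is why the results for the last chunks are suspect.

// Rough sketch of a Nai-style VRAM bandwidth stepper (not Nai's actual code;
// kernel, chunk size and launch configuration are simplified assumptions).
// It grabs VRAM in 128 MB chunks, then times a read kernel on each chunk.
// Caveat: the CUDA allocator decides *where* each chunk lands, so a "slow"
// chunk may not even be in VRAM at all.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void readKernel(const float* data, size_t n, float* sink) {
    float acc = 0.0f;
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        acc += data[i];
    if (acc == -1.0f) *sink = acc;  // never taken; keeps the reads from being optimized away
}

int main() {
    const size_t chunkBytes = 128u << 20;  // 128 MB per chunk
    const size_t n = chunkBytes / sizeof(float);
    float* sink = nullptr;
    cudaMalloc(&sink, sizeof(float));

    std::vector<float*> chunks;
    float* p = nullptr;
    while (cudaMalloc(&p, chunkBytes) == cudaSuccess)  // allocate until VRAM runs out
        chunks.push_back(p);                           // (in practice, leave headroom)
    cudaGetLastError();  // clear the expected out-of-memory error

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    for (size_t c = 0; c < chunks.size(); ++c) {
        cudaEventRecord(t0);
        readKernel<<<256, 256>>>(chunks[c], n, sink);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, t0, t1);
        std::printf("chunk %2zu: %6.1f GB/s\n", c, chunkBytes / (ms * 1e-3) / 1e9);
    }
    for (float* q : chunks) cudaFree(q);
    cudaFree(sink);
    return 0;
}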
 
Sup with this? From a self-confessed "nV fanboy":

"I didn't have problems with mine right away, because the game that came with the card (Assassins creed unity) was played on my GF's comp. Till I tried it after I got past the refund date (30 days) and first saw the stuttering. Every other game that I was playing at the time ran perfectly, I figured I why would that game have issues? Wrong, my GF's comp uses my old GTX770 and runs PERFECT, flawless no stutters I'm so jelly of her. So deep down I am still a Nvidia fan boy, I just want this fixed."
(post #1360)

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4434237/#4434237

Driver issue, or something else?
 
Sup with this?

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4434261/#4434261

Why the stop at 3500/stutterfest? Incorrect software measurement....or something else?

This issue is all over the net, it's not just the Nai test, so folks can stop with that already.

This whole thing is turning into a big mess of confirmation-bias results right now. People went ~4 months without a problem, then someone told them there was a problem, and now everyone and their mom just came out of the woodwork because they found a way to get their card to use just under 4GB of VRAM and OMFG_now_it_stutters_what_are_we_going_to_do?!?

Also, Nvidia isn't going to fix this with an update. It is a hardware resource allocation issue; you can't fix it with a driver update. The decision for the 970s to work the way they do was made long before the cards ever came out, and they still managed excellent reviews from damn near every site and user. Nvidia already responded and basically told everyone to go pound sand; they aren't going to do anything else.
 
Did people skip over this post that provides some food for thought?

That is exactly what is occurring at the moment. The only time that last 500MB segment will be used in active operation is if a specific command needs more than 3.5GB to execute (e.g. you have two 1.2GB megatextures you need to blend into a third 1.2GB megatexture, for a total operation usage of 3.6GB). Needless to say, this is a very uncommon occurrence. Otherwise, the 3.5GB is the portion of VRAM that is being used, and that 500MB is used to hold data that either isn't being used in an operation (e.g. cached textures that can be moved into the 3.5GB portion for active use) or data that doesn't require very high access rates (e.g. physics calculation results).

We also don't actually know the real access rate to this 500MB portion. The CUDA benchmark everyone is running does not actually access that portion of VRAM, and instead is paging to system memory (hence the bandwidth to that portion looking suspiciously close to the PCI-E bus bandwidth), because it was never designed to test this scenario.

What EdZ is saying here is that because of the inherently segmented memory of the 970, and the unique driver tweaks for it, it's entirely possible Nai's benchmark simply can't access the last 500MB portion, and instead the numbers we're seeing are in fact due to swapping with system memory. In other words, Nai's benchmark may not be able to access that last 500MB vram on the 970 at all, so at this point we do not know yet the true performance of that last portion.

And yes I agree with everyone who mentioned FCAT and frametime analyses. These really need to be done and would tell so much more than just some average FPS numbers.
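To put some arithmetic behind EdZ's "suspiciously close to the PCI-E bus bandwidth" observation (spec numbers, not measurements, and the 0.5GB segment's real wiring is exactly what we don't know yet): if that segment hung off, say, a single 32-bit GDDR5 controller at 7 Gbps, it would still manage 32/8 x 7 = 28 GB/s, while PCIe 3.0 x16 is only ~16 GB/s theoretical and less in practice. So a benchmark reporting figures down in the PCIe range for the last 0.5GB reads much more like paging to system memory than like any on-board VRAM, even a crippled segment.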
 
So they released a new version of the GTX 970 with an 8-pin and a 6-pin connector to allow more power phases for the GPU and alleviate the throttling.

the 8 pin connector just has two additional ground wires... and your power supply is supplying DC current, there are no "phases" in your computer
 
[From Another Forum] Nvidia could have simply said "UP TO 4GB" like boost clocks. Problem solved. LOL.

You got me, I lol'd! :D

Anyway, I suspect very few of us around here are qualified to quantify the actual impact of the 970's memory hierarchy. Seems to me if you have one that's not crashing, it's a great 1080p card for reasonably high settings. It's annoying though, like, did you really have to do it that way?
 
EVGA is way late to the game, Gigabyte already has EVGA beat, and it overclocks like a monster.

I'm running two of those Gigabyte 970s in SLI, and even with the goddamn SLI voltage bug I'm still able to get 1506/7700 without touching volts.
 
No, you would not be an idiot to second guess it. Many things that 'make perfect sense' are not true. After using the new vacuum cleaner, does it INDEED clean better or not? THAT is what is important.

Of course, that's why I read reviews... I left that part out. Reviews said it cleaned better. Done.

Just like reviews agreed on how the 970 performs. Done. You don't need to speculate about how this affects performance when you have already seen performance reviews from every site on the planet!

Supporting full-speed access to the final 512MB obviously would've increased cost. Obviously a decision was made, and there was a trade-off to get the thing into more people's budgets. A trade-off that affects performance in a small percentage of situations, and apparently costs maybe 0-2% of overall performance.
 
Sup with this?

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4434261/#4434261

Why the stop at 3500/stutterfest? Incorrect software measurement....or something else?

This issue is all over the net, it's not just the Nai test, so folks can stop with that already.

I like how you conveniently ignore posts like this right next to the one you posted. It shows the way you lean regardless of facts:

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4434270/#4434270

Also, definitely loving how people just realized there was "something wrong" with their cards 4 months after they were released. It really shows how crazy human psychology is.
 