Petitioning for 970 Refund

I didn't sell my 970s and leave NVIDIA so much for performance (although I did see issues at 4K) as because I can't stand being lied to.
 
The game choosing to use what's available is directly related to the quality of programming that went into the graphics engine. So yes, I can blame the game for using what's available despite not needing to. I don't see half of my Steam library of 180-some-odd games maxing out my VRAM just for shits and giggles - as for the other half, I haven't launched them since I bought them, so I can't confirm or deny.

It's still pretty shit what nVidia did/is doing - but that only accentuates and perpetuates a problem which has been going on for years now - the PC gaming/enthusiast community is getting the 10-foot pole shoved up its ass from all different angles.

I don't see how that's lazy programming or the sign of a bad port. It would take extra programming to use what's available instead of just what it needs, and it will increase performance as long as the card doesn't have a gimped memory configuration. In what way is extra optimization that benefits other cards poor optimization?

At this point it's getting hard to separate fact from fiction. For instance I have a hard time believing the claims on reddit about it using system memory, but at this point who knows.
 
If Nvidia doesn't release a driver/firmware update to make the 970 a 3.5GB card, is there any chance a third party could?
How exactly would that help? You'd have the exact same problems if the extra 512MB was shut off. You still have to turn the settings down until you are under 3.5GB of VRAM even if they disable that RAM in a BIOS update. So just keep your RAM usage below 3.5GB and you'll have no issues.
 
And no, it doesn't matter that those were unrealistically high settings for BF4. The article is proof enough that there will be huge issues when the 970's RAM is filled, and that is something to be expected more and more as newer games come out.

Actually it kind of does matter - nobody expects a pair of $325 graphics cards to drive 18.66 million pixels with that level of detail. Sure, it proves the point that shit hits the fan when the VRAM is pushed past its limit, but tell me.....what fucking card doesn't shit the bed when its VRAM is exceeded?

Can somebody with a pair of Titan Blacks go do a frame-time analysis at 5760x3240 and max settings to see if the frame times are still as bad - just so we can eliminate the GPU-horsepower-versus-VRAM variable? Hell, I'd even take 780 Tis or 980s.
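For anyone who does run that test, here's a rough sketch (all numbers invented for illustration) of the kind of frame-time analysis being asked for - average FPS can look identical across two runs while the worst-case frame times give the stutter away:

```python
# Toy frame-time analysis (numbers invented): two runs with the same
# average FPS, one smooth and one stuttery.
smooth = [16.7] * 60                 # steady ~60 FPS
stutter = [10.0] * 55 + [90.0] * 5   # same average, with big spikes

def stats(frame_times_ms):
    """Return (average FPS, ~99th-percentile frame time in ms)."""
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    p99 = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99) - 1]
    return round(avg_fps, 1), p99

print(stats(smooth))   # (59.9, 16.7) - smooth
print(stats(stutter))  # (60.0, 90.0) - same FPS, terrible frame times
```

Both runs report ~60 FPS on average; only the percentile frame time shows which one actually stutters.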


The article is proof enough that there will be huge issues when the 970's RAM is filled, and that is something to be expected more and more as newer games come out.

Yes, nVidia misrepresented the shit out of the 970 and it shits the bed 500MB earlier than advertised. The solutions have been offered - either take part in one and vote with your wallet, or stop the senseless rehashing/bitching of the same valid but well-worn points. My earlier point still stands, though - newer games are memory hogs because of lazy, unoptimized programming, and more of our "hardcore" PC master race community need to start saying "fuck you guys, give us a true PC title or we're done"

There are already games out there that show the very same thing at 1080p.

And those are the games we shouldn't be buying because it promotes lazy development in one way or another and forces us as consumers to care about VRAM - which is arguably one of the more expensive parts of a GPU and an easy way to justify a price hike.

I don't see how that's lazy programming or the sign of a bad port

It's a few dozen lines of well-written code to have the architecture/subsystem seek out available resources and use what's not in use. It's a few hundred or a few thousand lines of well-written code to actually use a smaller amount of resources more efficiently. It's a night-and-day difference.
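To put that contrast in toy code (the function name and every number here are invented, not from any real engine): an opportunistic engine sizes its cache to whatever the card reports, while a conservative one commits only its working set.

```python
# Toy sketch (invented numbers): opportunistic vs. minimal VRAM budgeting.
def plan_cache(working_set_mb, reported_vram_mb, opportunistic):
    """Return how many MB a renderer might commit for texture caching."""
    if opportunistic:
        # Grab (almost) everything the card advertises, needed or not.
        return max(working_set_mb, int(reported_vram_mb * 0.95))
    # Commit only what the scene actually requires; keep the rest in reserve.
    return working_set_mb

# A scene that really needs ~2.2GB, on a card reporting 4GB:
print(plan_cache(2200, 4096, opportunistic=True))   # 3891 - fills the card
print(plan_cache(2200, 4096, opportunistic=False))  # 2200 - uses the minimum
```

The image quality is the same either way; only the second approach leaves headroom - which is exactly why the opportunistic one walks face-first into a card that lies about its last 512MB.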

It's like blind-firing a bazooka saying "eh, we'll get something from the blast/shrapnel even if we miss" versus pinpoint double taps or burst fire, one target at a time. Both get the job done, but burst fire is the much cleaner and preferred method when you might wanna keep that bazooka (VRAM) in reserve for when the tanks come rolling in.

It's like programming a motor controller with just proportional control versus throwing in multiple feedback signals via integral and derivative terms. You could never hit your mark dead-on and would always be compensating, resulting in something barely capable of a sawing action - or you could be Johnny-on-the-spot to the sixty-fourth of an inch and do open-heart surgery remotely. The second option costs more up-front resources, but you get so much extra headroom to solve upcoming problems should the need arise.
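For the curious, here's a toy version of that control loop (the plant model, gains, and load are all invented, and derivative action is left out for brevity): under a constant load, proportional-only control parks short of the target, while adding an integral term works the remaining error off.

```python
# Toy first-order plant under a constant load (all values invented).
def simulate(kp, ki=0.0, setpoint=1.0, load=0.5, steps=3000, dt=0.01):
    pos, integral = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - pos
        integral += err * dt
        # Control effort fights the constant load each step.
        pos += (kp * err + ki * integral - load) * dt
    return pos

print(round(simulate(kp=5.0), 2))          # 0.9 - P-only stalls short
print(round(simulate(kp=5.0, ki=2.0), 2))  # 1.0 - PI removes the offset
```

P-only settles where the proportional term exactly cancels the load (error = load/kp), so it can never quite reach the mark; the integral term keeps accumulating until the residual error is gone.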

I urge you, go load up Crysis 3 or Guild Wars 2, slowly bump the settings, and compare quality to VRAM use. Then go play something like MechWarrior Online and find a happy place graphics-wise. Then turn on DX11 and cry - cry like you did coming out of the womb - trying to figure out how to get back the 50% of your FPS you instantly lost and how many settings you have to tank to do it. "Well the game still works...it doesn't crash...so that's success on the DX11 implementation, right?" Wrong - it's so unoptimized that what little improvement it could offer is severely drowned out by a huge and unnecessary performance hit, because some code monkey didn't wanna earn his keep that day and instead browsed Reddit all day at work.
 
How exactly would that help? You'd have the exact same problems if the extra 512MB was shut off. You still have to turn the settings down until you are under 3.5GB of VRAM even if they disable that RAM in a BIOS update. So just keep your RAM usage below 3.5GB and you'll have no issues.

Have you read the thread? There are games that don't have problems on cards with less VRAM but do on the 970, because they try to make use of all available VRAM even when it's not necessary.
 
How exactly would that help? You'd have the exact same problems if the extra 512MB was shut off. You still have to turn the settings down until you are under 3.5GB of VRAM even if they disable that RAM in a BIOS update. So just keep your RAM usage below 3.5GB and you'll have no issues.

Because the card advertises that it has 4GB of RAM, games try to use it.
This exposes the game to the slow 0.5GB.
It would be ridiculous to tweak everything to avoid it; you would still encounter the issue at times.

If the card advertised only the faster 3.5GB, that wouldn't happen.
 
Because the card advertises that it has 4GB of RAM, games try to use it.
This exposes the game to the slow 0.5GB.
It would be ridiculous to tweak everything to avoid it; you would still encounter the issue at times.

If the card advertised only the faster 3.5GB, that wouldn't happen.

This. I can live with a 3.5GB card considering how much I paid. But I don't want to deal with that slow extra 512MB and have (micro)stuttering and whatnot. Let games see it as a 3.5GB card and call it a day.
 
It's a few dozen lines of well-written code to have the architecture/subsystem seek out available resources and use what's not in use. It's a few hundred or a few thousand lines of well-written code to actually use a smaller amount of resources more efficiently. It's a night-and-day difference.

It's like blind-firing a bazooka saying "eh, we'll get something from the blast/shrapnel even if we miss" versus pinpoint double taps or burst fire, one target at a time. Both get the job done, but burst fire is the much cleaner and preferred method when you might wanna keep that bazooka (VRAM) in reserve for when the tanks come rolling in.

It's like programming a motor controller with just proportional control versus throwing in multiple feedback signals via integral and derivative terms. You could never hit your mark dead-on and would always be compensating, resulting in something barely capable of a sawing action - or you could be Johnny-on-the-spot to the sixty-fourth of an inch and do open-heart surgery remotely. The second option costs more up-front resources, but you get so much extra headroom to solve upcoming problems should the need arise.

I urge you, go load up Crysis 3 or Guild Wars 2, slowly bump the settings, and compare quality to VRAM use. Then go play something like MechWarrior Online and find a happy place graphics-wise. Then turn on DX11 and cry - cry like you did coming out of the womb - trying to figure out how to get back the 50% of your FPS you instantly lost and how many settings you have to tank to do it. "Well the game still works...it doesn't crash...so that's success on the DX11 implementation, right?" Wrong - it's so unoptimized that what little improvement it could offer is severely drowned out by a huge and unnecessary performance hit, because some code monkey didn't wanna earn his keep that day and instead browsed Reddit all day at work.

The issue being discussed isn't games that require more VRAM, which could be an optimization issue; it's games that are designed to use more than they need if it's available. That's more optimization, not less, and it would increase performance if the 970 reported 3.5GB or if the 4GB was fully usable.

Your posts remind me of those by another screen name both in style and substance, I hope you're not using multiple accounts...
 
[Image: BF4 frame-time plot at 3840x2160 - BF4_3840x2160_PLOT_0.png]


Well, isn't that a nice graph. And no, it doesn't matter that those were unrealistically high settings for BF4. The article is proof enough that there will be huge issues when the 970's RAM is filled, and that is something to be expected more and more as newer games come out. There are already games out there that show the very same thing at 1080p.

That is crazy....no fucking wonder I had horrible stuttering in DA:I and BF4 with 4K DSR going....

Jesus.....970s in SLI suck for 4K gaming, period....
 
If one must replace their 970s, I would at the very least choose a Titan, a 6GB 780, or an 8GB 290X; going for another 4GB card is, IMHO, foolhardy at this point, as one has already been told (in an in-your-face kind of way) that 3.5GB isn't enough.

Good point...the next mid-range/high-end cards from AMD/Nvidia will all be a minimum of 6GB...with 8GB+ becoming the norm very soon...as someone said earlier in this thread, the issue is more with game coding...games seem to be using all available VRAM for caching purposes, so it's hard to tell which games actually need the VRAM
 
Those cards aren't using anything close to the same FOV.

I already covered it a few posts down. Irrelevant in the grand scheme of things. He might have reduced the FOV to keep the FPS over 60 while still keeping VRAM usage high, so we can see the difference in smoothness even though the FPS are the same on average. Dropping the GTX 780 below 60fps would render the comparison irrelevant, since low FPS starts to reduce smoothness on its own.

Point is that the GTX 970 starts to crap out the moment it goes beyond 3.5GB, even though it should still have 512MB to work with. Meanwhile the GTX 780, while pushed to its limits, keeps the game buttery smooth. At least until it gets to some large areas where it might simply be starved by lack of VRAM...

IMHO the card could almost be considered faulty. It might not affect every situation, and in those it's a great performer. But there is an obvious scenario where the card should still work fine, yet it clearly craps out.

I think I'm done here.
 
The issue being discussed isn't games that require more VRAM, which could be an optimization issue; it's games that are designed to use more than they need if it's available. That's more optimization, not less, and it would increase performance if the 970 reported 3.5GB or if the 4GB was fully usable.

Your posts remind me of those by another screen name both in style and substance, I hope you're not using multiple accounts...

That's incorrect by the very definition of optimize - make the best or most effective use of (a situation, opportunity, or resource). At best you could call the bold/underlined/italicized statement inefficient - but certainly not optimized. There is nothing effective about using 4GB of VRAM to render the same-quality image if it really only takes 2-2.5GB. Again, it's a few dozen lines of code to tell the game to use all available resources but not enhance the preset image quality, or a few hundred/thousand lines of code to use exactly what is needed and nothing more, keeping the additional resources as overhead. The latter certainly is not optimized from a development/business standpoint, as it takes employees' time, which costs Mr. CEO thousands of extra dollars - but from a programming perspective it is.
 
That's incorrect by the very definition of optimize - make the best or most effective use of (a situation, opportunity, or resource). At best you could call the bold/underlined/italicized statement inefficient - but certainly not optimized. There is nothing effective about using 4GB of VRAM to render the same-quality image if it really only takes 2-2.5GB. Again, it's a few dozen lines of code to tell the game to use all available resources but not enhance the preset image quality, or a few hundred/thousand lines of code to use exactly what is needed and nothing more, keeping the additional resources as overhead. The latter certainly is not optimized from a development/business standpoint, as it takes employees' time, which costs Mr. CEO thousands of extra dollars - but from a programming perspective it is.

Game code is completely irrelevant to the "970 flaw" discussion. If a game or program decides to use the entirety of the VRAM, for whatever reason, the card should perform as advertised when I bought it. Using SLI or tri-SLI it's quite easy to max out 3.5GB.
 
That's incorrect by the very definition of optimize - make the best or most effective use of (a situation, opportunity, or resource). At best you could call the bold/underlined/italicized statement inefficient - but certainly not optimized. There is nothing effective about using 4GB of VRAM to render the same-quality image if it really only takes 2-2.5GB. Again, it's a few dozen lines of code to tell the game to use all available resources but not enhance the preset image quality, or a few hundred/thousand lines of code to use exactly what is needed and nothing more, keeping the additional resources as overhead. The latter certainly is not optimized from a development/business standpoint, as it takes employees' time, which costs Mr. CEO thousands of extra dollars - but from a programming perspective it is.

It's not a better or more effective use of the VRAM to let it sit there unused when it can be used to improve performance, which seems to be the reason devs are doing it; improving performance by utilizing resources better is optimization by the very definition you provided.
 
NV runs MSRP/MAP pricing
blame Amazon / people returning fully working cards

the card is just as fast as it was when it was reviewed
ITT: a bunch of entitled kids


Entitled to get a card which actually has the hardware listed on the specification sheet/marketing material/reviews?

I hope so.
 
Entitled to get a card which actually has the hardware listed on the specification sheet/marketing material/reviews?

I hope so.

Agree with you 100% here. If Nvidia advertised it as such before purchase and changed it without ever disclosing the change until people found out for themselves after purchase, then that's clearly false advertising - bait and switch.

I mean, for a lot of people who buy this card it won't matter, though down the line it seems it might. But for those who bought this hoping to keep it for a while, or to use it for high-res gaming, it seems it does matter to them. Personally I think there is some shady stuff going on at Nvidia.
 
Nvidia actually had the balls to raise prices on the 980 after all this 970 controversy...it was $599.99 yesterday, now it's $626.99 :mad:

Sorry, but if you dump your 970s now and turn around and buy 980s, you're a sucker.
 
Agree with you 100% here. If Nvidia advertised it as such before purchase and changed it without ever disclosing the change until people found out for themselves after purchase, then that's clearly false advertising - bait and switch.

I mean, for a lot of people who buy this card it won't matter, though down the line it seems it might. But for those who bought this hoping to keep it for a while, or to use it for high-res gaming, it seems it does matter to them. Personally I think there is some shady stuff going on at Nvidia.

http://www.hardocp.com/article/2014...x_directcu_ii_video_card_review/#.VNDVzS7ClWQ

not seeing a 'bait and switch' - you got what you paid for
unless you can prove EVERY reviewer got a golden card...
 
Sorry, but if you dump your 970s now and turn around and buy 980s, you're a sucker.

Hey Starrbuck!

Agreed 100%. You'll basically be rewarding nVidia. The best option I've seen is returning the 970's and going with a 290x. That will hurt nVidia where it counts - their bank account.
 
http://www.hardocp.com/article/2014...x_directcu_ii_video_card_review/#.VNDVzS7ClWQ

not seeing a 'bait and switch' - you got what you paid for
unless you can prove EVERY reviewer got a golden card...

From Hard's Review:

"The GeForce GTX 970 is designed for 1600p gaming, and contains 1664 CUDA cores and 13 SM units. The memory consists of 4GB of GDDR5 modules, on a 256-bit bus that operates at 7GHz. The base clock of a reference GTX 970 is 1050MHz, with a boost clock of 1178MHz. There are 64 ROPs on this video card, and 104 texture units. This video card has a TDP of 145W."

This has now been proven inaccurate with regard to the memory bus and ROPs.

Hence, bait and switch.

Try again.
 
So, would an MSI Lightning R9 290X be a better replacement for the GTX 970?
 
If you absolutely MUST have 4GB of VRAM right now, and you feel it will be good enough for the future (it probably won't be, with the way devs are coding ports these days), then yeah, the 290X Lightning is a beast and will be OK. Buying anything just to bump up 500 megs of VRAM seems like a waste if you already have a 970, unless you know with absolute certainty that every game you play uses between 3.5 and 4 gigs (but not more) at the settings you want to run. Otherwise, the decision comes down to how you feel about NVIDIA lying about this, and whether you want to switch brands because of it.

Unless you want to buy Titans or 6GB 780s, there's nothing with increased VRAM worth buying right now.

In all likelihood, 970 owners are all just going to have to take a hit on their prospective resale values and suck it up. Unless you REALLY want that last 500 megs (which just seems silly now).

In the end, I wish they'd just removed that last 500 megs, so there'd be no chance the card would attempt to use it, and no speculation that "oh, my 4GB card is pushing over 3.5, so the performance is tanking". Instead, like all other cards, it would be "oh, I've asked for more VRAM than is available, I need to lower the settings."

Anyway, bring on the next generation of cards with 8GB or more, so we can handle these poorly optimized console ports :)
 
Agreed 100%. You'll basically be rewarding nVidia. The best option I've seen is returning the 970's and going with a 290x. That will hurt nVidia where it counts - their bank account.

That was the plan. It's the only way I could really make a statement. :)
 
the card is just as fast as it was when it was reviewed
ITT: a bunch of entitled kids

Can we please stop regurgitating this line over and over? Saying it repeatedly doesn't make it any more relevant to this issue.
 
you play bf4 at 25 fps?

This. I think a lot of people are totally clowning on this, or they have no idea what they're doing. Who the hell plays BF4 at settings that yield 25FPS @ 4K and then complains about stutter? Stutter is the least of your worries if you're only managing 25FPS. Also, the GTX 980 only gets 27FPS (with less stutter), and that is still completely unplayable for many people. People don't even care that the game is basically a slideshow; they're like OMG LOOK AT THOSE FRAME TIMES!

Is anyone really surprised a $300 card can't run 4K with 1.5x DSR, which means the card is actually rendering 5760 x 3240, or 18.66 million pixels? I don't think people even understand how DSR works.
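For what it's worth, the pixel math checks out if you take that 1.5x as a per-axis scale (note NVIDIA's DSR factors are nominally multipliers on total pixel count, so this follows the poster's reading):

```python
# Pixel count for 4K with a 1.5x per-axis scale (the poster's reading of DSR).
base_w, base_h = 3840, 2160
scale = 1.5
render_w, render_h = int(base_w * scale), int(base_h * scale)
pixels = render_w * render_h
print(render_w, render_h, pixels)  # 5760 3240 18662400 (~18.66 million)
```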
 