Proposed Class Action Lawsuit Over NVIDIA GTX 970

Well, I'm not going to go any deeper into it than the reviews and follow-ups, but the basic point is that it does use the same 256-bit memory interface. In fact, that's a defining point of Maxwell and why they're able to deliver such great performance at such a low price (by utilizing the same memory interface but cutting out some L2 cache).

Claiming it had the same cache and ROPs as the 980 was the only part that could be called misleading. The memory interface is just straight-up consumer misunderstanding, and if you think the 970 should be every bit as good as a 980, it's difficult to respond to your position on that: clearly one was $200 cheaper than the other and they're not the same card, so no, one cannot reasonably expect that they'd behave the same...

If it used the same memory interface, there wouldn't have been any need for NVIDIA to make a revised statement saying that they were wrong and that it doesn't use the same memory interface.
You quoted that very statement, if it helps:

As part of our discussion with NVIDIA, they laid out the fact that the original published specifications for the GTX 970 were wrong, and as a result the “unusual” behavior that users had been seeing from the GTX 970 was in fact expected behavior for a card configured as the GTX 970 was. To get straight to the point then, NVIDIA’s original publication of the ROP/memory controller subsystem was wrong
 
Except that in all their corrections they retained the 256-bit memory bus specification.

dude, give it up and go to sleep lol
 
Except that in all their corrections they retained the 256-bit memory bus specification.

dude, give it up and go to sleep lol
How does that change anything?
The problem still occurs from the other changes, which are what this complaint is about and what people's performance complaints are based on.
At this point you are clearly trolling.

If you want to make another thread talking about the 256 bit bus between the memory and the memory controllers, go ahead.
 
How does that change anything?
The problem still occurs from the other changes, which are what this complaint is about and what people's performance complaints are based on.
At this point you are clearly trolling.

If you want to make another thread talking about the 256 bit bus between the memory and the memory controllers, go ahead.
I'm not trolling, you just forgot why you started arguing with me.

I asked why someone quoted the 256-bit memory bus and claimed it was false advertising, and you responded:

Its not a 256 bit memory bus
It is, and your statement was wrong. Instead of simply admitting that you misunderstood the relationship between the memory bus and memory bandwidth, you decided to keep digging your heels in deeper with more misinformation and incorrect calculations.
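For what it's worth, the number the two sides keep arguing over falls out of simple arithmetic: peak bandwidth is the bus width in bytes times the effective memory data rate. A minimal sketch, using the GTX 970's published figures (256-bit bus, 7 Gbps effective GDDR5 data rate):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
# The figures below are the GTX 970's published specifications.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

# Full 256-bit bus at 7 Gbps GDDR5:
print(peak_bandwidth_gbs(256, 7.0))  # 224.0 GB/s, the advertised figure

# A single 32-bit controller at the same data rate:
print(peak_bandwidth_gbs(32, 7.0))   # 28.0 GB/s
```

This is why "256-bit bus" and "full advertised bandwidth" are separate claims: the bus width sets the theoretical peak, while what a given allocation actually achieves depends on which controllers can service it.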
 
I'm not trolling, you just forgot why you started arguing with me.

I asked why someone quoted the 256-bit memory bus and claimed it was false advertising, and you responded:


It is, and your statement was wrong. Instead of simply admitting that you misunderstood the relationship between the memory bus and memory bandwidth, you decided to keep digging your heels in deeper with more misinformation and incorrect calculations.

You must have missed the two pages of discussion after that, because I clarified it fully.
My first post was a typo, and this forum can't be edited.
I proved your points to be either wrong or invalid.
We can go over them again if you wish?
 
Great, so the next GTX 990 (or whatever) will be that much more expensive so NVIDIA or the OEMs can pay for the costs of this lawsuit.

Great plan



I find the people that post things like this quite interesting to say the least.

I mean, it is obvious that you consider that getting reamed one way or another is A-OK, and that your complaint is now "Dang, because those guys over there didn't want to take it up the poop chute, all of us that do accept that will now get it harder!", entirely disregarding the option of saying NO to the reaming from the start.


It's entirely a South Park Stick of Truth situation.
 
Don't really care how the judge will rule. But if this keeps Nvidia more honest in the future, then everyone wins.
 
Nvidia's Jen-Hsun is going full Steve Jobs and calling the performance of the last 0.5GB "A feature"

http://blogs.nvidia.com/blog/2015/02/24/gtx-970/

Oh wow, yes he totally is.
I might be done with NVIDIA, this is pretty lame.


From the article:
We invented a new memory architecture in Maxwell. This new capability was created so that reduced-configurations of Maxwell can have a larger framebuffer – i.e., so that GTX 970 is not limited to 3GB, and can have an additional 1GB.
Kind of funny how my GT430 (Fermi) from 2011 has 4GB of VRAM, yet ZOMG a GPU from 2015 now has access beyond 3GB!!!1 :rolleyes:

GTX 970 is a 4GB card. However, the upper 512MB of the additional 1GB is segmented and has reduced bandwidth. This is a good design because we were able to add an additional 1GB for GTX 970 and our software engineers can keep less frequently used data in the 512MB segment.
I really hope AMD comes through on their next set of GPUs, I really really do.
If garbage like this isn't thrashed and dealt with immediately, I guarantee you that NVIDIA will cut more corners in the future, cheap bastards.
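The claim in the quoted blog post is that the driver steers less frequently used data into the slow 512MB segment. A hypothetical sketch of such a placement policy (segment sizes from the post; the hot/cold heuristic, function names, and buffer names are my own illustration, not NVIDIA's actual driver logic):

```python
# Hypothetical placement policy: hot allocations go to the fast 3.5GB
# segment; cold ones spill to the slow 0.5GB segment once the fast one
# fills. This is an illustration, not NVIDIA's actual driver behaviour.

FAST_CAP_MB = 3584  # 3.5GB full-speed segment
SLOW_CAP_MB = 512   # 0.5GB reduced-bandwidth segment

def place(allocs):
    """allocs: list of (name, size_mb, is_hot). Returns segment per name."""
    used = {"fast": 0, "slow": 0}
    placement = {}
    # Hot buffers first, so they claim the fast segment before cold ones.
    for name, size, hot in sorted(allocs, key=lambda a: not a[2]):
        seg = "fast" if used["fast"] + size <= FAST_CAP_MB else "slow"
        used[seg] += size
        placement[name] = seg
    return placement

# Frequently touched buffers land in the fast segment; the cold cache
# spills into the slow 512MB once 3.5GB is exhausted.
print(place([("framebuffer", 2000, True),
             ("textures", 1500, True),
             ("shadow_cache", 400, False)]))
```

The catch the thread keeps circling is that this only works when the driver guesses "less frequently used" correctly; when a game actually touches data past 3.5GB every frame, no placement heuristic can hide the slow segment.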
 
a savvy observer might rightfully wonder why none of the hardware reviewers "caught" this issue
 
a savvy observer might rightfully wonder why none of the hardware reviewers "caught" this issue
That last 1 GB of RAM may not really matter for most practical use cases. This is why I like HardOCP's in-game testing methodology: they show you exactly what quality settings you can get away with per video card for decent gameplay, and flaws like lack of memory show up in the results if they actually matter (for their selected games and resolutions, of course).
 
a savvy observer might rightfully wonder why none of the hardware reviewers "caught" this issue

[H] did at least.
But they didn't have enough information at the time to suspect the real cause of why it started to slow down / become jittery.
Because they test for max playable settings, they provided game settings that will get you good maximum performance from the card.
They also tried 4K in SLI and couldn't recommend it.

The downside is that [H] can't provide the max playable settings for all games.
Although that may not matter, because it looks like NVIDIA's latest driver has made it very hard to use more than 3.5GB, and in recent reports it's very hard to even reach 3.5GB.
It can be done, but you have to push the card a lot harder, and your swap file/memory use will grow.
This effectively makes it a 3.5GB-or-less card, and it may cause other performance issues if you run low on RAM.
It makes getting max performance much more complex.

As long as the game doesn't mind being unable to see 4GB from a card that reports it has 4GB, this should eliminate some of the stuttering at high RAM use.
I hope NVIDIA doesn't have to tweak their driver for every game, otherwise the card will get old very fast.
 
I sometimes wonder about this sort of thing: whether it's more likely that they just made a goofy mistake, or that they honestly thought nobody would notice. Either seems improbable, but here it is.
 
This is about them re-using silicon for the GTX 980 that isn't good enough to be a GTX 980.
A normal process.

Where it went wrong is that they tried to up the specs of the card by squeezing more memory onto it than there are available connections.
So they ended up with two 0.5GB memory sections sharing a cache and a link to the crossbar that are only designed for one 0.5GB memory section.
This was no mistake.
They designed the GTX 970 to be a 4GB card knowing full well there would be performance issues.
The alternative was to make it a 3GB card, which isn't so popular these days.
It's not possible they weren't aware of what they were making.
They then advertised it as full performance and gave review sites incorrect specs.

If they had made a mistake initially, they would have quickly realised, because someone within NVIDIA would have been sure to raise the question.
It was only brought to light by owners of the cards, and NVIDIA's response has been shameful.
They tried to get away with it from the start and are still trying to get away with it.
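The performance cliff being described can be approximated with a toy model. Assuming the widely reported partitioning (a 3.5GB segment served by seven 32-bit controllers at 7 Gbps, and the last 0.5GB on the eighth controller, which cannot be read in parallel with the first segment), the numbers look like this:

```python
# Toy model of the GTX 970's segmented memory, based on the widely
# reported partitioning: 3.5GB on seven 32-bit controllers, 0.5GB on
# the eighth. The two segments cannot be read in parallel, so any
# working set spilling past 3.5GB pays for the slow segment serially.

FAST_GB_S = 7 * 32 / 8 * 7.0   # 196 GB/s across the 3.5GB segment
SLOW_GB_S = 1 * 32 / 8 * 7.0   # 28 GB/s for the last 0.5GB

def read_time_ms(workload_gb: float) -> float:
    """Time to stream `workload_gb` of data once, serially across segments."""
    fast = min(workload_gb, 3.5)
    slow = max(workload_gb - 3.5, 0.0)
    return (fast / FAST_GB_S + slow / SLOW_GB_S) * 1000

print(read_time_ms(3.5))  # ~17.9 ms: fast segment only
print(read_time_ms(4.0))  # ~35.7 ms: the last 0.5GB doubles the time
```

Under these assumptions, touching the final 0.5GB takes as long as streaming the entire first 3.5GB, which is consistent with the sudden stutter owners reported once games crossed the 3.5GB mark.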
 
I sometimes wonder about this sort of thing: whether it's more likely that they just made a goofy mistake, or that they honestly thought nobody would notice. Either seems improbable, but here it is.

I think they thought no one would really care given its amazing price-to-performance ratio and low price. They should've bitten the bullet and advertised 3.5GB in the first place, even though it would've been a less elegant reveal, explaining the piddling details of all that at that hyped-up launch event of theirs. I can guarantee you many of the protesters here would've been perfectly fine with that. I got a partial refund for the cards I have and I'm keeping them. I don't expect to run 4K at max antialiasing with them and never did.
 
Pretty ballsy. I damn sure wouldn't have run the risk of riling the notoriously fickle PC hardware buying crowd. Bet they sold AMD a lot of 290s these last few weeks.
 
This is about them re-using silicon for the GTX 980 that isn't good enough to be a GTX 980.
A normal process.

Where it went wrong is that they tried to up the specs of the card by squeezing more memory onto it than there are available connections.
So they ended up with two 0.5GB memory sections sharing a cache and a link to the crossbar that are only designed for one 0.5GB memory section.
This was no mistake.
They designed the GTX 970 to be a 4GB card knowing full well there would be performance issues.
The alternative was to make it a 3GB card, which isn't so popular these days.
It's not possible they weren't aware of what they were making.
They then advertised it as full performance and gave review sites incorrect specs.

If they had made a mistake initially, they would have quickly realised, because someone within NVIDIA would have been sure to raise the question.
It was only brought to light by owners of the cards, and NVIDIA's response has been shameful.
They tried to get away with it from the start and are still trying to get away with it.

What performance issues?
 
How was it not advertised properly? It has 4GB. That's what was advertised. They could have gimped it with an 8-bit memory bus for all it matters - it still has 4GB.
If EVGA could have sold me a 512MB GeForce 6200, it wouldn't have mattered.
 