Proposed Class Action Lawsuit Over NVIDIA GTX 970

Courtesy of GoldenTiger:

[IMG]http://i.imgur.com/6v3hLnz.jpg[/IMG]

That's the REVIEWER KIT, not officially advertised specs. I'd bet dollars to donuts there's a disclaimer in those kits saying "all things subject to change at any time for any reason".
 
The only other misleading claim the plaintiffs allege is that the ROP count and L2 cache are lower than stated. But those don't appear to be figures that were advertised directly?

The lawsuit is against Nvidia and Gigabyte. The singling out of one OEM isn't accidental. I don't know the details of the case but usually when they target one vendor specifically they are using it to bring forward evidence (such as one model of Gigabyte's 970 line having some mention of ROPs/L2) and to apply pressure to the prime defendant (Nvidia) by leveraging the OEMs. Basically, if Gigabyte is sued, then chances are other OEMs are at risk, so they'll likely form a consortium and make a separate deal which puts pressure on Nvidia. It's a tactical move that encourages a settlement even if Nvidia has the stronger case.
 
any "nice big fine" that people are hoping for will be passed on to the consumer in higher prices so in effect you are fining yourselves. ironic.

They're already charging what the market will bear; if they could charge more without losing market share, they would. Any legal costs/awards/settlements will come out of their profits, which will make investors unhappy and hopefully make Nvidia think twice before pulling this type of stunt again.

The people saying that the only big winners will be the lawyers are correct, but most of them are missing the point. The purpose of class action suits isn't to make the wronged parties whole but to hold companies accountable for negative actions that are too small to bother with on an individual basis but become massive when combined.
 
That was a really dumb precedent for Amazon to set. Still mighty ridiculous to expect a percentage refund from a retailer. They're taking it in the A on that. Put the burden on the manufacturer, not the retailer.
Both Amazon and Newegg have been offering refunds or returns. I'd imagine each of them negotiated some sort of quiet refund deal with Nvidia simply because they've got the muscle to do so. Nvidia wouldn't want to open the floodgates to everybody but was forced to keep both retailers happy simply due to the sheer volume of Nvidia sales that flow through each company. Amazon and Newegg get a reputation for caring for their customers, and Nvidia gets to keep selling its products at both sites.

That's my theory, anyhow.
 
I was shocked and disappointed that this site helped nVidia by closing multiple forum threads and totally ignoring this very important event.

You kept the softer-titled thread around, but the threads where people were actually discussing this news were locked.

At least we got to see where your loyalties lie, and how you regard your readership.
 
I see nothing on any of the exhibits that supports the false advertising claims...

It has 4GB of GDDR5. Sure, some of it runs slower, but it's still all 4GB of GDDR5. I'd also bet somewhere in the documentation there's the phrase "up to 224 GB/s speed".
None of the spec sheets or boxes list the number of ROPs or the L2 cache size, unless I missed it.

They did not ADVERTISE it falsely - the card has what they list, and the review sites are the ones that added the extra details, not Nvidia, nor Gigabyte.

Nvidia may have said that to reviewers, and possibly the review cards DID have those specs. But 99% of the time, review cards also ship with a disclaimer that the specs may change before the retail release.
It's people's own damn fault for relying only on information about the reviewer cards, which generally ship before the main production models are completed, instead of the release card.
 
I see nothing on any of the exhibits that supports the false advertising claims...

It has 4GB of GDDR5. Sure, some of it runs slower, but it's still all 4GB of GDDR5. I'd also bet somewhere in the documentation there's the phrase "up to 224 GB/s speed".
None of the spec sheets or boxes list the number of ROPs or the L2 cache size, unless I missed it.

They did not ADVERTISE it falsely - the card has what they list, and the review sites are the ones that added the extra details, not Nvidia, nor Gigabyte.

Nvidia may have said that to reviewers, and possibly the review cards DID have those specs. But 99% of the time, review cards also ship with a disclaimer that the specs may change before the retail release.
It's people's own damn fault for relying only on information about the reviewer cards, which generally ship before the main production models are completed, instead of the release card.

Oh dear. Read slowly please...

The technical specs for the card were provided to all review sites by nVidia. It's called a press kit; try googling the contents of the 970 press kit before spouting ignorant rubbish.
 
The lawsuit is against Nvidia and Gigabyte. The singling out of one OEM isn't accidental. I don't know the details of the case but usually when they target one vendor specifically they are using it to bring forward evidence (such as one model of Gigabyte's 970 line having some mention of ROPs/L2) and to apply pressure to the prime defendant (Nvidia) by leveraging the OEMs. Basically, if Gigabyte is sued, then chances are other OEMs are at risk, so they'll likely form a consortium and make a separate deal which puts pressure on Nvidia. It's a tactical move that encourages a settlement even if Nvidia has the stronger case.

The only two reasons I can think of why they'd include Gigabyte would be a) that's who made one or more of the plaintiffs' specific cards, or b) somewhere Gigabyte actually advertised the incorrect specs of the card. My gut says it's the former, though.
 
I see nothing on any of the exhibits that supports the false advertising claims...

It has 4GB of GDDR5. Sure, some of it runs slower, but it's still all 4GB of GDDR5. I'd also bet somewhere in the documentation there's the phrase "up to 224 GB/s speed".
None of the spec sheets or boxes list the number of ROPs or the L2 cache size, unless I missed it.

They did not ADVERTISE it falsely - the card has what they list, and the review sites are the ones that added the extra details, not Nvidia, nor Gigabyte.

Nvidia may have said that to reviewers, and possibly the review cards DID have those specs. But 99% of the time, review cards also ship with a disclaimer that the specs may change before the retail release.
It's people's own damn fault for relying only on information about the reviewer cards, which generally ship before the main production models are completed, instead of the release card.

If the "reviewer kit" contained literature that the press was allowed to make public as part of their reviews, then a claim could be made that those kits are a form of advertising, and even if they aren't "advertising" in the strictest sense, they are public, on-the-record statements about the card's capabilities. Someone mentioned there possibly being a disclaimer in the kit that the specs were subject to change. Theoretically that might be true, though it would be VERY fishy to be telling reviewers the specs weren't 100% locked down that close to the release of the card. Some reviewers would have commented. And to your second point, if the review cards had the higher specs and the release cards didn't, that would be a whole different scandal. Perhaps not one that's subject to litigation, but it would still be an asbolute PR nightmare for Nvidia. But let's be clear, NO ONE'S CLAIMING THAT'S THE CASE. Nobody, not any reviewers and not Nvidia are claiming that the review samples actually had the original specs and the retail cards were cut down. What Nvidia is claiming is that there was a communication breakdown between the engineers and the technical marketing guys who write the literature in the reviewer kits. As a result, the tech marketing guys put incorrect information in the kit, and nobody caught it before it went to the reviewers. The whole 3.5 vs 4 GB issue could be argued as one of nuance, since there are technically 4GB on the card, it'll depend on how the 4GB is described. If the performance of all 4GB is referenced in specific terms with no distinctions, they could be in trouble. But the ROP and L2 cache data was flat-out wrong, that'll be trickier for them to defend.

If you want to bash on people who are complaining, at least get your facts straight on what they're complaining about.
 
I'd also bet somewhere in the documentation there's the phrase "up to 224 GB/s speed".

And it would still be FALSE. The top speed it can achieve is 196 GB/s for 3.5 GB of the VRAM onboard, and since it CAN'T access both memory partitions at the same time, you cannot add the throughput of the slow 0.5 GB partition to that of the fast one.
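For reference, here's where those numbers come from, as a quick Python sketch. I'm assuming the stock 7.0 GHz effective GDDR5 clock; whether the two pools can actually be driven together is exactly the point being argued in this thread.

[code]
# Per-pool peak bandwidth at an assumed 7.0 GHz effective GDDR5 clock:
# (bus width in bits / 8) bytes per transfer, times transfers per second.
EFFECTIVE_CLOCK_GTPS = 7.0

fast_pool = 224 / 8 * EFFECTIVE_CLOCK_GTPS  # 3.5 GB pool on a 224-bit path
slow_pool = 32 / 8 * EFFECTIVE_CLOCK_GTPS   # 0.5 GB pool on a 32-bit path

print(fast_pool)              # 196.0 GB/s
print(slow_pool)              # 28.0 GB/s
print(fast_pool + slow_pool)  # 224.0 GB/s -- only if both pools are hit at once
[/code]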
 
You pick a vendor so you can forum shop. I'm pretty sure Gigabyte was chosen because it allowed the plaintiffs to pick a jurisdiction they felt would be favorable to them.
 
I don't see the damage; they're letting anyone who bought a card before they corrected the specs return it for a full refund. What damages could there be?
 
Great, so the next GTX 990 (or whatever) will be that much more expensive so Nvidia or the OEMs can pay for the costs of this lawsuit.

Great plan
 
The lawsuit is against Nvidia and Gigabyte. The singling out of one OEM isn't accidental. I don't know the details of the case but usually when they target one vendor specifically they are using it to bring forward evidence (such as one model of Gigabyte's 970 line having some mention of ROPs/L2) and to apply pressure to the prime defendant (Nvidia) by leveraging the OEMs. Basically, if Gigabyte is sued, then chances are other OEMs are at risk, so they'll likely form a consortium and make a separate deal which puts pressure on Nvidia. It's a tactical move that encourages a settlement even if Nvidia has the stronger case.

Might be because of 12-b and Exhibit D of the complaint, which point out Gigabyte's advertising blurb of "Integrated with industry's best 4GB GDDR5 memory 256-bit memory interface", which is still found on all of their product pages *for both* 970 and 980 cards. I'm thinking most consumers comparing the specs of a Gigabyte 970 and a Gigabyte 980 would be led to believe that both cards have the same memory subsystem, based on this statement (I'll admit, I did exactly that).
 
Might be because of 12-b and Exhibit D of the complaint, which point out Gigabyte's advertising blurb of "Integrated with industry's best 4GB GDDR5 memory 256-bit memory interface", which is still found on all of their product pages *for both* 970 and 980 cards.
And you think that's false advertising?
 
And you think that's false advertising?

It is when Nvidia says this:
http://www.pcper.com/reviews/Graphi...Full-Memory-Structure-and-Limitations-GTX-970

To those wondering how peak bandwidth would remain at 224 GB/s despite the division of memory controllers on the GTX 970, Alben stated that it can reach that speed only when memory is being accessed in both pools.
It's not a 256 bit memory bus, otherwise peak bandwidth would be 256GB/s, and he knows that.
If you check the block diagram on the same page that Alben provided, it shows that one of the 8 links from the crossbar is missing, so it cannot ever achieve the 256GB/s that a 256 bit bus is expected to provide.
 
Wow, you guys really mess some facts up. So many people wound up over so much misunderstanding about what is actually going on under the hood.
 
Haven't most owners of a 970 already had some type of resolution?

I got a full refund from Newegg, which was not what I was going for, but they wouldn't budge on a % refund, so on principle I returned it.

Class action lawsuits seem like a last option for a 970 owner; the lawyers will get all the $$$. A partial refund or a return for a full refund would be much better options, and that's already happening....

While this is true, class action lawsuits do keep companies somewhat in line. Instead of thinking about it that way, just pretend the lawyers are actually the government fining them, lol.

At the end of the day, like most people, I couldn't give a rat's ass how many GB, ROPs, or shaders a GPU has; what matters is FPS in the games I play at the resolutions I play at. This is what reviewers show people and the real reason they exist. If simply listing specs were good enough, we wouldn't need reviewers.
 
Wow, you guys really mess some facts up. So many people wound up over so much misunderstanding about what is actually going on under the hood.
Strange.
You have just been shown exactly what's going on under the hood, with "facts".
If you have facts that say otherwise, bring them to the discussion.
 
Strange.
You have just been shown exactly what's going on under the hood, with "facts".
If you have facts that say otherwise, bring them to the discussion.
Fact is, it's still a 256bit memory bus...even the link provided didn't dispute that :rolleyes:
 
Fact is, it's still a 256bit memory bus...even the link provided didn't dispute that :rolleyes:

I guess I put too much faith in your ability.
Let's get this established.

There are 4 blocks of memory; each has 2x 0.5GB memory segments and 2x cache,
except for the last one, which has only one cache shared by its 2 memory segments.

The crossbar is capable of linking 256 bits, but because of the missing cache on the last memory block, there is no link to connect 1/8 of those bits.
1/8 of 256 available bits is 32 bits.

The available no. of bits to transfer from the crossbar to memory is
256 - 32 = 224 bits.
Not a 256 bit memory interface as advertised.

You could try and argue that the crossbar is the memory interface, but that would be only partially correct.
The full memory interface includes the actual connections to the cache/memory as well.
Imagine if only half of the crossbar were connected on the memory side. Would that be reasonable?
Could you still call it 256 bit?
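To make that subtraction concrete, here's a toy Python sketch of the crossbar as this post describes it (this models the argument above, not an official Nvidia diagram):

[code]
# Toy model of the crossbar links as described above:
# 8 potential 32-bit links, with the link behind the disabled
# cache not usable as an independent connection.
LINK_WIDTH_BITS = 32
TOTAL_LINKS = 8
usable_links = TOTAL_LINKS - 1  # one link lost to the missing cache

print(TOTAL_LINKS * LINK_WIDTH_BITS)   # 256 -- the advertised width
print(usable_links * LINK_WIDTH_BITS)  # 224 -- what this post argues is left
[/code]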
 
It's not a 256 bit memory bus, otherwise peak bandwidth would be 256GB/s, and he knows that.
If you check the block diagram on the same page that Alben provided, it shows that one of the 8 links from the crossbar is missing, so it cannot ever achieve the 256GB/s that a 256 bit bus is expected to provide.
Here, Nenu, I'll explain further: you're conflating the memory controllers with bandwidth. It's not 256 bit memory bus = 256 GB/s

It's 32 bit memory controllers x 8 memory controllers = a 256 bit memory bus.
Then you have to calculate the memory bandwidth from the memory interface width, the type of RAM, and the effective clock rate of the RAM.

My 780 has a 384 bit memory interface...that doesn't make the memory bandwidth 384 GB/s!
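As a quick Python sketch of that formula (the effective clocks here are assumed from the cards' reference specs):

[code]
# Peak theoretical bandwidth: (bus width in bits / 8) bytes per transfer,
# multiplied by the effective memory clock in GT/s.
def peak_bandwidth_gbps(bus_width_bits, effective_clock_gtps):
    return bus_width_bits / 8 * effective_clock_gtps

print(peak_bandwidth_gbps(384, 6.0))  # GTX 780: 288.0 GB/s, not 384
print(peak_bandwidth_gbps(256, 7.0))  # GTX 970 as advertised: 224.0 GB/s
[/code]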

Yet again demonstrating that people all up in arms about this "issue" aren't even knowledgeable about what they're angry over.
 
Here, Nenu, I'll explain further: you're conflating the memory controllers with bandwidth. It's not 256 bit memory bus = 256 GB/s

It's 32 bit memory controllers x 8 memory controllers = a 256 bit memory bus.
Then you have to calculate the memory bandwidth from the memory interface width, the type of RAM, and the effective clock rate of the RAM.

My 780 has a 384 bit memory interface...that doesn't make the memory bandwidth 384 GB/s!

Yet again demonstrating that people all up in arms about this "issue" aren't even knowledgeable about what they're angry over.

It's not me getting confused :p
It was advertised as a card with a 256 bit memory interface.
I demonstrated above that that isn't the case.
 
It's not me getting confused :p
It was advertised as a card with a 256 bit memory interface.
I demonstrated above that that isn't the case.
You demonstrated it by smashing numbers together until you came up with the result you had already concluded...not by using the actual formulas for deriving the memory interface or bandwidth :rolleyes:

The card has 8 memory controllers, and they are 32 bits each.

Even the corrected specs don't change the memory interface, because it's still 256 bits!
The bandwidth slows down because the SMM is communicating through a single L2...not because it lost a memory controller, as you simply made up in your example.
 
You demonstrated it by smashing numbers together until you came up with the result you had already concluded...not by using the actual formulas for deriving the memory interface or bandwidth :rolleyes:

The card has 8 memory controllers, and they are 32 bits each.

Even the corrected specs don't change the memory interface, because it's still 256 bits!
The bandwidth slows down because the SMM is communicating through a single L2...not because it lost a memory controller, as you simply made up in your example.

The card is not advertised as having a 256bit memory controller.
It is advertised as having a 256 bit memory interface.
I'll refer you to post 57 where AFD quoted 12-b and Exhibit D of the complaint:
"Integrated with industry's best 4GB GDDR5 memory 256-bit memory interface"

The MCs are part of that but do not comprise the whole.
 
The card is not advertised as having a 256bit memory controller.
It is advertised as having a 256 bit memory interface.
I'll refer you to post 57 where AFD quoted 12-b and Exhibit D of the complaint:
"Integrated with industry's best 4GB GDDR5 memory 256-bit memory interface"

The MCs are part of that but do not comprise the whole.
You're really confused.

The cards were and still are advertised as having a 256bit memory interface, and that's based on having 8 32-bit memory controllers!

You are wrong, plain and simple. The memory controllers comprise the whole of the memory interface. As I wrote earlier, they don't dictate the memory bandwidth. That's something you conjured up in your mind because you don't seem to understand how memory bandwidth is calculated. This conversation is becoming beyond ridiculous.

The only reason they are quoting that sentence is because of the 4GB of GDDR5 claim. If they are going to make an issue out of the second half of that sentence, claiming it has a 256bit memory interface, then they are going to lose, plain and simple. Even the anandtech article everyone keeps citing, apparently either not reading it or not understanding what they are reading, explicitly points out the card has a 256bit memory interface just like the 980, and that that's not where the bottleneck is.

At this point, you're simply arguing from your own misunderstanding of the technology so there's not much point in commenting further until you read the information and glean a better understanding of what you're supposed to be angry about.
 
If you're still having trouble understanding how this works, take it up with Anand:

When the GTX 980 and GTX 970 were released, NVIDIA provided the above original specifications for the two cards. The launch GTX 900 GPUs would be a standard full/die-harvested card pair, with the GTX 980 using a fully enabled GM204 GPU, while the GTX 970 would be using a die-harvested GPU where one or more SMMs had failed. As a result of this the big differences between the GTX 980 and GTX 970 would be a minor clockspeed difference, the disabling of 3 (of 16) SMMs, and a resulting reduction in power consumption. Most important for the conversation at hand, we were told that both possessed identical memory subsystems: 4GB of 7GHz GDDR5 on a 256-bit bus, split amongst 4 ROP/memory controller partitions. All 4 partitions would be fully active on the GTX 970, with 2MB of L2 cache and 64 ROPs available.

This, as it turns out, was incorrect.

As part of our discussion with NVIDIA, they laid out the fact that the original published specifications for the GTX 970 were wrong, and as a result the “unusual” behavior that users had been seeing from the GTX 970 was in fact expected behavior for a card configured as the GTX 970 was. To get straight to the point then, NVIDIA’s original publication of the ROP/memory controller subsystem was wrong; GTX 970 has a 256-bit memory bus, but 1 of the 4 ROP/memory controller partitions was partially disabled, not fully enabled like we were originally told. As a result GTX 970 only has 56 of 64 ROPs and 1.75MB of 2MB of L2 cache enabled. The memory controllers themselves remain unchanged, with all four controllers active and driving 4GB of VRAM over a combined 256-bit memory bus.
--http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation

It doesn't get more factual than that, nor more clear.
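For quick reference, here are the deltas from that correction, pulled from the Anandtech quote above (a small Python sketch; the field names are mine):

[code]
# Original vs. corrected GTX 970 figures from the quoted Anandtech article.
original  = {"ROPs": 64, "L2_cache_KB": 2048, "memory_bus_bits": 256, "VRAM_GB": 4}
corrected = {"ROPs": 56, "L2_cache_KB": 1792, "memory_bus_bits": 256, "VRAM_GB": 4}

for spec, old in original.items():
    new = corrected[spec]
    print(spec, "unchanged" if old == new else f"{old} -> {new}")
[/code]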
 
You highlighted the problem perfectly.
To get straight to the point then, NVIDIA’s original publication of the ROP/memory controller subsystem was wrong; GTX 970 has a 256-bit memory bus, but 1 of the 4 ROP/memory controller partitions was partially disabled, not fully enabled like we were originally told.

As stated, it has a 256bit memory bus.
The advertised 256 bit "memory interface" comprises a lot more than that, and includes the partially disabled ROP/memory controller partition, which reduces memory performance to 224bit.
Even worse, they tagged on another 0.5GB of RAM to share that connection and cache, so neither can be fully utilised at the same time.
The best you can hope for is full performance from 3.5GB, or reduced performance.
 
The advertised 256 bit "memory interface" comprises a lot more than that, and includes the partially disabled ROP/memory controller partition, which reduces memory performance to 224bit.
No, this is not how you calculate memory bandwidth :rolleyes:
And no, memory performance is not reduced to "224 bit" :D
 
There's nothing in the pleading about the memory interface or bandwidth.
Read it yourself: http://www.scribd.com/doc/256406451/Nvidia-lawsuit-over-GTX-970

The issues raised are: "The Defendants engaged in a scheme to mislead consumers nationwide about the characteristics, qualities and benefits of the GTX 970 by stating that the GTX 970 provides a true 4GB of VRAM, 64 ROPs, and 2048 KB of L2 cache capacity, when in fact it does not."

That's it. If you think you have an additional claim based on the memory interface, you'd better contact the lawyers, because they're missing your critical claim...if you even have a 970, which I doubt.
 
No, this is not how you calculate memory bandwidth :rolleyes:
And no, memory performance is not reduced to "224 bit" :D

I'll spell it out for you.

It's part of the calculation of the memory bandwidth available to the GPU.
The GPU is the only thing making use of what is in memory, so it's of the highest importance.
The bandwidth the GPU sees through the "memory interface" is linked directly to performance.

Nvidia acknowledged, in that quote you posted, that the originally released specs were wrong.
Are you saying the stated "1 of the 4 ROP/memory controller partitions was partially disabled" will have no impact on available bandwidth?
 
There's nothing in the pleading about the memory interface or bandwidth.
Read it yourself: http://www.scribd.com/doc/256406451/Nvidia-lawsuit-over-GTX-970

The issues raised are: "The Defendants engaged in a scheme to mislead consumers nationwide about the characteristics, qualities and benefits of the GTX 970 by stating that the GTX 970 provides a true 4GB of VRAM, 64 ROPs, and 2048 KB of L2 cache capacity, when in fact it does not."

That's it. If you think you have an additional claim based on the memory interface, you'd better contact the lawyers, because they're missing your critical claim...if you even have a 970, which I doubt.

Are you saying that the case notes clarifying the complaint are irrelevant?
What they really mean is what we have already been discussing.
 
I'll spell it out for you.

Are you saying the stated "1 of the 4 ROP/memory controller partitions was partially disabled" will have no impact on available bandwidth?
For the last time, it has an impact on memory bandwidth, but it does not change the width of the memory bus.

The fact you don't seem to understand is that memory bandwidth and memory bus width are not 1:1, as you keep miscalculating.
 
For the last time, it has an impact on memory bandwidth, but it does not change the width of the memory bus.

The fact you don't seem to understand is that memory bandwidth and memory bus width are not 1:1, as you keep miscalculating.

If only the "memory bit bus" what what mattered and was being discussed, you would have a point.
Its the whole memory interface which is at issue because it does not perform as you would expect from a full 256 bit memory interface.
 
Not to jump into any argument (honestly :)), but I just wanted to clarify my particular misunderstanding of Gigabyte's advertising claim...

I'm not contesting whether the 970 is or isn't a 4GB 256-bit card at the moment. If a 980 is "Integrated with industry's best 4GB GDDR5 memory 256-bit memory interface", and a 970 is also "Integrated with industry's best 4GB GDDR5 memory 256-bit memory interface", wouldn't it be understood by the consumer that both the 970 and 980 use the exact same memory subsystem, and that the 970's should be every bit as good as the 980's? But that's just not the case.

I'm not sure if that statement, used in such a manner, constitutes false advertising, but it is incredibly misleading when used identically to describe both cards. It is effectively the same deceptive information (whether intentional or accidental) as having both the 970 and 980 specs show identical ROP and L2 cache figures in the reviews we all read at launch (using incorrect info provided by Nvidia in their reviewer's guide)... only done in a very simple manner, directly by the AIB to the consumer.

Just my 2¢ (which is probably what I'll get from the class action suit :D)
 
Well, I'm not going to go any deeper into it than the reviews and follow-ups, but the basic point is that it does use the same 256bit memory interface. In fact, that's a defining point of Maxwell and why they're able to deliver such great performance at such a low price (by utilizing the same memory interface but cutting out some L2 cache).

Claiming it had the same cache and ROPs as the 980 was the only part that could be called misleading. The memory interface complaint is just straight-up consumer misunderstanding, and if you think the 970 should be every bit as good as a 980, then it's difficult to respond to your position: clearly one was $200 cheaper than the other, and they're not the same card, so no, one cannot reasonably expect that they'd behave the same...
 