HardOCP looking into the 970 3.5GB issue?

Status
Not open for further replies.
These are the oddest settings and most random resolutions. It looks like they took their time finding benchmarks that fit their 1-3% claims.

When AMD had frame times all over the place when using CrossFire, NVIDIA developed FCAT. Where is their FCAT testing now?

We need some independent testing on this.

At those settings, raw GPU grunt will be the biggest bottleneck, so it would hide most issues when looking at FPS alone.
 
y'all butthurt non-GTX 970 users better spend more dinero to get one and see its performance!
y'all GTX 970 users, if you happen to believe this smear campaign, better go SLI!
easy peasy
 
I like how you conveniently ignore posts like this right next to the one you posted. It shows the way you lean regardless of facts:

https://forums.geforce.com/default/...tx-970-3-5gb-vram-issue/post/4434270/#4434270

Also, definitely loving how people just realized there was "something wrong" with their cards 4 months after it was released. It really shows how crazy human psychology is.

Plus that CUDA program that has everyone up in arms has been shown to be incorrect and pretty much useless. As you said, though, the paranoia has already spread quickly beyond that realm. Not sure if my friend's EVGA 970 is special, but as far as he knows it doesn't have issues with Shadow of Mordor or his other games. SoM is the only VRAM-heavy game he has (well, I forget if Thief is), so I can't really say what is what since nothing has been unsatisfactory. From what I've read, it seems like it may just be an architectural thing when you start snipping things from the original (GTX 980).

But who knows, I'm not an engineer, and I'm not an Nvidia engineer, but the internet sure has plenty of armchairs. Then again every facet has an armchair or two on the web now.
 
Wow... Incredibly thick statement here man. Please be smarter than this.

More like a fundamental lack of comprehension on your part.

You're now saying the 980 is affected, with nobody seeing the same memory results in the CUDA benchmark and no one complaining about stuttering or anything else wrong with the 980?

No, I'm not - in this thread - commenting on the 980 at all.
 
Oh really now? I guess this isn't a dual 6 pin card :rolleyes:
http://www.evga.com/Products/Specs/GPU.aspx?pn=d8328514-f9bc-44aa-ae85-d50c9f433297

EVGA isn't the only company with those original dual 6 pin connectors on their aftermarket designs either.

If you want to play the fact game, start providing sources with links.
That is an OEM (Aftermarket, as you put it) PCB design. Nvidia had absolutely zero input on the number and type of power connector, number of power phases used on board, layout of components, number and type of display outputs, etc. For these OEM designs, all Nvidia provides is the chip itself and the specifications to talk to it.

A Reference design is something different. Here, Nvidia not only provides the chip but also the design for both the PCB and the cooling hardware. Reference 980s all use the same PCB and the distinctive NVTTM-variant cooler.
Nvidia did not provide a reference design for the 970. The only 970s available are either using custom OEM designs, re-used PCB designs from previous cards (many using 770 PCBs), and one manufacturer putting 970 chips on 980 reference PCBs (Manli).
 
That is an OEM (Aftermarket, as you put it) PCB design. Nvidia had absolutely zero input on the number and type of power connector, number of power phases used on board, layout of components, number and type of display outputs, etc. For these OEM designs, all Nvidia provides is the chip itself and the specifications to talk to it.

A Reference design is something different. Here, Nvidia not only provides the chip but also the design for both the PCB and the cooling hardware. Reference 980s all use the same PCB and the distinctive NVTTM-variant cooler.
Nvidia did not provide a reference design for the 970. The only 970s available are either using custom OEM designs, re-used PCB designs from previous cards (many using 770 PCBs), and one manufacturer putting 970 chips on 980 reference PCBs (Manli).

That is not true one bit, they have to have authorization from Nvidia before changing anything on the design of the card, including just changing the aftermarket cooler. An OEM can't simply buy Nvidia chips and start doing whatever the hell they want. There are contracts involved with a set of limitations.
 
An OEM can't simply buy Nvidia chips and start doing whatever the hell they want. There are contracts involved with a set of limitations.
Those limitations are on providing sufficient minimum power to the chip (with voltages within ripple etc. spec), providing sufficient cooling, and on limiting the available clock speeds and core voltages exposed to the user. PCB layout is not one of the things covered, and OEMs have free rein there, as evidenced by the variety of PCB and cooler designs available.
And said limitations apply to warranty support (OEM to Nvidia for the chips themselves). OEMs are free to produce utterly beastly core voltages and overclocks, but they void the right to send failed chips back to Nvidia if the chip doesn't work. That is one of the reasons there is a noticeable jump in pricing between 'OC' cards within Nvidia's prescribed clock and voltage ranges and ones that fall outside of that range; the OEM has to absorb a much larger proportion of hardware failure costs.
 
Who the fuck cares about power connectors. That has nothing to do with this thread. Take your dick-measuring contest elsewhere.
 
That is not true one bit, they have to have authorization from Nvidia before changing anything on the design of the card, including just changing the aftermarket cooler. An OEM can't simply buy Nvidia chips and start doing whatever the hell they want. There are contracts involved with a set of limitations.

Considering that the 970 has no reference board, how the hell do you propose that nVidia limit any design changes?

From AnandTech:

Furthermore, as we mentioned in our GTX 980 review, GTX 970 has been a pure virtual (no reference card) launch, which means all of NVIDIA’s partners are launching their custom cards right out of the gate. A lot of these have been recycled or otherwise only slightly modified GTX 700/600 series designs, owing to the fact that GM204’s memory bus has been held at 256-bits and its power requirements are so low.
 
and it's been updated.

http://www.pcper.com/reviews/Graphi...Full-Memory-Structure-and-Limitations-GTX-970

doesn't exactly scream good news, more like bad.

The diagram says everything. nVidia managed to make an unbalanced 256-bit 4GB card, and the reality is that it is a 224-bit card effectively. It would have been better to just have made it 3.5GB and called it a day. Basically, this is to be able to sell chips with a single faulty L2, but still bill it as "256-bit 4GB".
 
The diagram says everything. nVidia managed to make an unbalanced 256-bit 4GB card, and the reality is that it is a 224-bit card effectively. It would have been better to just have made it 3.5GB and called it a day. Basically, this is to be able to sell chips with a single faulty L2.

Funny part about all this: Techreport.com commented on this issue back in October, and no one gave a shit until now.

This to me is worse than some bullshit horrible reference cooler. They lied... and are in damage control mode.
 
I get that the performance is the same as it was on launch day, but that does not change the principle of the matter. If it were not for the talented folks in our little community (the PC enthusiasts) who discovered this issue, I doubt we would have ever heard anything from Nvidia.
 
Something being lost in this and not many people are asking is what about other Maxwell GPUs? The GTX 980m, 970m, 965m, 750?

How about Kepler even?
 
This is Nvidia right now in regards to the 970 issue.

chuck-bailingblog.jpg


I think they will need a bigger bucket now, or the boat is going down. At least they are being up front about it after it was pointed out.
 
This is Nvidia right now in regards to the 970 issue.

chuck-bailingblog.jpg


I think they will need a bigger bucket now, or the boat is going down. At least they are being up front about it after it was pointed out.

Yeah, it only took them 4 months... to realize every major hardware vendor had the specs wrong? ...lol
 
Funny part about all this: Techreport.com commented on this issue back in October, and no one gave a shit until now.

This to me is worse than some bullshit horrible reference cooler. They lied... and are in damage control mode.

It's just a memory partition. :rolleyes: Every 970 review shows that it's an awesome card.

Now the AMD throttling issue. That was a major f*** up.
 
This is little more than an opportunity for some people to whine and stamp their feet. Honesty is always the best policy so any anger at Nvidia is self made, but IMHO it's actually a rather good example of engineering to provide MORE performance than what would have been possible before with Kepler.

Had the GeForce GTX 970 been built on the Kepler architecture, the company would have had to disable the entire L2/MC block on the right hand side, resulting in a 192-bit memory bus and a 3GB frame buffer. GM204 allows NVIDIA to expand that to a 256-bit 3.5GB/0.5GB memory configuration and offers performance advantages, obviously.

Size of L2 and # of ROP units is different than originally stated, and they do deserve to be called out on that. 4GB memory is correct, and while an argument can be made that it should have been noted for what it is (3.5GB full speed, 500MB cache or something to that effect), the fact remains that it does offer a full 4GB available, and prioritizes the larger/faster pool first (as it should, obviously).

970 is still a tremendous value for the performance it offers and that speaks for itself in gameplay.
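The bus-width arithmetic behind that Kepler-vs-Maxwell comparison is easy to check. A rough sketch, assuming (as the figures in this thread do) that each GM204 memory controller is 32 bits wide and pairs with 512MB of GDDR5:

```python
# Memory-config arithmetic for GM204-style parts (assumed values: each
# memory controller port is 32 bits wide and drives 512MB of GDDR5).
def mem_config(active_controllers):
    """Return (bus width in bits, memory in GB) for a given controller count."""
    return active_controllers * 32, active_controllers * 0.5

print(mem_config(8))  # (256, 4.0) -- full GM204, i.e. GTX 980
print(mem_config(7))  # (224, 3.5) -- the GTX 970's fast partition
print(mem_config(6))  # (192, 3.0) -- what a Kepler-style full-block cut would give
```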
 
It's just a memory partition. :rolleyes: Every 970 review shows that it's an awesome card.

Now the AMD throttling issue. That was a major f*** up.

How is the AMD throttling issue even remotely close to this? All cards throttle now. Nvidia has actually been doing it longer than AMD with their "variable boost clocks".

Nvidia was just caught falsely advertising the 970's number of ROPs, L2 cache, and memory bus width. If you think the AMD throttling issue is even remotely close to this, I feel sorry that some of your green blood was spilled. :rolleyes:
 
and its been updated.

http://www.pcper.com/reviews/Graphi...Full-Memory-Structure-and-Limitations-GTX-970

doesn't exactly scream good news, more like bad.

"First, despite initial reviews and information from NVIDIA, the GTX 970 actually has fewer ROPs and less L2 cache than the GTX 980. NVIDIA says this was an error in the reviewer’s guide and a misunderstanding between the engineering team and the technical PR team on how the architecture itself functioned. That means the GTX 970 has 56 ROPs and 1792 KB of L2 cache compared to 64 ROPs and 2048 KB of L2 cache for the GTX 980."

How come this was never corrected? They knew the specs were wrong but didn't say anything?
 
Meta discussion, but surprised AnandTech has an article up before Tom's Hardware. AT still lists the "corrected" specs as 256-bit when in reality the card operates as 224-bit 99% of the time, unless I am mistaken.
 
Considering that the 970 has no reference board, how the hell do you propose that nVidia limit any design changes?

From AnandTech:

The design has to be cleared by Nvidia. They are 100% involved in certifying non-reference cards. You took what I said out of context, I never said anything about limitation of design changes.
 
Of course, that's why I read reviews... I left that part out. Reviews said it cleaned better. Done.

Just like reviews agreed 970 performed as it does. Done. You don't need to speculate how this affects performance when you have already seen performance reviews from every site on the planet!

Supporting full-speed access to the final 512MB obviously would've increased cost. Obviously a decision was made, and there was a trade-off to get the thing into more people's budgets. A trade-off that affects performance in a small percentage of situations, and apparently costs maybe 0-2% of overall performance.
The problem is that 1-2% is enough to cause frame stuttering.
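That point is worth making concrete: an average-FPS number can hide visible hitching. A hypothetical frame-time trace with made-up numbers — one doubled frame barely moves the average, but it reads as a stutter:

```python
# Hypothetical frame-time trace (made-up numbers): 59 frames at a smooth
# 16 ms, plus one 32 ms frame -- e.g. a stall while data is shuffled out
# of the slow 512MB pool.
frame_times_ms = [16.0] * 59 + [32.0]

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
smooth_fps = 1000 / 16.0

print(smooth_fps)                                   # 62.5
print(round(avg_fps, 1))                            # 61.5
print(round(100 * (1 - avg_fps / smooth_fps), 1))   # 1.6 (% average-FPS loss)
```

So the average drops only ~1.6%, yet the single 32 ms frame is exactly the kind of hitch people notice.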
 
It's just a memory partition. :rolleyes: Every 970 review shows that it's an awesome card.

Now the AMD throttling issue. That was a major f*** up.

The specs listed on the official Nvidia slides aren't correct. Not only are they not correct, they never bothered to correct them. If they didn't get called out on this, we still wouldn't know the correct specs. And you want to talk about AMD's throttling? Really, dude?
 
It's just a memory partition. :rolleyes: Every 970 review shows that it's an awesome card.

Now the AMD throttling issue. That was a major f*** up.

No, this is a major fuck up. Lying about specs on a video card? WAY worse than throttling (which was fixed with good coolers).

There is no fix for this. You can try to spin it. Nvidia fucked up bad and is in damage control. This is way worse than anything AMD ever did. At least they didn't lie about the specs of a video card.

There is no way you can spin this, Prime1. Just won't happen this time.
 
Anandtech (Ryan Smith) has an article about it.

In short this is probably why it gets noticeable (stutters?).
GTX 970 can read the 3.5GB segment at 196GB/sec (7GHz * 7 ports * 32-bits), or it can read the 512MB segment at 28GB/sec, but not both at once; it is a true XOR situation. Furthermore because the 512MB segment cannot be read at the same time as the 3.5GB segment, reading this segment blocks accessing the 3.5GB segment for that cycle, further reducing the effective memory bandwidth of the card. The larger the percentage of the time the crossbar is reading the 512MB segment, the lower the effective memory bandwidth from the 3.5GB segment.

Check out the article, it's pretty informative at least compared to PCPer's NV statement.

http://www.anandtech.com/show/8935/geforce-gtx-970-correcting-the-specs-exploring-memory-allocation
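The bandwidth figures in that quote check out, and you can model the XOR penalty directly. A back-of-envelope sketch, assuming a simple time-weighted split between the two pools (my simplification, not Anandtech's model):

```python
# Back-of-envelope model of the GTX 970's split memory pools, using the
# figures quoted above: 7 GHz effective GDDR5 across 32-bit ports. The XOR
# access means each crossbar cycle services the fast pool OR the slow pool.

MEM_CLOCK_HZ = 7e9        # 7 GHz effective GDDR5 data rate
PORT_WIDTH_BYTES = 4      # one 32-bit memory controller port

fast_bw = MEM_CLOCK_HZ * 7 * PORT_WIDTH_BYTES / 1e9   # 7 ports for 3.5GB pool
slow_bw = MEM_CLOCK_HZ * 1 * PORT_WIDTH_BYTES / 1e9   # 1 port for 512MB pool

def effective_bandwidth(slow_fraction):
    """Time-weighted bandwidth when `slow_fraction` of crossbar cycles go
    to the 512MB segment (a simplifying assumption for illustration)."""
    return (1 - slow_fraction) * fast_bw + slow_fraction * slow_bw

print(fast_bw)                     # 196.0 GB/s
print(slow_bw)                     # 28.0 GB/s
print(effective_bandwidth(0.10))   # ~179.2 GB/s with 10% slow-pool traffic
```

Even modest traffic to the slow pool drags the whole card's effective bandwidth down, which fits the stutter reports.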
 
Talking about a competitors issue is a typical bait and switch tactic of someone that is dedicated to one company over another. Investing emotions into a company like that is just ridiculous and always just puts you into a bad mood when you see them fail.

The only time I ever care about a company is if it's a fair and local business, and it's helping my local economy. Even then, if they mess up, I expect them to fix whatever it is within reason.

A while back Nvidia did a reimbursement program for GTX 280/260 early adopters that got screwed when they did major price drops after the first month.
http://www.maximumpc.com/article/ne...sement_program_gives_cash_back_early_adopters

Maybe they will refund their customers for the false advertising on the video cards. I hope they do something, because this is a bunch of crap and their most recent response is nothing we didn't already know.
 
According to the PCPer article, Nvidia's labs say the difference could be 4-6%, so in one day they doubled the previous 1-3% estimate.

NVIDIA’s performance labs continue to work away at finding examples of this occurring and the consensus seems to be something in the 4-6% range. A GTX 970 without this memory pool division would run 4-6% faster than the GTX 970s selling today in high memory utilization scenarios. Obviously this is something we can’t accurately test though – we don’t have the ability to run a GTX 970 without a disabled L2/ROP cluster like NVIDIA can. All we can do is compare the difference in performance between a reference GTX 980 and a reference GTX 970 and measure the differences as best we can, and that is our goal for this week.
 
There is no way you can spin this, Prime1. Just won't happen this time.

You are the one spinning things. Here is the conclusion from the PCPER article that has your jimmies rustled.

NVIDIA has come clean; all that remains is the response from consumers to take hold. For those of you that read this and remain affronted by NVIDIA calling the GeForce GTX 970 a 4GB card without equivocation: I get it. But I also respectfully disagree. Should NVIDIA have been more upfront about the changes this GPU brought compared to the GTX 980? Absolutely and emphatically. But does this change the stance or position of the GTX 970 in the world of discrete PC graphics? I don’t think it does.

Tempest in a teapot. But if it makes you feel better to rant about it... have at it.
 
No way the louder we shout about this the more chances we get some free game vouchers :D
 
You are the one spinning things. Here is the conclusion from the PCPER article that has your jimmies rustled.



Tempest in a teapot. But if it makes you feel better to rant about it... have at it.
Eh, I understand that the performance is still as good as it was shown to be in reviews, but I think the fact that they were misleading (by omission, at least) about how the memory works, as well as the fact that they misstated the # of ROPs and L2 cache and never bothered to correct it is poor form. It's definitely not a non-issue, though I do agree that a lot of people are getting too emotional about it.
 
Don't worry, you can just lower settings so the GPU doesn't need to address memory above 3.5gb. This will also improve frame rates so it's a win-win!
 
I'm dismayed with this site. I have watched this once-great resource turn into the BBQ Pit Boys of hardware enthusiast sites.

You are too busy making anti-Apple clickbait news items and catering to redneck PC gamers rather than standing by your loyal readers. You should be giving Nvidia both barrels, just like you used to do when you still cared about what you do. I bought my 970 after reading the review on this site, and since then I've noticed the complete lack of any real PC industry news and views.

I'm out of here. Thanks for all the fish.
 