NVIDIA GT300 Yields Are Under 2%?

Given that NV's new cards are mainly low-end OEM crap, and given the design issues they have been having, I don't believe you will see GT300 properly in shops until pushing Q2 next year, especially if the yields are this shit. This screams of fundamental design flaws.

NV have made the same mistakes Intel did with the P4. Now it's too hot, too big and out of the park.

The G92/4 saga pretty much confirms that. They designed for a lower TDP and couldn't hit it, so they raised it and launched anyway. Then the packaging scandal and the still-ongoing laptop drama keep the sorry state of affairs going. Flipping off your customers and flat-out lying is great PR, btw. We'll remember that next time the choice comes up. Just remember that.

NV need to pull their finger out and sort this fast, otherwise Q1 2010 will belong to ATI. I sure as hell won't be buying a rebranded "old" card from NV. Either sort your shit and get with the program, or we'll find another supplier.
 
Is the manufacturing process really still so archaic that they have to make over 400 chips before they can even test them? I'm thinking around the 20th bad one I would say wait, stop......
Well, you see, engineers use a form of statistical analysis to determine the probability that their design is defective. This exotic form of math takes in a dearth of environmental variables, and the probability of a defective design scales much more slowly than the pile of bad chips, contrary to what most would consider common sense.

So for example, on a normal day, when the amount of solar flare activity is relatively low, if your assembly line spit out 47 dead chips and 3 working ones, the probability that the design is defective is only 7%. However, if you have 85 dead chips and 15 working ones, the probability that your design is defective is now 47%, even though there are proportionally more than twice as many working chips.

One reason the functions scale like this is that they take into account the familiarity of the assembly line workers with the design. It takes about 47 chips or so before the workers soldering in the tiny transistors and tiny wires get used to a design. Interestingly enough, that number (47 chips) is the same across pretty much all the process nodes, and lots of papers have been written about this. Anyway, once you've passed 47 chips, the quality of the products usually increases dramatically - if you have a good design.

From these numbers, I'm guessing that nV has a problem with their tools. A single tool, such as a soldering iron, costs hundreds of millions of dollars, and you need to buy a new one for each process node! If nV got a batch of defective soldering irons, it can end up being a huge pain and cost them a lot of money.
 
Yeah, there's no way a 2% yield would even make it out of the engineering labs. I'm calling BS on this one (oh wait, I do that with every Charlie article).
 
And why do you think they aren't the same sources for both Charlie and Kyle?

Or does the fact that Kyle "heard" it make it true?

Are you equating Charlie with Kyle? Granted they may share the same sources, but Kyle is saying that Charlie is not just making this stuff up without a source, unreliable as that source may be. The rumors are already out there and Charlie is just jumping all over them. Unfortunately, no one has heard anything contradicting the bad news for Nvidia.
 
400 on a single wafer?
no, 4 wafers at 104 dies per wafer.

Is the manufacturing process really still so archaic that they have to make over 400 chips before they can even test them? I'm thinking around the 20th bad one I would say wait, stop......
They have to finish manufacturing before they can run final tests on the whole wafer, and the manufacturing time means they sent 4 wafers through.

And I'm quoting this from a guy who was in the industry and had a couple of remarks to add, over on another forum.
IDC @ anand said:
Can it be that bad? Sure, it can always be zero.

Let's just assume ALL of Charlie's numbers and sources are 100% correct... it's four wafers.

Getting low yields on four wafers is not exactly uncommon. And it especially comes as no surprise for a hot lot, as hot lots typically have nearly all the inline inspection metrology steps skipped in order to reduce the cycle-time all the more.

Those inline inspections are present in the flow for standard priority wip for a reason, related to both yield (reworks and cleanups) as well as cost reduction (eliminate known dead wip earlier in the flow).

I really pity anyone who is wasting their time attempting to extrapolate the future health of an entire product lineup based on tentative results from four hot-lotted wafers. That's not a put-down of anyone who is actually doing just that, including Charlie; it's an honest, empathetic response I have for them, because they really are wasting their time chasing after something with error bars so wide they can't see the ends of the whiskers from where they stand at the moment.

Now if we were talking about results averaged from, say, 6-8 lots and a minimum of 100-200 wafers run through the fab at standard priority (i.e. with all the standard yield enhancement options at play), then I'd be more inclined to start divining something from the remnants of the tea leaves here.

But just four wafers? Much ado about nothing at the moment, even IF all of the claimed details themselves are true.

This would have been far more interesting had the yields on those four wafers come back as 60% or 80%. Again, not that such yield numbers could be used to say anything about the average or the stdev of the yield distribution, but they would speak to process capability, and where there is proven capability there is an established pathway to moving the mean of the distribution into that yield territory.

But getting zero, or near-zero, yield is the so-called trivial result; it says almost nothing about process yield to get four wafers at zero yield. All it takes is one poorly performing machine during one process step and you get four wafers with yield-killing particles spewed on them.
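Just to put that sample size in perspective, here's a quick back-of-the-envelope (my own arithmetic, assuming the 4 wafers x 104 candidate die quoted above; the good-die count is only inferred from the reported "under 2%" figure, it isn't stated anywhere):

[code]
# Rough arithmetic on the rumoured figures (assumption: 4 hot-lot wafers with
# 104 candidate die each, as quoted above; the good-die counts below are my
# own guesses implied by the reported "under 2%" yield, not stated numbers).
wafers = 4
die_per_wafer = 104
total_die = wafers * die_per_wafer   # 416 candidate die in the whole sample

for good in (7, 8, 9):
    print(f"{good} good die out of {total_die} -> {good / total_die:.1%} yield")

# Prints roughly 1.7%, 1.9% and 2.2% -- the whole story hinges on a handful
# of die from a single hot lot. One tool excursion on one process step hits
# all four wafers at once, which is why a near-zero result here says almost
# nothing about yield across 100-200 standard-priority wafers.
[/code]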
 
And why do you think they aren't the same sources for both Charlie and Kyle?

Or does the fact that Kyle "heard" it make it true?
Because Kyle runs a major, respected hardware enthusiast website and doesn't say things for nothing. And he also carries more weight than some nVidiot having a temper tantrum. :rolleyes:
 
Thanks for posting that, GLSauron.
I saw that late last night and was going to link to it.
 
When the Cell was first in production at IBM, the yields were lower than that... First silicon, even on DRAM, is usually pretty low yield, so it depends on when that yield happened and on how much volume. This is only one wafer. If it was the very first tested wafer with the GT300, that's nothing to be concerned about, and it's not too unusual to have dead wafers due to a mis-process somewhere in line. Since they're only quoting a single wafer, I wouldn't put much, if any, stock in this report. If you're telling me that it's been in post-SQ production for a month and the benchmark wafer is at 1.7%, then I'd say you've got a huge problem. If it's a pilot lot and the lowest wafer is 1.7%, then I'd say you're going to have a damn good product.
 
Uhh, dunno about you, but my 7800 GTX couldn't run shit three years after I bought it in 2005, and my GTX 260, bought LAST year, puts it to shame.

Was talking about the low-yield, vaporware GTX 512. Please try to keep up with GPU lore.
 
no, 4 wafers at 104 dies per wafer.


They have to finish manufacturing before they can run final tests on the whole wafer, and the manufacturing time means they sent 4 wafers through.

And I'm quoting this from a guy who was in the industry and had a couple of remarks to add, over on another forum.

Thanks. Someone in the know always pops up with the "proper" info :)
 
Are you equating Charlie with Kyle? Granted they may share the same sources, but Kyle is saying that Charlie is not just making this stuff up without a source, unreliable as that source may be. The rumors are already out there and Charlie is just jumping all over them. Unfortunately, no one has heard anything contradicting the bad news for Nvidia.

No, but Kyle "hearing" it doesn't make it true, since if the source is bad, the info is bad...

As for no one hearing anything contradicting this so-called "bad" news: is anyone besides Charlie actually reporting it as "news", as you seem to be saying? Links?
 
Because Kyle runs a major, respected hardware enthusiast website and doesn't say things for nothing. And he also carries more weight than some nVidiot having a temper tantrum. :rolleyes:

He said he "heard" it. He didn't confirm or deny its veracity, he just said he "heard" it. Learn the terms and their differences...:rolleyes:
 
Hey Silus, I've seen you post both here and at TR. All I have to say is that you're very, very quick to defend the green team against news posts that don't say anything positive about them. Not calling you a fanboy per se, but this is precisely what I've noticed across a long time span, simply by visiting these two tech sites.

Just thought I'd let you know. ;)
 
The big thing about this isn't the yield issue, but the fact that Nvidia's engineers might not have enough 'working' chips to debug and test the line, and to design the proper bug fixes, before they can go ahead and prepare production runs for the consumer chips.

That can set back the process for however long it takes the next batch to come back from the fab. If, per the article, the next 4-wafer set returns more usable chips to test and diagnose with, it will only be a minor setback; if they have to order more, it will be a serious setback (months).
 
Ah, that article says 1st silicon, in which case 1.7% isn't bad. However, the major issue I see is: 4 wafers? Lots are 25 wafers; I don't know how they'd get only 4 out of it unless they had 6 splits on the lot and only 1 split had good die.

1st silicon often doesn't have many good die; sometimes there are none that are 'good', but they can still do parametric testing, partial testing of the die, failure analysis etc. to improve the next wave of material. If this were the 1st lot out of an initial 10-lot production run, 1.7% would be pretty bad, but for 1st silicon or pipecleaner lots it's not that big a deal, depending on what caused it. I still don't get the 4-wafer deal, though, unless the rest of the lot got trashed somewhere in line. As I said, initial Cell yield at IBM was worse.
 
Care to show me which part in that original Japanese article stated the yields, and I mean the original Japanese texts? All I could see were the release dates.

You'd think there'd be a few people here who could actually read Japanese besides me...
 
He said he "heard" it. He didn't confirm or deny its veracity, he just said he "heard" it. Learn the terms and their differences...:rolleyes:
And neither did I. The point is that it's a rumor within the industry that has not been confirmed, but more importantly, NVIDIA has done nothing to deny it either. Although I suppose you can't see that logic through the tears.
 
I'd be surprised if GT300 cards were available at retail before Q1 2010. It'll be a close-run thing if they even have test cards to "review" before the end of November.

It's a damn good job NV is working on cell phone chips, cos they really have screwed up the GPU lines.

Oh, and if Charlie is right and they "rebrand" again... that will be fucking pathetic.
 
Oh, and that's based on the squirming they are doing with TSMC, btw, and the issues with getting a proper 40nm node process running.
 
Oh, and that's based on the squirming they are doing with TSMC, btw, and the issues with getting a proper 40nm node process running.

What do you mean?
It has to be an issue on nVidia's end, as ATi (apparently, with theirs just on the cusp) already has enough for a release.
 
What do you mean?
It has to be an issue on nVidia's end, as ATi (apparently, with theirs just on the cusp) already has enough for a release.

Yep. Rumor has it that NV did not follow basic fabbing rules and so are suffering badly with their shrink. Given that they also did not follow packaging rules for the laptop chipsets (they skipped a layer, hence exacerbating the flexing under heat cycling), making things even worse, I can believe it.

NB (for those who go "oh god, another Charlie rumour"): please go read the article where they chopped a MacBook in half to test the chips and proved this, and that NV lied and still hasn't fixed it in all laptops; then go read the rather strongly worded Apple tech notice where they pretty much say NV lied, hence the motherboard replacements that are required.
 
start with
Geforce 3 great cards
Geforce 4 huge performance leap
Geforce 5 no big performance change
Geforce 6 huge performance leap
Geforce 7 minor performance change
Geforce 8 huge performance leap
Geforce 9 no big performance change
Geforce GTX200 huge performance leap (excluding rebadges)
sooooo..................

Sorry but charlie makes sense to me as an owner of every series so far.
 
OK, I don't post much, but a couple of things jumped out at me. One: soldering irons? These chips are basically grown, more like crystals than anything else. You lay down layers of material and cut interconnects and such with computer-driven tools. There is no way to test these halfway through; they basically work or they don't, being too small to fix in any cost-effective way. Using firmware you can disable certain pieces, if they were designed that way in advance. But for the most part you design it and hope that there is no major flaw in the materials used, that the equipment doesn't hiccup, that someone doesn't track dust in past the clean rooms, etc...

What is funny is that these numbers are not only possible but likely for a chip that size on 300mm wafers. Intel's chips, which contain four processor cores, are smaller and use the same 300mm wafers. Nvidia would need a larger wafer size if their chips are going to get that much bigger.
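For a rough sanity check on the die-size point, here is the standard first-order gross-die-per-wafer estimate (a minimal sketch; the die areas are my own guesses for illustration, since the actual GT300 die size wasn't public):

[code]
import math

def gross_die_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Common first-order estimate: wafer area / die area, minus an edge-loss term.
    r = wafer_diameter_mm / 2.0
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Assumed die areas for illustration only -- the real GT300 figure is unknown here.
for area in (500, 530, 550):
    print(f"{area} mm^2 die -> ~{gross_die_per_wafer(300, area):.0f} gross die per 300 mm wafer")

# Prints roughly 112, 104 and 100 -- a die somewhere around 530 mm^2 lines up
# with the 104-die-per-wafer figure quoted earlier in the thread.
[/code]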

As to CUDA being dead: it is being used in databases because it is very fast at thousands of simultaneous transactions. With Nvidia owning mental ray, which can be recompiled to take advantage of the gaming video cards and the Tesla cards, and then turning around to market said cards to the people who spend stupid amounts of money on rendering, I don't think Nvidia is going anywhere.

What I'm curious about is this: their current GPU is huge compared to CPUs in transistor count, and its memory interface is faster than the system bus, so what is the possibility of using it as a virtual machine?
 
I am neither an AMD nor an Nvidia fanboi, having used several cards from each in the past. Currently using an AMD card. I just use whatever is best for the price and I like competition, so I'm completely neutral between the two. (Same with AMD and Intel.)

I had never heard of Charlie before today so I went to look at his site. I checked out his past news stories looking for the AMD and nvidia posts.

What became blatantly clear very quickly was that EVERYTHING he says about nvidia is disparaging and filled with hate and disgust while everything he says about amd is glowing and rainbows. It's the worst case of hardcore bias I've ever seen. I'd have a hard time picking out the bits of truth in anything he says if he was reading straight from a dictionary.
 
Charlie is the ultimate Nvidia hater; he has made a career out of sensational, over-the-top tripe. I give anything he writes the same credence as stories about "Bat Boy Marries Wolf Girl" in Weekly World News.
 
Charlie is the ultimate Nvidia hater; he has made a career out of sensational, over-the-top tripe. I give anything he writes the same credence as stories about "Bat Boy Marries Wolf Girl" in Weekly World News.

UFOs abduct the governor... lives to tell the tale.
 
start with
Geforce 3 great cards
Geforce 4 huge performance leap
Geforce 5 no big performance change
Geforce 6 huge performance leap
Geforce 7 minor performance change
Geforce 8 huge performance leap
Geforce 9 no big performance change
Geforce GTX200 huge performance leap (excluding rebadges)
sooooo..................

Sorry but charlie makes sense to me as an owner of every series so far.

Wait. What?

Geforce 3 and 4 weren't that different, were they... I thought it was just much of the same with some tweaking...

Also, the FX (5) was horrible :( dark days so they were.

The Geforce 3, 6 and 8 have had the biggest changes so far, in terms of design and performance increase (I think anyway)

I'm not sure about all this fab talk so I'll say this: I hope it comes out around February. I also hope I don't need a new PSU haha. 800W please be enough XD
 
Well, the MX series just sucked, but the Ti series (except for mobile) were excellent for their day.
 
Cyrilix, you don't think the single-chip version of GT200 was much quicker than the single-chip G92/G92b? And please, don't compare it with the 9800 GX2; it's not reasonable to compare the performance of a single chip with two cards SLI'ed and glued together.
 
Cyrilix, you don't think the single-chip version of GT200 was much quicker than the single-chip G92/G92b? And please, don't compare it with the 9800 GX2; it's not reasonable to compare the performance of a single chip with two cards SLI'ed and glued together.

I don't think he meant it like that...


But you are right.
However, the G92b that was in the 8800gt/9800gt was supposed to be in the GTS240 (vapor). It never got to the GT 2xx line.
 
I mean the fact that GT200 was roughly at the same performance level as an SLI setup of the previous generation. If that is not enough, then, well, nothing is enough for you people :D.
 
My biggest complaint is that Charlie's website has plenty of other "news" too, mostly AMD ball sucking, but related to other companies as well.

Yet only the hate FUD gets posted on sites like these. Why is that? Is Charlie such a bad source for everything EXCEPT NVIDIA "rumors"? That sounds like bias from the ones posting the news to me... otherwise we would be seeing everything that crackhead posts on his website, and not just the NVIDIA hate FUD.
 
I mean the fact that GT200 was roughly at the same performance level as an SLI setup of the previous generation. If that is not enough, then, well, nothing is enough for you people :D.

Nah, it's enough, but it's a "color" thing :)
If it's not done by their favorite "color", then it sucks and should be better.

Just like all the PhysX hate arguments. It's not that they don't want it. They do, and they actually blame NVIDIA because AMD didn't license PhysX... but they've gone as far as saying "as long as it's not PhysX", which is hilarious!
 