Disappointing Year So Far for AMD

On the other hand, if you're willing to discuss the pitfalls and drawbacks of the technology, that's one thing, but "AMD sucks and you're stupid" has no place here.

Several people tried to discuss the technology, but you didn't get much further than "AMD rules and your stupid" (sic), and "Intel can't be better, that's not fair!".
So don't start begging for a technical discussion now. You ruined it singlehandedly.
 
Several people tried to discuss the technology, but you didn't get much further than "AMD rules and your stupid" (sic), and "Intel can't be better, that's not fair!".
So don't start begging for a technical discussion now. You ruined it singlehandedly.

You've made claims that are impossible to validate. You make assumptions that require data that simply can't be obtained... and then you base your entire argument on that. And even though you --know-- you're lying, you're adamant that you're not... Just like your "MCM is cheaper" BS: for your --assumptions-- to work out, it requires AMD to have an impossibly low yield. Or just like your "AMD will never be able to compete", or your "AMD can't catch up".

If you're willing to have a technical discussion, then I am, but so far it hasn't been anything more than a fanboy gloatfest.
 
I can just imagine duby sitting behind his PC, fingers stuck in his ears, shouting "Lalala I can't hear you!".
 
Boy, you just keep digging that hole deeper, don't you?

If you take the time to read what I wrote throughout this thread, you'll know exactly what I think.

1: Barcelona is a stop-gap.

Strange, 3 months ago K10 was an Intel killer. Being honest with yourself is tough, isn't it?

2: AMD bought ATi for its R600 architecture.

Really? They were really smart for doing this :rolleyes:. I thought they needed some know-how for implementing a GPU in the CPU.

4: Once this happens, AMD will finally have a replacement for the --original-- K10.

Will AMD survive the cash crunch that long?
5: Ruiz kicked Intel's ass for how many years?

One. Mid '05 to mid '06.
6: Because of that, Meyer is going to have a more powerful company when he takes over.
7: Meyer's big contributions will be in manufacturing.

Yeah; did you also know that Rudolph the reindeer is a she? Or worse, a eunuch.
 
I can just imagine duby sitting behind his PC, fingers stuck in his ears, shouting "Lalala I can't hear you!".

Nah, he has an old gramophone stuck in his head, singing the same song with slight variations to adapt it to recent developments.
 
You've made claims that are impossible to validate. You make assumptions that require data that simply can't be obtained... and then you base your entire argument on that. And even though you --know-- you're lying, you're adamant that you're not... Just like your "MCM is cheaper" BS: for your --assumptions-- to work out, it requires AMD to have an impossibly low yield. Or just like your "AMD will never be able to compete", or your "AMD can't catch up".

If you're willing to have a technical discussion, then I am, but so far it hasn't been anything more than a fanboy gloatfest.

That's the biggest horse manure ever told on this subforum.
You are incapable of handling a technical discussion, since you don't understand the following terms:
1. Technical, as in supplying data and logical arguments.
2. Discussion, as in listening to the other side.
 
I can just imagine duby sitting behind his PC, fingers stuck in his ears, shouting "Lalala I can't hear you!".

And, as per usual, when I offer a technical discussion he resorts to his usual personal attacks... shame, shame...
 
That's the biggest horse manure ever told on this subforum.
You are incapable of handling a technical discussion, since you don't understand the following terms:
1. Technical, as in supplying data and logical arguments.
2. Discussion, as in listening to the other side.

Make that two... as per usual, when a technical discussion is offered he resorts to his usual personal attacks... tsk, tsk, tsk...
 
Strange, 3 months ago K10 was an Intel killer. Being honest with yourself is tough, isn't it?

Really? They were really smart for doing this :rolleyes:. I thought they needed some know-how for implementing a GPU in the CPU.

Will AMD survive the cash crunch that long?

One. Mid '05 to mid '06.

Yeah; did you also know that Rudolph the reindeer is a she? Or worse, a eunuch.

I've always said that K10 was a stop-gap. I had --hoped-- that it would perform better than it does. I made a few bets that I lost, but I had always known that K10 was a stop-gap, as I always said it was.

Besides, the GPU/CPU combo is also a stop-gap, in the sense that the ultimate goal is thorough integration, meaning no functional difference between CPU and GPU. This is phase 3 of Fusion. So really, I just hope that AMD can hang in there until then. I have no doubt that they will, but it's going to be a tough road to get there.

Also, you've got to admire Ruiz. He took AMD from a nobody scrounging off the bottom of the barrel and helped it compete against a giant. Of course, it helps that they had great products in the K7 and K8 to get them there, but without Ruiz landing the design wins it wouldn't have happened. Really, AMD started kicking Intel's ass in 2003; at least, that's when the tide turned in AMD's favor. It shifted back to Intel's favor by mid-to-late '06 with the launch and availability of Conroe, which, by the way, far exceeded my expectations. I had to eat flak for that for a long time.

Now, looking to the future, I have no doubt that AMD will survive; anybody suggesting otherwise is a flat-out fanboy that needs to be smacked in the skull with a frying pan.

Anyhow, that is my take on things, as it always has been. I haven't been right on everything (e.g. Conroe), but more often than not I'm right on the nose.
 
And, as per usual, when I offer a technical discussion he resorts to his usual personal attacks... shame, shame...

The ball was in your court, duby.
Read back through the thread, and notice that hard numbers from AMD about yields being around 50% were posted, and various sources were quoted on MCM being cheaper to produce.
You never responded there. You started resorting to personal attacks. We're still waiting for your technical response in this discussion.
 
Make that two... as per usual, when a technical discussion is offered he resorts to his usual personal attacks... tsk, tsk, tsk...

http://www.hardforum.com/showpost.php?p=1031706718&postcount=224

All the technical discussion you could want. The first time, you squawked that this was paid-for posting; the second time, you ignored it.

There are a number of process FACTS that you ignore, probably because you know them to be true and would like to pretend they don't exist.
 
Ummm. I told you, from a technology perspective, what was wrong with your assumptions. Your answer was that I must be paid to write this stuff. I answered that that is paranoid delusion and ducking the facts. It is both.

Time won't change the facts of today.

Process/Defects Facts:

1: As a process matures defect rate drops. This is a fact.
2: Intel started producing 65nm processors much sooner than AMD, so its process is more mature.
3: Therefore Intel has a lower production cost.

Certainly Intel must have a higher yield than AMD due to its smaller die size; however, I think you --FAR-- overestimate how low AMD's yields must be. Anything less than 70% is simply unsustainable. It would cost so much money that it simply wouldn't be possible. Let's say that AMD does indeed have the same defect ratio as Intel: with AMD's die size twice that of Intel's, it would have nearly half the yield. If Intel had a 100% yield, that would give AMD somewhere around 50%. That's BS and you know it. You can't assume that they have the same defect ratio, because if they did, AMD wouldn't have even bothered trying to manufacture it. As it is right now, it is --IMPOSSIBLE-- to know what --either-- company's yields really are... But we can make a guess and say that, based on industry speculation, Intel is somewhere around 87-91% and AMD is somewhere around 79-83%. In --both-- cases that is a guess. At these yields, AMD's monolithic die will cost less.
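As a sanity check on the "twice the die, nearly half the yield" arithmetic, here is a quick sketch using the classic Poisson yield model. The area and defect-density numbers below are purely illustrative, not actual AMD or Intel figures. Under this model, doubling the die area squares the yield fraction rather than simply halving it; "half" only falls out when the smaller die is already yielding near 50%.

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defects_per_cm2)

# Hypothetical defect density, NOT a published figure.
d = 0.22  # defects per cm^2

small = poisson_yield(1.4, d)   # a smaller, dual-core-sized die
big   = poisson_yield(2.8, d)   # a die with twice the area

print(f"small die: {small:.0%}, doubled die: {big:.0%}")
# → small die: 73%, doubled die: 54%
# Doubling the area squares the yield fraction (big == small ** 2),
# which only works out to "half the yield" when the small die is near 50%.
```

So whether doubling the die "halves" the yield depends entirely on where the smaller die sits on the curve, which is why the defect density assumption matters so much in this argument.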

Multi-Chip packaging Facts:
1: It allows matching of high-speed-bin parts to make very fast quads. AMD is stuck with whatever quads emerge, with a much lower chance that all four cores clock high, so high-clocking quads will be rare in comparison.
2: Intel can run the exact same line to build quads and duals, thus getting better economies of scale on both.
3: Building two half-size dies improves yield again over one big quad-core die.

Acknowledged. And if you have money to burn, then these are legitimate benefits that AMD will someday try to gain when it releases its MCM-packaged 8-core products in the future...

Basically, native quads offer nothing. AMD went native because they had no choice with an integrated memory controller. Intel had a choice with the FSB, so they stayed MCM, where there are significant advantages.

They are cheaper to manufacture, they're cheaper to package, and the list goes on. At almost every stage of manufacturing, from fabrication to packaging to binning to quality control, it is cheaper.


The Current state of the chips.

IPC:
The real issue that tends to get overshadowed by this slow-launch-speed debacle is that Phenom did not catch Conroe on IPC. This is dire, because Penryn moves the goal post even further out than the old one AMD failed to catch. Intel's SSE4 gives them another nice boost in encoding apps as well.
Clock speed:
First, we have a glitch that means AMD's fastest quads are slower than Intel's slowest. This is a disaster.
Even ignoring the glitch, Intel's technology clocks up easily, and the only thing keeping them from releasing ever-faster parts is a total lack of competition.
AMD has gone from the lead to almost back to the K5/K6 days.

See, now this is why I didn't respond to you... This is fanboy central right here... I mean, really, did you get paid? How much did you earn? We all know that IPC is lower than we were told. We all know that clock speed is lower than we were told. We are not stupid. You can stop gloating now.


Into the Murk, what the future holds:


Near term (the next six months), AMD is in trouble. Intel will be unleashing a world of hurt with fast, small-die Penryns. There will be a die size, clock speed, and IPC advantage over AMD. AMD will have to accept low ASPs to move parts and continue to bleed red.

After that it gets very murky: it will be Bulldozer vs. Nehalem, which on paper sound very similar. But I think you have to give Intel the edge. They have already demoed Nehalem, and they have already transitioned to an apparently very healthy 45nm process.

I would say AMD's CPU line is mired deeply in second place for the foreseeable future.

Now, with Penryn's smaller die size, it will in fact be cheaper than Phenom. No question about it; they will have a cost advantage. How much, I don't know, but Penryn looks to be a very successful die shrink, and Intel did real good with it. Hopefully AMD can get some of the bugs worked out in the next few core revisions and get clocks and IPC up a bit. You might think this is wishful thinking, but it's happened before. It looks like AMD will be able to get a good IPC boost when it gets the TLB bug squashed, and hopefully AMD can get the clocking-domain issues fixed in the next few revisions. The way I see it, K10 wasn't ready for launch. I personally believe that AMD would have been better off had they chosen to delay. It certainly would have hurt their reputation, but not as badly as this.


Ati the savior of AMD?


Now, despite all the claims of this being a bad idea, I think AMD needed ATI to complete the platform. It was the right thing to do; it was just bad timing to join a war on two fronts and find itself losing on both. On to the speculation, because facts are sparse:

R700
Rumor: small die, on 45nm; AMD gives up on monolithic and goes MCM ;). I will be paying close attention to see how this is executed. If this multi-die graphics does away with the overhead and driver grief of Crossfire, and does the load sharing efficiently at the hardware level, ATI may have something here. But there are a boatload of ifs here.

We also know nothing of NVidia's plans; they released the 8800 a looonnnnggg time ago, so they must be building something interesting.

The integrated platform:

These small R700 cores, on an individual basis, might also provide the basis for ATI's integrated platform. Now imagine a not-too-expensive integrated platform that delivers R670 performance levels; that would be sweet, and a competitive advantage for AMD. No one else could match that graphics/CPU package for quite some time. For me, this represents the real potential savior of AMD, and they did what had to be done. Now it all comes down to how the execution falls out. AMD has to deliver a whole that is greater than the sum of its otherwise second-place parts.

That is my analysis for the day (unpaid analysis Duby :D )

On to the meat.

I think you totally misunderstand the benefits that a modern GPU has for a modern CPU company. Let's face the facts here, folks: AMD is --not-- a platform provider. AMD's core business is CPUs. CPUs are AMD's bread and butter, and ATi's chipset division is nothing more than grape jelly on AMD's bread and butter. Sure, it helps, but I will bet that AMD did --not-- buy ATi for its platform business. Maybe I'm wrong, but after looking at the R600 architecture, I very seriously doubt it.

http://www.digit-life.com/articles2/video/r600-part1.html

Look at the block diagram, and then imagine a couple dozen integer processing units and maybe a dozen or so x86 front-end units, including the decoders, the ROB, the register table, and all that legacy crap. This would be one hell of a beasty that nobody could compete with. Legacy performance would prolly suck, but then AMD announces SSE5... Coincidence? I don't think so. The reason AMD bought ATi was for --that-- architecture. They wanted that IP. I'm sure that by the time it gets implemented as a general-purpose product, it'll be --heavily-- redesigned with x86 performance in mind, but this is it, folks. This is the future...
 
Nor does it give you the right to patronize me because of the choices I make. If you aren't willing to discuss the technology, and you bash those who are, then you're trolling and should leave. Just like everyone was taught from a young age: if you don't have anything nice to say, don't say anything at all.

On the other hand, if you're willing to discuss the pitfalls and drawbacks of the technology, that's one thing, but "AMD sucks and you're stupid" has no place here.

You've made claims that are impossible to validate. You make assumptions that require data that simply can't be obtained... and then you base your entire argument on that. And even though you --know-- you're lying, you're adamant that you're not... Just like your "MCM is cheaper" BS: for your --assumptions-- to work out, it requires AMD to have an impossibly low yield. Or just like your "AMD will never be able to compete", or your "AMD can't catch up".

If you're willing to have a technical discussion, then I am, but so far it hasn't been anything more than a fanboy gloatfest.



The tremendous heights of your hypocrisy and hyperbole know no bounds. You never fail to shock me with just how far-out and twisted your views are. :eek:

I'm sure you're a decent fellow IRL, duby, but... do you read what you are writing!? :confused:
 
Let's say that AMD does indeed have the same defect ratio as Intel: with AMD's die size twice that of Intel's, it would have nearly half the yield. If Intel had a 100% yield, that would give AMD somewhere around 50%. That's BS and you know it. You can't assume that they have the same defect ratio, because if they did, AMD wouldn't have even bothered trying to manufacture it.

You need to stop using wishful thinking in place of facts. It is very unlikely that the defect rates are equal. Using already mentioned facts: defect rates are certainly higher at AMD. This is due to the simple fact that defect rate drops as a process matures, and Intel has been producing 65nm much longer. These are simple, constant facts of the semiconductor industry. You can't use circular logic to state that if the facts are bad for AMD, they just can't be true. That doesn't make any sense.

Acknowledged. And if you have money to burn, then these are legitimate benefits that AMD will someday try to gain when it releases its MCM-packaged 8-core products in the future...

Again, it doesn't take money to burn to have the advantages of multi-chip modules; all those advantages you acknowledged are what make MCM more cost-effective.

They are cheaper to manufacture, they're cheaper to package, and the list goes on. At almost every stage of manufacturing, from fabrication to packaging to binning to quality control, it is cheaper.

The number one cost issue is yield, and as already stated, AMD must have a higher defect rate in its process and is producing a bigger piece of silicon, so it will have much lower yields. Your argument, that if AMD weren't doing this profitably they wouldn't bother doing it, doesn't hold much water. They need to produce unprofitably until they can produce profitably. In case you missed it, AMD is losing money, which comes from producing product unprofitably.

I mean, we all know that IPC is lower than we were told. We all know that clock speed is lower than we were told. We are not stupid. You can stop gloating now.

It wasn't gloating; I was outlining the entire situation. Also, it is very hard to assume you know anything, as you ignore facts you don't like. I stated this back when the Barcelona benches came out: they have lower IPC, and Phenom will not catch Core 2. IIRC you disagreed. AMD is not going to pick up IPC without yet another serious architecture rework.


Hopefully AMD can get some of the bugs worked out in the next few core revisions and get clocks and IPC up a bit. You might think this is wishful thinking, but it's happened before. It looks like AMD will be able to get a good IPC boost when it gets the TLB bug squashed, and hopefully AMD can get the clocking-domain issues fixed in the next few revisions.

The TLB bug limits clock speed; it doesn't affect IPC. IPC is a core architecture issue, which needs significant rework to change. So yes, I do think this is wishful thinking.

Sure, it helps, but I will bet that AMD did --not-- buy ATi for its platform business. Maybe I'm wrong, but after looking at the R600 architecture, I very seriously doubt it.

I disagree. I think AMD bought them to have a complete platform: to produce MCM modules containing a CPU and GPU, yielding a high-performance, cost-effective platform. I think this could be a winner for AMD. It's completely murky, but I see it as the one chance for AMD to produce a first-place platform out of its otherwise second-place parts.
 
I think you totally misunderstand the benefits that a modern GPU has for a modern CPU company. Let's face the facts here, folks: AMD is --not-- a platform provider. AMD's core business is CPUs. CPUs are AMD's bread and butter, and ATi's chipset division is nothing more than grape jelly on AMD's bread and butter. Sure, it helps, but I will bet that AMD did --not-- buy ATi for its platform business. Maybe I'm wrong, but after looking at the R600 architecture, I very seriously doubt it.

I think that AMD bought ATI simply because AMD wasn't a platform company and saw buying ATI as the cheapest, quickest way to become one. Look at the massive success that was/is Centrino: it's dominated the laptop market and made Intel boatloads of money. AMD most certainly wants a piece of that action.
 
I think that AMD bought ATI simply because AMD wasn't a platform company and saw buying ATI as the cheapest, quickest way to become one. Look at the massive success that was/is Centrino: it's dominated the laptop market and made Intel boatloads of money. AMD most certainly wants a piece of that action.



Very true! That kinda sums it up in a nutshell.
 
It is very unlikely that the defect rates are equal. Using already mentioned facts: defect rates are certainly higher at AMD. This is due to the simple fact that defect rate drops as a process matures, and Intel has been producing 65nm much longer. These are simple, constant facts of the semiconductor industry.

...It doesn't take money to burn to have the advantages of multi-chip modules; all those advantages you acknowledged are what make MCM more cost-effective.

Why do you think Intel's been doing "Frankensteined" quad cores since last year? It's certainly a feat to do a true quad on 65nm, and I give AMD major props for doing it, but I don't think at this point it was particularly beneficial for them, considering all the issues they've had getting it out the door. First I heard Agena was going to launch this past spring at around 2.5-2.7GHz. Then summer. Then it got pushed back to the end of fall. Then we heard up to 2.8GHz at launch, with 3GHz by Christmas. Then it was 2.8, then 2.6, then 2.4 parts in December. Now it's the end of Q1 2008. WTF?! Why? Yields, with the TLB bug placing the cherry on top. Paul Otellini and "Kicking" Pat Gelsinger must be laughing their asses off right now over their large wine glasses of sherry. Don't get me wrong, I'm an AMD fan, but after seeing official benchmarks and reading the writing on the wall, even I accept the truth. It's pouring, and the storm doesn't look like it's about to let up any time soon unless AMD can pull a rabbit out of their ass. I don't see that happening any time soon (if ever). Phenom was their ace in the hole, and even though it's not bad, it wasn't good enough to keep up.

The number one cost issue is yield, and as already stated, AMD must have a higher defect rate in its process and is producing a bigger piece of silicon, so it will have much lower yields. Your argument, that if AMD weren't doing this profitably they wouldn't bother doing it, doesn't hold much water. They need to produce unprofitably until they can produce profitably. ...AMD is losing money, which comes from producing product unprofitably.

It all goes back to yields, and the ocean of red ink DAAMiT is drowning in, not to mention falling behind over a year ago and still not being able to catch up.

They have lower IPC and Phenom will not catch Core 2... AMD is not going to pick up IPC without yet another serious architecture rework.

IIRC it all comes down to how the instruction pipeline is designed. K8 and K10 have a 3-issue instruction pipeline, whereas Conroe and Penryn have a 4-issue instruction pipeline. That alone gives you an IPC advantage from the get-go, no matter how much FPU power, core-to-core bandwidth, and memory latency/bandwidth you toss at the problem. Which is why Phenom still lags behind Kentsfield. Still, the fact that it can even get close in some cases is a testament to Agena and Barcelona's design. It's great for a 3-issue pipe, but not good enough to beat a 4-issue pipe.

[The] TLB bug limits clock speed; it doesn't affect IPC. IPC is a core architecture issue, which needs significant rework to change. So yes, I do think this is wishful thinking.

Which I don't think will be fixed until Bulldozer. IF AMD can sustain itself that long, with Phenom floundering behind the competition (which is more often than not...). AMD needs to ramp the clocks up past 3GHz and price competitively against equal all-round-performing Intel counterparts if they hope to stay in the game. Yesterday.

I think AMD bought [ATi] to have a complete platform: to produce MCM modules containing a CPU and GPU, yielding a high-performance, cost-effective platform. I think this could be a winner for AMD. It's completely murky, but I see it as the one chance for AMD to produce a first-place platform out of its otherwise second-place parts.

Especially if the R700 MCM concept is a win. Fuad says a little birdie on his shoulder told him that the high-end card will be capable of 2+ TFLOPs sustained. :eek:

nVidia and Intel's Visual Computing Group have something to be extremely worried about if that's true.

I think that AMD bought ATI simply because AMD wasn't a platform company and saw buying ATI as the cheapest, quickest way to become one. Look at the massive success that was/is Centrino: it's dominated the laptop market and made Intel boatloads of money. AMD most certainly wants a piece of that action.

My QFT for the day. Not to mention a slice of the discrete graphics market too, especially if they knew the specs on R700, and let's not forget about Fusion, either. ;)
 
Manny, generally I try not to get into a highly contentious thread such as this, but I will give you my personal opinion on the matter, probably for the last time. I'm not on the Barcelona/Phenom team, and I did not have any performance data myself until the public launch. If you do a search for my username and "Phenom", you will see that I never made any performance claims other than "let's wait for the release and see how it performs".

With that said, I can offer a few of my personal thoughts. As you know, Barcelona was late and did not launch at as high a speed as planned. Had it gone as planned, I believe that under some workloads in a 4S configuration, it would have had that kind of performance advantage over the lower-clocked Clovertown used for the early projections. This is evidenced by the 2.5GHz Barcelona part's SPEC FP numbers, albeit in a very specific configuration and load condition. Certainly that is no excuse, but I don't believe the claim was made in bad faith. Such is the pitfall of performance projections: things can change dramatically in the space of six months. As I said, these are my personal thoughts, so take them for what they are worth.

Lastly, the reason why I haven't been posting much, and probably won't anymore, is perhaps a selfish one. I've been a member of the enthusiast community for many years, long before I ever entered engineering school. Thus I felt compelled to share with fellow enthusiasts what I know, within legal bounds. Certainly you can understand as an enthusiast: when you are excited about something, you want to talk about it with like-minded people. I don't get paid to browse forums, nor am I here in any official capacity. In fact, there is always the fear that what I say violates some part of my employment agreement, thus I always scrutinize my posts multiple times before posting.

Why do I go through all this trouble to even come here, then, you ask? Because I'm genuinely excited about technology, and that's what got me into this business in the first place. I feel that my dealings with the members here have been courteous and tactful. Unfortunately, I cannot say the same for some in return. Alas, some members see fit to vehemently asperse my name and denigrate my character, and do so unprovoked. A member PMed me a few months ago and asked how I can put up with this kind of constant derision. I finally see the light now. Some may enjoy this sort of abuse and thrive on it; I, for one, have other things in life that could better make use of my time. It's safer for me, and there will be fewer people trying to have at my flesh.



Totally respect what you just said, and wish you the best in whatever you do, wherever you do it!

:)


We had a few Intel people here years back (2001/2002) who were quite vocal about the goings-on inside the company, as well as their own personal opinions on all things tech. I hope you stay, but again, if you go, all the best to you!
 
duby229 said:
Certainly Intel must have a higher yield than AMD due to its smaller die size; however, I think you --FAR-- overestimate how low AMD's yields must be. Anything less than 70% is simply unsustainable. It would cost so much money that it simply wouldn't be possible. Let's say that AMD does indeed have the same defect ratio as Intel: with AMD's die size twice that of Intel's, it would have nearly half the yield. If Intel had a 100% yield, that would give AMD somewhere around 50%. That's BS and you know it. You can't assume that they have the same defect ratio, because if they did, AMD wouldn't have even bothered trying to manufacture it. As it is right now, it is --IMPOSSIBLE-- to know what --either-- company's yields really are... But we can make a guess and say that, based on industry speculation, Intel is somewhere around 87-91% and AMD is somewhere around 79-83%. In --both-- cases that is a guess. At these yields, AMD's monolithic die will cost less.

There are four common models used to approximate yields based on defect density and critical area. The "defect density" published by AMD is a summed factor of the various defect densities as a function of the various critical areas within the die. This makes it easy for others to do plug-and-chug calculations based on these common yield models.



The most optimistic yield model is the Exponential Model,

Y = 1 / ( 1 + A * D ) where A is total chip area(die size) in units of cm^2, and D is defect density.

The most pessimistic yield model is the Poisson Model,

Y = EXP [ - ( A * D ) ]

In between these two is the Murphy Model,

Y = [ ( 1 - EXP [ - A * D ] ) / ( A * D ) ] ^ 2

Then there's the Seeds Model,

Y = EXP [ - ( A * D ) ^ 0.5 ]



Intel's Core 2 process is "world class", with a defect density, as explained in an earlier post, of 0.22 / cm^2 and a die size of 1.43 cm^2.

Barcelona's defect density is < 0.5 / cm^2. We'll assume 0.45 / cm^2, just to be on the optimistic side. Die size is 2.83 cm^2.


With these models, the results are as follows:

Yields (%)

Model         Core 2    Barcelona
Exponential     76          44
Poisson         73          28
Murphy          74          32
Seeds           57          32

So there you have it. Anyone who knows how to use a spreadsheet can do these plug-and-chug calculations. Most models show Barcelona having 30-ish % yields, while Core 2 would be at 70-ish %.
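The table above is easy to reproduce without a spreadsheet. A minimal script that plugs the quoted die sizes and defect densities (1.43 cm^2 at 0.22 / cm^2 for Core 2, 2.83 cm^2 at the assumed 0.45 / cm^2 for Barcelona) into the four models:

```python
import math

# The four classic die-yield models, each a function of A * D
# (die area in cm^2 times defect density per cm^2).
def exponential(ad): return 1 / (1 + ad)
def poisson(ad):     return math.exp(-ad)
def murphy(ad):      return ((1 - math.exp(-ad)) / ad) ** 2
def seeds(ad):       return math.exp(-math.sqrt(ad))

MODELS = {"Exponential": exponential, "Poisson": poisson,
          "Murphy": murphy, "Seeds": seeds}

# (die area cm^2, defect density per cm^2), as quoted in the post above
CHIPS = {"Core 2": (1.43, 0.22), "Barcelona": (2.83, 0.45)}

for name, model in MODELS.items():
    row = {chip: round(100 * model(a * d)) for chip, (a, d) in CHIPS.items()}
    print(f"{name:<12} Core 2: {row['Core 2']}%   Barcelona: {row['Barcelona']}%")
```

Running it reproduces the table row for row (76/44, 73/28, 74/32, 57/32), so the numbers in the post check out under the stated inputs.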

duby229 said:
On to the meat.

I think you totally misunderstand the benefits that a modern GPU has for a modern CPU company. Let's face the facts here, folks: AMD is --not-- a platform provider. AMD's core business is CPUs. CPUs are AMD's bread and butter, and ATi's chipset division is nothing more than grape jelly on AMD's bread and butter. Sure, it helps, but I will bet that AMD did --not-- buy ATi for its platform business. Maybe I'm wrong, but after looking at the R600 architecture, I very seriously doubt it.

http://www.digit-life.com/articles2/video/r600-part1.html

Look at the block diagram, and then imagine a couple dozen integer processing units and maybe a dozen or so x86 front-end units, including the decoders, the ROB, the register table, and all that legacy crap. This would be one hell of a beasty that nobody could compete with. Legacy performance would prolly suck, but then AMD announces SSE5... Coincidence? I don't think so. The reason AMD bought ATi was for --that-- architecture. They wanted that IP. I'm sure that by the time it gets implemented as a general-purpose product, it'll be --heavily-- redesigned with x86 performance in mind, but this is it, folks. This is the future...

"On to the meat"? "Look at the diagram, then imagine," you say. A typical "if pigs could fly" argument.

Imagine a pig... with wings.

A fitting way to look at the "meat" don't you think?
 
Most models will show Barcelona having 30-ish % yields while Core 2 would be 70-ish %.



"On to the meat"? "Look at the diagram, then imagine," you say. A typical "if pigs could fly" argument.

Imagine a pig... with wings.

A fitting way to look at the "meat" don't you think?


Ouch, that's bad. And I don't for a second think AMD (or any company with their back to the wall) would ever be honest about yields, not with the crap hanging over their heads currently. Of course they are going to say they are great: "And we are very satisfied."

What else would they say !? "Our engineering team in Dresden are idiots and are clearly out classed by Intels" ? Of course not.The delay,after delay,after delay,The launch speeds,the scaling issues,and the 'game ending" errata bugs are all the information one likely needs,to figure out whats really going on in Dresden.

B3 better rock like never before.
 
IIRC it all comes down to how the instruction pipeline is designed. K8 and K10 have a 3-issue instruction pipeline, whereas Conroe and Penryn have a 4-issue instruction pipeline. That alone gives you an IPC advantage from the get-go, no matter how much FPU power, core-to-core bandwidth, and memory latency/bandwidth you toss at the problem. Which is why Phenom still lags behind Kentsfield. Still, the fact that it can even get close in some cases is a testament to Agena and Barcelona's design. It's great for a 3-issue pipe, but not good enough to beat a 4-issue pipe.

Actually, more important than that 4th ALU is the cache on the Core2. It's extremely large and extremely fast. It's no use having 3 or 4 ALUs when you don't have the data to feed them. I don't think the 4th ALU does a whole lot for the Core2's IPC. Even the third ALU never did all that much on the K7 and K8.
Look at the original Pentium M and Core: Pentium M has only two ALUs, Core has three, but their IPC is very close.
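The "feeding the ALUs" point can be made concrete with a toy CPI model; the miss rates and miss penalty below are assumptions chosen for illustration, not measured figures for either chip:

```python
def effective_ipc(issue_width, miss_rate, miss_penalty):
    """Toy model: ideal CPI from issue width, plus memory stall cycles
    per instruction from cache misses."""
    cpi = 1.0 / issue_width + miss_rate * miss_penalty
    return 1.0 / cpi

# With a leaky cache (2% misses, 100-cycle penalty), the 4th issue slot
# barely registers; with a strong cache (0.5% misses), the wider pipe
# actually pulls ahead:
print(effective_ipc(3, 0.02, 100))   # ~0.43
print(effective_ipc(4, 0.02, 100))   # ~0.44
print(effective_ipc(3, 0.005, 100))  # 1.2
print(effective_ipc(4, 0.005, 100))  # ~1.33
```

The model is crude (real pipelines overlap misses with useful work), but it shows why a bigger, faster cache can matter more than an extra issue slot.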
 
Strange, 3 months ago K10 was an Intel killer. Being honest with yourself is tough, isn't it?

Really? They were really smart for doing this :rolleyes:. I thought they needed some know-how for implementing a GPU in the CPU.
...
LOL

That scratches the surface of his arguments. No matter how bad the news is, it's always great news to him.
 
Yields that low are simply not possible to sustain. If they were that low it would have been scrapped. That's 2/3rds of a wafer wasted, and we all know how much those wafers cost. We all know how much it costs to maintain and run the equipment. The bottom line is that you can't make assumptions on yields like that because, first, it's impossible, and second, neither company releases enough information to know, or calculate, what the yields actually are. The best you could do is look at how many wafers the fab is capable of producing vs how many chips the fab manufactures. On that basis AMD has about a 79-83% yield, granted this covers all product ranges and bins that come out of FAB36. There is no other way to do it; the information required simply isn't available to make that kind of guess.

I think it's pretty clear to all that defect ratio is inversely proportional to yield... The more defects, the lower the yield. Die size works against you the same way: the larger the die at a given defect ratio, the lower the yield. This is common sense, but a defect ratio that high would require an unsustainably low yield, and I don't buy it. It's BS.

Like I said from the very beginning, it assumes a ridiculous defect ratio. And I don't buy that number.

Some of you guys claim that the TLB bug won't increase IPC, however every report that I've read claims that it causes a cache miss and forces the decoder to redecode the instructions in the table. If this is in fact the case then it will clearly boost IPC when it gets fixed. This is the simple kind of common sense that I don't think needs to be explained.

Additionally there are a number of issues with various clocking domains. AMD has acknowledged that this chip is not propagating clocks very well, and that they are working on fixing these issues in the next few revisions. Once these clocking issues get resolved we can be pretty dang sure that this will be able to clock up much higher than these current chips. And if you guys try to claim that isn't going to happen then you're a flaming fanboy. AMD has managed to increase clocks on pretty much every new core revision going back to the K6.

3 months ago, I said that K10 was a stop-gap, just like I said it was a year ago. I had hoped it would perform better than it does, just like many of you did. If you claim that you hoped it would perform like shit then you're a flaming fanboy. Maybe you guys should actually read some of my posts instead of making shit up, as per your usual.

And if you guys really think that AMD bought ATi for its chipset division, what with the SB600, which is a spitting image of the 8000-series south bridge.... And the already on-die northbridge.... Not to mention the PCIe bridge..... I fear you're sadly mistaken. Sure it comes in handy to have a brand name like ATi, and I'm sure they'll be able to leverage it for their benefit, but I will bet that was --not-- the driving factor in the buyout.
 
Some of you guys claim that the TLB bug won't increase IPC, however every report that I've read claims that it causes a cache miss and forces the decoder to redecode the instructions in the table. If this is in fact the case then it will clearly boost IPC when it gets fixed. This is the simple kind of common sense that I don't think needs to be explained.

I already gave an in-depth technical explanation of why fixing a TLB bug in the L3 cache can't do wonders for IPC, but you simply ignored that, as you do most information that doesn't jibe with your view of AMD's technical superiority.
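Whether fixing the erratum moves IPC at all comes down to how often the miss-and-redecode path actually fires. A rough Amdahl-style sketch, where the event rates and the 200-cycle penalty are assumptions for illustration rather than measured figures:

```python
def ipc_with_stall(base_ipc, events_per_instr, penalty_cycles):
    """Fold an occasional stall event into a base IPC figure."""
    cpi = 1.0 / base_ipc + events_per_instr * penalty_cycles
    return 1.0 / cpi

# Even a steep 200-cycle redecode/refill penalty is nearly invisible
# unless the event fires frequently per instruction:
for rate in (1e-6, 1e-4, 1e-2):
    print(f"{rate:.0e} events/instr -> IPC {ipc_with_stall(1.0, rate, 200):.3f}")
```

So the claim hinges entirely on the event rate: a rare erratum path costs almost nothing, and fixing it would therefore buy almost nothing.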

Additionally there are a number of issues with various clocking domains. AMD has acknowledged that this chip is not propagating clocks very well, and that they are working on fixing these issues in the next few revisions. Once these clocking issues get resolved we can be pretty dang sure that this will be able to clock up much higher than these current chips. And if you guys try to claim that isn't going to happen then you're a flaming fanboy. AMD has managed to increase clocks on pretty much every new core revision going back to the K6.

I don't think anyone here thinks that AMD won't increase clocks at all... However, you quote speeds like 4.2 GHz...
 
I already gave an in-depth technical explanation of why fixing a TLB bug in the L3 cache can't do wonders for IPC, but you simply ignored that, as you do most information that doesn't jibe with your view of AMD's technical superiority.



I don't think anyone here thinks that AMD won't increase clocks at all... However, you quote speeds like 4.2 GHz...

Which is perfectly reasonable for a 65nm product on its final spin. Just like I predicted 3.4-3.5GHz for 90nm, which you personally flamed me for... Guess what: I was right, and you were wrong. Just like I was right about 2.9GHz on 130nm, and I was right about 2.1GHz on 180nm....

4.0-4.2GHz for AMD's 65nm process is reasonable.
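The clock claims traded here can at least be sanity-checked as a trend. A quick extrapolation, using only the per-node peak clocks claimed in this exchange (the extrapolation itself is a naive assumption, not a forecast):

```python
# Claimed final-spin peak clocks per process node from this thread (GHz):
claims = {180: 2.1, 130: 2.9, 90: 3.5}

clocks = [claims[n] for n in (180, 130, 90)]
ratios = [b / a for a, b in zip(clocks, clocks[1:])]
print("per-node gains:", [round(r, 2) for r in ratios])  # [1.38, 1.21]

# If the 90nm->65nm gain matched the previous step, 65nm would land around:
print(round(clocks[-1] * ratios[-1], 1))  # 4.2
# But the per-node gain is shrinking; let the ratio decay at the same rate
# and the extrapolation lands closer to:
print(round(clocks[-1] * ratios[-1] ** 2 / ratios[-2], 1))  # 3.7
```

In other words, the 4.0-4.2GHz figure assumes the per-node scaling stops degrading; the same data read pessimistically points somewhere in the high 3GHz range.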

Your "in depth" explanation doesn't even address what the bug is, only its side effects. As such it is of no value to anyone here.
 
So you're saying K10 is on death row because it's a "stone cold killer?" :D
 
Which is perfectly reasonable for a 65nm product on its final spin. Just like I predicted 3.4-3.5GHz for 90nm, which you personally flamed me for... Guess what: I was right, and you were wrong. Just like I was right about 2.9GHz on 130nm, and I was right about 2.1GHz on 180nm....

4.0-4.2GHz for AMD's 65nm process is reasonable.

Your "in depth" explanation doesn't even address what the bug is, only its side effects. As such it is of no value to anyone here.

What was your prediction for K8 65nm then? 4.0 - 4.2GHz as well? ;)
 
4.0-4.2GHz for AMD's 65nm process is reasonable.

With liquid nitrogen?

Your "in depth" explanation doesn't even address what the bug is, only its side effects. As such it is of no value to anyone here.

Of course I don't address what the bug is; I assume we all know this already. Since you claim that a side effect of the bug is reduced IPC, I address those side effects.
Of no value? Not to you perhaps; you don't have much use for reality.
 
What was your prediction for K8 65nm then? 4.0 - 4.2GHz as well? ;)
Here's another Nostraduby prediction: http://www.hardforum.com/showpost.php?p=1029127951&postcount=11

duby said:
No, the real question is: why is Intel benchmarking a chip that is not released? Let's face the facts here... Conroe doesn't exist yet.... When it is released it will not hit projected clocks.... It will not be widely available.... It will not scale as well as claimed on multicore.... It will not scale at all on multisocket....

It seems clear that Intel is going all out for multicore... This is the one area that it will get demolished in... So once again we've got Intel going gung ho over the wrong approach...

My personal feeling is that multicore adds WAY too much redundant hardware.... Each core will have parts that do exactly the same thing... redundantly.... I think you'll see AMD come out with a better architecture that deals with these redundancy issues.... But in the meantime AMD has the advantage until they do.....

LOL, the comedy never stops. A typical post by Duby where every single one of his predictions was wrong, as usual. LOL @ the last part. It's not like AMD slapped 4 "redundant" cores on the K10 or anything. :D

Paragraph 1:
Conroe was first shown publicly at the end of 2005 (gee, it did exist), strike 1. Conroe was released at announced clocks (don't bring up some dumb vr-zone or inq rumor), strike 2. Conroe ramped and replaced Netburst very quickly, strike 3. It scaled better in frequency and performance than "native" quad core K10. Intel made some successful 2S and 4S (finally) NGMA chipsets, and took back significant server market share from AMD due to performance leads.

Paragraph 2:
Yeah, Intel sure has been suffering with NGMA. AMD must have imploded. :p The "wrong approach" has given Intel the performance lead, made the processor manufacturable and made Intel profits. AMD's "right approach" has caused it, um, what?

Paragraph 3:
Starts with an opinion based on no understanding of NGMA. It goes down from there.
 
As I already said, many times, I was wrong about Conroe, and have already received my share of flak for that.

Conroe was a much better architecture than I thought it would be....

But seeing as how this is the AMD forum, if you want to talk about Conroe, please proceed to the Intel forum where those topics belong. You have nothing to add that hasn't been rehashed a hundred times by a dozen people.... You can feel free to go back over there.
 
Yea, why did Intel demonstrate Conroe before release?
Well, because they don't like to dupe their customers.
Really, I have no words for AMD. Look at those videos on Youtube where AMD spindoctors claim CPUs at 3 GHz and 40% faster than Clovertown.
Guess we all know why AMD didn't demonstrate theirs.
Only people with technological know-how such as myself have known all along that AMD's claims were bogus, and that the current CPU was what was to be expected.
Fanboys like duby ate up the AMD marketing babble.
 
If they wanted to continue working with K8, they prolly could do that. We may still see that with Griffin, which is based on K8.

You didn't answer the question. What was your prediction for 65nm K8? Oh, that's right, negative scaling... I'm sure you predicted THAT! :D
 
Yea, why did Intel demonstrate Conroe before release?
Well, because they don't like to dupe their customers.
Really, I have no words for AMD. Look at those videos on Youtube where AMD spindoctors claim CPUs at 3 GHz and 40% faster than Clovertown.
Guess we all know why AMD didn't demonstrate theirs.
Only people with technological know-how such as myself have known all along that AMD's claims were bogus, and that the current CPU was what was to be expected.
Fanboys like duby ate up the AMD marketing babble.

Just like you knew that P4 was going to be the greatest thing since sliced bread, and that Itanium was going to revolutionize the industry... Only people with your level of know how knew that.
 
Umm, didn't I just answer this? Uhh, Griffin maybe?

Griffin is mobile. Hardly relevant to the discussion, unless you expect Griffin to exceed 3.2GHz at low thermals... :rolleyes:

Again, you failed to answer my question - what was your prediction for 65nm for K8? Did you make one?
 
what was your prediction for 65nm for K8? Did you make one?
I'll make a prediction: EOL near the middle of next year (even if production stops earlier), with 65nm K8 production shifted mostly to mid-range parts and BE-23xx models before that. Yes, I expect K8 to be around too long even in 90nm because what else is AMD going to make in 90nm? :p

I'm not counting Griffin in that since it's only based on K8.
 
Just like you knew that P4 was going to be the greatest thing since sliced bread, and that Itanium was going to revolutionize the industry... Only people with your level of know how knew that.

I never said P4 was a great architecture... Unlike you I'm not a fanboy who claims that everything that company X does is fantastic. Go ahead, try to find any P4-related statement of mine that isn't true, I dare you. Heck, I was so underwhelmed by the P4 that I never actually owned one myself. I went from a Pentium II 333 to a TBird 1400, and didn't return to Intel until Core2 Duo.
And Itanium is far from dead... it's still a fine architecture, and Intel is still developing it, so who knows where it may still go.
Everyone with my level of know-how acknowledges the strong points that the Itanium architecture has.
 
Yields that low are simply not possible to sustain. If they were that low it would have been scrapped. That's 2/3rds of a wafer wasted, and we all know how much those wafers cost. We all know how much it costs to maintain and run the equipment. The bottom line is that you can't make assumptions on yields like that because, first, it's impossible, and second, neither company releases enough information to know, or calculate, what the yields actually are. The best you could do is look at how many wafers the fab is capable of producing vs how many chips the fab manufactures. On that basis AMD has about a 79-83% yield, granted this covers all product ranges and bins that come out of FAB36. There is no other way to do it; the information required simply isn't available to make that kind of guess.

I think it's pretty clear to all that defect ratio is inversely proportional to yield... The more defects, the lower the yield. Die size works against you the same way: the larger the die at a given defect ratio, the lower the yield. This is common sense, but a defect ratio that high would require an unsustainably low yield, and I don't buy it. It's BS.

Like I said from the very beginning, it assumes a ridiculous defect ratio. And I don't buy that number.

Some of you guys claim that the TLB bug won't increase IPC, however every report that I've read claims that it causes a cache miss and forces the decoder to redecode the instructions in the table. If this is in fact the case then it will clearly boost IPC when it gets fixed. This is the simple kind of common sense that I don't think needs to be explained.

Additionally there are a number of issues with various clocking domains. AMD has acknowledged that this chip is not propagating clocks very well, and that they are working on fixing these issues in the next few revisions. Once these clocking issues get resolved we can be pretty dang sure that this will be able to clock up much higher than these current chips. And if you guys try to claim that isn't going to happen then you're a flaming fanboy. AMD has managed to increase clocks on pretty much every new core revision going back to the K6.

3 months ago, I said that K10 was a stop-gap, just like I said it was a year ago. I had hoped it would perform better than it does, just like many of you did. If you claim that you hoped it would perform like shit then you're a flaming fanboy. Maybe you guys should actually read some of my posts instead of making shit up, as per your usual.

And if you guys really think that AMD bought ATi for its chipset division, what with the SB600, which is a spitting image of the 8000-series south bridge.... And the already on-die northbridge.... Not to mention the PCIe bridge..... I fear you're sadly mistaken. Sure it comes in handy to have a brand name like ATi, and I'm sure they'll be able to leverage it for their benefit, but I will bet that was --not-- the driving factor in the buyout.

You've used 556 words and 2439 characters to basically say... nothing. I pity you and your delusional state, where your own fantasies are projected into real life as facts.
 