AMD Engineer Explains BD Fiasco

Sounds like management attempted to increase efficiency and interoperability within the company, but completely neglected the extreme performance penalty.
 
It would be nice if they gave some advantages of this automated process. Reduce costs (via staffing)? The reality I am seeing is that the die size is about 50% larger (315 mm² vs. 213 mm²), which means fewer processors per wafer. Twice as many transistors, higher power usage, poor performance --- where is the logic in this??
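As a rough sanity check on the "fewer processors per wafer" point, here's a back-of-the-envelope sketch (my own numbers game, assuming a standard 300 mm wafer and the classic gross-die approximation, ignoring yield and scribe lines):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross dies per wafer: wafer area over die area, minus an edge-loss term.
    Ignores defect density, scribe lines and reticle constraints."""
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# The two die sizes quoted above
for area in (315.0, 213.0):
    print(f"{area:.0f} mm^2 -> ~{dies_per_wafer(area)} candidate dies per 300 mm wafer")
```

That works out to roughly 50% more candidate dies for the smaller part, before yield even enters the picture.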
 
It would be nice if they gave some advantages of this automated process. Reduce costs (via staffing)? The reality I am seeing is that the die size is about 50% larger (315 mm² vs. 213 mm²), which means fewer processors per wafer. Twice as many transistors, higher power usage, poor performance --- where is the logic in this??

AMD and ATI can share common components, it's much quicker to design processors, and you don't need to QA as hard (as the program should have done that for you), and you don't need engineers who are as skilled (as you have the process to make the things for you).

I'm assuming the biggest advantage is that, since many common components are shared due to the process, combining them in a Fusion APU is easier than ever.
 
For a chip that doesn't have a GPU, the 800 million transistor uncore is staggering. One thing that's sort of good about this article is that there is still a chance they could salvage this design. It may take a few years, but they could make gradual improvements, step by step. Looks like there's a lot of room for improvement, and the design may not have been a complete failure.

That is, if they went and hand-crafted the uncore in order to optimize it.
 
For a chip that doesn't have a GPU, the 800 million transistor uncore is staggering. One thing that's sort of good about this article is that there is still a chance they could salvage this design. It may take a few years, but they could make gradual improvements, step by step. Looks like there's a lot of room for improvement, and the design may not have been a complete failure.

That is, if they went and hand-crafted the uncore in order to optimize it.

What is AMD supposed to live on between now and the "few years" it will take to unscramble the automated mess they made of BD? Their market cap is less than 4% that of Intel, and many would argue that's mostly video cards. They had to use software to design the overbloated, overtransistored POS because they can't afford real R&D. With BD selling like turdcakes, GloFo unable to get 32 nm right let alone 22 nm, and their share price drooping continuously (it's 10% of what it was a few years ago), where exactly are BD and AMD going? * FLUSH *
 
This all reads like the people who made the decisions were neither engineers nor people with common sense.

So yeah, RIP AMD. Hitler has no love for ya.
 
What is AMD supposed to live on between now and the "few years" it will take to unscramble the automated mess they made of BD? Their market cap is less than 4% that of Intel, and many would argue that's mostly video cards. They had to use software to design the overbloated, overtransistored POS because they can't afford real R&D. With BD selling like turdcakes, GloFo unable to get 32 nm right let alone 22 nm, and their share price drooping continuously (it's 10% of what it was a few years ago), where exactly are BD and AMD going? * FLUSH *

I don't know. Previously I thought they were done for good, with no viable architecture in the pipeline. If you look at their road map, Bulldozer is at the core of every single product except for the netbooks. I didn't think they had any chance of recovering.

But if this article is true, there seems to be a lot of room for improvement, and maybe the entire architecture isn't a waste. Maybe they can get some resources to "optimize" the uncore, doing it in smaller phases on a yearly schedule. Financially, AMD has been in a worse position in the past; I'd like to believe this is not the end of AMD and the beginning of a single-CPU-maker era for us PC enthusiasts.

If I were AMD, I would use every resource at my disposal to get some of the original NexGen engineers back and have them build a team which would go over the Bulldozer design with a fine-tooth comb and write a report on the most critical areas for improvement.
 
Ahem.
As someone who used to write these automated tools, here is another perspective.

AMD does not have a foundry or its own fab processes anymore. As a result, it is much harder for them to do what are called custom or semi-custom designs. Any design tweak they do by hand has to be verified by the foundry to work and be reliable.

So instead, what AMD is doing is using synthesizable design flows; i.e., it sticks to gates which are provided by the foundry in their verified libraries. That being said, the foundries will go all out to provide as many different versions of the gates and cells as AMD will ever need.

However, it will still be a discrete set and will not allow the infinite amount of tweaking that a foundry permitting continuous scaling would allow. Further, while machines do a great job of handling large amounts of data, they still cannot beat the skills of a well-experienced engineer. That said, you can guide the tools and come very close to a hand-crafted design, so the 20% number is plucked from a place where the sun never shines (or isn't supposed to, at least).

It is a trade-off between producing designs with fewer bugs, requiring fewer man-hours, and being more amenable to changes, versus designs which have the best performance and the lowest area. The Cougar Point bug is an example where hand-crafting left behind an artifact which cost Intel $1B. AMD cannot afford those kinds of bugs and survive. Hence they rely on more automated flows and bear the penalty.
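To make the "discrete number of gates" point concrete, here's a toy sketch (hypothetical cell names and drive strengths, purely illustrative; real libraries and flows are far richer):

```python
# Toy model: a full-custom flow can size a driver to exactly the strength it
# needs, while a synthesizable flow must round up to the nearest cell the
# foundry library actually provides. The library below is made up.
AVAILABLE_DRIVES = [1, 2, 4, 8, 16]   # e.g. INVX1, INVX2, INVX4, INVX8, INVX16

def pick_library_cell(required_drive: float) -> int:
    """Smallest library drive strength that still meets the requirement."""
    return next(d for d in AVAILABLE_DRIVES if d >= required_drive)

for need in (1.3, 2.7, 5.0, 9.1):
    chosen = pick_library_cell(need)
    excess = (chosen - need) / need * 100
    print(f"need x{need} -> library gives x{chosen} (~{excess:.0f}% excess drive/area)")
```

Multiplied across millions of gates, that rounding is one source of the area and power gap versus a continuously sized, hand-tuned design; guiding the tools well narrows it, which is why I am skeptical of the 20% figure.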
 
When Jerry left, AMD died for all intents and purposes. Everything that came after him was a vast wasteland of stupidity... and don't get me started on acknowledged criminals like Ruiz.

I'm saddened by the lack of people (especially CEOs) who are passionate about the products their company makes, as opposed to this pointless shareholder suck-festival. You have to know business, of course, but damn, good products sell too. Manage it well - don't lose money on dumb things - and leap forward.

AMD had that. Now it doesn't. I've never bought anything AMD that made me sad. Even when they didn't top Intel's performance, they had something to make up for it - with style. Not to mention when they topped Intel's performance for less money! Now they have *nothing* going for them besides Radeons, and they're so hard to get around here (and expensive).

Oh well... I just hope they get back on track. Day after day, that's only a hope, because the facts aren't backing it up at all.
 
Who is to blame? Ruiz? Meyer? Both?

EDIT:
@aviat72. Thanks for your post. Very interesting.
 
When Jerry left, AMD died for all intents and purposes. Everything that came after him was a vast wasteland of stupidity... and don't get me started on acknowledged criminals like Ruiz.

QFT.

The failure is summarized just right here:
The management decided there should be such cross-engineering [between AMD and ATI teams within the company], which meant we had to stop hand-crafting our CPU designs and switch to an SoC design style.

And guess who advocated the merger? Oh wait, he doesn't work for AMD anymore.
 
Looks like AMD got their short-term gain from using automated tools (in theory, not counting the 4 years...).

Now they have to pay the long-term price for it.
 
I remember the forum thread where this guy was saying all of this. I don't know if I believe you need to hand-craft every transistor. I'd wager some cutting and pasting gets done in the design phase. That being said, I'm sure certain parts of a CPU designed by hand beat the snot out of automated tools as far as efficiency goes. If what this guy says is true and AMD did in fact use these tools to craft Bulldozer, it points to an even sadder state of their design team, because it shouldn't have taken this long to design Bulldozer. It points to some probable scrapping of earlier designs and rebuilding Bulldozer almost from the ground up at least once. That's about the only explanation for the delay, unless their engineers were simply too reliant on the tools and were unable to find and correct flaws in the design once they reached a certain point. That would point to incompetence on the part of the design team. A truly sad state of affairs if correct.

Honestly, I believe they only released Bulldozer to try and get some money coming in and to save face rather than going through the shame of further delays. I've no doubt they'll redouble their efforts on the next version of the architecture, but you can only polish a turd so far. Many people predicted that Bulldozer, even as somewhat of a flop, wouldn't be as bad as Phenom's launch was. I disagree. Bulldozer's performance is worse than Phenom / Phenom II as far as IPC goes, and it takes nearly twice the cores to equal the Core i5 2500K or exceed its performance in any way. Add to that about double the TDP and it's a sad, sad processor. It's better than Phenom II, but only through brute force.

They'd have done better to die shrink Phenom II and try and shove two Phenom II X6's (or even X4's) in one package. It may not be elegant or modular, but the damn thing would have worked. Though we might have needed new motherboards for it. (Which is a good thing in my opinion.)
 
Wasn't there another recent posting from an AMD ex-engineer which basically said:

AMD probably couldn't refine the old arch much further, so you wouldn't see much of a performance increase at 32nm?

I see that as a feasible reason why they decided to dump BD on everyone in this state. They were all out of options and time to refine the thing, so they're basically selling what they have. I mean, they've been trying really hard with just refining their arch, but the Llanos are basically 32nm Phenoms and you hardly see a performance increase at all.
 
I remember the forum thread where this guy was saying all of this. I don't know if I believe you need to hand-craft every transistor. I'd wager some cutting and pasting gets done in the design phase.

Yeah...I highly doubt they "hand-craft" each one of 2 BILLION transistors.

Then again, things like the cache take up a lot of that number, and I imagine that's mostly copy/paste for the memory cells.
 
They'd have done better to die shrink Phenom II and try and shove two Phenom II X6's (or even X4's) in one package. It may not be elegant or modular, but the damn thing would have worked. Though we might have needed new motherboards for it. (Which is a good thing in my opinion.)

Not sure if that's good. This is one of the few things that still makes AMD compelling for the budget enthusiast.

I have 8 machines at home. One is an Intel Core i7. All the others are AMD, Socket 939 (Athlon 64 3000+, with AGP slot, yay) and AM2/AM2+/AM3. I get new stuff, I pass the old stuff to someone, and I can get a MoBo+processor combo to sell. The better boards+processors stay, the sucky ones go away. I can swap all the processors back and forth between MoBos if needed, it's nice.
 
Wasn't there another recent posting from an AMD ex-engineer which basically said:

AMD probably couldn't refine the old arch much further, so you wouldn't see much of a performance increase at 32nm?

I see that as a feasible reason why they decided to dump BD on everyone in this state. They were all out of options and time to refine the thing, so they're basically selling what they have. I mean, they've been trying really hard with just refining their arch, but the Llanos are basically 32nm Phenoms and you hardly see a performance increase at all.

That could be. Phenom wasn't a great performer to begin with, and Phenom II might have been the pinnacle of what that architecture could do. At some point die shrinks don't help anymore and you really do need a new design. It's sad that AMD would have maxed out an architecture in only 2 generations, though in fairness Phenom and Phenom II were not entirely new architectures. In fact, I think Bulldozer is their first completely new architecture since K7's introduction. I think they might have had certain goals that had to be achieved, sacrificed IPC to get them, and now they've done it again. This is obviously a backwards approach, as Intel never made it truly pay off with Netburst. What makes AMD think they can do better using the same plan? Netburst got slightly better going from the Willamette to the Northwood core and then dropped performance going to Prescott. The pipeline got deeper and deeper, and this didn't help performance, though it did allow for clock speed scaling. IPC-wise, the whole Netburst architecture was worse than the P6-derived Pentium III. Intel tried to fix Netburst as much as they could with Smithfield etc., but again it never truly paid off the way they had hoped.

Intel bet on the fact that they'd be able to scale the clock speed past 5GHz, and they never achieved anything beyond 3.8GHz with any reliability. AMD certainly doesn't seem to be betting on being able to get clock speeds in excess of what Intel can. It seems they are betting more on Bulldozer's modularity and increasing the core count over what they have today. The problem is that this requires too much reliance on software multithreading. At some point this won't help, because certain programs by their nature may never work very well with a multithreaded design, so ignoring single-threaded performance seems incredibly short-sighted. Furthermore, in today's market there are only one or two desktop applications that can truly leverage anything in excess of quad-core CPUs. This is why the Phenom II X6, Core i7 980X and Core i7 990X don't wipe the floor with everything else out there.
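To put rough numbers on why piling on cores stops paying off, here's a quick Amdahl's-law sketch (not from the article; the 60% parallel fraction is just a made-up example for a typical desktop workload):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical workload that is 60% parallelizable
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.60, cores):.2f}x speedup")
```

Going from 8 to 16 cores in that example buys you less than 10% more performance, which is why single-threaded grunt still matters so much on the desktop.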

Though Bulldozer's modularity may in fact prove useful in other ways. It may allow the architecture to stretch its legs far beyond anything AMD's had before. They can design new modules for it with improvements that may benefit the entire architecture / processor going forward. They may be able to totally redesign parts of the CPU at times. I'm not a semiconductor engineer so I can't say, but given what I know on the subject this seems reasonable to an extent. We won't really know how well this will work until we start seeing Bulldozer CPUs with more than 8 cores in the desktop market and the next iteration of the Bulldozer core.
 
... always came back to me with designs that were 20% bigger, and 20% slower than our hand-crafted designs, and which suffered from electro-migration and other problems," the former AMD engineer said.....

I wonder what that means for the reliability of BD processors. Wasn't electromigration a big deal with overvolted/OCed CPUs dying quickly?
 
I think a 32nm Phenom III product could have been great as long as the die shrink provided 200-300MHz clock speed increases and reduced power consumption, even if there was little or no IPC improvement.

That would have been a great "look here, get a quad- or hex-core machine for cheap" deal for OEMs, but if they are not making any money on Phenom II, I can see why they did not opt for that route.
 
Not sure if that's good. This is one of the few things that still makes AMD compelling for the budget enthusiast.

I have 8 machines at home. One is an Intel Core i7. All the others are AMD, Socket 939 (Athlon 64 3000+, with AGP slot, yay) and AM2/AM2+/AM3. I get new stuff, I pass the old stuff to someone, and I can get a MoBo+processor combo to sell. The better boards+processors stay, the sucky ones go away. I can swap all the processors back and forth between MoBos if needed, it's nice.

The reason I say it's good is that the design of AM3+ boards is already dated as hell. The memory slots are too close to the CPU, and due to the mounting holes and the way they are positioned, you've got few options for how you mount some of the larger cooling solutions, forcing them to be positioned in less than optimal directions. Blocking memory modules is practically a given. The only true way of overcoming this problem without AMD redesigning everything is to water cool, something many are still unwilling to do for various reasons, though units like the Corsair Hx00 series are making it more and more common. Given the improvements made from one board generation to the next in terms of technology support, do you really want to saddle your shiny new processor with some board using outdated PCIe technology, outdated memory support, USB 2.0-only support or whatever? I wouldn't. I've rarely upgraded CPUs without getting a new motherboard as well. I'd wager most people do the same thing. If they ever upgrade CPUs, it's probably about mid-way through to the latter half of a given processor family's expected market life span. In other words, after the first refresh.

I look at AMD processor compatible boards all the time and when you compare them to the layout of the Intel processor compatible boards, they ALL suck. The design must evolve because it currently blows. Maintaining legacy solutions or compatibility is almost never a good thing in the long run.
 
Intel tried to fix Netburst as much as they could with Smithfield etc., but again it never truly paid off the way they had hoped.

But they still put it on shelves, and it sold. A lot. We still have so many of these running at work it's obscene. And I have a P4 box at home working too, got it from a friend who was tossing it away. These MF'ers live on. And they sold tons, a lot more than AMD, because they were there and were from Intel.

AMD disappointed us, but they're doing the right thing by not delaying it anymore and just releasing the suck for us to embrace (lol). Keep the cash coming so they can go ahead and focus on something else. It doesn't make sense to us, but to a company it surely does.

As I always say, don't go DNF on us. Now they're free to go ahead.

Furthermore, in today's market there are only one or two desktop applications that can truly leverage anything in excess of quad-core CPUs. This is why the Phenom II X6, Core i7 980X and Core i7 990X don't wipe the floor with everything else out there.

Which applications? Seriously, I don't know exactly. Video encoding? CAD applications? (AutoCAD, Revit).

Amen. I hope it gets good.
 
The only true way of overcoming this problem without AMD redesigning everything is to water cool, something many are still unwilling to do for various reasons, though units like the Corsair Hx00 series are making it more and more common.

I look at AMD processor compatible boards all the time and when you compare them to the layout of the Intel processor compatible boards, they ALL suck.

That is probably the reason why AMD was offering 'dozer with a water cooling unit.
 
I started reading the article, and it's very informative, but I have a hard time believing the engineer. Wouldn't AMD understand something obvious like that, and use computer models to estimate different outcomes, like relying on the automated tools a lot or not relying on them at all, and the resulting difference in transistor count? It just seems like something they wouldn't make a mistake on.

On the other hand, if it's true, it paints a promising picture for the upcoming Bulldozer revisions. If AMD figures it out and changes course (assuming that is the problem), we may see a pretty competitive processor come Piledriver (or whatever the 2012 models are called).
 
I'm thinking that the use of automated tools to design Bulldozer was a result of AMD being unable to hire enough chip designers and engineers to hand-craft the next CPU.

Again, if you compare it to Intel, they can afford to hire the chip designers and engineers needed to hand-craft new features and new CPU architectures.

I think using the automated tools helped reduce chip development costs and design time, but cost AMD performance and power efficiency gains in the end.

I'm going to bet right now that if AMD had the same monetary backing as Intel does now, they would have been able to hand-craft their chip designs rather than automate them.

But unfortunately, AMD isn't Intel and we shouldn't expect them to be. However, if they had shifted the money spent on automated tools to hiring engineers, hiring the right people from top to bottom, I'm sure Bulldozer would have turned out differently as a more hand-crafted design.

I simply think that, even with reduced operating income, they are spending money in the wrong places. AMD really should reorganize itself from top to bottom. I'm not sure the new CEO would be willing to do that given their small profit margins compared to Intel.
 
The reason I say it's good is that the design of AM3+ boards is already dated as hell. The memory slots are too close to the CPU, and due to the mounting holes and the way they are positioned, you've got few options for how you mount some of the larger cooling solutions, forcing them to be positioned in less than optimal directions.

Yes, I can see that. It's really messy. I assembled three Core i7's, and damn, what a nice thing. It's so easy, everything is in its right place. But I thought it was because the MoBos in question were great, not because of some design flaw of AM3 boards. Can't they just solve that without a whole new socket? I ask this out of curiosity...well... logic says they would have solved it by now.

Blocking memory modules is practically a given. The only true way of overcoming this problem without AMD redesigning everything is to water cool, something many are still unwilling to do for various reasons, though units like the Corsair Hx00 series are making it more and more common. Given the improvements made from one board generation to the next in terms of technology support, do you really want to saddle your shiny new processor with some board using outdated PCIe technology, outdated memory support, USB 2.0-only support or whatever?

I do it because of budget issues. I'm in Brazil, and fancy features are expensive. We barely have SSDs around, and all of them are insanely expensive. That's why I didn't bother with SATA3. Same thing with USB 3.0, no affordable devices available yet. We get the old tech, that's the way it's always been.

I wouldn't. I've rarely upgraded CPUs without getting a new motherboard as well. I'd wager most people do the same thing. If they ever upgrade CPUs, it's probably about mid-way through to the latter half of a given processor family's expected market life span. In other words, after the first refresh.

Yes... but for some reason I always have a MoBo lying around that can take the CPU and get it running again. Maybe because I get my friend's scraps, I don't know, but it's nice.

But I agree with you, my reality is different, it holds tech back.

I look at AMD processor compatible boards all the time and when you compare them to the layout of the Intel processor compatible boards, they ALL suck. The design must evolve because it currently blows. Maintaining legacy solutions or compatibility is almost never a good thing in the long run.

Yes... I was wrong. One day you have to let go. This design is what, 6 years old? My first AM2 board was from 2007, and don't forget we get things here later.
 
But they still put it on shelves, and it sold. A lot. We still have so many of these running at work it's obscene. And I have a P4 box at home working too, got it from a friend who was tossing it away. These MF'ers live on. And they sold tons, a lot more than AMD, because they were there and were from Intel.

Pentium 4 processors sold because AMD was undercut by Intel when selling to OEMs. Also, people in the IT industry tend to have a philosophy: if you want reliable, go Intel; if you want cheap and go AMD, you'll pay for it later. Having worked in that industry for so long, I can tell you there is some truth to it. Mainly it's due to the platform more than anything else. The processor has almost nothing to do with it, but the fact that AMD is the most recognizable brand in those boxes tends to cause people to lay the blame on their doorstep rather than on the chipsets, motherboards, shoddy power supplies, etc. that their bargain-basement units might have been built with.

AMD disappointed us, but they're doing the right thing by not delaying it anymore and just releasing the suck for us to embrace (lol). Keep the cash coming so they can go ahead and focus on something else. It doesn't make sense to us, but to a company it surely does.

As I always say, don't go DNF on us. Now they're free to go ahead.

I can't argue with this. They needed new product. Good or not, new product was needed.

Which applications? Seriously, I don't know exactly. Video encoding? CAD applications? (AutoCAD, Revit).

In the professional and workstation world there are MANY applications that can leverage processors with more than four cores. AutoCAD, 3D Studio Max, Lightwave, Maya, and more are all great examples. In the desktop arena it's really your video encoding applications that can do it. That's really about it. Some entry-level professional applications like Photoshop, video editing software, etc. can do so as well. However, the percentage of the population using desktops which can utilize the technology is exceedingly small, but not half as small as the group of people that can actually afford them. Bulldozer had a chance to do really well for that group by bringing lower-cost 6- and 8-core CPUs into the mix, but like Phenom II before it, Bulldozer still gets trounced by Intel's quad-core offerings with Hyperthreading a lot of the time. In fact, the only advantage the Phenom II X6 had was its encoding performance.

Amen. I hope it gets good.

You never know. I'd wager the improvement to the next CPU based on Bulldozer will be at least as good as Phenom II was compared to Phenom I. That's still not saying much.
 
Yes... I was wrong. One day you have to let go. This design is what, 6 years old? My first AM2 board was from 2007, and don't forget we get things here later.

Technically, things haven't really changed in terms of the basic layout since AM2, and to a lesser extent S939 and S754. I believe the thermal mounting holes changed slightly going from S939 to AM2, but the relationship of DIMM sockets to CPU sockets has remained roughly the same. The difference is that back then we didn't need such massive cooling solutions for the CPUs. Since AMD integrated the memory controller into the CPU, however, they've always had the DIMM slots too close to the CPU socket. The problem is simply more apparent than ever before because of today's cooling needs.

Intel could have done the same thing but didn't, because all the changes in memory technologies would have painted them into a corner the way it did for AMD. (The corner being the need to redesign the processor's memory controller and potentially change sockets simply to support new memory module types.) Furthermore, Intel didn't move to an integrated memory controller until they were able to adequately address certain problems, the main one being signal degradation over the Hypertransport links. Intel's QuickPath Interconnect, or QPI, isn't subject to the same latency and signal degradation issues as Hypertransport is. LGA1156 and LGA1155 processors took yet another approach which resolved the problem just as well if not better. Many lauded AMD for their approach, but the fact is the approach wasn't AMD's, but rather their version of what DEC Alpha CPUs had been doing for some time already. Intel was well aware of the approach and knew the problems they'd have trying to do the same thing with their architecture, even though there could have been potential gains in performance. Though I don't think that Netburst would have gained all that much performance from an integrated controller. Nehalem, on the other hand, was designed to love memory bandwidth even with terrible latencies, whereas AMD's approach has always been latency-sensitive.
 
For Dan:

Would you put much hope in something like "Vishera," the update to Bulldozer, being better than what the FX series is now?

Or should AMD just skip the AM socket, work on redesigning the layout of the motherboard components, and rework Bulldozer first? In other words, make all the parts work well together and perform better before it is released to market.

I can definitely attest to the CPU and memory slot arrangement dating as far back as when I had an Athlon XP 2600+ CPU. I've had an easier time installing custom heatsinks for Intel than I have had with AMD. (Cut fingers and peeled skin are good examples of trying to get those damn clips in. And I have small fingers. I'm Asian. Lol. It still didn't help.)

Intel has shown better design and performance. I can definitely see QPI performing better and being more efficient in design than Hypertransport, just basing it on CPU reviews alone. It's getting better with each new Intel CPU. AMD just upgrades Hypertransport, increases its speed and bandwidth, and hopes for the best.

So, that brings me to my next question:
Should AMD just scrap Hypertransport and start designing something new with the next CPU and go with a new socket?
 
I'd wager that the next iteration of Bulldozer will be significantly better than Bulldozer is today. What does that mean? Well, I don't know. As I said before, it will probably be at least as good an improvement as Phenom II was over Phenom I. That doesn't say much, but when the AMD fanboys were prepped and ready for Phenom II to pay off and kill Intel's Penryn and Yorkfield processors, I knew better. I expected improvement, just as I do from Bulldozer's successor. However, you can only take evolutionary steps so far. You don't make incremental adjustments and get revolutionary results.

As for the socket fiasco that is AM3+, I don't know. I prefer LGA-style sockets, and AMD does use them for the Opteron, but my issue isn't with the socket itself but rather the mounting holes and the distance to the DIMM slots. What they really need to do is go to a symmetrical four-hole design, and if they could match Intel on this precisely, allowing us to use the exact same thermal solutions for both camps, that would be fantastic. I'd like to see the distance between the DIMM slots and the CPU socket increased, but I don't know if that's possible without replacing Hypertransport. They could bypass the need to do so by emulating Sandy Bridge's approach with an integrated I/O controller, but I don't know if this is feasible. It's certainly possible, but financially it may not make sense for AMD to even try that route. Perhaps some kind of signal booster could be designed to allow HT to stretch farther? AMD may have investigated this and deemed it too costly, and that certainly might be the case. It is, after all, additional hardware, and margins in this business are very thin already.

As I understand it, the rumor is that AMD will create Bulldozer's successors around socket FM1. And AGAIN AMD screwed up with FM1, as it has the same exact flaws that AM2, AM2+, AM3 and AM3+ all do. The devil is in the details and AMD doesn't seem to care about those details. Again they had the opportunity to create a new design and failed to do so. They just changed the pin count of the socket itself. :rolleyes: In all fairness, if compact self-contained water cooling units take off the way heat-pipe coolers did 5 or more years ago, then it won't be a real problem anymore. So far we have every indication that this might happen, given the sheer volume of coolers following that basic design being released all the time. Am I a fan of this? No. Will it get the job done? Yes. AMD may have seen the writing on the wall with that and said: "screw it, we'll let heat sink and fan makers worry about that." AMD may feel the approach they've taken with sockets is the nicer approach with regard to upgrades and making customers happy. For the most part it seems to be the case, as everyone with an AM3 / AM3+ board seems to be really happy they've got upgrade options.

And if you still end up needing a new motherboard because you don't get the proper BIOS update from your motherboard vendor, then it's not AMD's fault, it's your board maker's fault. (You AM2+ guys should remember how this game is played.) Secretly, I'd bet AMD wants most of the motherboard vendors to avoid updating their BIOSes to some degree, forcing people into buying new motherboards now that AMD is back in the chipset business. But this way they can save face and make the motherboard vendors the bad guys in all of this.
 
I remember the forum thread where this guy was saying all of this. I don't know if I believe you need to hand-craft every transistor. I'd wager some cutting and pasting gets done in the design phase. That being said, I'm sure certain parts of a CPU designed by hand beat the snot out of automated tools as far as efficiency goes. If what this guy says is true and AMD did in fact use these tools to craft Bulldozer, it points to an even sadder state of their design team, because it shouldn't have taken this long to design Bulldozer. It points to some probable scrapping of earlier designs and rebuilding Bulldozer almost from the ground up at least once. That's about the only explanation for the delay, unless their engineers were simply too reliant on the tools and were unable to find and correct flaws in the design once they reached a certain point. That would point to incompetence on the part of the design team. A truly sad state of affairs if correct.

I'm by far no expert in this, but having worked on SRAM design for my master's studies, I learned first-hand not to rely on generated layouts. In fact, the first thing my professor had me do was hand-craft the layout of a conventional 6T cell and compare it to a layout generated by software. The difference in power consumption and delay was significant, and this was only 1 cell, which holds 1 bit. Imagine if we built a 4MB cache out of this. Of course, I'm sure the tools AMD uses are much more advanced, but it remains important to hand-craft certain designs at the most fundamental level.
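To put rough numbers on that (a back-of-the-envelope sketch, assuming one 6T cell per bit for the data array only and ignoring sense amps, decoders, tags and ECC; the 15% penalty is a made-up figure):

```python
# How one cell's overhead compounds across a 4 MB data array.
CACHE_BYTES = 4 * 1024 * 1024          # 4 MB
BITS = CACHE_BYTES * 8                 # one 6T cell per bit
TRANSISTORS = BITS * 6

print(f"cells:       {BITS:,}")        # ~33.6 million cells
print(f"transistors: {TRANSISTORS:,}") # ~201 million transistors

# A hypothetical 15% area/leakage overhead per generated cell, paid array-wide:
overhead = 0.15
print(f"a {overhead:.0%} per-cell penalty is paid {BITS:,} times over")
```

So even a small per-cell penalty from a machine-generated layout gets multiplied tens of millions of times, which is why the base cell is worth hand-crafting.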

You're right, there's no need to hand-craft every transistor's layout. As in my case with SRAM, the most critical part is the layout of one single cell; then we merely duplicate it to create small blocks, simulate a block, play around with it, and once satisfied, put them into larger blocks. We may rearrange the positions of the larger blocks later on, but we won't have to redraw each transistor; we just move them around as larger blocks.

The article mentions blocks for other functional units like adders and multipliers. It's important that these units are hand-crafted; once you get an optimal layout for an adder block, for example, you merely duplicate it wherever it is needed.

If AMD indeed let those automated tools generate everything right down to the layout of each individual functional unit, that would be shocking. You'd have to question whether they even intend to compete with Intel in the performance segment or have given up. :confused:
 
I'm by far no expert in this, but having worked on SRAM design for my master's studies, I learned first-hand not to rely on generated layouts. In fact, the first thing my professor had me do was hand-craft the layout of a conventional 6T cell and compare it to a layout generated by software. The difference in power consumption and delay was significant, and this was only 1 cell, which holds 1 bit. Imagine if we built a 4MB cache out of this. Of course, I'm sure the tools AMD uses are much more advanced, but it remains important to hand-craft certain designs at the most fundamental level.

You're right, there's no need to hand-craft every transistor's layout. As in my case with SRAM, the most critical part is the layout of one single cell; then we merely duplicate it to create small blocks, simulate a block, play around with it, and once satisfied, put them into larger blocks. We may rearrange the positions of the larger blocks later on, but we won't have to redraw each transistor; we just move them around as larger blocks.

The article mentions blocks for other functional units like adders and multipliers. It's important that these units are hand-crafted; once you get an optimal layout for an adder block, for example, you merely duplicate it wherever it is needed.

If AMD indeed let those automated tools generate everything right down to the layout of each individual functional unit, that would be shocking. You'd have to question whether they even intend to compete with Intel in the performance segment or have given up. :confused:

I just realized that they may have decided not to compete with Intel in that regard anymore and instead opted to move more resources to Llano and other Fusion-based CPU/GPU chips and go after the mobile market with a vengeance. That could also explain why Bulldozer took so long to design and produce.
 
Dan, I've also seen that AMD CPUs are very sensitive to cache sizes. Could this be related to the small L1 cache used on Bulldozer? I think it's too small.
 
Who is to blame? Ruiz? Meyer? Both?

EDIT:
@aviat72. Thanks for your post. Very interesting.

IMHO: Ruiz has 101% of the responsibility for the immolation of what was once AMD. He has destroyed everything he has ever touched because he was always looking out for what he could get for himself rather than building shareholder value. Check his history and see. Who knows what backroom deal he made to keep himself from being Raj's cellmate for the next 11 years?
 
Dan, I've also seen that AMD CPUs are very sensitive to cache sizes. Could this be related to the small L1 cache used on Bulldozer? I think it's too small.

Well this was true with regard to the Athlon 64, Athlon X2, Phenom & Phenom II. However this may not be true of Bulldozer. It's a completely different architecture so we don't know for certain. Not yet anyway.
 