Picking brains - similarities between BD and the 2900XT situations...

Heavy_Nova

Just thinking out loud and curious about whether anyone else would be willing to share their thoughts. AMD's current situation with Bulldozer feels similar to what it went through several years back with the HD2900XT. If you guys recall, it suffered much of the same criticism and overall ho-hum-ness that BD is facing now.

However, the 2900XT also proved to be the precursor to a truly fantastic generation of GPUs that followed it... The 3800 series started cleaning the mess up, then the 4000 series kicked ass, followed by the 5000 series being pretty amazing.

AMD/ATi was trying to think outside the box with the 2900XT and got bit. The card didn't suck entirely, but it was not what folks were expecting. The biggest thing to me is that at least they're trying to be innovative. I think it's that spirit that led to the fantastic later generations of GPUs.

I see BD poised in a similar position. It's trying to break a mold and go about things a different way. It may seem misguided or foolish in some respects, but it's not entirely terrible, just not what folks were expecting. I'm excited about what AMD will bring to the table building on top of this technology, and I'm hoping they can pull off a turnaround similar to the one that followed the 2900XT.

Every time I think about BD I can't help but think about the 2900XT and what followed it. Anyone else find the situation similar and are hoping for a similar result?
 
I am hoping for a similar result, yes, but bear in mind the 2900XT's issues were helped a lot by the die shrink. BD will likely not have one for quite some time; AMD is stuck on its current process, so the huge die and power issues are going to be harder to solve. Perhaps once the process matures, by the time Piledriver comes, we will have those performance enhancements and the proper clocks. It sounds like AMD wanted these to ship at 4 GHz+ stock, but that didn't happen due to some issues.
 
If AMD can improve its design and increase IPC between now and next year... perhaps they can be slightly more competitive with Intel, if Ivy Bridge only offers the currently estimated increase of up to 10% over Sandy Bridge (perhaps 5% more IPC combined with a little more overclock headroom).
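To put rough numbers on that estimate (a back-of-envelope sketch; the 5% figures are just the guesses above, and per-clock and clock-speed gains are assumed to multiply):

```python
# Overall speedup ~= (1 + IPC gain) * (1 + frequency gain),
# since performance scales roughly as IPC * clock speed.
ipc_gain = 0.05    # guessed ~5% more IPC for Ivy Bridge (assumption from above)
clock_gain = 0.05  # guessed ~5% more clock/overclock headroom (assumption)

total_gain = (1 + ipc_gain) * (1 + clock_gain) - 1
print(f"combined gain: {total_gain:.1%}")  # ~10%, matching the estimate above
```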

Unfortunately I don't think we'll see a 6-core in the Ivy Bridge lineup, so if AMD can shore up Bulldozer's faults they may be able to catch up a bit.
 
I think the issue for AMD, though, is that they are in a much tougher position against Intel than ATI was against nVidia.

ATI and nVidia were on similar footing, as they were both fabless and competed with one another only on design.

The issue for AMD and its performance CPU business is that it is quite a bit behind on both fabs and design. Bulldozer had a small window of opportunity to be successful, but that window is most likely gone. With Intel's 22nm tri-gate Ivy Bridge around the corner, Intel will be out of sight for AMD, I am afraid.

I do wish them the best of luck, even though I am very pessimistic about their future in the CPU business.
 
Non-similarities:

nVidia wasn't trying for the latest fab node every gen.
Both shared the same fabs (TSMC, UMC).
The HD2900XT was faster than previous-generation GPUs.

GPUs have an extreme amount of design automation. In theory, BD does, too. However, GPUs are on roughly a 12-month cycle right now; CPUs, closer to 3 years. Not to mention... the G80 that whooped the R600 up and down the field was nVidia's most hand-designed shader core, and reputedly took the longest time (5 years) and the most money of any design nVidia had ever done.

AMD actually has an architectural advantage in their GPUs vs nVidia, in terms of performance per die area, due to nVidia pursuing GPGPU right off the bat with G80.

The HD2900XT was refreshed into the HD3870 in almost record time, thanks to the 55nm node being available.

Of course, all of the above are negative comments, since I'm generally a negative person :p
 
Is that why nVidia's tech beats AMD's tech in DX11 testing?

Not sure when you got that info :rolleyes:

You mean when Nvidia bullies developers into cranking tessellation up to levels that create massive amounts of artificial, useless work for the GPU? Is that what you're talking about? Bravo. :rolleyes:
 
The problem is that the video card market is much less sticky. You can swap out an Nvidia card for an AMD card without swapping motherboards. CFX and SLI changed that a tiny bit, but still. Comparing a VGA launch to a CPU launch is a stretch.
 
If AMD can improve its design and increase IPC between now and next year... perhaps they can be slightly more competitive with Intel, if Ivy Bridge only offers the currently estimated increase of up to 10% over Sandy Bridge (perhaps 5% more IPC combined with a little more overclock headroom).

Unfortunately I don't think we'll see a 6-core in the Ivy Bridge lineup, so if AMD can shore up Bulldozer's faults they may be able to catch up a bit.

How? They've just released the damn thing. It will take years until something new is ready. Next gen is supposed to have 3% IPC and something like a 10% frequency improvement.
 
I personally believe the next generation of BD processors will perform admirably.

IMHO, AMD knows dumping over 800 million transistors into the uncore was a poor and likely time-constrained (or desperate) decision. With the onset of yet ANOTHER generation of SB cores, they had to release SOMETHING to the market, and using the automated design process would allow for final silicon production much faster than hand-designing the circuitry. I have a feeling the uncore will be heavily reworked and will yield much better IPC/watt in Piledriver or whatever replaces BD.

I believe they will increase single-threaded IPC by 10-20% in the next generation of chips...

I also believe the process node will mature dramatically in the next year or two, further increasing IPC/watt, maximum clock speeds, and overall chip performance.

Finally, I believe that with the bad press BD received, a new chip (be it a revised BD, or PD) will be refreshed/released much quicker than the typical 3-year cycle, a la the way the TLB-bug Phenoms were replaced.

I do not believe it will beat IB's 3D transistors, as those are a major leap forward in transistor design... though I also don't believe IB will be dramatically faster than SB. I feel that if the above does not happen, PD will be as irrelevant as the Pentium Ds were.
 
...
I believe they will increase single-threaded IPC by 10-20% in the next generation of chips...

I also believe the process node will mature dramatically in the next year or two, further increasing IPC/watt, maximum clock speeds, and overall chip performance.

AMD declared they expect 3-5% from uarch tweaks (normal for a new stepping) and 10% from more frequency (process maturing).

How can you "believe" in something that not even the manufacturer commits to?
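For what it's worth, compounding AMD's own numbers the same way (a rough sketch, again assuming IPC and clock gains multiply):

```python
# AMD's stated expectations, per the post above
ipc_low, ipc_high = 0.03, 0.05  # 3-5% from uarch tweaks
clock_gain = 0.10               # ~10% from higher frequency

low = (1 + ipc_low) * (1 + clock_gain) - 1    # ~13.3% overall
high = (1 + ipc_high) * (1 + clock_gain) - 1  # ~15.5% overall
print(f"overall gain: {low:.1%} to {high:.1%}")
```

Even the best case is ~15% overall, with IPC itself only up 3-5%, nowhere near a 10-20% single-threaded IPC jump.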

Finally, I believe that with the bad press BD received, a new chip (be it a revised BD, or PD) will be refreshed/released much quicker than the typical 3-year cycle, a la the way the TLB-bug Phenoms were replaced.

I do not believe it will beat IB's 3D transistors, as those are a major leap forward in transistor design... though I also don't believe IB will be dramatically faster than SB. I feel that if the above does not happen, PD will be as irrelevant as the Pentium Ds were.

The last thing AMD needs is to do things in a hurry. K10 failed because AMD did it in a hurry after canning the two previous projects. BD is more of a fail than not after the 1st iteration was cancelled and the 2nd had to be done in as short a time as possible.
The result in both cases: underdelivering on the promises, low performance, high power consumption, and bugs in some cases.

I prefer them to approach things methodically, focus on a few key markets, and execute accordingly.
 
AMD declared they expect 3-5% from uarch tweaks (normal for a new stepping) and 10% from more frequency (process maturing).

How can you "believe" in something that not even the manufacturer commits to?

Easily. They MUST adapt or they will FAIL.

The last thing AMD needs is to do things in a hurry. K10 failed because AMD did it in a hurry after canning the two previous projects. BD is more of a fail than not after the 1st iteration was cancelled and the 2nd had to be done in as short a time as possible.
The result in both cases: underdelivering on the promises, low performance, high power consumption, and bugs in some cases.

I prefer them to approach things methodically, focus on a few key markets, and execute accordingly.

AMD will continue to focus and execute in their "few key markets"; I'm simply speaking on behalf of the enthusiast community that, admittedly, makes up very little of their revenue stream.

For AMD to stay relevant with enthusiasts at all, the conditions I outlined above must come to pass. I personally believe they will occur simply because they have to. Those 3-5% increases in uarch efficiency were quoted long before the dismal single-threaded performance numbers of BD were known to the general public. I simply believe, due to market pressure, that these numbers HAVE to be better.

AMD consistently shows that doing things in a hurry seriously hurts their ability to deliver a solid product, but this time (and only within the enthusiast sector) I believe two things are different: 1) they HAVE to perform much better ASAP, and 2) they're (hopefully) not canning any more architectures... HOPEFULLY they can allow BD to mature, as redesigning portions of the chip ought to take significantly less time than a complete architecture overhaul or re-draft.

The market has spoken and shown where AMD went wrong; perhaps correcting these specific areas (such as the massive uncore) will be a priority for the next iteration of chips.
 
A friend of mine and I were talking about this very same comparison this past weekend. The difference to me, though, is that ATI was upfront about the fact that they had a card to compete with the 8800GTS, not the GTX, and they priced it accordingly. With BD, it appears AMD claimed it would compete with and even beat SB, but that was not the case.
 