The difference here is that Fermi was already as fast as, or usually faster than, the competition at the time. It was just hot, inefficient, and yielded very poorly due to leakage. The GTX 580 was really just a rework of the GF100 GPU to fix the leakage, enabling the extra shaders and upping the clocks. Bulldozer, conversely, isn't even in the same realm as the competition 90% of the time. In gaming, the i3 is usually better than AMD's best chips in everything but very heavily multithreaded games, of which there aren't many yet. The architecture seems to work alright on multithreaded software that is built for it, but for everything that isn't (read: almost all current software), it runs worse than the previous generation. And that previous generation was already far behind Intel in IPC. It's highly unlikely they can fix all of those problems in a single revision. But I'd love to be wrong.

I know that my response doesn't completely apply here, but when Nvidia released their first incarnation of Fermi, it was a big heap of a mess: big, hot, power-hungry, expensive, and with very low yields. They stuck with it and kept improving the design, and it has been worth it. AMD may be able to do the same with their new CPU architecture.