Piledriver cores will use resonant clock mesh

I know that my response doesn't completely apply here, but when Nvidia released their first incarnation of Fermi, it was a big mess. It was big, hot, power-hungry, expensive, and had very low yields. They stuck with it and have continued to improve the design, and it has been worth it. AMD may be able to do the same with their new CPU architecture.
The difference here is that Fermi was already as fast as, or usually faster than, the competition at the time. It was just hot, inefficient, and yielded very poorly due to leakage. The GTX 580 was really just a reworked GF100 with the leakage fixed, the extra shaders enabled, and the clocks raised. Bulldozer, conversely, isn't even in the same realm as the competition 90% of the time. In gaming the i3 is usually better than AMD's best chips in everything but very heavily multithreaded games, of which there aren't many yet. The architecture seems to work alright on software that's built for it, but for everything that isn't (read: almost all current software), it runs worse than the previous generation. And that previous generation was already far behind Intel in IPC. It's highly unlikely they can fix all of the problems in a single revision. But I'd love to be wrong.
 
True, let's just hope AMD perfects it before Intel switches.



I live next to a military base, so you can't have anything on your property that exceeds the height of your house, but wind power would definitely work here.

And only one well-placed generator in the base commander's office would be needed...
 
It's a forest-for-the-trees situation for so many short-sighted critics.

Quite obviously there is no "one single change will make you see God" kind of improvement left in modern processors (until a new breakthrough tech, say, quantum computing, takes place).

AMD is taking the "let's make all the improvements we can and add them up" approach: CPU/GPU parallel tech, resonant clock mesh, die shrink, dropping L3, etc.
 
I believe it is the Anand article linked above that mentions the strong likelihood that BD's design theory could have merit in the future. Maybe the miracle is that when that time comes, AMD will have had significant practice with that architecture. Of course, AMD has announced that they're getting out of the HPC segment and focusing on mobile and Fusion.

When did AMD announce that? There's no need for them to get out of HPC as you're saying.
They're just reorganizing, getting their stuff together for now.
 
The only thing that makes me wonder is that AMD seems to want to avoid SOI for some reason, to give themselves more flexibility, not rely so heavily on GF, and be able to choose other fabs for their CPUs. Maybe that's why AMD licensed the resonant clock mesh for their CPUs.
 
AMD is taking the "let's make all the improvements we can and add them up" approach: CPU/GPU parallel tech, resonant clock mesh, die shrink, dropping L3, etc.

Why would dropping L3 net improvements? I know L3 is a benefit to gaming; I couldn't tell you about other uses.
 
Why would dropping L3 net improvements? I know L3 is a benefit to gaming; I couldn't tell you about other uses.

Cost is the biggie. Having more cache always helps, but it generally helps even more in server applications, which is why Bulldozer was crammed with so much slow cache. On a desktop it makes less sense (though it can help in gaming) because the cache needs to be fast as well, whereas on a server you can get away with it being a bit slower if you've got more of it. Unfortunately, Bulldozer's wasn't a bit slower but quite a bit slower (the cache clock speeds weren't high enough).

Cache tends to be the most transistor-dense part of a CPU, so decreasing cache sizes makes for a less costly and less difficult chip to produce. It's why the Athlons had no L3 but roughly the same clock speeds and cheaper price tags. With APUs, that transistor budget is shifted toward the GPU portion of the chip, so combining L3 with an on-die GPU would make it difficult to manufacture (see GloFo's yield issues with Llano) and pretty damn expensive.
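The size-vs-latency trade-off above can be sketched with a back-of-envelope average memory access time (AMAT) calculation. Every number here (latencies in cycles, hit rates) is a made-up illustrative assumption, not a measured figure for any real AMD or Intel part:

```python
# Back-of-envelope average memory access time (AMAT), with and without L3.
# All latencies (cycles) and hit rates are illustrative assumptions only.

def amat(levels, dram_latency):
    """levels: list of (hit_latency, hit_rate) per cache level, innermost first.
    Accesses that miss every level pay the DRAM latency."""
    total = 0.0
    p_reach = 1.0  # fraction of accesses that reach this level
    for latency, hit_rate in levels:
        total += p_reach * latency   # lookup cost for accesses arriving here
        p_reach *= (1.0 - hit_rate)  # the rest fall through to the next level
    return total + p_reach * dram_latency

# Desktop-like workload: most accesses already hit in L1/L2, so a slow L3
# with a modest hit rate adds latency on every L2 miss for little payoff.
with_l3    = amat([(4, 0.90), (12, 0.70), (40, 0.30)], dram_latency=200)
without_l3 = amat([(4, 0.90), (12, 0.70)], dram_latency=200)

print(f"AMAT with a slow L3: {with_l3:.1f} cycles")    # ~10.6
print(f"AMAT without L3:     {without_l3:.1f} cycles") # ~11.2
```

With these desktop-ish assumptions the slow L3 barely moves the needle; crank the assumed L3 hit rate up toward server-style working sets and the same formula shows the big cache paying off, which matches the server-vs-desktop point above.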

Don't expect Vishera or Trinity to bring AMD to a level where they can contend for the performance crown. They've given up racing Intel in that respect. AMD will be looking to make cheap chips with great performance-per-dollar.
 