Piledriver cores will use resonant clock mesh

The Inquirer = take with a huge grain of salt

Let's all hope that AMD can get back into the game.
 
It's been confirmed by AMD, FireBean.

The results are either a 10% decrease in total power consumption at a given frequency or a 10% higher clock at the same frequency without said feature.

It looks as if it may help, but it'll be nowhere near that 24% mark, and perhaps that 10% figure is optimistic since it depends on a specific frequency and will likely fluctuate depending on how they address the fact that the tech favors one particular clock speed. Still, any improvement is an improvement =P
 
It's been confirmed by AMD, FireBean.

The results are either a 10% decrease in total power consumption at a given frequency or a 10% higher clock at the same frequency without said feature.

Did you mean "...or a 10% higher clock at the same power draw without said feature," or do I know even less than I think I do? Because I thought clocks and frequency were used interchangeably to refer to the same thing. (clocked at X.X GHz/frequency = X.X GHz)
 
Yea, that's what I meant. Sorry =P

+10% higher clock at same power envelope or
-10% power consumption.

The issue seems to be just how they'll handle the way the tech behaves across various frequencies, because from what I've gathered it performs best at one specific frequency.
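
As a very rough sketch of why a power saving and a clock bump are roughly interchangeable (this assumes a simple first-order dynamic-power model and made-up numbers, not anything AMD has published):

# Rough first-order sketch (my own numbers, not AMD's): dynamic power
# scales as P ~ C_eff * V^2 * f, so at a fixed voltage a ~10% power
# saving from the clock mesh can instead be spent on ~10% more clock.

def dynamic_power(c_eff, volts, freq_ghz):
    """Very rough dynamic-power estimate in arbitrary units."""
    return c_eff * volts**2 * freq_ghz

baseline  = dynamic_power(c_eff=10.0, volts=1.2, freq_ghz=4.0)
with_rclk = baseline * 0.90   # the claimed ~10% saving

# Option A: keep 4.0 GHz and pocket the 10% power reduction.
# Option B: raise the clock until we are back at the baseline power budget.
freq_b = 4.0 * (baseline / with_rclk)
print(f"Option A power: {with_rclk:.1f} vs baseline {baseline:.1f}")
print(f"Option B clock at the same power: ~{freq_b:.2f} GHz")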
 
lol amd does need 2 save power ;)

Intel 3900 series uses like 550 watts overclocked under full load.

How is that any different? Power doesn't concern me one bit. What concerns me is IPC and core scheduling. Fixing those on the next line of processors, blowing Intel out of the water, and bringing real competition back is more important than the next low-power chip.
 
Intel 3900 series uses like 550 watts overclocked under full load.

How is that any different? Power doesn't concern me one bit. What concerns me is IPC and core scheduling. Fixing those on the next line of processors, blowing Intel out of the water, and bringing real competition back is more important than the next low-power chip.

And the 3900 series has no competition in raw performance. AMD was struggling to beat its own previous-gen 45nm part in both performance and power consumption.
 
This was presented at ISSCC on Monday (with die photos), so it isn't exactly a rumor. I may be reading too much into the presentation, but the part that concerns me is that most of the data presented talked about frequencies of 4 GHz or less, with the peak efficiency around 3.3 GHz. The paper does mention power savings from 3.2 GHz to 4.4 GHz, but in turbo mode Bulldozer already clocks higher than that.
 
We all know that AMD fucked up with Bulldozer.
But I like what they are trying to do. They are constantly trying to lower power consumption on all their stuff lately. Where I live, just a few days ago they raised electricity prices by a huge 30%.
And I don't think prices will go down either.
I don't care who has the best CPU as long as it is good enough.
 
We all know that AMD fucked up with Bulldozer.
But I like what they are trying to do. They are constantly trying to lower power consumption on all their stuff lately. Where I live, just a few days ago they raised electricity prices by a huge 30%.
And I don't think prices will go down either.
I don't care who has the best CPU as long as it is good enough.

Yep, that was another reason for me getting the heck out of California. I was paying 30-something cents a kWh. Now in eastern Washington I pay 6 cents a kWh, and it drops to 3 cents a kWh after 700 kWh. But I'm still stuck in the power-efficiency mindset. Hopefully AMD can fix the IPC issues, since I like the modular design they are going for, because I don't need an i7 2600K or a 3900 series, but I also don't want something that's as fast as my current system but consumes twice the amount of power.
 
Hmm, well, we shall see, won't we! Half the problem is power consumption; the other half is "price".

If FX were a lot more competitive it might get a look in.

I'm FX ready, but I can't say I'm inspired by what's on offer.
 
You guys need to make your own solar panels and get off the grid =) JK

Would be nice, but this far north solar is useless during the winter, and I have a two-story house right behind mine which blocks the sun that time of year.
 
But it's not funny.
Soon you will see Intel take AMD's path as well.
AMD is headed in the right direction right now.
Let's see how well they do.
 
But it's not funny.
Soon you will see Intel take AMD's path as well.
AMD is headed in the right direction right now.
Let's see how well they do.

True, let's just hope AMD perfects it before Intel switches.

Just use Wind Power Generators :p

I live next to a military base, so you can't have anything on your property that exceeds the height of your house, but wind power would definitely work here.
 
Would be nice, but this far north solar is useless during the winter, and I have a two-story house right behind mine which blocks the sun that time of year.

I don't know what part of the globe you inhabit, but Germany arguably has the most advanced photovoltaic panels on earth. Outside of winters in the Arctic Circle, it (Germany) has some of the 'un-sunniest' weather of any country.

I can only imagine what those Krauts would be able to develop if they lived in my neck of the woods (Australia). Hell, if Australian scientists had half the funding our northern hemisphere neighbours have, the efficiency of photovoltaics would be such that consumers could tell our coal-spewing energy providers to fu*k 'em selves. That's if you could get the egg-heads out of the pub long enough...

On PD, a 10% gain, whether it's spent on increased clocks or a slightly lower voltage, combined with a more mature node and architectural tweaks, might give us something to look forward to.
 
The results are either a 10% decrease in total power consumption at a given frequency or a 10% higher clock at the same frequency without said feature.
It's a bit more complex than that. The paper discussing how it works is here: http://www.eecs.umich.edu/eecs/about/articles/2012/ISSCC_2012_Piledriver_final_submission.pdf

The implementation, as it applies to the PD core, is that:
Over the frequency range 3.0 GHz to 4.4 GHz, the power savings from rclk enable either a frequency increase of about 100 MHz for the same power, or a power reduction of 5-10% for the same frequency.
which is good, but modest. The "increase of about 100 MHz for the same power" statement on, say, a 4 GHz processor shows how modest the power savings are at full load (2-3%).
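
For a quick sanity check of that 2-3% figure (assuming, first order, that at a fixed voltage dynamic power roughly tracks frequency, so equal-power clock headroom maps about 1:1 to power savings; the model is mine, only the 100 MHz figure is from the paper):

# Quick sanity check of the "about 100 MHz for the same power" claim.
# First-order assumption (mine, not the paper's): at fixed voltage,
# dynamic power roughly tracks frequency.

base_freq_mhz = 4000
extra_headroom_mhz = 100   # per the paper quote above

savings_fraction = extra_headroom_mhz / (base_freq_mhz + extra_headroom_mhz)
print(f"Implied full-load power saving: ~{savings_fraction * 100:.1f}%")
# prints ~2.4%, in line with the 2-3% ballpark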

Geeky cool, but not really living up to the theinq hype.
 
So in a few words they are just adding more complex stuff to the already fucked-up design that they have. lol
 
Makes me wonder why they are still keeping that design since it's so hard to tweak the stupid thing.
Why can't they just go with something simpler?
A final blow to AMD would be for Intel to implement that silly design but actually make it work. lol
 
At $0.30/kWh it might make sense to install solar! In my neck of the woods the typical payback on solar is 10 years, but that's at a third or less of your energy cost.

Does this tech add significantly to the transistor count? I'm guessing no, so concerns about complexity are probably misguided.
 
At $0.30/kWh it might make sense to install solar! In my neck of the woods the typical payback on solar is 10 years, but that's at a third or less of your energy cost.

Does this tech add significantly to the transistor count? I'm guessing no, so concerns about complexity are probably misguided.

I am not talking about the resonant clock mesh.
What I am talking about is the design that they initially had with Bulldozer.
I say scrap the whole thing. Why insist on something that is so hard to tweak?
 
Why can't they simply admit that they failed and make something different?
That design just seems to work for servers but not for gaming.
 
It takes years and years of development to design a CPU. Scrapping it and starting over from scratch isn't an option.
 
Why can't they simply admit that they failed and make something different?
I know that my response doesn't completely apply here, but when Nvidia released their first incarnation of Fermi, it was a big heap of a mess. It was big, hot, power-hungry, expensive, and had very low yields. They stuck with it and have continued to improve the design, and it has been worth it. AMD may be able to do the same with their new CPU architecture.
 
Why can't they simply admit that they failed and make something different?
That design just seems to work for servers but not for gaming.

I think things will be much better when they fix their high-latency, low-bandwidth cache. I believe this is robbing the cores of their processing power.
 
I do agree with what you guys have to say.
But there is also another possibility: if this design does not work out in the end, it will set them back even more. Struggling with a design that simply is not easy to work on is not an option either.
 
I think things will be much better when they fix their high-latency, low-bandwidth cache. I believe this is robbing the cores of their processing power.

The issue is that the slow cache is a "feature" that allows BD to clock so high. In general, higher cache latency allows for higher clock speeds, and since BD was a server chip it was more about how much cache it has than how quickly that cache performs. Obviously it didn't work out in the server space either, due to the same issues that plague it on the desktop: poor IPC and that -20% CMT tax, which, if it can be improved, will likely net the biggest performance gains, particularly across heavily threaded workloads.

Considering some Trinity chips are breaking the 4 GHz barrier, I'd have to say they're likely relying on tweaking the L1 size, the WCC between the L1 and L2, and maybe decreasing the overall size of the L2. You should instinctively question the design when you see stock clock speeds that high; it's indicative of a still all-too-long pipeline and likely still-slow cache. I'd think the IPC may have gotten closer to Llano's, while any performance gains will be derived from the new instruction sets and the difference in clock speed.
 
Obviously it didn't work out in the server space either, due to the same issues that plague it on the desktop: poor IPC and that -20% CMT tax, which, if it can be improved, will likely net the biggest performance gains, particularly across heavily threaded workloads.

I thought it was doing OK in the server segment?
 
http://www.anandtech.com/show/5279/the-opteron-6276-a-closer-look

The disappointing results in the non-server applications is easy to explain as the architecture is clearly more targeted at server workloads. However, the server workloads show a very blurry picture as well. Looking at the server performance results of the new Opteron is nothing less than very confusing. It can be very capable in some applications (OLTP, ERP, HPC) but disappointing in others (OLAP, Rendering). The same is true for the performance/watt results. And of course, if you name a new architecture Bulldozer and you target it at the server space, you expect something better than "similar to a midrange Xeon".
 
I do agree with what you guys have to say.
But there is also another possibility: if this design does not work out in the end, it will set them back even more. Struggling with a design that simply is not easy to work on is not an option either.

In all fairness though, designing CPUs in general is not easy.
 
550 watts would be for a whole system including that CPU, no?

Yes, for the average system. I know that 550 watts of power is a lot of juice for a lot of people.

That system would include one processor, one hard disk or SSD, a DVD/Blu-ray drive, and a motherboard.

That is A LOT of power for such simple components. The chip alone at 4.8-ish GHz is using like 250 watts.

However, I have heard reports of the 3960X using 500 watts alone at 5.0+ GHz, though I have no lab setup to test such claims. That is a freaking HUGE amount of juice for just a CPU.
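
Those overclocked numbers aren't as crazy as they sound once you factor in the voltage bump that usually goes with a big overclock. A rough sketch, using a first-order f*V^2 scaling and round numbers I made up (not measured values):

# Rough sketch of why overclocked power figures balloon: dynamic power
# scales roughly with f * V^2, so a clock bump usually rides on a
# voltage bump and the two multiply.  Stock/OC points below are my own
# round figures for illustration, not measurements of any real chip.

def relative_power(freq_ghz, volts, ref_freq_ghz, ref_volts):
    """Power relative to a reference operating point, first order."""
    return (freq_ghz / ref_freq_ghz) * (volts / ref_volts) ** 2

stock = (3.3, 1.20)   # GHz, volts -- hypothetical stock point
oc    = (5.0, 1.45)   # GHz, volts -- hypothetical overclock

print(f"~{relative_power(*oc, *stock):.1f}x the stock power draw")   # about 2.2x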
 
This was presented at ISSCC on Monday (with die photos), so it isn't exactly a rumor. I may be reading too much into the presentation, but the part that concerns me is that most of the data presented talked about frequencies of 4 GHz or less, with the peak efficiency around 3.3 GHz. The paper does mention power savings from 3.2 GHz to 4.4 GHz, but in turbo mode Bulldozer already clocks higher than that.
This is implementation specific. The drop in efficiency is likely due to approaching the limits of configurable LC resonance:
An LP formulation was used to determine inductor allocation from a palette of 5 with values in the 0.6 to 1.3 nH range
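
A minimal sketch of why the benefit is tied to a frequency band: the mesh plus inductors form an LC tank that resonates at f = 1 / (2*pi*sqrt(L*C)), and the inductor palette only covers so much range. The 0.6-1.3 nH values are from the paper; the per-sector capacitance below is a made-up placeholder chosen only so the numbers land in a plausible ballpark, not a figure from the paper:

import math

# LC tank resonance: f = 1 / (2 * pi * sqrt(L * C)).
C_SECTOR = 2e-12  # farads -- hypothetical value, NOT from the paper

for l_nh in (0.6, 0.8, 1.0, 1.3):
    l_henries = l_nh * 1e-9
    f_res = 1.0 / (2 * math.pi * math.sqrt(l_henries * C_SECTOR))
    print(f"L = {l_nh:.1f} nH -> resonance near {f_res / 1e9:.2f} GHz")

# With this assumed capacitance the palette tunes roughly 3.1-4.6 GHz.
# Outside whatever band the real design can reach, the mesh no longer
# resonates near the clock frequency and the savings fall off.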
 
Yes, for the average system. I know that 550 watts of power is a lot of juice for a lot of people.

That system would include one processor, one hard disk or SSD, a DVD/Blu-ray drive, and a motherboard.

That is A LOT of power for such simple components. The chip alone at 4.8-ish GHz is using like 250 watts.

However, I have heard reports of the 3960X using 500 watts alone at 5.0+ GHz, though I have no lab setup to test such claims. That is a freaking HUGE amount of juice for just a CPU.

Hell, an i7 920 @ 4 GHz will pull 500 W at the wall (whole system) without the GPU being under load.
 
They need a miracle to pull this one off with Piledriver.

I believe it's the Anand article linked above that mentions the high likelihood that BD's design theory could have merit in the future. Maybe the miracle is that by the time that comes, AMD will have had significant practice with the architecture. Of course, AMD has announced that they're getting out of the HPC segment and focusing on mobile and Fusion.
 