Why are ARM processors evolving so quickly?

Trackr

[H]ard|Gawd
Joined
Feb 10, 2011
Messages
1,786
I remember back in 2008, the Atom CPUs were released.

And today, almost 4 years later, we have marginally faster dual-core versions.

But ARM processors..

2007 - Single-core 220MHz
2008 - Single-core 330MHz
2009 - Single-core 660MHz
2010 - Single-core 1GHz w/ the GPU power of a GMA 950
2011 - Dual-core 1GHz/1.2GHz w/ the GPU power of a 6600 GT
2012 - Quad-core 1GHz+ w/ the GPU power of a 7600 GT
2013 - Octo-core 1-2.5GHz w/ the GPU power of a 9600 GT
2014 - Octo-core 1-2.5GHz w/ the GPU power of an 8800 GTX

Can someone explain this?

I don't think there has EVER been an industry where such growth was seen so quickly.

I mean, I get the low-power attraction.. but these things can't even run Windows.
 
Intel is scared? Naaahhh... But seriously, I DON'T want MSFT to be spoiled by the strong hardware, resulting in bloated and resource-hogging software. WP7 Mango was a good start, so is Windows 8.
 

This is based on what? Evolving fast is sticking in more cores?

They are way behind the state of the art. It is relatively easy to improve performance when you start so far from the state of the art, since you don't actually have to advance it to make progress; you can just sort of coast.

Also bear in mind that an 8800 GTX in 2014 isn't impressive. That will be 8 years after its debut. A smartphone today is more powerful than the most powerful supercomputer in my province when I was going to university.

Back to the actual CPU side, the real question is why Atom is such a piece of crap. This is mainly because Intel was so wrapped up in the war for the top end that they treated Atom as an afterthought.

Still, the fastest ARM chip is several times slower than an i5 for integer performance and hundreds of times slower for FP performance. If ARM designs actually started to push the envelope and rival x86 designs for integer/FP performance, they would slow down, because then they would have the same issues as any technology on the leading edge, where you actually have to advance the state of the art to make improvements.
 
The OP's example is wrong: Snapdragon appeared in phones in late 2009 (HD2/Xperia X10), and Tegra 2 devices did ship in 2010.
 
I think Windows 8 is going to make computing a lot more interesting once we have a third choice. I think GeForce 8800 GTX performance on board is very impressive, but by 2014 I would expect AMD to have 6950-class performance on the CPU :)
 
The OP's numbers are off.

In 2007 when the iPhone came out it was running a 620MHz ARM processor underclocked to 412MHz.
 
It's easy to evolve when you're already far behind the state of the art and are just leveraging new process nodes :p
 

Excellent post (although, a bit pessimistic).

That was actually my theory; that ARM CPUs are simply designed to maximize power efficiency by sacrificing a tremendous amount of performance.

So, I suppose what you're saying is - if Sandy Bridge sacrificed IPC to the extent that Tegra does, we could all have 64-core i7-2800ks by next month.

Well, then.. the only logical conclusion is that the reason to go more cores instead of more performance per core.. is marketing.

So, we're getting 8-core Tegra 4 because it sounds better than a faster 4-core Tegra 4.

It's kind of like AMD and their failing business of rivaling Intel's 4 cores with their 8 cores.

All in all, I guess I get excited when I think of an 8-core Tegra, because I want a CPU that can run in a tablet but be as powerful as a laptop. And I guess that was what Atom was going to be.. now ARM is here but it doesn't support x86..

I guess the only option now would be Windows 8.
 

If we go by future fantasy releases: in 2014 Intel is supposed to have an all-new Atom architecture on a 14nm process (Intel is the process king). I am betting it will be decently power-efficient and more powerful than an 8-core ARM. There is also an 8-core Atom in that time-frame, for servers, where it makes more sense.

NVidia is desperate for ARM to hit the laptop/desktop because, with integrated GPUs from Intel/AMD constantly improving, they are getting squeezed out of the laptop/desktop business. So they are eager to pump ARM performance any way they can, and the way they can do that without big architecture improvements is pumping the core count.

Which is great for the synthetic benchmarks in your advertising, but you get diminishing returns adding more and more cores. I have 4 cores on my desktop, but I don't see cores 3 or 4 activate unless I am encoding video/audio. Most workloads just don't have that much available parallelism. 8-core tablets/laptops are just silly for the foreseeable future.

The bigger news for ARM is better cores like the A15, which has something like 50% better IPC. I would rather have a dual-core A15 than a quad A9, even though four A9s will score higher DMIPS.
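The diminishing returns from piling on cores can be sketched with Amdahl's law (the 50% parallel fraction below is an assumed illustration, not a measurement of any real workload):

```python
# Amdahl's law: speedup from n cores when only a fraction p of the
# work is parallelizable. Illustrative sketch, not benchmark data.

def amdahl_speedup(n_cores, parallel_fraction):
    """Overall speedup with n_cores, given the parallelizable fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Suppose a typical interactive workload is only 50% parallelizable:
p = 0.5
for n in (1, 2, 4, 8):
    print(f"{n} cores -> {amdahl_speedup(n, p):.2f}x")

# Going from 4 to 8 cores adds little, while a core with ~50% better
# IPC (A15 vs A9) speeds up all the work, serial part included.
```

With half the work serial, the jump from 4 to 8 cores buys far less than the first doubling did, which is why a better core can beat more cores despite a lower DMIPS total.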
 
It all comes down to what you mean by evolving quickly. Top-of-the-line x86 so completely kills ARM in terms of raw horsepower that it's still a woolly mammoth compared to an elephant. That said, ARM is powerful enough to run a lot of smaller applications, which are really the bulk of what the average person would ever use.

But in a couple of years, with Windows going smaller and Intel going more power-efficient, ARM and x86 should be very close in power efficiency.
 
ARM is in a strange position where having "fewer" features has won them the popularity contest. ARM's appeal is simply its barebones design. I mean, they (ARM) are just now getting around to having more than a 3-stage pipeline. And their instruction set is still overly simplistic (even more so than any other comparable RISC architecture such as MIPS or PowerPC).

The thing is, while newer ARM architectures are currently being developed, the things they are just getting around to adding are features that have been found in every microprocessor made in the last 15 years. If you really look into it, there's very little innovation going on there. They're simply adding stuff as they go with this ground-up approach.

Also worth noting is that ARM is still a fabless company. They don't make chips; they just license their tech to whoever wants it. This lets them focus all their efforts on research and design instead of getting their hands dirty in material science and the whole economics of fab processing.

Intel, in contrast, is working from the top down. They're taking all of their collective innovation accumulated over the past decades and trying to cram what they can into a reasonably small chip while keeping it within a certain power envelope. Basically, Intel isn't trying to reinvent the wheel. They're perfectly happy with their x86 instruction set architecture. Atom was their first attempt at going super low power. And I think it was a success.

So you've got two companies with widely different histories, working completely polar-opposite approaches toward a common goal: to build a highly efficient yet extremely low-power microprocessor. Nvidia's Tegra and AMD's Fusion just muddy the waters up a little more because of their GPGPU bias.
 

It seems to me that everyone is trying to reach the main bulk of mainstream consumers.

AMD is doing it with their CPU/GPU monolithic chips.
Intel is doing it with their low-power (compared to desktop x86 chips) Atom processors.
ARM is doing it by ditching x86 and going ultra-low-power.

The first gets desktops.
The second gets laptops/netbooks.
The third gets mobile devices.

AMD has competition from Intel in the desktop division, so their CPU/GPU approach might not work.
So, it leaves Intel vs. ARM.

A 14nm Quad-Core Atom vs a 22/32nm ARM Octo-Core.. seems like the Atom will win.

Except that Atom has no GPU.. so they'll have to put one in and sacrifice one or two cores.

Which means that no matter what, ARM will always be more power-efficient but will get less so as it chases better IPC, while Atom will keep the same IPC and get more power-efficient on smaller fabrication processes.

They seem to be coming from opposite sides and trying to hit the same mark.
I think that, like the PS3, Atom has a "burden" (the x86 instruction set), but that it will win in the end.

That is, IF Intel puts the effort into it. Currently, ARM is putting in a lot more.
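That "coming from opposite sides" point can be put into rough numbers with a throughput model of perf ≈ cores × clock × IPC (every figure below is a hypothetical placeholder for illustration, not a spec from either vendor):

```python
# Crude throughput model: perf ~ cores * clock * relative IPC.
# All figures are hypothetical placeholders for illustration only.

def rough_perf(cores, clock_ghz, relative_ipc):
    """Aggregate throughput, ignoring memory, thermals and scaling losses."""
    return cores * clock_ghz * relative_ipc

# Hypothetical 14nm quad-core Atom: fewer cores, higher per-core IPC.
atom = rough_perf(cores=4, clock_ghz=1.8, relative_ipc=1.0)

# Hypothetical 32nm octo-core ARM: more cores, lower per-core IPC.
arm = rough_perf(cores=8, clock_ghz=1.5, relative_ipc=0.5)

print(atom > arm)  # the quad can come out ahead despite half the cores
```

The point of the sketch is only that core count alone settles nothing; per-core IPC and clock can more than make up a 2x core deficit.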


I don't know..

ARM is less powerful and less versatile. But most consumers don't need power or versatility either.

Atom is more powerful and more versatile. But with Windows 8, does that matter?

But the thing is.. Atom already has a lower IPC than desktop CPUs.

For instance, the original N270 was about 1/3 as powerful per clock as a Pentium E2180.

So Atom is already playing the ARM game - less performance, less heat.
 

Intel Atom SoCs use PowerVR licensed designs that many ARM chips use as their graphics core, so essentially Atom graphics = ARM graphics.
 

Oh, I wasn't aware. My last Atom was a 330, and its onboard GPU was housed in the chipset's northbridge.

But nVidia is readying faster and faster ARM GPUs every year. How can it compete?
 
I think you are paying too much attention to NVidia Marketing.

PowerVR isn't standing still either. They have several faster designs coming and Tegra 3 that just came out from NVidia still hasn't caught up with PowerVR in the iPad 2 from back at the beginning of the year.
 

So Tegra 3 has twice the CPU speed but less GPU performance?

And.. yeah, I guess I'd like to believe nVidia "marketing" when they claim they'll have a GPU with 128 SPs (assuming) by 2014.
 
Well it has twice the CPU core count. That is not the same as twice the CPU speed.

The main point is that NVidia is not all that for mobile graphics. Their latest, just-out-the-door Tegra 3 still has slower graphics than PowerVR Series5, which has been shipping for over 8 months.

So it will take yet another version just to catch up if PowerVR stands still, and they aren't standing still. PowerVR Series 6 is coming soon, and from the leaks it sounds like a beast (they tend not to release questionable future roadmaps).

This all points back to the Atom SoC and graphics, which should have no worries if Intel just keeps using PowerVR designs.

My opinion of NVidia PR is that they tend to over-hype and under-deliver (or deliver late).

For the foreseeable future I would bet on PowerVR at least keeping up.

Graphics won't be the Atom's Achilles' heel (drivers are the hiccup).

Edit: Previous NV roadmap:
http://www.geek.com/wp-content/uploads/2009/06/tegra_roadmap.jpg

Note Tegra 3 was supposed to ship in 2010 and it didn't really show up until December 2011...
 

I actually didn't really know much about PowerVR prior to this thread.

I do, however, question whether they can beat the biggest GPU manufacturer in the world.
 

You mean like ARM "keeping up" with the biggest CPU manufacturer in the world in Mobile? PowerVR is essentially the ARM of Mobile GPUs.
 
Indirectly related to this thread

1. While x86 and ARM are "discussing" what they mean to each other, MIPS has re-entered the ring... with Android...

Alpha, PA-RISC, MIPS, Power, Itanium, Sparc, ARM ...

The architectures Itanium sought to displace are, one by one, finding a path to swing back into action... though obviously it is challenging now for Alpha (whose line lives on at AMD, with many people of Alpha heritage) and PA-RISC (the line Itanium itself replaced)... until all these elementals run their courses...
 

Yeah, but no one else is trying to make ARM CPUs.

If Intel got into the ARM field, they'd probably win.

This is nVidia here. In any case, competition right? That has to be good for us.
 

Um, AMD has never been a part of the Alpha architecture, nor has Intel been a part of PA-RISC.
AMD processors are all x86 and the Intel Itanium is IA-64, not RISC of any kind.

Alpha was owned by DEC until they were bought out by Compaq and HP later on, AMD was never a part of this deal.
PA-RISC was owned by HP, this has nothing to do with any Intel processors.


Another thing, Alpha processors haven't been used in years.
It would be impossible for them to compete with the POWER7 processors as even now, higher-end x86 chips have near the performance of POWER7.
 
Some of the original Alpha heritage (technology, ideas, human expertise, etc.) went over to AMD, and the Opteron with HyperTransport and many other associated things were born from it. They have received further development up to today...

HP was on PA-RISC, then linked up with Intel to get Itanium going, dropping its own PA-RISC development in the process. It is understood HP has a core relationship with Itanium development...

But that is not so important any more, since now:

1. Maybe the original Alpha-influenced team at AMD is much smaller now...
2. The latest Itanium development has been in the news lately. Even HP is now preparing some kind of exploratory plan ahead, per recent news...

3. MIPS is still operating all these years after SGI dropped it from their server/workstation lineup. However, with the recent Google Android Ice Cream Sandwich support, they have again found a possibility in the consumer scene... when single-core proves viable, dual-core will come; when demand picks up, quad-core will soon be interesting. Currently the biggest outlook for MIPS is its use in Far East designs. Since many designs are already paying licensing fees to MIPS for full-scale compatibility, it appears to be well accepted from now on.

4. Oracle, as you know, is running SPARC now... Oracle is keeping a lot of IT people employed...

5. There is a reason why IBM is a 100-year company...

----------------------

The following is a side-note:

1. Many years ago, when HP absorbed Compaq and triggered floods of changes, many had opinions. I won't touch on details here, just the situation. This was the Carly era.
2. You can imagine many strong opinions.

3. Years later, when HP tried to reverse course and prepared to swing again, many had opinions. Again, the situation rather than the details. This is the current era.

4. Interesting note... immediately the entire x86 industry asked HP not to do it, even Intel...
4.1 Read the forums for opinions...
4.2 Hence, Carly is vindicated! :) Microsoft/Intel admit it is better for HP to stay within the ring...
4.3 However, the original issues are still valid, so the counter-argument camp is vindicated as well. HP faces a challenging situation when a quick software response is needed.
4.4 There's no argument on the software side, since HP is originally a hardware-oriented engineering firm (the common conceptual view, though I admit I'm not sure).

5. HP is tied to Microsoft and Intel in its current form... even Intel and Microsoft know it.
6. From here we understand that software expertise takes time, dedication, and other challenging circumstances. Look at Microsoft's maximum effort in trying to gain traction in Search, Cloud, Mobile, etc... Carly is vindicated again, because the entire industry knows it takes real effort to meet these challenges...
6.1 Again, there's no free lunch: you gain something, and you lose some flexibility somewhere else...
 
As far as I can tell, DEC only licensed out some of the bus and memory architectures to AMD during the Athlon XP era systems and CPUs.

I'm not saying you're wrong, but I've never heard of any of this before.

Do you have any documentation to back your claims about HP/Intel and DEC/AMD?
If so, I'd love to read them. :)
 

From Wikipedia:
In August 1999, AMD released the Athlon (K7) processor. Notably, the design team was led by Dirk Meyer, who had worked as a lead engineer on multiple Alpha microprocessors during his employment at DEC. Jerry Sanders had approached many of the engineering staff to work for AMD as DEC wound down their semiconductor business, and brought in a near-complete team of engineering experts.
 
I actually didn't really know much about PowerVR prior to this thread.

I do, however, question whether they can beat the biggest GPU manufacturer in the world.
You do know PowerVR ship about 1 million GPUs a day? They have something like 70% of the mobile market and are far in advance of Nvidia in mobile GPU tech. Year after year Nvidia has been a good generation or two behind. To put it in perspective, PowerVR's last-gen phone GPU can beat Nvidia's just-released Tegra 3 tablet GPU.

PowerVR are not a new company: they powered the Dreamcast, which at the time was graphically ahead, and they power the Sony PS Vita, among many other devices going back years.

http://uk.gamespot.com/infinity-blade-ii/videos/infinity-blade-ii-visuals-video-6346438 - that's on an almost 1-year-old PowerVR mobile GPU and is far in advance of what Nvidia can do in the same market.
 
I see the facts. But if nVidia pulls their R&D into this.. they'll win hands down.

That's all I'm saying.
I disagree. PowerVR has something like 80% of the company in R&D, year in, year out. Can Nvidia even match that? The way Nvidia is set up, I don't think they can even get 50% of the company into R&D.

Plus PowerVR has far too many advantages over Nvidia; for example, a tile-based deferred rendering system that is vastly more efficient. Nvidia is not going to win hands down, as it's going to be one hell of a fight just to match PowerVR, let alone win.

EDIT: Some evidence: http://www.inspiredgeek.com/wp-content/uploads/2011/11/image26.png
That's PowerVR's old Series 5 chips, which are about to be replaced with Series 6, against Nvidia's 2012 chips.
 
ARM is able to do this because they have previously been non-aggressive when it comes to high-performance designs. They have ignored easy improvements like deeper pipelines, dual-issue, SIMD, advanced branch predictors, out-of-order execution and large caches because these all sap power and die area.

But thanks to innovations in the last 5-10 years (the perfection of clock-gating and near-instantaneous frequency changes, amongst other things), there's suddenly a huge power budget. And since ARM has been ignoring all these fancy features, there's plenty of die space for more advanced cores (and more of them). Adding a core costs you zero idle budget thanks to aggressive clock-gating, and since the cores take a pittance of die space there's plenty of room to add as many as you want :D

In terms of "whiz-bang" features, there's a lot of easy fruit to be picked, and the performance gains are very similar to that seen by Intel in the 1990s:

Fully-pipelined 486 processor, the first x86 processor with L1 cache (1989)
Dual-issue Pentium with advanced branch prediction (1993)
Out-of-order multi-issue Pentium Pro with back-side-bus cache (1995)
Introduction of MMX (1996) and SSE and on-die L2 cache (1999).

In a ten year span most of the important cutting-edge processor features were integrated into the x86 architecture. This despite the fact that these technologies were new and relatively unproven, and everyone claimed that making a pipelined, multi-issue out-of-order x86 processor was impossible.

Today all this "advanced" technology is old-hat, and discussed in-depth in any reputable computer architecture textbook. With the introduction of the modern Smartphone market with the iPhone, suddenly there is a need for a high-performance processor. ARM is only too happy to provide these "advanced" features if it means they can charge extra for the chips, and the features are relatively easy for a well-traveled chip company like ARM to add to their product lines.

You wait and see: after the Cortex-A15, these massive performance gains will likely hit a wall. Cores more advanced than the A15 will take many years to develop, and since the power budget has been consumed, performance gains will be constrained to new process nodes. Unless there's another major breakthrough in power consumption, I expect the ARM market to settle in and match Intel's current pace soon.
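Once the easy architectural fruit is picked, gains from process nodes alone compound slowly; a rough sketch (the ~1.4x-per-node figure is a classic rule-of-thumb assumption, not a roadmap number):

```python
# If each new process node delivers ~1.4x performance and nodes arrive
# every ~2 years, gains compound far more slowly than the 1990s-style
# leaps from adding pipelining, out-of-order execution, SIMD, etc.
# The 1.4x factor is an illustrative assumption.

def compounded_gain(per_node_gain, nodes):
    """Total speedup after a number of process-node transitions."""
    return per_node_gain ** nodes

print(compounded_gain(1.4, 3))  # ~2.7x after three shrinks (~6 years)
```

Compare that ~2.7x over six years with the roughly order-of-magnitude jumps the feature list above delivered in a similar span.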
 
Because Intel had to create the processes to create these fast demons. Once the framework was done, it's easier for these other companies to follow in their footsteps and use their manufacturing methods.

I do like the performance vs. low power, though. I like having tons of computing power in the palm of my hand. My phone has more computing power than my computer did 10 years ago. I can't wait to see what it's like 10 years from now. Both from a tablet/phone POV and a PC POV.
 
Adding a core costs you zero idle budget thanks to aggressive clock-gating, and since the cores take a pittance of die space there's plenty of room to add as many as you want :D

Clock-gating does nothing to eliminate leakage current, which has become the predominant power consumer in deep sub-micron processes.
 
ARM is in a unique position because in certain arenas applications are very "focused". That focus means designers are looking to squeeze down everything (power, cost, etc). For example, there are now good Cortex-A9 cores in FPGAs (the Xilinx Zynq device, which I am using). The core isn't anything amazing... but it is small, cheap, and low-power. Pretty much what I'm after day in and day out.
 
Clock-gating does nothing to eliminate leakage current, which has become the predominant power consumer in deep sub-micron processes.

True, but a lot of modern designs can go further and actually turn the processor off completely. I didn't state this clearly in my post, but it's what I intended. Sorry about that.

Intel's Nehalem (and derivatives) have six C-states, including:

C0 - active. Dynamic and leakage power
C1 - clock-gating. Only leakage power
C3 - partial core shutdown. Partial leakage power
C6 - near-complete core shutdown. Near-zero leakage

See here (page 4):

http://cs466.andersonje.com/public/pm.pdf

In the C3 and C6 states you get reduced leakage because portions of the core are actually powered down. You can bet that ARM has similar features, and given that, it would be an almost-zero power hit to add cores to the die, assuming they spend most of their time in C3/C6-equivalent states.

AMD's Llano and Bulldozer also have similar support for C1 clock-gating and C6 with a near-complete core shutdown.
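The state list above boils down to which power term each C-state removes; here's a toy model of it (the capacitance, voltage, and leakage figures are invented for illustration, not measured values):

```python
# Toy CMOS power model: P = dynamic + leakage.
#   dynamic ~ a * C * V^2 * f   (switching power; gone when clock-gated)
#   leakage ~ V * I_leak        (flows whenever the power rail is up)
# All numbers are invented for illustration, not measurements.

def core_power(active_fraction, voltage, freq_ghz, i_leak, powered=True):
    """Power of one core in a given state of this toy model."""
    if not powered:                      # C6-style power gating: rail off
        return 0.0
    dynamic = active_fraction * 1.0 * voltage**2 * freq_ghz
    leakage = voltage * i_leak           # unaffected by clock gating
    return dynamic + leakage

V, F, IL = 1.0, 2.0, 0.5
print(core_power(1.0, V, F, IL))                 # C0: dynamic + leakage
print(core_power(0.0, V, F, IL))                 # C1: clock-gated, leakage remains
print(core_power(0.0, V, F, IL, powered=False))  # C6: rail off, ~zero
```

The C1 line is the earlier objection in this thread: clock gating zeroes the dynamic term but the leakage term survives until the rail itself is cut.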
 
Yeah, the first Dell Axim, released 9 years ago, had a 300MHz ARM processor. The over-7-year-old X30 had a 624MHz ARM processor.

ARM designs move pretty much in line with other developments in processors. There has been slow and steady progress since 1995: http://en.wikipedia.org/wiki/List_of_ARM_microprocessor_cores

The GPU claims are often for simplistic triangle throughput, and not raw shading performance or comparable real world gaming/computing performance. Also, lacking the very high memory and interface bandwidth of PCIe desktop GPUs, the mobile GPUs paired to ARM cores have their own sets of compromises. I'm not dissing the recent ARM processors. They pack a very good performance/power ratio for mobile devices.
 

Thanks for the info, great list as well.
That PDF was awesome, good to learn about the S and C states.
 
It is a lot easier to add features others invented than to invent them yourself, and more or less that is all ARM does. It is also really all about the explosive growth of smartphones, which used to be a niche product for business users and are now a mainstream product, as people decided Facebook was ever so important to have in real time. The whole deal with ARM is they just need to keep power use down and add performance; Intel has to get power use down but already has the performance.

I think someone else mentioned your timetable is bogus as well. Without looking anything up, I know my phone had a 500MHz CPU in 2008.
 