Intel CEO Takes On Apple's A7

During Intel's earnings report, CEO Brian Krzanich had a few things to say about Apple's new 64-bit A7 chip.

All of our products are 64-bit. The products we're shipping today are already 64-bit. And if you take a look at things like transistor density. And if you compare, pardon the pun, apples to apples, and compare the A7 to our Bay Trail, which has a high-density 22-nanometer technology, then our transistor density is higher than the A7 is.

The A7 is a good product, but we do see the Moore's Law advantage from 28 [nanometer] to 22, when you compare dense technology to dense technology. And we believe 14 nanometers is just another extension of Moore's Law. That is, twice the density [of 22-nanometers].
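
For anyone who wants to sanity-check the "twice the density" line, here's a rough back-of-the-envelope sketch (my own arithmetic, not Intel's numbers; treating node names as literal feature sizes and assuming ideal square-law scaling, both of which are simplifications):

Code:
# Rough sanity check of the density claim (my own arithmetic, not Intel's numbers).
# Assumption: ideal transistor density scales with the inverse square of the node size,
# which marketing node names only loosely track.
def ideal_density_gain(old_nm, new_nm):
    """Ideal density multiplier when shrinking from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

print(round(ideal_density_gain(28, 22), 2))  # ~1.62x for 28nm -> 22nm
print(round(ideal_density_gain(22, 14), 2))  # ~2.47x for 22nm -> 14nm

So the "twice the density" figure is roughly what you'd expect once real-world scaling falls a bit short of the ideal square law.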
 
macbook air powered by a7 cpu and limited edition gold finish for only 1699.99 FTW!
 
Nice paragraph of Techno-Babble there by the Intel guy. 95% of the buying public would have given up reading it by sentence number 2.
 
A whole lot of comparison regarding transistor density more than anything else...
 
At least this is better than what Intel was saying about AMD's 64-bit chips just a few years ago:

Intel: "You don't need 64-bits on the desktop." Intel ran an entire advertising campaign around that pithy little jingle.....Doubt that will ever be forgotten....;) (All because Intel had no 64-bit desktop cpu but AMD did, while Intel was trying instead to push 64-bit Itanium on a world that didn't want it.)

Now Intel's faced with a real reason for using the slogan, as in "You don't need 64-bits in a cell phone! DUH!"

...but of course, Intel doesn't use it appropriately this time, either....Intel can't ever seem to get the 64-bit thing straight, can it?...;)

It boggles the mind that anyone might care in the slightest what Apple puts in cell phones, anyway...! Rip Van Winkle says: "Wake me when cell phones become interesting enough to talk about!"
 
A whole lot of comparison regarding transistor density more than anything else...

Although it's not in the plug, that's what the question was originally about, which was basically, "why are you spending all these resources to go to 14nm when Apple is getting pretty good performance out of 28nm?"
 
It boggles the mind that anyone might care in the slightest what Apple puts in cell phones, anyway...!
every processor manufacturer has had to directly respond to the A7 but it's probably no big deal
 
Although it's not in the plug, that's what the question was originally about, which was basically, "why are you spending all these resources to go to 14nm when Apple is getting pretty good performance out of 28nm?"

Makes sense, though the guy does ramble a bit about it instead of being concise.
 
It boggles the mind that anyone might care in the slightest what Apple puts in cell phones, anyway...!


Because the A7 is an ARM CPU and ARM CPUs are now being put in servers.

An ARM CPU can saturate a 1Gbps Ethernet connection with a very low power drain, so why would you bother using an Intel CPU?
 
Now Intel's faced with a real reason for using the slogan

When the article about Apple going 64-bit was posted on this site, I made a comment that basically said that going 64-bit actually has advantages beyond memory addressing, especially for modern media phones (in comparison to what was going on in desktops), considering the microcontroller heritage of ARM (not to mention paving the way for the future). Then there was an article where Intel basically said "64-bit is pointless for phones," to which I made a post saying they were full of shit and just being competitive, and that the move also lends weight to the rumors about Apple switching its laptop/desktop chips away from Intel many years down the road. Now we have this article where they state that "all" of their chips are 64-bit and instead claim superiority in process size.

So, if you're following that trend, it's pretty clear Intel is sore about the move, which to me only supports my original stance that it's an important step for Apple and the ARM market. Keep in mind I don't own any Apple products (I generally dislike the company), don't have an Android phone, and my CPUs are mostly Intel, so I'm being unbiased on this. I'm just setting the record straight as far as Intel changing its mind three times in as many weeks in response to the issue, which shows they really don't have a clue about how to handle it.
 
So, if you're following that trend, it's pretty clear Intel is sore about the move, which to me only supports my original stance that it's an important step for Apple and the ARM market. Keep in mind I don't own any Apple products (I generally dislike the company), don't have an Android phone, and my CPUs are mostly Intel, so I'm being unbiased on this. I'm just setting the record straight as far as Intel changing its mind three times in as many weeks in response to the issue, which shows they really don't have a clue about how to handle it.
I agree, and maybe it's just me, but I think it has been blatantly obvious for quite some time just how miffed Intel is about the changing tech market. They need to get on the train to compete.
 
I agree, and maybe it's just me, but I think it has been blatantly obvious for quite some time just how miffed Intel is about the changing tech market. They need to get on the train to compete.

But they just sold all their ARM and mobile ventures and left the train.
 
Although it's not in the plug, that's what the question was originally about, which was basically, "why are you spending all these resources to go to 14nm when Apple is getting pretty good performance out of 28nm?"

My theory:

In x86, the expensive part of processor design is the decode unit. Complexity grows faster with the number of decoders than it does in a pure RISC implementation, because the fetch unit has to handle instructions not falling on clean boundaries, and of course there are the complexities of microcode (the worst ARM has to deal with is Thumb). AMD followed a similar design practice with Jaguar, opting to optimize its existing 2-wide decoder rather than face the power penalty of going 3-wide.

If this assertion is true, Intel's best solution in terms of die area and power was to continue with a 2-wide design that can turbo for short periods. Also, if you can design the chip for higher clock speeds, it becomes easier to clock it up and put it into low-end desktops/nettops/notebooks. Hence the rumors about Atoms being re-branded as Pentiums/Celerons.

http://www.techspot.com/news/52767-...atom-processors-as-celeron-pentium-chips.html
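
To illustrate the boundary problem, here's a toy sketch (hypothetical encodings only, nothing like real x86 or ARM decode hardware): with fixed-width instructions every decoder slot is known up front, while with variable-length encoding each instruction's start depends on decoding the one before it, which is part of why widening an x86 front end costs more.

Code:
# Toy model of the instruction-boundary problem (hypothetical encodings, not real ISAs).

def decode_fixed(blob, width=4):
    """Fixed-width 'RISC-style' fetch: every boundary is known up front,
    so N decoders could each take a slot in parallel."""
    return [blob[i:i + width] for i in range(0, len(blob), width)]

def decode_variable(blob):
    """Variable-length 'x86-style' fetch: here the first byte gives the length,
    so instruction N's start isn't known until N-1 has been (at least partially) decoded."""
    insns, i = [], 0
    while i < len(blob):
        length = blob[i]          # pretend byte 0 encodes the instruction length
        insns.append(blob[i:i + length])
        i += length               # the next boundary depends on this decode
    return insns

print(decode_fixed(bytes(range(12))))
print(decode_variable(bytes([2, 0xAA, 3, 0xBB, 0xCC, 1])))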
 
Brian Krzanich comes from an engineering background, not a business background. He hasn't learned to speak corporate bullshitspeak yet.
Your assertion is that people from engineering backgrounds are incapable of rambling?
 
macbook air powered by a7 cpu and limited edition gold finish for only 1699.99 FTW!
lol

If Apple does move some of its OS X products to ARM eventually, it will be interesting to see what they wind up charging for it. The pricing could be very flexible, for Apple at least. I agree it may be expensive for an ARM laptop.
 
.10nm is going to be the brick wall they hit. If they go to .14 the next logical step is .10 and then that's it unless some new tech to go beyond that happens and even then you are getting into spaces that might get squirrelly to control.
 
Intel already has 7nm under development, which is scheduled for release in 2017. From Intel's process technology blogs, Intel seems to be strongly hinting it will go to tunnel FETs (TFETs) before it moves to III-V channels on Si.

I don't think anyone is going to .10nm any time soon. ;)
 
he forgot the most important comparison factors: the competition is GOOD ENUFF! & COSTS LESS!
 
I don't think the transistors are the densest things at Intel...

Intel has made some idiotic mistakes over the past 6-7 years, and that's why x86 is becoming irrelevant for most of the consumer world. They created the Atom core in 2007, but sat on it for over 5 mfing years before finally releasing Silvermont in 2013. They decided to have the SoC designs lag their desktop + laptop core process by a year or more.

They are fixing all these things, but I just think it's a little too late. At the end of the day, somebody can build an ARM device more cheaply that performs at par with, and possibly beyond, what Intel's designs can. Legacy support might be something the enterprise needs, but the stranglehold Windows and x86 had on the consumer market is over.

Intel isn't going away by any means, but their growth years are probably behind them.
 