Haswell article online @ RWT

pxc

Page 4 and page 6 have the most interesting info for performance. http://www.realworldtech.com/haswell-cpu/4/

The 2 biggest changes are moving from 6 execution ports on SNB/IVB to 8 on Haswell, and AVX2/FMA. The latter doubles peak SIMD throughput over SNB/IVB. There's also transactional memory (TSX), which could make MT applications more efficient, and changes to the ring bus which should help iGPU performance.

Haswell isn't a small tweak from SNB/IVB.
 
Thanks for the heads-up pxc, I always enjoy David Kanter's articles. As someone who digs energy-efficient designs, I was already looking forward to Haswell. Now I'm looking forward to the performance gains too. I'll be sure to shop around for a new laptop with Haswell when they ship.
 
Is this a typo?

"we estimate that a Haswell core will offer around 10% greater performance for existing software, compared to Sandy Bridge."

I had expected 10% higher than Ivy, not Sandy.
 
I had expected 10% higher than Ivy, not Sandy.
Yeah, I guess he meant to say Ivy Bridge. Even though IVB was a 'tick' (process change rather than architecture) processor it had architectural improvements as well so 10% faster than SNB sounds no better than IVB.
 
Yeah, I guess he meant to say Ivy Bridge. Even though IVB was a 'tick' (process change rather than architecture) processor it had architectural improvements as well so 10% faster than SNB sounds no better than IVB.

Yeah, that's what I thought too. However, the rest of the article is so technologically spot on that I figured that Kanter wouldn't make such an obvious mistake.
 
Yeah, that's what I thought too. However, the rest of the article is so technologically spot on that I figured that Kanter wouldn't make such an obvious mistake.

Anand mentioned 10-15% bump over Ivy and that's what I recall from other sites/writers, so Kanter probably made a slight typo there.

I love Kanter articles :)
 
Pretty decent write up..

Turning to the microarchitecture, the Haswell core has a modestly larger out-of-order window, with a substantial increase in dispatch ports and execution resources. Together with the ISA extensions, the theoretical FLOPs and integer operations per core have doubled. More significantly, the bandwidth for the cache hierarchy, including the L1D and L2, has doubled, while reducing utilization bottlenecks. Compared to Nehalem, the Haswell core offers 4× the peak FLOPs, 3× the cache bandwidth, and nearly 2× the re-ordering window.

This is the most exciting thing, and for anyone that doesn't wanna read the whole article, sorta sums up the most interesting stuff..
 
Anand mentioned 10-15% bump over Ivy and that's what I recall from other sites/writers, so Kanter probably made a slight typo there.

I love Kanter articles :)

I'll take a 10%+ over Ivy on conventional software with the expectation that the developers will sooner or later catch up with Haswell's more advanced features. However, the buzz seems to be that the Q2 '13 intro will be mostly laptop parts which excite me as much as nude centerfolds of Oprah. Any idea when the really good Haswell desktop CPUs will be launched (and I don't mean on LGA2011 since Intel is postponing that until shortly after the invention of warp drive...)
 
I'll take a 10%+ over Ivy on conventional software with the expectation that the developers will sooner or later catch up with Haswell's more advanced features. However, the buzz seems to be that the Q2 '13 intro will be mostly laptop parts which excite me as much as nude centerfolds of Oprah. Any idea when the really good Haswell desktop CPUs will be launched (and I don't mean on LGA2011 since Intel is postponing that until shortly after the invention of warp drive...)

If it follows the Sandy and Ivy release cycle, then 45W laptop CPUs would come first, with 35W parts following, and then the higher-end desktop models - i5s and i7s - with the i3s coming even later.

I wouldn't hold my breath waiting for developers to recompile and take advantage of Haswell's architectural improvements. That sort of thing takes 3-5 years to come to fruition, so don't go out buying Haswell and expecting the AVX2 ISA to immediately make an impact.

My biggest question is just how high will these models clock? We've heard Intel give IPC estimates ranging from 10-15% on average, but...

First, Haswell’s Last Level Cache (LLC) has been enhanced for performance. Each slice of the LLC contains two tag arrays, one for data accesses and one for prefetching and coherency requests. The data bandwidth is still 32B/cycle, since the ring bus has not widened. The memory controller also has a larger re-ordering window, yielding better throughput for write data.
Second, the ring and LLC are on a separate frequency domain from the CPU cores. This enables the ring and LLC to run at high performance for the GPU, while keeping the CPUs in a low power state. This was not possible with Sandy Bridge, since the cores, ring and LLC shared a PLL, and wasted power on some graphics heavy workloads.

They've decoupled the LLC from the core clock speeds, meaning it'll run slower for the CPU but offer better GPU-dedicated performance. This might bring in a case of the Bulldozer L3 aches, but given the overall improvements it probably shouldn't be drastic in the overall scheme of things. It does point to Haswell being an Ultrabook-first type of CPU design, sacrificing L3 access speeds for iGPU gains.

haswell-5.png


It looks great but there are a few lingering questions I have...

1 - What are the clock speeds compared to Ivy? My instinct tells me they've either held steady or even dipped, judging by the sheer size of the on-die GPU and the now-massive 40 EUs in the GT3 models. Considering Intel hasn't said a word about clock speeds, I'm inclined to say they held steady in order to maintain the same TDP threshold set by Ivy. Unlike Bulldozer, whose architecture was envisioned to gain almost strictly from clock speeds in single-threaded workloads, Haswell appears to be an IPC-first design, with clock speeds taking a back seat. Performance is performance, so a gain in either one while maintaining the same TDP (or, in this case, with the CPU's total power output decreased) is a great thing and I'm not complaining :)

2 - The price!

Haswell will be the first big x86 core to compete against ARM-based cores in tablets. While the performance will be dramatically higher, the power budgets are very different. Haswell SoCs will reach 10W, while competing solutions are often closer to 4W. The real question is the relative efficiency of Haswell SoCs, and the advantage of the massive x86 software ecosystem. Fortunately, Windows 8 provides an opportunity to accurately measure performance and efficiency. The results will inject some hard data into discussions that have been otherwise vacuous and largely driven by marketing.

Rumors are that Haswell will likely be even more expensive than Ivy was upon release. In order to compete with tablets, Intel will need to play ball with the pricing as well. People don't care if Haswell will run circles around an A15 if it consumes 4x as much power and the Haswell SoC costs as much as, or more than, an entire tablet like the Nexus 7. Simply put, is this going to be another case of Ultrabooks, such that it's priced absurdly high and out of contention? If I'm buying an ultraportable device, you can bet your ass I'm not looking to spend over $1000 on it, especially if I'm going to be using a neutered 10W part that offers half the performance of a competing 35W laptop at a much higher price.

3 - 10W is still too high for a tablet, though it's within the realm of passive cooling with a hefty heatsink. Are there any lower-TDP SKUs?

4 - Related to the one above, Intel has shown that the 10W Haswell chip with GT3 will offer the same graphics performance as a 17W Ivy ULV, but what's the CPU performance scaling like? What I mean is, what's the overall % drop in performance going to lower TDP ranges? And secondly, are we going to see the same throttling issues that plague Ivy ULVs when both ends of the chip are loaded? It's quite fine to advertise a 2.7GHz Turbo clock speed, but if the CPU can't hit that clock speed for more than a fraction of a second when gaming, spending most of its time at idle instead, then the issue of stock clock speeds becomes an even more important question to ponder.
 
3 - 10W is still too high for a tablet, though it's within the realm of passive cooling with a hefty heatsink. Are there any lower-TDP SKUs?

I would say it's likely. Intel already offers cTDP down on select mobile parts, so it's feasible they could drop below the 10w nominal TDP. All you really need for a beefy tablet is to get power below 5-6w, and that's probably possible.

I like the looks of Haswell, but it will be a bit of time before software makes use of those massive FMA improvements. Hell, I'm still waiting for GROMACS to support the AVX unit on my Sandy processors, so I probably won't bite on this one, but the groundwork has been laid so we may see better support shortly!
 
Probably. Intel already offers cTDP down on select mobile parts, so it's feasible they could drop below the 10W nominal TDP. All you really need for a beefy tablet is to get power below 5-6W, and that's probably possible.

But then what's the price? If it still costs 200-300+ then what's the point?

And what's the scaling like? Microarchitectures are made to scale optimally within certain TDP ranges, and they see diminishing returns when they step outside that range (think of the decreasing perf-per-watt you get with Ivy Bridge past ~4.2GHz).

And how would that play with the Atom? Intel isn't making huge revisions and architectural changes to the Atom line if they're just going to dump it altogether come 2014.

I think 2013 is going to be an interesting year for Intel. I just really hope they don't price themselves out of contention here and the current rumors regarding pricing and SKUs isn't true.
 
Fascinating article. You don't find many of these write-ups for each microarchitecture anymore like we once had. Sounds pretty spot on. Small boost out of the gate, overall longevity with the added instructions and optimizations. Really depends on how fast programmers want to move.

I'm really hoping for that general 10-15% boost, as I've heard it's closer to 5-10%, and Ivy was a solid 5% in most cases, not the 10% they promised, so it's safe to split the difference and stay watchful. I plan to have Haswell for 5-6 years before my next upgrade.
 
If there is gonna be a bunch of laptop Haswells before the i7s then with Intel's new laissez-faire attitude (since they have no one nipping at their i7 Haswell heels now that AMD is moribund) I'm thinking that we might not see any i7 Haswells until Ivy is on LGA2011 at the end of '13 (at the earliest). Looks like I'm gonna be still milking my 2600K next year at this time. Too bad as I really wanted/needed an upgrade in the next few months. :(
 
The 10% is for existing software. As new compilers are tuned to take advantage of Haswell (which may also benefit other processors), the figure can go higher. Other than microbenchmarks and SPEC, that will probably take a couple of years.
 
3 - 10W is still too high for a tablet, though it's within the realm of passive cooling with a hefty heatsink. Are there any lower-TDP SKUs?
10W is and isn't too high for a tablet, particularly if you stretch out weight to 2lbs. The more important figure I'd worry about is low power "active" state, not just the new "keep the screen alive while the CPU sleeps" low power modes. If it can go down below 1W-1.5W, it could work out OK. Obviously a 10W Haswell will stretch the boundaries of the typically ARM-based tablet form factor, and won't be going into an iPad mini-like 0.66 lb design.

Lower power SKUs would be the realm of the newer Atom chips. On 22nm and finally with a new out of order execution core (and updated graphics), those Atoms should be much faster than current Atoms. Considering that, I'm sure Intel could make an Atom-like power Haswell, but it wouldn't perform any better than an Atom. Performance and/or lower power aren't free.
 
10W is and isn't too high for a tablet, particularly if you stretch out weight to 2lbs. The more important figure I'd worry about is low power "active" state, not just the new "keep the screen alive while the CPU sleeps" low power modes. If it can go down below 1W-1.5W, it could work out OK. Obviously a 10W Haswell will stretch the boundaries of the typically ARM-based tablet form factor, and won't be going into an iPad mini-like 0.66 lb design.

Lower power SKUs would be the realm of the newer Atom chips. On 22nm and finally with a new out of order execution core (and updated graphics), those Atoms should be much faster than current Atoms. Considering that, I'm sure Intel could make an Atom-like power Haswell, but it wouldn't perform any better than an Atom. Performance and/or lower power aren't free.

10W is still too high to be passively cooled in a standard tablet form factor. We might finally see Ultrabooks without the need for fans to go along with their heatsink (if the heatsinks are cleverly enough designed), but fitting a 10W TDP Haswell chip inside a run-of-the-mill tablet isn't going to happen.

There's also the issue of battery capacity and price. Haswell will chew up a lot more power in SoC form at full load than will ARM-based SoCs or Atoms, and even AMD's Temash. Though the (now deeper) sleep states will stretch idle battery life under a majority of workloads, Intel and the OEMs will still need to account for the full-load disparity between the two architectures. It'll be awesome to get 10 hours of battery life at idle, but if you're only getting 3 hours watching video or 2 hours gaming then it's still a problem.

These are definitely Ultrabook chips, even the 10W SKUs. But the fact that Intel has managed to get the TDP down so low means there's definitely hope for some future chips clearing the sub-5W hurdle.
 
10W is still too high to be passively cooled in a standard tablet form factor. We might finally see Ultrabooks without the need for fans to go along with their heatsink (if the heatsinks are cleverly enough designed), but fitting a 10W TDP Haswell chip inside a run-of-the-mill tablet isn't going to happen.

There's also the issue of battery capacity and price.
10W is not what it draws under typical use. If typical use is down in the 1W-1.5W range (via significant voltage and clock speed reductions*), it should be OK.

It would definitely require a design with a fan, or at the very least contact with a large surface like the back of the device if it's made of metal.

* if, for example, the GPU is rated at 3W (say 800MHz, 1.1v) and the CPU/uncore is rated at 7W (say 1.4GHz, 1.1v), downclocking the GPU to 200MHz @ 0.7v (0.5W) and the CPU to 400MHz @ 0.7v (1.3W) yields a reduction in power to roughly 1.8W, for example**. Brief periods of full CPU speed may be common, but it wouldn't run that way continuously under "normal loads". Certainly a tablet with a 10W CPU may not be suitable as a DTR if you require full speed most of the time, but it should definitely be designed to handle the load if needed. With this hypothetical example, a 30Wh battery should be able to power a tablet (SoC Haswell, display, SSD storage) under "typical use" for almost 8 hours, and a little under 2 hours if the CPU and GPU are loaded 100% continuously.

** typically this will be much less because the SoC will run in some lower power state when idle, and those figures are maximums for the clock speed and voltage, however atypical that situation is. This is just to illustrate how a typical load uses much less power than 10W.

(Small aside... if you own a handheld iOS device, you know how quickly some games eat the battery life... and that's with a low power ARM SoC!)
 
You're assuming it scales down to under 10W TDP. Thus far, Intel has only mentioned a 10W variant.

The "typical use" range wouldn't matter. The chip is going to sit idle 95% of the time during use, but it's still rated at a 10W TDP because that's the cooling requirement at full load, so that's what OEMs have to account for when designing the device it's sitting in.

You also can't just underclock it, bin it and lock the multiplier at whatever clock speeds/vcore you want. If that were true you'd be able to get your desktop chip down to 5W as well, but there's a range of available TDP thresholds where a chip can't cross over due to process/architectural reasons. It's a lot more complicated than just "drop the clock speed by another 500mhz and the vcore as well" and, boom, you've got your sub-10W part.
 
I take no particular issue to owning a tablet with a fan. Realistically, the fan would only necessarily need to be running during demanding situations where a slight amount of noise and vibration (which could probably be rendered imperceptible with a finely-balanced, high-quality fan) is probably acceptable. The only area of concern would be the necessity of vents, which takes something that could in theory be sealed to something that is no longer sealed. Practically speaking, though, I don't think this is of great concern.

I think it's worth noting, too, that we shouldn't get too terribly wrapped up in what the TDP is. It's the rating that suggests to what extent cooling is necessary, not a typical power consumption figure.
 
You're assuming it scales down to under 10W TDP. Thus far, Intel has only mentioned a 10W variant.
You don't seem to understand how power consumption works. Look at any of the Intel data sheets to see how much power a processor uses (max) under various operating states. Hint: a 17W processor doesn't always use 17W, for example. :p

A 10W Haswell SoC will likely have a sub-2W active state, as I described above. In various lower power states, it should use sub-1W while displaying content and waiting for something to happen (input, network activity, etc).
 
You don't seem to understand how power consumption works. Look at any of the Intel data sheets to see how much power a processor uses (max) under various operating states. Hint: a 17W processor doesn't always use 17W, for example. :p

A 10W Haswell SoC will likely have a sub-2W active state, as I described above. In various lower power states, it should use sub-1W while displaying content and waiting for something to happen (input, network activity, etc).

Sure I do, you just don't understand what TDP is and that it's not power consumption. TDP is how much the cooling in a given device (or HSF) will have to dissipate for that specific processor. That's not power consumption, that's heat. Heat is the issue here.

A sub-2W active state is fine, but at load it's chewing up significantly more power and it's still rated for 10W TDP, meaning in a tablet that's just not going to work as there's not enough space to dissipate that heat. Intel can't just underclock their chips to reach any TDP they want and slap them in a smartphone or tablet because it doesn't work like that. You need appropriate cooling for a given processor and Haswell is still too beefy/hot to fit in a typical tablet form factor. Once they hit that sub-5W TDP (currently it's at 10W lowest) then OEMs can think about putting them within a tablet with no fan.
 
Intel can't just underclock their chips to reach any TDP they want and slap them in a smartphone or tablet because it doesn't work like that.
Why not? Why can't you drop the CPU multiplier to whatever you want and have a correspondingly lower TDP?

And as defaultluser already mentioned, Intel have configurable TDP (cTDP) now so just dial in whatever you want...

Anand: Ivy Bridge Configurable TDP Detailed
 
Why not? Why can't you drop the CPU multiplier to whatever you want and have a correspondingly lower TDP?

And as defaultluser already mentioned, Intel have configurable TDP (cTDP) now so just dial in whatever you want...

Anand: Ivy Bridge Configurable TDP Detailed

Because architecturally (and by process) you're still limited. It's the same reason you can't take your desktop chip and lower its TDP by underclocking/undervolting until you reach sub-5W levels. At some point you hit a wall where, any lower, the chip either isn't powering up anymore (baseline vcore) or the performance is abysmal and you're better off opting for another architecture.

There's always a set range for the TDP thresholds, and for Haswell the bottom end is 10W, such that binning any lower doesn't make sense for Intel.

For example, Ivy is binned at 17W for the ULV parts. Any lower isn't feasible, so that's the floor. Likewise, it's also binned at 77W for the high-end desktop variants (and it'll be different for server, but then again the entire die is).

You get a range, but it's still a limited spectrum. For Ivy (and Sandy), Intel was able to drop the TDP down to 17W in consumer products and still retain a healthy bit of performance, though those parts still throttle from hitting the TDP threshold often, particularly under combined GPU and CPU load (gaming), where the CPU often runs at idle clocks to give the GPU more headroom - so you still take a performance hit because the TDP threshold is too constrictive. With Haswell that range has dropped to 10W on the low end, but 10W is still too high for a typical tablet form factor. It's definitely feasible for a passively cooled laptop-type design, or something slightly thinner than a Surface Pro with a fan, or no fan but beefier passive cooling.

--

For any given architecture there's an optimal range, and if you stray outside of it you'll get significantly diminishing returns, whether that's going up or down in clocks and vcore. For example, you can't overclock an Ivy 3770K to infinity, because at some point it hits its ceiling regardless of what type of crazy cooling you've got on hand. Contrastingly, Bulldozer has very high potential clocks, but because of the chip size and sheer number of transistors it also chews up a whole lot of wattage and requires massive vcore bumps, so you can't overclock it forever either. That same concept holds true for the lower end with respect to binning and underclocking/undervolting.

This is why AMD and Intel design chips at ~35/45W and then bin from there. Haswell is Intel's first foray into sub-35W designs (for their "main" architecture and consumer products), with 17W as the design focus, and they'll be binning from that 17W. Despite that seemingly low 17W focus, it's still limited to 10W on the bottom end.
 
Anand mentioned 10-15% bump over Ivy and that's what I recall from other sites/writers, so Kanter probably made a slight typo there.

I love Kanter articles :)

One point Kanter makes is that Haswell will go into areas where neither Sandy Bridge nor Ivy Bridge made any inroads at all - SoC is, in fact, a major encroachment by Haswell. (In other words, it is Haswell/a derivative of Haswell that will replace Clover Trail, which is, in fact, Atom's successor; this would indeed explain the lack of designs featuring Clover Trail to date).

The last time Intel went this wide with a CPU design was, in fact, Conroe/Kentsfield followed by Wolfdale/Yorkfield - remember, both went from entry-level notebooks to servers - still, according to Kanter, Haswell will not succeed *just* Sandy Bridge, Ivy Bridge, or even Clover Trail/Atom - Haswell will have differences in implementation by target - some implementations will lack some of the features he outlines due to their sector not needing them. That's where the rubber will, in fact, meet the road with Haswell - product differentiation based on the common Haswell core.
 
The price is still too high and the chip is still too big, though. Intel is well aware of that and they've already lined up Atom successors. Look:

intel_atom_roadmap_anand.jpg


They had their Atom roadmaps well defined even prior to their laptop/server lines.

Haswell is certainly their first dip into "SoC everywhere," if you will, with the VRMs and chipset moving on-die/on-package. But it's still locked as far as price, TDP and power consumption are concerned. It can be stuck into tablets, but those tablets will cost a pretty penny and still consume a good bit of power under load. Remember, 10W is really low, but it's still double the sub-5W target that current tablets are gunning for.

Haswell's design goal was that 17W Ultrabook segment. Though the SKUs and TDPs will vary, they're still a bit too high (and it's too costly) for the ultramobile tablet/smartphone market, which is exactly where the Atom fits in.

I don't see Haswell being utilized in a lot of traditional tablets but rather in large convertibles with a fan. That really limits the product line and implementation. I do think it's going to address certain quirks with the Ultrabook form factor, but the biggest one - price - will probably go up rather than down. I do think Haswell will provide very, very good performance at that 17W level, probably even reaching the 35W Ivy parts.

If we do see Intel's big cores reach into sub-5W levels, it's not going to happen with Haswell. Broadwell perhaps, but not until 2014. It is definitely the first step into ultramobile by Intel with respect to their main architecture, but it's not a solution, rather a stopgap until they're truly able to drop their TDP down further. Thus far, and according to their own roadmaps, their main "big core" architecture isn't looking to replace their Atoms.

I still think their biggest hurdle going forward will be price. You can promise all the performance in the world, but if your chip costs 5-10x as much as an ARM SoC, the OEMs and consumers will pick the ARM SoC. And though Intel has shown they're willing to battle on price in the Atom line, they've also shown they'd rather let fabs run idle at 22nm than lower prices and decrease margins on Ivy.

Mixed signals :p
 
The price is still too high and the chip is still too big, though. Intel is well aware of that and they've already lined up Atom successors. Look:

intel_atom_roadmap_anand.jpg


They had their Atom roadmaps well defined even prior to their laptop/server lines.

Haswell is certainly their first dip into "SoC everywhere," if you will, with the VRMs and chipset moving on-die/on-package. But it's still locked as far as price, TDP and power consumption are concerned. It can be stuck into tablets, but those tablets will cost a pretty penny and still consume a good bit of power under load. Remember, 10W is really low, but it's still double the sub-5W target that current tablets are gunning for.

Haswell's design goal was that 17W Ultrabook segment. Though the SKUs and TDPs will vary, they're still a bit too high (and it's too costly) for the ultramobile tablet/smartphone market, which is exactly where the Atom fits in.

I don't see Haswell being utilized in a lot of traditional tablets but rather in large convertibles with a fan. That really limits the product line and implementation. I do think it's going to address certain quirks with the Ultrabook form factor, but the biggest one - price - will probably go up rather than down. I do think Haswell will provide very, very good performance at that 17W level, probably even reaching the 35W Ivy parts.

If we do see Intel's big cores reach into sub-5W levels, it's not going to happen with Haswell. Broadwell perhaps, but not until 2014. It is definitely the first step into ultramobile by Intel with respect to their main architecture, but it's not a solution, rather a stopgap until they're truly able to drop their TDP down further. Thus far, and according to their own roadmaps, their main "big core" architecture isn't looking to replace their Atoms.

I still think their biggest hurdle going forward will be price. You can promise all the performance in the world, but if your chip costs 5-10x as much as an ARM SoC, the OEMs and consumers will pick the ARM SoC. And though Intel has shown they're willing to battle on price in the Atom line, they've also shown they'd rather let fabs run idle at 22nm than lower prices and decrease margins on Ivy.

Mixed signals :p

ARM's real advantage is that it's not CISC (Atom is); that's why it's such an energy sipper. ARM in tablets is a semi-scaled-up smartphone CPU - Atom, however, (and its successors) are CISC/x86, with all the baggage that entails (and most of that baggage argues against energy sippage to the degree that tablets and slates require).
 
ARM's real advantage is that it's not CISC (Atom is); that's why it's such an energy sipper. ARM in tablets is a semi-scaled-up smartphone CPU - Atom, however, (and its successors) are CISC/x86, with all the baggage that entails (and most of that baggage argues against energy sippage to the degree that tablets and slates require).

Depends on the core design we're discussing here.

http://www.engadget.com/2012/11/29/samsung-exynos-5-linux-benchmarks/

The benchmarks are all over the place, but that's to be expected with an OS, kernel, and software stack that don't yet support it as well as x86, which has had years of maturation. Nevertheless, it gives you a good idea of how well it performs under such a strict TDP.

http://www.phoronix.com/scan.php?page=article&item=samsung_exynos5_dual&num=1

Samsung has an 8-core part for next year using ARM's big.LITTLE approach, where they couple big cores, like the A15, with smaller cores. The A15 is supposed to be followed up by the A57, a 64-bit-compatible chip, in 2014, which is built to clock to 2-3GHz in higher-TDP variants.

It's the baggage that x86 has to bring along in order for true legacy support that ultimately detracts from it in the low power, high volume space. Consider that people using ARM are using a completely different set of software that is generally tended to and recompiled on a more regular basis than is x86. As a result of software lagging behind for years in the x86 space, and years of that sort of baggage adding up, you really need a higher powered x86 CPU in order to compete with an equivalent ARM chip. That's why Intel is pushing their core architecture into progressively lower TDPs.

Consider that if you can't run your workstation applications, and even high-powered gaming, on an x86 SoC, would you really consider it x86 at all? That's the sort of stigma Intel faced with their original Atoms. It's not as if they weren't powerful enough for most people - they were. It's that you're asking them to do different tasks than you ask of your tablet, and as a result the chips faced an uphill battle. That hasn't changed.

In some sense, Intel and its x86 approach are victims of their own success. Programmers have historically been lazy; progress has mostly come on the back of IPC and clock speed gains that propped up inefficient software (and operating systems. Remember how thirsty Vista was?). That hasn't gone away. Today most programming is done in Java, particularly in the consumer space, and ultimately ends up running on an ARM SoC. If Intel wants to succeed here they need to have their "big cores" draw as little power as their small cores, but in order for that to occur they also need to maintain a huge lead in fabrication. If you're making a bigger, thirstier chip that performs better, nobody's going to buy it because it's more expensive and it's bigger and thirstier :p Porting something over to another ISA isn't as big a deal as it was just 3-4 years ago. I mean, IBM and Oracle are still in business, aren't they? :p
 
One point Kanter makes is that Haswell will go into areas where neither Sandy Bridge nor Ivy Bridge made any inroads at all - SoC is, in fact, a major encroachment by Haswell. (In other words, it is Haswell/a derivative of Haswell that will replace Clover Trail, which is, in fact, Atom's successor; this would indeed explain the lack of designs featuring Clover Trail to date).

The last time Intel went this wide with a CPU design was Conroe/Kentsfield followed by Wolfdale/Yorkfield - remember, both went from entry-level notebooks to servers. Still, according to Kanter, Haswell will not succeed *just* Sandy Bridge, Ivy Bridge, or even Clover Trail/Atom - Haswell will differ in implementation by target, with some implementations lacking features he outlines because their sector doesn't need them. That's where the rubber will meet the road with Haswell: product differentiation based on the common Haswell core.

And, that's an area that is growing by the day thanks to ARM.

If Intel can get an x86 or even x86-64 CISC SoC core at the performance per watt level of ARM's SoC designs like the Cortex A15 or the upcoming A50 series, then they will be successful. Imagine something this powerful in a 10.1-inch Windows tablet that doesn't suck up much power.

It's historically been known that a CISC design, especially one using x86, is more power hungry than a RISC core. It's going to take a lot of optimizing, simplifying, and miniaturizing to get to where ARM has succeeded so well over the past several years. They've gotten close with the Medfield Atom SoCs, so hopefully they can get even closer with Haswell-based SoCs.
 

I wouldn't lean on CISC vs. RISC comparisons, as those really don't apply anymore. Everything Intel/AMD makes is essentially a RISC-like core behind an x86 decoder. The qualities we used to tie to one camp now show up in both - like out-of-order execution, which you won't find in Intel's Atom chips yet you will find in a Snapdragon S4 ARM-based SoC.
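To make the "x86 decoder in front of a RISC-like core" point concrete, here's a toy sketch. The instruction and micro-op formats below are entirely made up for illustration - real x86 decode and Intel's actual micro-op encodings are far more complex and not public - but it shows the basic idea: a CISC-style instruction with a memory operand gets cracked into simpler load + ALU micro-ops before execution.

```python
# Toy illustration of a CISC front-end decoding into RISC-like micro-ops.
# Instruction format: (opcode, destination, source); "[addr]" marks a
# memory operand. All mappings here are hypothetical, not real microcode.

def decode(instr):
    """Crack one CISC-style instruction into a list of RISC-like micro-ops."""
    op, dst, src = instr
    if src.startswith("["):
        # Memory operand: emit a separate load micro-op into a temp register,
        # then a pure register-register ALU micro-op - RISC-style load/execute.
        addr = src.strip("[]")
        return [("load", "tmp", addr), (op, dst, "tmp")]
    # Register operand: decodes 1:1 into a single micro-op.
    return [(op, dst, src)]

# 'add eax, [0x1000]' cracks into a load plus a register add:
print(decode(("add", "eax", "[0x1000]")))
# 'add eax, ebx' passes through as one micro-op:
print(decode(("add", "eax", "ebx")))
```

The out-of-order machinery downstream then only ever sees the simple micro-ops, which is why the CISC/RISC label tells you little about the execution core itself.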

The issue I see Intel facing isn't necessarily perf-per-watt or dismissing the extra 2-3% die space attributed to the x86 decoder (though the rest that tags along takes up more space), but rather the disparity in price. If Broadwell/Skylake perform incredibly well at low TDP but cost 5-20x more than an ARM SoC, then Intel hasn't accomplished anything, because nobody would buy them. Intel relies on comparatively sky-high margins partly to maintain their fab lead (it takes tens of billions of dollars), but also because that's simply how they operate. They won't churn out as many wafers as TSMC, and they won't open up their fabs to competitors. Essentially, if Intel wants to lean on their fab advantage and rely on being a node-plus ahead of everyone else, they'll also need people to buy Intel chips at much higher prices than everyone else's. That just doesn't look all that likely.

Qualcomm has surpassed Intel in market cap, mainly because they have the best LTE on the planet and because they're fabless. Intel has neither advantage, relying instead on x86, legacy software, and a current fab lead that might not hold going forward.

What I'm saying is, we can look in awe at the benchmarks for Intel's future chips, but price will determine whether people actually buy them en masse. OEMs don't have to go x86 anymore, so Intel now has to play ball on pricing - something it hasn't done since the AMD64 days, when it resorted to cheating rather than dropping its prices significantly.
 

That's true. Many processors are CISC at the front end and RISC-like internally, if you look at an architecture like SNB or IVB.

But, price decides everything in this market.

If they can't get a Haswell-based SoC down to the price of an ARM SoC, they cannot succeed. Consumers - smart ones - consider price before they decide anything. Look at the Microsoft Surface tablets and compare the 64GB Windows RT model against the 64GB Windows 8 Pro model. It's a $200 difference-- $699 for the Windows RT unit using Tegra 3 versus $899 for the Windows 8 Pro unit using an Intel Core i5. Now compare that to a 64GB iPad at $699 (without LTE), a 64GB iPad at $829 (with LTE), and a 64GB Samsung Galaxy Note 10.1 at $699.

If it were me looking at the prices, I'd go with the Windows RT Surface or the Samsung Galaxy Note 10.1. If I were looking at features and app store, it'd be the 64GB iPad without LTE or the 64GB Galaxy Note 10.1.

Why in the world would the Intel Core i5 tablet cost so much?

Even the ASUS Vivo Tab that's supposed to be out by now, with 64GB of storage and an Intel Clover Trail Atom SoC, is $799. If it isn't the cost of the Atom SoC, the rest of the cost would have to be in the 11.6-inch display.

The Samsung Series 7 slate with an Intel Core i5 starts at $1099. The only reasonably priced tablet is the Samsung ATIV Smart PC 500T if you don't go for the keyboard attachment-- $649. That's the only one I've seen so far at that price with an Intel Atom SoC.

If Intel wants to make headway in the mobile tablet market, they need to get their SoC designs down to a reasonable price. People decide with their wallets, especially in the current economy. You can't expect to make major headway competing against ARM-based SoC designs at these prices. No matter how much more efficient and faster your processor is than an ARM SoC, price decides how far it'll go in this market.
 
Just to give you an idea of what the price disparity is, a Tegra 3 SoC can be had for ~$20-$30?

Analyst Romit J. Shah from Nomura Securities recently asked Nvidia how much does Tegra 3 actually cost and the company said that its price sits somewhere between 15 and 25 US dollars (€11.3 to €18.9).

Nvidia’s CEO, Jensen Huang was actually even more specific and said “I don’t know which Tegra is $15. But there are no – if we’re still talking about Tegra 3, there is no Tegra 3 that’s $15. It’s higher than that.”

So there you have it. Nvidia’s quad-core Tegra 3 costs actually between $15 and $25, with the scale tipping more towards the latter.

The Atoms have to compete at those prices, but the "big core" x86 chips cost ~$70-$300. If that price difference means the ARM tablet packs a high-DPI IPS panel while the Intel-powered tablet gets a shitty 1366x768 TN panel, guess which one is going to sell? People don't give two shits about benchmarks.
 

Looked at on a sheer-performance basis, the i5 would win - hands down.

However, for most users (or even most uses) the i5 is, as much as we hate to admit it, overkill. It's not sheer performance, or even performance per watt (the i5 wins against all competition there), but performance-per-watt per dollar - the combination of ARM's long battery life AND lower price per unit, even if it's not that much lower head-to-head - where ARM wins against any 3rd-generation Intel i-series.
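As a back-of-the-envelope sketch of that combined metric - all the numbers below are hypothetical placeholders, just to show the arithmetic, not measured figures for any real chip:

```python
# Combined metric from the post: performance per watt, divided by price.
# Higher is better. All inputs are hypothetical illustration values.

def perf_per_watt_per_dollar(perf, watts, price):
    """Performance-per-watt normalized by unit price."""
    return perf / watts / price

# Hypothetical tablet-class ARM SoC: modest perf, very low power, ~$25.
arm_soc = perf_per_watt_per_dollar(perf=1.0, watts=2.0, price=25.0)

# Hypothetical ULV Core i5: ~4x the perf, but 17W TDP and a ~$225 tray price.
core_i5 = perf_per_watt_per_dollar(perf=4.0, watts=17.0, price=225.0)

# The i5 wins on raw perf and even on perf/watt per this data (4/17 > 1/2 is
# false here, actually 0.235 vs 0.5 - it depends entirely on the inputs), but
# once price enters the denominator, the cheap low-power part pulls far ahead.
print(arm_soc, core_i5)  # 0.02 vs ~0.00105
```

The point isn't the specific numbers - it's that dividing by price by an order of magnitude or more swamps any plausible performance lead, which is the dynamic the post describes.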
 
http://www.anandtech.com/show/6491/the-anandtech-podcast-episode-12

They mentioned that Haswell will probably mimic the Ivy Bridge staggered release cycle, with mobile first followed by desktop: Q2 '13 for mobile and Q3 '13 for desktop. Given the ramp-up, though, we probably won't see significant desktop volume until ~Q4, and I'd imagine we won't see many mobile chips until Q3 either.

What was really interesting was that Anand claimed Intel isn't expecting a significant market share for Haswell in 2013, which might imply short supply or a slow ramp. I know Intel wanted everyone to transition to 22nm Ivy as quickly as possible, gunning for 70% market penetration by the end of the year, but they don't seem as eager to see Haswell replace the previous gen as they were to see Ivy replace Sandy.
 

Of course not. They're being pushed by their competition for Ivy Bridge... oh... sorry... what competition? :D

I wish that Intel had competition but since they don't... I don't think we'll be seeing a desktop Haswell until well into '14. And that really sucks since I was hoping that would be my next system... but I might just jump at IB-E when it premieres and skip Haswell altogether.
 