Intel's Iris Pro:

That's the 5800K, not the 6800K, so it's the previous generation's fastest APU. Considering any Iris Pro model costs $400+, I'm not sure why AMD should be concerned. For that price, you can have an i3/FX-6300 and a 7950 and go around taking dumps on any iGPU/APU.
 
AMD still wins if price is a factor.

Yeah, cost is a big issue with Intel.

Don't expect to see Iris Pro laptops below $500 or $600. No way.

I can find a laptop with the latest Richland or Trinity APUs for around $400 to $600, or the high $300s on sale. A Haswell-based laptop runs about $500 to $1,000-plus depending on the model, its features, whether it has a touchscreen, and which Haswell CPU you're getting.

I would not be surprised if an Iris Pro-equipped laptop runs north of $900 to $1,100-plus.
 
The disturbing thing (for AMD/Nvidia) is not the price, but that Intel is willing to continue to increase the performance, compatibility and stability of their drivers with every release.

Remember that the only difference between a low-end part and a high-end part is the number of cores and amount of bandwidth. If Intel wanted to, they could fabricate a full discrete GPU that could challenge a Titan; and it'd be more efficient to boot.
 
The disturbing thing (for AMD/Nvidia) is not the price, but that Intel is willing to continue to increase the performance, compatibility and stability of their drivers with every release.

Remember that the only difference between a low-end part and a high-end part is the number of cores and amount of bandwidth. If Intel wanted to, they could fabricate a full discrete GPU that could challenge a Titan; and it'd be more efficient to boot.

Pretty much this. Intel has proven that it can match AMD's best APU. Imagine if they wanted to enter the discrete GPU market... I'm kind of curious to see what they could make.
 
The disturbing thing (for AMD/Nvidia) is not the price, but that Intel is willing to continue to increase the performance, compatibility and stability of their drivers with every release.

Remember that the only difference between a low-end part and a high-end part is the number of cores and amount of bandwidth. If Intel wanted to, they could fabricate a full discrete GPU that could challenge a Titan; and it'd be more efficient to boot.

Why would it be more efficient? Look at Iris versus the standard HD 4600. They threw TONS more resources at it and got an "okay" performance boost: roughly 2x the cores for roughly 1.75x the performance, even with the eDRAM cache. Intel tried to make a big GPU once and it ended up turning into Xeon Phi. Their Slice architecture is horribly inefficient compared to GCN/Kepler; they have a long way to go, and it isn't just "drivers". Look at the die size of Iris Pro: based on Anand's ballpark figure of ~175 mm^2, Iris Pro is larger than Bonaire despite being on a smaller process, and it isn't even CLOSE in performance.

Intel is improving, but they aren't anywhere NEAR making a big GPU.
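
For illustration, here's a back-of-the-envelope check of that scaling claim as a quick Python sketch; the 2x and 1.75x ratios are the rough figures quoted above, not measured data:

```python
# Rough check of "~2x the cores for ~1.75x the performance": how much of the
# ideal linear speedup do the extra execution units actually deliver?

def scaling_efficiency(unit_ratio: float, perf_ratio: float) -> float:
    """Fraction of ideal (linear) scaling actually realized."""
    return perf_ratio / unit_ratio

# Illustrative figures from the post above, not benchmark results.
print(f"{scaling_efficiency(2.0, 1.75):.0%}")  # ~88% of linear scaling
```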
 
They're not 'space' efficient yet, but the performance/watt is getting close. My point is that they can get close now, in something that is strapped to a CPU. The Xeon Phi isn't really a good comparison from a development standpoint; yeah, it didn't do what they wanted it to do, but it's still very competent at what it does, as is their actual graphics core technology.

Take that same core technology, on 22nm, with a dedicated 256-bit memory bus backing a large array of cores, and I'd be willing to bet that you'd get something competitive.
 
They're not 'space' efficient yet, but the performance/watt is getting close. My point is that they can get close now, in something that is strapped to a CPU. The Xeon Phi isn't really a good comparison from a development standpoint; yeah, it didn't do what they wanted it to do, but it's still very competent at what it does, as is their actual graphics core technology.

Take that same core technology, on 22nm, with a dedicated 256-bit memory bus backing a large array of cores, and I'd be willing to bet that you'd get something competitive.

The only issue is that it would be monstrous (i.e., Titan die size for GTX 660-level performance), cost more than anything Intel has ever sold commercially, probably use a significant amount of power, and have to rely on (improving, but shitty) Intel drivers. Slice is not the right architecture to build into a full GPU.

Performance/watt doesn't really matter when the die cost is so prohibitive that you could make something like a dozen full dual-core CPUs in the same area as a low-end graphics card.
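
To put rough numbers on the area argument, here's a dies-per-wafer sketch using the usual textbook approximation; the die areas are hypothetical placeholders, not figures for any specific Intel or AMD product:

```python
import math

WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation: wafer area over die area, minus an edge-loss term."""
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

small_cpu_die = 100.0  # hypothetical dual-core CPU die, mm^2
big_gpu_die   = 550.0  # hypothetical high-end GPU die, mm^2

print(gross_dies_per_wafer(small_cpu_die))  # ~640 candidate dies per wafer
print(gross_dies_per_wafer(big_gpu_die))    # ~100 candidate dies per wafer
```

Even before yield enters the picture, one big GPU die displaces several small CPU dies' worth of wafer area, which is the point being made above.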
 
The disturbing thing (for AMD/Nvidia) is not the price, but that Intel is willing to continue to increase the performance, compatibility and stability of their drivers with every release.

Remember that the only difference between a low-end part and a high-end part is the number of cores and amount of bandwidth. If Intel wanted to, they could fabricate a full discrete GPU that could challenge a Titan; and it'd be more efficient to boot.

Like Larrabee, or Knights Corner/Ferry?
 
The only issue is that it would be monstrous (i.e., Titan die size for GTX 660-level performance), cost more than anything Intel has ever sold commercially, probably use a significant amount of power, and have to rely on (improving, but shitty) Intel drivers. Slice is not the right architecture to build into a full GPU.

Performance/watt doesn't really matter when the die cost is so prohibitive that you could make something like a dozen full dual-core CPUs in the same area as a low-end graphics card.

I agree that my assessment may be a bit overzealous based on shipping parts, but part of it is based on watching Intel's masterful ability to manufacture amazing things and optimize them heavily when they want to. They have all the pieces; they just have to put them together.

A good example is the Core series: the P4 was going nowhere fast, and as good an idea as it was for certain things, they needed something different. Their savior was a P3 reworked for efficiency and upgraded with the latest SSE functionality they had developed while trying to keep NetBurst competitive.

Suddenly the Athlon X2s looked slow!
 
Their savior was a P3 reworked for efficiency and upgraded with the latest SSE functionality they had developed while trying to keep NetBurst competitive.
No, their savior was illegally bribing and coercing OEMs not to buy AMD CPUs when AMD had a large technological lead. (AMD had released the first x86-64 CPU long before Intel ever announced any plans of their own.)
 
No, their savior was illegally bribing and coercing OEMs not to buy AMD CPUs when AMD had a large technological lead. (AMD had released the first x86-64 CPU long before Intel ever announced any plans of their own.)

Yep. And as a result, Fuck Intel. May they as an entity rot in hell, and I for one hope AMD gets to collectively piss on their grave whenever the desire strikes them.
 
Had nothing to do with politics and everything to do with unconscionable tactics. For their actions in this regard, as well as for gaming the benchmarking suites to adversely affect AMD offerings, Intel will never receive one cent of my money. Add Nvidia to that list for some of the same under-handed reasons.
 
Had nothing to do with politics and everything to do with unconscionable tactics. For their actions in this regard, as well as for gaming the benchmarking suites to adversely affect AMD offerings, Intel will never receive one cent of my money. Add Nvidia to that list for some of the same under-handed reasons.

You and your followers have fun in your boxes :).
 
They're not 'space' efficient yet, but the performance/watt is getting close. My point is that they can get close now, in something that is strapped to a CPU. The Xeon Phi isn't really a good comparison from a development standpoint; yeah, it didn't do what they wanted it to do, but it's still very competent at what it does, as is their actual graphics core technology.

Take that same core technology, on 22nm, with a dedicated 256-bit memory bus backing a large array of cores, and I'd be willing to bet that you'd get something competitive.

1. They have a process advantage that gives them significantly lower power consumption, higher clocks, and more die area for graphics.

2. An easy way to lower power consumption on GPUs is to increase die area and add more graphics units, then clock them lower, since higher clock speeds cost disproportionately more power.

3. As die size increases, cost increases with the extra silicon, but yield also drops very quickly (a rough yield sketch follows this list). This is one of the reasons chips like Titan are so expensive, and why cut-down versions of cards exist: to increase yield. The problem is that at a certain point you can't get enough fully working chips to make a product (as with the GTX 480), and your top version becomes a cut-down version.

4. Increasing memory bus width adds a lot of power and die area.

5. Bottlenecks form quickly when you start scaling up architectures like this, and increasing everything linearly has a non-linear cost in die area: getting 4x the performance can quickly cost 8x the die area, because after reaching 2x relatively easily you hit a bottleneck.
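
Here's the rough yield sketch mentioned in point 3, using the standard Poisson yield model; the defect density and die areas are illustrative assumptions, not foundry data:

```python
import math

DEFECTS_PER_MM2 = 0.002  # assumed defect density (0.2 defects per cm^2)

def die_yield(area_mm2: float) -> float:
    """Poisson yield model: yield = exp(-D0 * A)."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

def relative_cost_per_good_die(area_mm2: float) -> float:
    """Silicon cost scales with area; dividing by yield shows how the cost of
    a good die grows faster than linearly as the die gets bigger."""
    return area_mm2 / die_yield(area_mm2)

for area in (100, 200, 350, 550):
    print(f"{area} mm^2: yield {die_yield(area):.0%}, "
          f"relative cost {relative_cost_per_good_die(area):.0f}")
# With these assumptions a 550 mm^2 die yields ~33% vs ~82% for 100 mm^2, so
# each good big die costs ~13x the small one despite being only 5.5x the area.
```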
 
I'd certainly hope it's faster, considering that the Iris Pro SKU, the 4950HQ (BGA only), costs around $660 (tray price) in large quantities to system integrators. That's what, $550 more than a 6800K?

If anyone thinks they're going to pop an Iris Pro into an HTPC, yeah, that isn't happening. These two products shouldn't even be compared, because the 6800K is an LGA chip and the Iris Pro is BGA only and costs even more than a 3930K. Nobody is going to buy Iris Pro for an HTPC build.

To my knowledge, the only product announced thus far to use Iris Pro is the 2013 Retina MacBook Pro, which will cost around $2,000.
 
I'd certainly hope it's faster, considering that the Iris Pro SKU, the 4950HQ (BGA only), costs around $660 (tray price) in large quantities to system integrators. That's what, $550 more than a 6800K?

If anyone thinks they're going to pop an Iris Pro into an HTPC, yeah, that isn't happening. These two products shouldn't even be compared, because the 6800K is an LGA chip and the Iris Pro is BGA only and costs even more than a 3930K. Nobody is going to buy Iris Pro for an HTPC build.

To my knowledge, the only product announced thus far to use Iris Pro is the 2013 Retina MacBook Pro, which will cost around $2,000.

Keep in mind that the Iris Pro is more of a proof of concept, and that there's no reason, other than limiting the risk of failure (and lower margins), for Intel to keep that technology away from other form factors.

At 14nm, Intel can integrate that eDRAM 'L4' cache into the CPU and attach it to a 2c/4t design, and along with faster main memory and a few more GPU cores, could build a credible APU. Price is entirely at their discretion.

Hell, they could do that now if they wanted to, but Intel is stingy with their technology, and not a risk-taker.
 
1. They have a process advantage that gives them significantly lower power consumption, higher clocks, and more die area for graphics.

2. An easy way to lower power consumption on GPUs is to increase die area and add more graphics units, then clock them lower, since higher clock speeds cost disproportionately more power.

3. As die size increases, cost increases with the extra silicon, but yield also drops very quickly. This is one of the reasons chips like Titan are so expensive, and why cut-down versions of cards exist: to increase yield. The problem is that at a certain point you can't get enough fully working chips to make a product (as with the GTX 480), and your top version becomes a cut-down version.

4. Increasing memory bus width adds a lot of power and die area.

5. Bottlenecks form quickly when you start scaling up architectures like this, and increasing everything linearly has a non-linear cost in die area: getting 4x the performance can quickly cost 8x the die area, because after reaching 2x relatively easily you hit a bottleneck.

You're right on all points; just remember that if AMD and Nvidia can make a small, efficient part using the same technology they put in their top-end parts, there's no credible reason that Intel, the king of silicon mass production, can't make a world-beating discrete GPU out of their currently low-end part.

It won't scale linearly (neither does AMD's or Nvidia's), but Intel knows how to make chips. Hell, if they were to make a low-end discrete GPU, say at the HD 7750/GT 740 level, as a proof of concept, even that would be really cool, and it's certainly within reach.
 
Keep in mind that the Iris Pro is more of a proof of concept, and that there's no reason, other than limiting the risk of failure (and lower margins), for Intel to keep that technology away from other form factors.

At 14nm, Intel can integrate that eDRAM 'L4' cache into the CPU and attach it to a 2c/4t design, and along with faster main memory and a few more GPU cores, could build a credible APU. Price is entirely at their discretion.

Hell, they could do that now if they wanted to, but Intel is stingy with their technology, and not a risk-taker.

Proof of concept? No. Intel created their iGPU because Apple asked for it, and that is the same reason Intel has steadily increased their GPU performance: because of Apple. Anand had an editorial on this a while back detailing the history of why, exactly, Intel got involved in the iGPU business.

The GT3e is tailor-made for the upcoming rMBP; you will not see many other systems using that part because it is incredibly expensive. In fact, a system with a 760M and a non-Iris Pro quad-core mobile chip is actually cheaper than the GT3e, although power consumption will be far higher with a discrete 760M. Funnily enough, Apple is going for maximum battery life with the rMBP, hence they're using the GT3e. You can speculate about proof of concept, but right now it is what it is: Apple asked Intel for better graphics, and Intel gave it to them. At a cost. It is also incredibly stupid to compare a sub-$100 6800K to a nearly $700 GT3e. Give me a break, man.

That being said, Intel is making great strides with their iGPU. But a fairer comparison is the HD 4600, which is an LGA part designed for a more competitive price point. There is really NO point in comparing a GT3e to a 6800K, because they are designed for different things at far different price points.
 
Again, correct, mostly :). Intel never stopped making iGPUs after the i740; they always had a platform with integrated graphics.

And yeah, there's a big connection between Apple and the GT3e. But part of that connection is that Apple's the only company that can reliably sell the things. Every physically different SKU costs Intel money, and their investors demand a return on that money.

By proof of concept, I'm mostly referring to the development side of things: great hardware is useless without great software. If Intel can get people interested in developing for the GT3e on a Mac, which can run Windows and Linux just as well as a PC, they can prove market demand for the technology and extend it into other markets. An i3 with GT3e? Haswell-E with it?

The discrete GPU idea is pretty far-fetched given the current state of things; I'm just pointing out that the building blocks are all in place :).
 
You're right on all points; just remember that if AMD and Nvidia can make a small, efficient part using the same technology they put in their top-end parts, there's no credible reason that Intel, the king of silicon mass production, can't make a world-beating discrete GPU out of their currently low-end part.

It won't scale linearly (neither does AMD's or Nvidia's), but Intel knows how to make chips. Hell, if they were to make a low-end discrete GPU, say at the HD 7750/GT 740 level, as a proof of concept, even that would be really cool, and it's certainly within reach.

Again, you're just making radical assumptions based on the fact that Intel is "the king of silicon mass production". So what? Here's a credible reason why they can't: THEY HAVE NEVER MADE A DISCRETE GPU BEFORE. Do you think AMD and Nvidia just start from scratch each time they make a card? No: they have engineers with years and years of experience, they have lessons learned from past designs, and they have a good idea of what graphics cards need to do their job properly. Intel has none of this. This is akin to me saying, "Nvidia makes awesome graphics cards; I'm sure if they wanted to they could design the best CPU in the world by next year, just because they're awesome."

As said above, making something like a 7750 (close to a 7790) would require acres of die space and probably cost hundreds or thousands of dollars (factoring in exponentially higher yield loss). They would need something like four Iris Pros put together to get close; it would be hundreds of mm^2, and it would be stupid. Just flat-out stupid. Could Intel maybe do it if they revamp their iGPU architecture? Sure, it's possible they could get closer. With Slice? No way in hell. It's not ready for prime time; the only reason Iris Pro performs remotely well is the ridiculous die space thrown at it (to reiterate: the GT 650 is ~115 mm^2 and performs above a GT3e, while the GT3e is ~180 mm^2: over 1.5x the size, for less performance).
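
Putting the post's own ballpark figures side by side as a quick sketch; the die areas come from the rough estimates above, and the ~10% performance edge for the discrete part is an assumption for illustration:

```python
# Performance per unit die area, using the ballpark numbers cited above.

def perf_per_mm2(relative_perf: float, area_mm2: float) -> float:
    return relative_perf / area_mm2

gt650_area, gt3e_area = 115.0, 180.0  # rough die-size estimates from the post
gt650_perf, gt3e_perf = 1.1, 1.0      # assume the discrete part is ~10% faster

print(f"area ratio: {gt3e_area / gt650_area:.2f}x")  # ~1.57x
gap = perf_per_mm2(gt650_perf, gt650_area) / perf_per_mm2(gt3e_perf, gt3e_area)
print(f"perf per mm^2 gap: {gap:.2f}x")              # ~1.72x in the discrete part's favor
```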
 
Discrete GPU? Intel's focus is completely on mobility and efficiency these days. I'd say if they make any type of add-on card, it would be a Xeon Phi-style co-processor for the far more lucrative HPC market.
 
Discrete GPU? Intel's focus is completely on mobility and efficiency these days. I'd say if they make any type of add-on card, it would be a Xeon Phi-style co-processor for the far more lucrative HPC market.

I don't think they'll actually do it; DDR4 (or whatever comes after DDR3 on the desktop), along with smaller processes, will allow them to steadily increase the performance of their integrated solutions, as will a focus on improving the efficiency of their designs from a performance-per-transistor perspective.
 
Wow .. um...

I haven't read the whole thread yet but ... I'm just gonna leave this question here ...

Are we really congratulating Intel for making an iGPU that's around 20% faster on average than AMD's last-gen APU, while having the benefit of a GPU die that's almost as large as the HD 7660D's, built on half the process node, running a 70% higher clock speed, AND being boosted by a far superior CPU?

I'm sorry that I don't seem to share everyone else's optimism, but this doesn't exactly look like a success story from Intel.
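
Normalizing those numbers shows why the result looks less impressive; the 20% and 70% figures are the estimates from the post above, used as-is:

```python
# Normalize the claimed advantage against the advantages that produced it.
# All inputs are the post's rough estimates, not measured data.

perf_ratio  = 1.20  # ~20% faster than the previous-gen AMD APU's graphics
clock_ratio = 1.70  # ~70% higher GPU clock
area_ratio  = 1.00  # GPU die "almost as large" as the HD 7660D's

print(f"performance per clock: {perf_ratio / clock_ratio:.2f}x")  # ~0.71x
print(f"performance per area:  {perf_ratio / area_ratio:.2f}x")   # ~1.20x
# Per clock and per mm^2 the design does less work than the competition;
# the process and clock advantages carry most of the 20% lead.
```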
 
It's the drivers they really deserve credit for, plus the fact that they can make a product that's effective (if not die-efficient).

It means that they have all of the building blocks needed to make GPUs at any scale they want.
 