Intel Gen11 Reportedly Takes iGPU Performance to Another Level

Megalith
24-bit/48kHz · Staff member · Joined Aug 20, 2006 · Messages: 13,000
New benchmarks suggest Intel’s latest iGPU effort (Iris Plus Graphics 940) is impressive, performing up to 132% faster than Gen9 (Skylake) and leaving both the company’s own Core i5-8250U and competing Ryzen 2700U (Vega 10) in the dust. It also compares quite favorably to the Ryzen 5 2400G (Vega 11), even managing to better it in some tests. “Aside from improvements in 3D performance, Gen11 will also bring DisplayPort 1.4a and VESA's DSC support for 5K 120 Hz output.”

Intel touts up to one teraflop of 32-bit and two teraflops of 16-bit floating point performance with its Gen11 iGPU. The chipmaker's goal is to provide a decent gaming experience for its consumers. This, in the long run, can lead to more budget-friendly laptops as manufacturers wouldn't have to incorporate third-party discrete entry-level graphics cards, such as Nvidia's MX-series. As always, you should take synthetic benchmarks (and unofficial ones, at that) with a grain of salt as they are not a definite indication of real-world performance. However, you can't deny that Gen11 posts some very promising results.
 
I looked at the article and it is very misleading. The new 940 performs 213% times faster than the old 620. Their math... 620 => 1621, 940 => 3453. 3543/1621 = 2.13 Then they subtract off 1 for the old 620 and get the misleading 113%. The thing is more than twice as fast as the older version.
 
I looked at the article and it is very misleading. The new 940 performs 213% times faster than the old 620. Their math... 620 => 1621, 940 => 3453. 3543/1621 = 2.13 Then they subtract off 1 for the old 620 and get the misleading 113%. The thing is more than twice as fast as the older version.
I'm confused. 100% faster IS twice as fast... so 113% faster is a little over twice as fast.
 
I'm confused. 100% faster IS twice as fast... so 113% faster is a little over twice as fast.

Yeah, not sure what is difficult in that math? It has 213% of the performance of the previous gen, but is also 113% faster than said gen. You can spin it the way you want; it means the same thing.
I actually like that they used the smaller number and not the bigger one; maybe marketing missed something there?
 
Yeah, not sure what is difficult in that math? It has 213% of the performance of the previous gen, but is also 113% faster than said gen. You can spin it the way you want; it means the same thing.
I actually like that they used the smaller number and not the bigger one; maybe marketing missed something there?
Seems like odd semantics to fixate on. I don't think there's a global cabal trying to mislead the people about the true integrated gpu performance.
 
I have a feeling Intel is going to kick hard with their GPUs. Who are we kidding, they have the fabs and the people. Scary shit but someone has to do it
 
I'm confused. 100% faster IS twice as fast... so 113% faster is a little over twice as fast.

If you are confused... I suggest you read the post. The math is VERY explicit. If you don't understand the math then... There is nothing I can do to help you.
 
So I'm assuming Intel's iGPU will be priced commensurate with the performance increase over a Ryzen, or will it be outrageously expensive? I'm betting the latter.
 
As someone currently stuck on integrated graphics, I would appreciate something faster...(on UHD 620)

I'm in just for adaptive sync- it's a must for my next ultrabook.

Would like to see 120Hz too, which this new IGP looks like it will be able to handle at 1080p on low-end titles.
 
I looked at the article and it is very misleading. The new 940 performs 213% times faster than the old 620. Their math... 620 => 1621, 940 => 3453. 3543/1621 = 2.13 Then they subtract off 1 for the old 620 and get the misleading 113%. The thing is more than twice as fast as the older version.
If you are confused... I suggest you read the post. The math is VERY explicit. If you don't understand the math then... There is nothing I can do to help you.

How can you and your buddies be so bad at math? A 113% difference is indeed more than twice as fast. Would they put a 100% difference compared to Vega 11 even though it is about the same? No, it was around a 0% difference, like they show, which may be able to help YOU.
 
I thought Iris was only available in laptops. Any chance we'll get this with the new Ice Lake desktop CPUs? :D
 
This really has me thinking about how well they can do with a discrete card.
If the discrete card has Quick Sync as a feature, even on the low-end model, I would buy one for a Plex server alone.
 
Is Intel supporting FreeSync? A feature so needed for this level of GFX.
Also, how beefy do they want to make the iGPU when it is still so limited by RAM speeds? Anyways, Nvidia looks to be the big loser in all of this.
 
Is Intel supporting FreeSync? A feature so needed for this level of GFX.
Also, how beefy do they want to make the iGPU when it is still so limited by RAM speeds? Anyways, Nvidia looks to be the big loser in all of this.
Supposedly it supports FreeSync but not Nvidia's version.
 
Is Intel supporting FreeSync? A feature so needed for this level of GFX.
Also, how beefy do they want to make the iGPU when it is still so limited by RAM speeds? Anyways, Nvidia looks to be the big loser in all of this.

On FreeSync: this is the first GPU they've produced since it became 'a thing'. Support is listed.
On RAM speeds: the tested part is an 'Iris'-class product, which comes with extra DRAM dedicated for graphics.

AMD and Nvidia both stand to lose a bit; Nvidia maintains a presence in laptops due to the efficiency of their products, even the lowest-end ones, versus AMD parts, while AMD APUs are typically behind in efficiency and only really make sense due to better graphics performance versus a bare Intel CPU. Really AMD stands to lose more assuming that Zen 2 / Ryzen 3 doesn't get a significant boost.
 
The built-in GPU is all I normally order for the office, since there is no real business need for something better. The built-in GPU tends to be more reliable, run cooler, and have fewer driver problems.

However, there is always someone who insists they need a high end GPU on their laptop, and if they have enough pull with management, they get their wish.
Having a better built in GPU will give me a better argument to make them stick with the standard system.
 
Is Intel supporting FreeSync? A feature so needed for this level of GFX.
Also, how beefy do they want to make the iGPU when it is still so limited by RAM speeds? Anyways, Nvidia looks to be the big loser in all of this.

RAM speed is what holds my 2400G back. I wonder how they will make the next-gen APUs any better before DDR5 is released. Somebody mentioned onboard eDRAM like Broadwell had, so we will see how they tackle it.
 
If you are confused... I suggest you read the post. The math is VERY explicit. If you don't understand the math then... There is nothing I can do to help you.
Sorry dude, the other poster is right, it is you who does not understand math. If something is twice as fast as something else, you can say "it has 200% of its speed" or you can say "it is 100% faster". It is delta vs absolute value, very basic concepts.
"213% times faster" is something nobody says, you wanted to say "213% faster", which is also wrong, as it means more than 3x the speed.
 
Agreed with Ecuador. BTW, do read up on how one test was giving out extreme outliers and shifting the total average; without that test it sits 9% under Vega 11, and that is fine and dandy, but we gotta see what the new APUs this year bring us at much cheaper prices than Iris Pro.

Iris Pro has never made too much sense to me; at its premium cost it has always made more sense to add a discrete GPU, but hey, to each their own.

In comparison, AMD's APUs are iGPUs added to their consumer-friendly products at a reduced total cost, which makes more sense to me.
 
Yeah, we've seen the hype of that Iris Plus/ Iris Pro stuff before, and then it never makes it out to anything nearly affordable. I remember the Core i7 5775C. It sounded great: much faster integrated graphics, more efficient processor on a new production process, and eDRAM that would act like both a cache for the processor and memory for the graphics. And then it comes down to being $100 more expensive than the previous gen processor with mostly the same stuff except the eDRAM. Then there were multiple presentations on eDRAM enhanced graphics on laptop chips and NUC type processors, only to come out with very limited availability and costing hundreds more than just slapping in a discrete graphics adapter.

Sure, this thing could be double the speed of the 2700U, or even the 3700U, but if it costs $200 more or isn't actually put in a product, then it still isn't a deal, and still won't sell.
 
I looked at the article and it is very misleading. The new 940 performs 213% times faster than the old 620. Their math... 620 => 1621, 940 => 3453. 3543/1621 = 2.13 Then they subtract off 1 for the old 620 and get the misleading 113%. The thing is more than twice as fast as the older version.
113% faster is also more than twice as fast...

Anyway for the math you provided:
3543/1621 = 2.18, i.e. 218% as fast (the comparison as a factor on the previous), also known as 118% faster (what it is above the 100% baseline).

How did you get that division to be 2.13 to begin with?
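For what it's worth, the delta-vs-absolute distinction being argued here is easy to sandbox. A minimal Python sketch using the scores quoted above (620 => 1621, 940 => 3543); the helper name is made up for illustration:

```python
# Sketch of "percent of" vs. "percent faster", using the scores quoted
# in the thread (UHD 620 => 1621, Iris Plus 940 => 3543).

def relative_performance(new_score, old_score):
    """Return (percent_of, percent_faster) for new_score vs. old_score."""
    ratio = new_score / old_score          # 3543 / 1621, about 2.19
    percent_of = ratio * 100               # "has X% of the old speed"
    percent_faster = (ratio - 1) * 100     # "is X% faster than the old part"
    return percent_of, percent_faster

percent_of, percent_faster = relative_performance(3543, 1621)
print(f"{percent_of:.1f}% of the old speed, i.e. {percent_faster:.1f}% faster")
```

This prints "218.6% of the old speed, i.e. 118.6% faster": the same twice-as-fast-plus result whichever way it's phrased, since the two figures always differ by exactly 100 percentage points.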
 
I have a feeling Intel is going to kick hard with their GPUs. Who are we kidding, they have the fabs and the people. Scary shit but someone has to do it

We thought that back in the i740 days - that didn't pan out quite the way Intel wanted...

I have no doubts they have the capability of improving on their graphics; my concern is how GPUs fit into Intel's future strategies. Mind you, things have changed considerably regarding GPUs since the '90s. If Intel sees GPUs as an extension of CPUs, e.g. for compute purposes, then I definitely can believe Intel can deliver a first-rate product, as they would slam down all the resources necessary to make it happen (and their resources are vast).
 
We thought that back in the i740 days - that didn't pan out quite the way Intel wanted...

They failed in the discrete market, and went on to dominate the market with their integrated solutions that started with that technology...
 
On Tom's Hardware's benchmark compilation article, the 940 averaged out to 76.53% faster than the 620 and 62.97% faster than Vega 10. I never really paid much attention to AMD's iGPU Vega, but was Intel's 620 really very nearly as fast as Vega 10? That's what these benchmarks, as shown on Tom's, are implying and it doesn't seem right...
 
I am skeptical of Intel's GPUs; they have tried three times in the past and failed. Iris is expensive due to having eDRAM on the package, and this Intel GPU is pretty much on par with the Vega 11 iGPU, which doesn't have eDRAM.
 
To me there's too little info to make a clear cut opinion, yet:
* What happened to Gen10?
* Is Gen11 to be used in desktop or laptop CPUs? Perhaps both?
* What's the power consumption?
* Is it the top end or mid range model that's been tested?
* When will it be available, and at what cost?

... AMD APUs are typically behind in efficiency and only really make sense due to better graphics performance versus a bare Intel CPU. Really AMD stands to lose more assuming that Zen 2 / Ryzen 3 doesn't get a significant boost.
The Ryzen APUs aren't that bad for desktop computers, and the Zen+ based APUs for laptops have just been announced. I expect a lot more/better from AMD in stores within a year.
 
If you are confused... I suggest you read the post. The math is VERY explicit. If you don't understand the math then... There is nothing I can do to help you.

Oh the irony, I think a phrase from Homer Simpson sums this up.

"DOH!!!"

You gotta love it when somebody tries to act smarter than everyone else and then absolutely fails at 7th-grade math... and then has the audacity to come back with a smart-ass attitude.
 
You gotta love it when somebody tries to act smarter than everyone else and then absolutely fails at 7th-grade math... and then has the audacity to come back with a smart-ass attitude.
I wasn't even trying to "get" the guy, I was literally just confused as the statements seemed contradictory... looks like it was for good reason though.
 
I wasn't even trying to "get" the guy, I was literally just confused as the statements seemed contradictory... looks like it was for good reason though.

No, atomclock was the one bringing back an attitude after people were questioning the argument he brought forth.

Funny part is, when I posted the first time, atomclock had 2 likes. Now it's just one.
So somebody did some backpedaling. lol
 
To me there's too little info to make a clear cut opinion, yet:
* What happened to Gen10?
* Is Gen11 to be used in desktop or laptop CPUs? Perhaps both?
* What's the power consumption?
* Is it the top end or mid range model that's been tested?
* When will it be available, and at what cost?

The Ryzen APUs aren't that bad for desktop computers, and the Zen+ based APUs for laptops have just been announced. I expect a lot more/better from AMD in stores within a year.

In the past, Iris graphics were reserved for very specific CPUs, typically those with the C suffix on desktop, but that changed with the last gen, e.g. the Intel Core i7-8559U. They are low to mid range but at a higher cost, and mobile only; they command a hefty price premium due to the built-in eDRAM, around $100 extra, at which point discrete graphics can provide better performance for the same price. Power-consumption wise, it goes from a 15W part to a 28W part.

The Intel Core i7-8559U goes for $434 to OEMs.
 
In the past, Iris graphics were reserved for very specific CPUs, typically those with the C suffix on desktop, but that changed with the last gen, e.g. the Intel Core i7-8559U. They are low to mid range but at a higher cost, and mobile only; they command a hefty price premium due to the built-in eDRAM, around $100 extra, at which point discrete graphics can provide better performance for the same price. Power-consumption wise, it goes from a 15W part to a 28W part.

The Intel Core i7-8559U goes for $434 to OEMs.

Iris has always been a premium laptop part first, and a desktop part as an afterthought (the original Broadwell+eDRAM processors were designed for the 13" Macbook Pro). It eventually trickled down into a handful of really expensive Japanese ultrabooks, but never gained much traction elsewhere.
Those benchmarks are also rather questionable - they are fully synthetic benchmarks, so they might not be related to real-world performance. They also benchmark Gen11 Iris Plus against UHD 620, which is the iGPU equivalent of saying 'my RTX 2070 is way faster than my GTX 1060, Turing is a miraculous step forwards'. Realistically, Iris Plus is going to make its way into so few computers that the baseline for iGPU performance will be set by something 2x slower. Not necessarily a bad thing, 2x slower probably puts the next generation of UHD graphics on-par with the 2500U.
 
So a product we know nothing about performs about the same as one that's been out a while. Well, in the end it is progress... or rather, it will be progress.
 
My main concern with Intel's GPU products is their lackluster track record when it comes to driver support. Performance means little if your game or PC keeps crashing, so I hope they don't skimp on this in the future.

They've got a lot to prove before I can take them seriously.
 