Intel Publishes In-House Security Fix Benchmarks for Desktop

FrgMstr

If you are not aware of the Meltdown and Spectre attacks by now, you are probably not reading this. Intel has now come out with client-side benchmarks (PDF pictured below) that show the impact of its security fixes using SYSmark, PCMark, and 3DMark on Coffee Lake, Kaby Lake, and Skylake-based CPUs. The short story is that performance impacts of no more than 10% were seen in these synthetic workloads on both Windows 10 and Windows 7 systems. Assuredly that is good to hear, but I am still waiting for the other shoe to drop with data on server impact, which is expected shortly.

We now have additional data on some of our client platforms, and we are sharing that with you today. This is part of our ongoing effort to keep you apprised through frequent updates. We plan to share initial data on some of our server platforms in the next few days. Please know we are working around the clock to generate the data that you want to see as fast as possible. As we endeavor to continue our pace, please understand that – as is common in testing of this type – our results may change as we conduct additional testing.
 
Take this with a massive grain of salt. Intel claimed the same CPU gained 15% each generation when it was relaunched. Also, always wait for third-party benchmarks, and make sure they updated the microcode and BIOS.
 
Yeah, desktop performance hits were never really the main concern with this. Database servers are getting hit HARD by this patch, and web servers take a pretty big hit, too. SQL Server 2015 on Windows Server 2012 R2 only takes a 10% hit, but IIS on Windows Server 2012 R2 takes a 15% hit. The hardest hit are Linux servers with big databases: PostgreSQL on Ubuntu takes a near 30% hit, and on CentOS a 40% hit. I have not seen any reports on how Apache on Linux is affected by this.
 
Database servers are getting hit HARD by this patch, and web servers take a pretty big hit, too. SQL Server 2015 on Windows Server 2012 R2 only takes a 10% hit, but IIS on Windows Server 2012 R2 takes a 15% hit.

So much for my recent SQL server upgrade. Looks like this will take a chunk out of the increased speed.
Luckily the upgrade was just a couple of new CPUs, memory, and some new drives, so as long as I get another year out of it, I'm good.

If I had spent $15,000 on a new server, I'd be upset, since a new server would have to last me several years.
 
Waiting for not only server results, but Ivy Bridge and up results. A LOT of servers out there are running Ivy and Haswell CPUs that are nowhere near EOL.

Yeah, including my VM hosts, which are dual Xeon E5 2620v2 (IB-EP) machines. I have 2 SQL servers and 8 IIS servers that are going to take a hit. We expect to have to increase the number of virtual cores from 1 to 2 on all web servers, but our SQL servers should handle it as is. We have the CPU resources to handle it, but it is going to make future expansion troublesome.
 
Is this a case where, like some diseases, rather than inoculate against the potential threat, you roll the dice and opt to treat the symptoms instead? So instead of running a 24/7 fix, you run something that monitors for the exploits occurring... aka, don't prevent it (so you don't get hit by the speed penalty of the fix) but watch for shenanigans instead, and have an on-deck exploit-shutdown fix ready to go if/when the breach happens?
 
Is this a case where, like some diseases, rather than inoculate against the potential threat, you roll the dice and opt to treat the symptoms instead? So instead of running a 24/7 fix, you run something that monitors for the exploits occurring... aka, don't prevent it (so you don't get hit by the speed penalty of the fix) but watch for shenanigans instead, and have an on-deck exploit-shutdown fix ready to go if/when the breach happens?

Not sure how you'd monitor for reads of memory and timed execution without creating a ton of false positives, since just about every program out there does that.
 
Waiting for not only server results, but Ivy Bridge and up results. A LOT of servers out there are running Ivy and Haswell CPUs that are nowhere near EOL.

This is the truth. We have 76 blades, 1-2 years old, running v3-era CPUs, and we won't be upgrading for at least 3 more years. I don't care so much about desktop performance; in most cases, for me at least, I have the performance overhead to absorb the hit. The servers are another story.
 
I wonder why, according to the supplied benchmark results, the hardest-hit test is the Responsiveness test from SYSmark 2014, at up to 21%. From what I know of that test, it would suggest the fix impacts the quality of interaction with the computer, as in lag and increased wait times for the computer to respond to user input.
 
I want to see Ryzen 7 vs. fixed Intel, personally. HardOCP's definitive Ryzen 7 gaming benchmarks may suddenly not be so definitive (though that article was very well done, I enjoyed it a lot).

Gaming with frametime plots and analysis, specifically. Anything else takes however long it takes for most consumers, but longer frametimes you can feel!
 
I wonder why, according to the supplied benchmark results, the hardest-hit test is the Responsiveness test from SYSmark 2014, at up to 21%. From what I know of that test, it would suggest the fix impacts the quality of interaction with the computer, as in lag and increased wait times for the computer to respond to user input.

User input is usually handled by the kernel and drivers running in kernel space. Since these fixes mostly isolate kernel memory from application memory, every switch between user mode and kernel mode is now more expensive, so there's a lot more context-switching overhead.
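That extra per-crossing cost is easy to see in principle: a syscall-heavy loop pays the page-table swap on every iteration, while a pure user-space loop never enters the kernel at all. A minimal sketch, assuming a POSIX system (the benchmark helper and iteration counts are mine, purely illustrative):

```python
import os
import time

def bench(fn, n=200_000):
    """Return the wall-clock seconds taken to call fn() n times."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

# os.getpid() crosses into the kernel on each call; with KPTI enabled,
# every crossing also swaps page tables and loses TLB entries, so this
# loop slows down after patching while the user-space loop does not.
syscall_time = bench(os.getpid)
userspace_time = bench(lambda: 1 + 1)
print(f"syscall loop: {syscall_time:.3f}s, user-space loop: {userspace_time:.3f}s")
```

Running this before and after applying the OS patch (and microcode) on the same box gives a rough feel for the per-syscall overhead, which is why syscall- and interrupt-heavy workloads like the Responsiveness test suffer most.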
 
Is this a case where, like some diseases, rather than inoculate against the potential threat, you roll the dice and opt to treat the symptoms instead? So instead of running a 24/7 fix, you run something that monitors for the exploits occurring... aka, don't prevent it (so you don't get hit by the speed penalty of the fix) but watch for shenanigans instead, and have an on-deck exploit-shutdown fix ready to go if/when the breach happens?
Monitoring for how it is used is potentially a much bigger performance hit than disabling the entire feature.
 
A CPU hardware operation is being replaced by OS software to work around a design defect: more wait states, etc. A 10% hit sounds very optimistic.
 
Yeah, including my VM hosts, which are dual Xeon E5 2620v2 (IB-EP) machines. I have 2 SQL servers and 8 IIS servers that are going to take a hit. We expect to have to increase the number of virtual cores from 1 to 2 on all web servers, but our SQL servers should handle it as is. We have the CPU resources to handle it, but it is going to make future expansion troublesome.

This is the truth. We have 76 blades, 1-2 years old, running v3-era CPUs, and we won't be upgrading for at least 3 more years. I don't care so much about desktop performance; in most cases, for me at least, I have the performance overhead to absorb the hit. The servers are another story.

All of our hosts are Cisco B200-M3's with E5-2650 v2's in them. 10 of them to be precise. Not very old. 90% of our workload is SQL. We've not implemented the patches just yet until more data is available. If we take a 15% hit across the board we'll be looking at adding another host or two to spread the workload out.
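Some back-of-the-envelope math for that "another host or two" estimate, assuming a uniform per-host throughput loss and a target utilization ceiling (the helper function and the example numbers are mine, not from the thread):

```python
import math

def extra_hosts_needed(hosts, utilization, perf_hit, max_util=0.7):
    """How many hosts to add so post-patch load stays under max_util.

    hosts:       current host count
    utilization: current average busy fraction per host (0..1)
    perf_hit:    fractional throughput loss from the patches (0..1)
    max_util:    utilization ceiling you want to stay under
    """
    demand = hosts * utilization           # total work, in pre-patch host units
    per_host = (1 - perf_hit) * max_util   # usable capacity of one patched host
    return max(0, math.ceil(demand / per_host) - hosts)

# 10 hosts running 70% busy, a 15% hit, keep each host under 70% busy:
print(extra_hosts_needed(10, 0.70, 0.15))  # prints 2
```

So under those illustrative assumptions a 15% hit on a 10-host cluster does land right at "adding another host or two"; headroom in the current utilization is what decides which.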
 
I am amused how some operations on my 2-yr-old platform (i7-6700K+NVMe SSD (+GTX1080+GTX1060), Windows 10) actually got FASTER: SysMark data/financial analysis, and 3DMark Sky Diver. It's in the +/- 3% noise, though.

So Office productivity and web applications will run at 90% of their previous speed. I don't think I'll notice Word being slower, and I don't use web applications.
All in all, the flaw and its patches do not give me even the slightest reason to change my hardware, as I expected.

And there's the problem for the PC industry, BTW: I have a two-year old machine, and no reason to upgrade it in the foreseeable future.
 
I consider 10% to be significant. For the average user, a blip. For power users who push their systems and even overclock them, this is a big deal.
 
I consider 10% to be significant. For the average user, a blip. For power users who push their systems and even overclock them, this is a big deal.

I think it's more about the $$$. It's pretty pricey to go from a same-gen i5/i7 to the same-gen i5/i7 with a 10% higher clock speed.
 
All of our hosts are Cisco B200-M3's with E5-2650 v2's in them. 10 of them to be precise. Not very old. 90% of our workload is SQL. We've not implemented the patches just yet until more data is available. If we take a 15% hit across the board we'll be looking at adding another host or two to spread the workload out.

I sympathize. You have to convince people up the chain of command that you need to spend a substantial amount of money over this issue, which is never an easy thing.

I also don't get to know what it is like to have such a big cluster, and am a little envious. I wish I had that money to spend. My predecessor made the decision to make our VM hosts a cluster, but they don't have central storage, only local raid arrays. He also capped out the memory with 4GB DIMMs, so that in order to increase memory, we'd have to replace all the existing DIMMs. He got fired because of many things, and these were not the least of them. (I could go on and on, but in short, I inherited a very troubled network, and do not have much budget to improve it.)
 
Wasn't there a news blurb in the last day or two stating that Skylake and newer weren't affected to nearly the same level as the older silicon? If that's true, then what we are seeing is simply Intel marketing showing best-case scenarios in their breakdown. Not saying it's the end of the world even for the older systems; just that I'm reserving judgment till we see what the server-based v2 Xeons' performance looks like in practice.
 
I sympathize. You have to convince people up the chain of command that you need to spend a substantial amount of money over this issue, which is never an easy thing.

I also don't get to know what it is like to have such a big cluster, and am a little envious. I wish I had that money to spend. My predecessor made the decision to make our VM hosts a cluster, but they don't have central storage, only local raid arrays. He also capped out the memory with 4GB DIMMs, so that in order to increase memory, we'd have to replace all the existing DIMMs. He got fired because of many things, and these were not the least of them. (I could go on and on, but in short, I inherited a very troubled network, and do not have much budget to improve it.)

It is a sad fact, but the bean counters at the top, along with the board executives, never see IT as an investment that keeps the business going in a modern-day environment. They only view it as an expense, so they are constantly looking for ways to cut IT costs.

A good IT engineer or specialist can be expensive. But if they are competent and do a good ROI estimate, the optimizations they make can more than pay for themselves over time. I saved the company more than my pay costs per year, for every foreseeable year, on product development... and have yet to receive any significant recognition 2 years later. *sighs*
 
Does anyone know if they tested only with the KB, or also with the BIOS update?
We keep seeing benchmarks without the BIOS fix...
 
I sympathize. You have to convince people up the chain of command that you need to spend a substantial amount of money over this issue, which is never an easy thing.

I also don't get to know what it is like to have such a big cluster, and am a little envious. I wish I had that money to spend. My predecessor made the decision to make our VM hosts a cluster, but they don't have central storage, only local raid arrays. He also capped out the memory with 4GB DIMMs, so that in order to increase memory, we'd have to replace all the existing DIMMs. He got fired because of many things, and these were not the least of them. (I could go on and on, but in short, I inherited a very troubled network, and do not have much budget to improve it.)

Yeah, spending money on upgrades is a tough sell, especially when our environment is about 2 years old.

What is the point of your cluster if you're not using a SAN? I suppose from a management standpoint it might be a little easier, but you're missing 95% of the benefits which is HA. Document everything in detail. Every. Last. Thing. Make your higher ups aware via email. When something fails your ass will be covered, at least. lol
 
I can't see how the 3DMark tests are worth anything. The system configs they state don't include any discrete GPUs - so all benchmarks are done using the integrated GPU?!?!?!

You think that might have been a slight bottleneck???
 
Does anyone know if they tested only with the KB, or also with the BIOS update?
We keep seeing benchmarks without the BIOS fix...
Excellent point. You'd think they want to make it very clear. I'm holding off any BIOS update on my personal machines, and the Windows fix stays blocked for the next 30 days (as long as Win10 lets me, basically.) Waiting for more data.
 
Excellent point. You'd think they want to make it very clear. I'm holding off any BIOS update on my personal machines, and the Windows fix stays blocked for the next 30 days (as long as Win10 lets me, basically.) Waiting for more data.
Get the VMware microcode update driver (and the latest microcode update from Intel), install it, and safely test the impact of the update without flashing a new BIOS file. You can stop/remove it at any time :)
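Whichever delivery path you use (BIOS flash or OS-loaded update), on Linux you can at least confirm which microcode revision actually loaded by checking `/proc/cpuinfo`. A minimal sketch (the parsing helper and the commented usage are mine, purely illustrative):

```python
def microcode_revision(cpuinfo_text):
    """Return the microcode revision from /proc/cpuinfo text, or None.

    Lines in /proc/cpuinfo look like: "microcode\t: 0xc2"
    """
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("microcode"):
            return line.split(":", 1)[1].strip()
    return None

# On a real Linux box (x86 only; the field is absent on other arches):
# with open("/proc/cpuinfo") as f:
#     print(microcode_revision(f.read()))
```

Comparing the revision before and after applying an update is a quick sanity check that the new microcode is actually in effect, and benchmark writeups really should report this value.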
 
It is a sad fact, but the bean counters at the top, along with the board executives, never see IT as an investment that keeps the business going in a modern-day environment. They only view it as an expense, so they are constantly looking for ways to cut IT costs.

A good IT engineer or specialist can be expensive. But if they are competent and do a good ROI estimate, the optimizations they make can more than pay for themselves over time. I saved the company more than my pay costs per year, for every foreseeable year, on product development... and have yet to receive any significant recognition 2 years later. *sighs*

I know how that goes. At my last company, I went through the lab and got rid of all the inoperable and outdated equipment: 17 pallets and 14,000 lbs worth. It saved the company more than $75,000 per year in county property taxes. (The county doesn't consider any asset completely worthless, so several HP PA-RISC Unix servers were still listed on the books as being worth over $50,000 each, even though they were over 15 years old and not even operating. It was even worse with the old Sun test units. We were being taxed at 4% of their assessed value annually.)

Then I reorganized the entire lab to make room for over 120 extra rack units, eliminating the need to spend on another lab. Then I reorganized the network and DNS so that they could be used company-wide instead of just locally. (They had 17 DNS domains around the company when I started, and most didn't work site to site. I created one contiguous DNS domain, a child domain of the production one, available across the company.) After that, I revamped the FC interconnect system so that all the servers and libraries fed into a single patch panel with all the switches in one place, saving tons of time the testers had been spending manually moving the test servers and switches close to the libraries. (I had intended to make the testers' lives easier so they could do more, and the company decided it was reason to eliminate testers.)

And yet, in 4.5 years employed with them, they gave me a single 1% raise, not even a cost-of-living adjustment, 2 years in. I was essentially making $11 per week more when I left in 2016 than when I started in 2011.

My new company has a policy of never doing annual raises unless the person has taken on new responsibilities or has proven to go "above and beyond" their job.

Yeah, I am looking for a new job, again. This is just to get me by until I can get to a decent company. I hope this year sees a lot of job creation with companies that are actually worth working for.
 
All of our hosts are Cisco B200-M3's with E5-2650 v2's in them. 10 of them to be precise. Not very old. 90% of our workload is SQL. We've not implemented the patches just yet until more data is available. If we take a 15% hit across the board we'll be looking at adding another host or two to spread the workload out.

Yea... our DC runs 100% on 9 Cisco B200 M4's with E5-2699 v3's. Most of our workload is hosting a virtual shared desktop and published apps, all via Citrix, an electronic medical record backed by MySQL, and of course the SQL heavy lifting done by our financial department. We're not even at 100% CPU provisioning yet, but I really hope we don't take too much of a hit.

EDIT: Something I really wonder about... our Pure SAN and our Rubrik NAS are where all the I/O lives, and they're basically just Intel servers with a very specific job. I hate to think what's going to happen if we lose up to 10-30% of our I/O.
 
I'm just curious whether Intel knew about the security risk all this time but kept it quiet to keep a greater edge over AMD.
 
Wish they had tested everything back to Sandy Bridge for client workloads. (My desktop is still on Sandy Bridge-E.)

My KVM/LXC server is still on Westmere-EP, wish I knew a little more about the server load impact there...
 
I consider 10% to be significant. For the average user, a blip. For power users who push their systems and even overclock them, this is a big deal.
All a matter of perspective. We bitched about a measly 5-10% increase from generation to generation and now we are bitching about a 10% loss from the patch.
 
Wish they had tested everything back to Sandy Bridge

My storage server is a Core i5 2500K retired from my older sister's old machine. I have every generation from Sandy Bridge to Broadwell represented in my systems. I know what you mean.
 
I'm just curious whether Intel knew about the security risk all this time but kept it quiet to keep a greater edge over AMD.

I think there's been a concern that speculative execution could be exploitable for quite some time now, but no one had been able to craft an attack that was practical (i.e., actually doable in a reasonable amount of time). It's pretty similar to the SHA-1 vulnerability that was found a few years ago. We knew it was vulnerable but didn't know how until someone did it. So there wasn't a big push to replace it until a practical exploit became known, and then everyone rushed to switch to SHA-256 and higher.
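For the curious, the core Spectre v1 ("bounds check bypass") gadget really is tiny. This sketch only shows the *shape* of the pattern the published C attacks abuse; Python is far too abstracted for the speculative leak to actually work here, and all names are mine:

```python
PROBE_STRIDE = 512  # in the real C attacks: one cache line plus padding

def gadget(idx, public_array, probe):
    """Architecturally safe bounds-checked read, the Spectre v1 shape."""
    if idx < len(public_array):
        secret = public_array[idx]
        # In the real attack, the CPU executes these two loads
        # *speculatively* with a mistrained branch predictor and an
        # out-of-bounds idx. The mis-speculation is rolled back, but
        # which probe cache line got touched still reveals `secret`
        # to an attacker who times accesses to `probe` afterward.
        return probe[secret * PROBE_STRIDE]
    return None
```

The point of the example is why this took so long to weaponize: nothing here is a bug in the source code. The leak lives entirely in microarchitectural side effects (cache state) that the architectural rollback does not undo, which is also why the fixes have to come from microcode and the OS rather than from application patches.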
 
All a matter of perspective. We bitched about a measly 5-10% increase from generation to generation and now we are bitching about a 10% loss from the patch.

That's because Intel was asking us to fork out a lot of dough for meager increases in performance. If I could buy a whole new system for $100 for a 10% increase, sure, I would be all over that. Another $2,000 for a top-of-the-line system... forget it.

Now we have to fork out a lot of dough whether we want to or not, if our servers are not up to snuff. As an IT admin, you may not have a choice.

How's that perspective?
 
Were you forced to upgrade every time Intel put out a new CPU?

No, but Intel being Intel touted it as the best thing since sliced bread. And as enthusiasts, we wanted more than Intel could give us to justify our spending.

Back in the old days, upgrade cycles were commonly every couple of years at most. Today most people would be lucky to notice a difference between Sandy Bridge and the newest microarchitecture, and retail sales prove it. Intel is a victim of its own success.

But if we didn't give Intel the evil eye, that would be the same thing as passive acceptance, wouldn't it?
 
'Intel touted'

Do you expect Intel to produce new products and not market them? Do you believe all marketing when confronted with it?

:)
 
No, but Intel being Intel touted it as the best thing since sliced bread. And as enthusiasts, we wanted more than Intel could give us to justify our spending.

Back in the old days, upgrade cycles were commonly every couple of years at most. Today most people would be lucky to notice a difference between Sandy Bridge and the newest microarchitecture, and retail sales prove it. Intel is a victim of its own success.

But if we didn't give Intel the evil eye, that would be the same thing as passive acceptance, wouldn't it?


Yeah, back in the old days, when a few new CPUs would come out, like the release of the P2 333, 350, and 400, you saw a 40-60% increase in performance, a 50% decrease in power usage, and a price decrease of nearly a third; or like the release of the 486, which saw a 30-40% increase in performance at the same clock speed, at the same price point the 386 held a year before. A whole new generation would actually mean huge performance gains. Those days have obviously ended.
 