CPU Usage Differences after Applying Meltdown Patch at Epic Games

The idea is that people say the typical consumer has nothing to worry about here, but the typical consumer buys i3s and Pentiums. Meanwhile, the gaming benchmarks I've seen are all on 6-core/12-thread parts. Bring out the Core i3s and Pentiums.

Gotcha! But because they aren't running I/O-heavy tasks, the "average user" probably won't be affected.
 
Sitting pretty over here with my 1600.

What, if any, recourse here do enterprise clients and consumers have with Intel? I would think if it can be shown that Intel knew about this for 6 months (or longer) at least those people or businesses that had purchased Intel in that time would be due compensation?
 
Too small a sample to be sure, but the valley and peak shown on the far right suggest it's scarier than just a 2x performance hit. It looks like a non-linear impact, meaning as the load increases, the performance hit increases. We go from about an 18% difference in the valley to about a 36% difference at the peak.

Looks relatively linear. Look at the data points: 10 vs 27 (2.7x hit), 22 vs 58 (2.6x hit)
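For what it's worth, the ratios from those two quoted data points can be sanity-checked in a couple of lines (numbers taken from the post above, as pre-patch vs. post-patch CPU usage):

```python
# CPU-usage data points quoted above: (pre-patch %, post-patch %)
points = [(10, 27), (22, 58)]  # valley, peak

for pre, post in points:
    ratio = post / pre
    print(f"{pre}% -> {post}%: {ratio:.2f}x hit")
# Roughly 2.7x in the valley and 2.6x at the peak: close to a
# constant multiplier, i.e. relatively linear scaling with load.
```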
 
This looks bad for the entire server world. Next week is going to be hell.

Why next week? Amazon already applied patches last week, and many servers are already seeing some slowdown. My servers took a ~1-2% hit. Not much, luckily.
 
Private researchers just haven't discovered the analogous AMD vulnerability yet. General purpose computing devices without backdoors are not going to be allowed.



Funny how it is always a wild "conspiracy theory" until later proven true, at which point it somehow magically morphs into "no big deal". That which yesterday was so outlandish as to discredit anyone espousing it is somehow tomorrow not even newsworthy when demonstrated as fact...most curious.


This is a bit different than string theory or plate tectonics, or for that matter anything to do with natural law.

I am curious to understand, do you have any examples of this happening I can review?
 
I am curious to understand, do you have any examples of this happening I can review?

Lol! How far back do you want to go? How much time do you have?

Let's begin with the Gulf of Tonkin incident. If you get through that, you can start in on the Church Committee reports detailing COINTELPRO and MKUltra.

I bring up these, rather old, examples to demonstrate one of the primary mechanisms at work here. By their very nature these initiatives are "secret" and as such most are only convinced about them later, by which time they are "old news" and everything just goes on as if nothing had happened.

If, 10 years ago, I had posited that a 3 letter agency was gathering metadata on every cellphone I would have been (and was) branded a kook...but here we are.

You are, of course, welcome to keep your head in the sand, but the Vault7 dump is out there.
 
This is going to hit EVERY datacenter hard. Disaster recovery plans? Wipe your ass with them. Think you have the resources to lose a VM host? You did, but not now.

You have to understand that most of these systems are virtual. You'll take a hit at the host level (BIOS/microcode), at the hypervisor level (ESXi / Hyper-V), and then again at the VM OS level.

This is going to cost billions of dollars in infrastructure upgrades.


Sounds like Intel is going to make even more money then!
 
I've heard claims that AMD CPUs are affected. The only reason AMD keeps saying their CPUs are not affected is future lawsuits. Deny it enough and people look the other way or keep rolling with Intel.
 
I've heard claims that AMD CPUs are affected. The only reason AMD keeps saying their CPUs are not affected is future lawsuits. Deny it enough and people look the other way or keep rolling with Intel.
But do AMD CPUs lose performance? That's part of the lawsuit. If it was patched without any loss in performance, then it's fine. Intel can't make that claim.
 
People need to wait until the patching and fixes start winding down, way too much stuff coming out all at once that needs to be settled before benchmarks start showing conclusive info about performance hits.
 
I think there's a misunderstanding here about who and what is affected by this flaw/patch. Applications that make a lot of calls into the kernel are where the biggest impact is going to be. Typical home users (including us gamers) are not going to see a huge impact because the applications we run don't make that many calls into kernel address space.
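As a rough illustration of why syscall-heavy code is hit hardest: the patch adds overhead on every user/kernel transition, so a loop that enters the kernel on each iteration pays it constantly, while a pure compute loop pays it almost never. A toy microbenchmark sketch (not a rigorous benchmark; timings will vary by machine and kernel):

```python
import os
import time

def time_loop(fn, n=100_000):
    """Time n calls of fn and return elapsed seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

# Syscall-heavy: os.stat() enters the kernel on every call, so each
# iteration pays the user/kernel transition cost (which is where the
# Meltdown/KPTI patch adds its overhead).
syscall_heavy = time_loop(lambda: os.stat("."))

# Compute-only: stays in user space, so the patch barely matters.
compute_only = time_loop(lambda: sum(range(50)))

print(f"syscall-heavy loop: {syscall_heavy:.3f}s")
print(f"compute-only loop:  {compute_only:.3f}s")
```

Comparing the two loops before and after applying the patch is a crude way to see the asymmetry the post describes.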
 
Home users will be fine, commercial users are fucked by what I can see so far.
 
Home users will be fine, commercial users are fucked by what I can see so far.
Depends on the business' workload. Some applications will be affected more than others. I'm not from the coding side of IT so I could be wrong but my bet is on DB applications being hit the hardest.
 
And Fortnite loses more fps due to lack of sli scaling than KB4056892.
 
My virus scan times have shot through the roof. I'm looking at kernel time in the Win 10 performance monitor and it looks obscene on Ivy Bridge.
 
what about DCers? we run our boxen 100% 24-7
DC as in domain controller? I imagine you are going to see a hit, AD is a database after all. You guys need to resize your environment regardless...100% utilization 24/7 is terribad.
 
DC as in domain controller? I imagine you are going to see a hit, AD is a database after all. You guys need to resize your environment regardless...100% utilization 24/7 is terribad.

Distributed computing. Folding@home and such.
 

I wonder if Ubisoft is suffering a problem related to Meltdown and Spectre? Their servers went belly up a few hours ago. (No evidence, just idle speculation)
 
I'm no expert, but reading through the Epic post, there was a decent explanation here:
https://www.epicgames.com/fortnite/...services-stability-update?p=132713#post132713

In my caveman layman understanding, it sounds like typical day-to-day data isn't really going to be affected at all. If you run an encrypted connection (SSH tunnel, VPN, etc.), you may see some impact. The Epic servers are seeing a large impact because a server runs many encrypted streams simultaneously.

That explanation is about as wrong as can be, FYI. Neither Spectre nor Meltdown has anything to do with encryption.
 
But do AMD cpus lose performance? That's part of the lawsuit. If it was patched without any loss in performance then it's fine. Intel can't make that claim.

AMD is apparently unaffected by the particular issue involved in Meltdown but is affected by the issues presented by Spectre.
 
lol my bad, was using the wrong set of acronyms. I guess you guys may see a small hit but nothing substantial I imagine.

I should have been more specific, this space is acronym heavy like no other
 
Should be effectively zero impact. Almost entirely CPU bound with little to no I/O.
My only disagreement here is that we are being told there is a performance hit somewhere in the area of 5-30%. Consumers shouldn't see it because they don't use their systems anywhere near full capacity. In this guy's case running it 100% full bore, he is going to see an impact. The OS is going to be slower and on those "occasional" kernel calls it is going to be slower. I could be wrong but my gut says even a purely compute load like folding is going to feel it.
 
Depends on the business' workload. Some applications will be affected more than others. I'm not from the coding side of IT so I could be wrong but my bet is on DB applications being hit the hardest.

If this impacts SQL significantly it will cause me pain. I upgraded our SQL server last year (doubled the CPU & memory). I'd hate to have to upgrade again.
 
AMD is apparently unaffected by the particular issue involved in Meltdown but is affected by the issues presented by Spectre.
So far nobody can fix Spectre on any CPU, and that does include AMD. But only Intel has to deal with Meltdown. Meltdown gives hackers the ability to read kernel memory, which is nastier than plain Spectre. Spectre generally can't be fixed without replacing the CPU entirely, and even then it seems future CPUs will have to be totally redesigned. Meltdown is Spectre but a very nasty version of it. So nasty that Intel's fix sacrifices a good 20% of the performance by disabling branch prediction, which is literally a feature on the CPU that exists to increase performance.

Here are the 3 exploits affecting CPUs. Intel is affected by all 3, and variant #3 is Meltdown. AMD doesn't have #3 at all, but does have #1, which may already be fixed. Variant #2 has a near-zero chance of affecting AMD, so it doesn't apply to their CPUs. Variant #1's only fix is to make the OS more secure. Variant #3 can be exploited through JavaScript in the web browser. You can see why Intel is downright fucked right now.

  • CVE-2017-5753 (variant #1/Spectre) is a Bounds-checking exploit during branching. This issue is fixed with a kernel patch. Variant #1 protection is always enabled; it is not possible to disable the patches. Red Hat’s performance testing for variant #1 did not show any measurable impact.

  • CVE-2017-5715 (variant #2/Spectre) is an indirect branching poisoning attack that can lead to data leakage. This attack allows for a virtualized guest to read memory from the host system. This issue is corrected with microcode, along with kernel and virtualization updates to both guest and host virtualization software. This vulnerability requires both updated microcode and kernel patches. Variant #2 behavior is controlled by the ibrs and ibpb tunables (noibrs/ibrs_enabled and noibpb/ibpb_enabled), which work in conjunction with the microcode.

  • CVE-2017-5754 (variant #3/Meltdown) is an exploit that uses speculative cache loading to allow a local attacker to be able to read the contents of memory. This issue is corrected with kernel patches. Variant #3 behavior is controlled by the pti tunable (nopti/pti_enabled).
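Since variant #2 and #3 behavior is controlled by those boot-time tunables, here is a small sketch of how you might check which mitigations a given kernel command line leaves enabled. `mitigation_state` is a hypothetical helper for illustration, not an official tool; on a live Linux box you'd read the real string from `/proc/cmdline`, and newer kernels also report status under `/sys/devices/system/cpu/vulnerabilities/`:

```python
def mitigation_state(cmdline: str) -> dict:
    """Report whether the mitigations named above are enabled for a
    given kernel command line (illustrative helper).  'nopti' disables
    KPTI (variant #3); 'noibrs' / 'noibpb' disable the variant #2
    microcode-based controls."""
    tokens = cmdline.split()
    return {
        "pti":  "nopti" not in tokens,   # Meltdown / variant #3
        "ibrs": "noibrs" not in tokens,  # Spectre / variant #2
        "ibpb": "noibpb" not in tokens,  # Spectre / variant #2
    }

# Example: a box booted with KPTI turned off for benchmarking
print(mitigation_state("ro quiet nopti"))
# -> {'pti': False, 'ibrs': True, 'ibpb': True}
```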

 
My only disagreement here is that we are being told there is a performance hit somewhere in the area of 5-30%. Consumers shouldn't see it because they don't use their systems anywhere near full capacity. In this guy's case running it 100% full bore, he is going to see an impact. The OS is going to be slower and on those "occasional" kernel calls it is going to be slower. I could be wrong but my gut says even a purely compute load like folding is going to feel it.

The 5-30% is on server workloads that do significant I/O. Things like folding basically do no I/O and won't really feel it at all.
 
This is a bit different then string theory or plate tectonics or for that matter anything doing with natural law.

I am curious to understand, do you have any examples of this happening I can review?

Watch the now twenty-year-old movie Enemy of the State starring Gene Hackman and Will Smith (the NSA also has a starring role).
 