CPU Usage Differences after Applying Meltdown Patch at Epic Games

A while back Google Drive stopped rendering web pages, prompting you instead to download a copy and run it locally in your browser. So Google has obviously known for a while that JavaScript is a security risk for their servers.

Intel & Google were keeping this private. But when others found it, it was no longer possible to keep it a dirty little secret. How come some German students can test Intel CPUs better than Intel?

Then Intel PR tried to obfuscate and confuse the public by claiming that it's AMD's problem as well, and then said, "oh we were just about ready to announce it ... really, we just needed time to dump our Intel stock."
 
Epic's figures tie in with what I'm seeing on my VM Server, except my figures are even worse (closer to 50% performance loss).

If I don't replace the whole box, the only real option I have is to move from a SATA SSD based ZFS cache to an NVMe one, but even then the Ethernet access on the box will still hurt plenty...

Looks like I need the NVMe based ZFS cache plus a real TOE Ethernet adapter to work around this shit; the trouble is I only have one spare PCIe slot :(
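For anyone wondering why an I/O-heavy box takes it this badly: the KPTI workaround makes every transition into the kernel more expensive, and a VM host pushing ZFS cache traffic and Ethernet packets makes those transitions constantly. A rough sketch of a per-syscall timing loop you could run with and without the patch (illustrative only, not a benchmark from this thread; on Linux you can also compare the default against booting with pti=off):

/* syscall_bench.c - rough per-syscall overhead probe (illustrative only).
   Build: gcc -O2 syscall_bench.c -o syscall_bench */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 5000000;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {
        /* Raw syscall so glibc/vDSO can't answer from user space;
           every iteration is a full user->kernel->user round trip. */
        syscall(SYS_getpid);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per syscall\n", ns / iters);
    return 0;
}

Multiply whatever delta shows up by the millions of kernel calls per second a busy VM/storage/NIC box makes, and figures in the 20-50% range stop looking surprising.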
 
Time to snag some AMD stock.

I'm awaiting PoC code to test my old Magny-Cours server.
 
If this impacts SQL significantly it will cause me pain. I upgraded our SQL server last year (doubled the CPU & memory). I'd hate to have to upgrade again.

Virtualize the box. CPU and memory upgrades become relatively trivial afterward.
 
So.... Think Meltdown will make dickhead publishers and/or lazy devs program their software to use more than 1-4 fucking cores?...



NAHHHHHHHHHHHHHHH.
 
So.... Think Meltdown will make dickhead publishers and/or lazy devs program their software to use more than 1-4 fucking cores?...



NAHHHHHHHHHHHHHHH.
Because coding thread-safe code is oh so easy... We have already seen the fsck-up that can occur, as Meltdown & Spectre are chip-level multi-threaded memory management bugs.
 
Ahh, the excuses. Like the roofer making excuses for why the new roof is leaking, or the painters claiming they are losing money painting your building, not realizing you know the neighbors are suing them for damaging their lawn.

If it's too hard, find a new career.
 
So.... Think Meltdown will make dickhead publishers and/or lazy devs program their software to use more than 1-4 fucking cores?...



NAHHHHHHHHHHHHHHH.
This has nothing to do with the Meltdown/Spectre vulnerabilities.
 


Easy to crow about the sky falling in, but GamersNexus tested and showed absolutely no performance degradation outside of margin of error, for consumer tasks.
 


Easy to crow about the sky falling in, but GamersNexus tested and showed absolutely no performance degradation outside of margin of error, for consumer tasks.

Least affected use cases showed the least change? Shocked. Have they tested multiplayer games? AV scans? Network tests? Have they tested anything with more I/O and kernel calls?
 
People will have their misery, doom and gloom and it will not be stopped!
 
People will have their misery, doom and gloom and it will not be stopped!

:)
In all seriousness, this news story has provided a great litmus test for separating those who wait to react from those who revel in knee-jerk reactions.
Redditors running around like Chicken Little - no surprise. Seeing the same reaction from many on this forum was certainly more disappointing.

But it wasn't everyone - I guess that's the good news.
 
My only question is: do we use KB4056892 and KB4056894 now, or wait till Tuesday for the actual rollout?
 
Least affected use cases showed the least change? Shocked. Have they tested multiplayer games? AV scans? Network tests? Have they tested anything with more I/O and kernel calls?
Why don't you watch the video and find out, instead of trying to fearmonger while also admitting you literally don't know the content of the video?
 
Why don't you watch the video and find out, instead of trying to fearmonger while also admitting you literally don't know the content of the video?
It's a useless test in a useless format. The only thing it did, besides wasting 9 minutes of my life, is show that they have no idea what the problem is about and where the potential slowdowns are.

If they had run these tests to check, but explained what the issue is and where the potential pitfalls are, it would be OK (still a useless format, though), but to run this and conclude there is no impact is lazy and disinformative.
 
It's a useless test in a useless format. The only thing it did, besides wasting 9 minutes of my life, is show that they have no idea what the problem is about and where the potential slowdowns are.

If they had run these tests to check, but explained what the issue is and where the potential pitfalls are, it would be OK (still a useless format, though), but to run this and conclude there is no impact is lazy and disinformative.
I feel like you have somehow missed the explanation that the video is pointless because they expected these results given the workloads being tested, and his own explanation that they did it because their test suite is automated and they may as well have done, since they were clearly going to get questions about what the patch meant for users.

Spoiler: For most users, absolutely nothing, just like we were told before the patch even hit.

Purpose of video: Prevent misinformation spreading (Like this thread) where people hear about a performance hit that doesn't apply to them, then they go screaming around the internet claiming everyone suddenly has a glorified 486 in their computer and go throw it out to replace it with AMD Ryzen.

But by all means, carry on screaming around the internet claiming everyone suddenly has a glorified 486 in their computer and go throw it out to replace it with AMD Ryzen.
 
I feel like you have somehow missed the explanation that the video is pointless because they expected these results given the workloads being tested, and his own explanation that they did it because their test suite is automated and they may as well have done, since they were clearly going to get questions about what the patch meant for users.

Spoiler: For most users, absolutely nothing, just like we were told before the patch even hit.

Purpose of video: Prevent misinformation spreading (Like this thread) where people hear about a performance hit that doesn't apply to them, then they go screaming around the internet claiming everyone suddenly has a glorified 486 in their computer and go throw it out to replace it with AMD Ryzen.

But by all means, carry on screaming around the internet claiming everyone suddenly has a glorified 486 in their computer and go throw it out to replace it with AMD Ryzen.
No, they claimed there is no performance impact for normal users while testing pure CPU and GPU oriented benchmarks which are known to be the least affected. Not one use case was tested that had the most potential to be impacted.

"From the perspective of advancing knowledge and building a baseline for the next round of tests – those which will, unlike today’s, factor-in microcode patches – we must eventually run the tests being run today. This will give us a baseline for performance, and will grant us two critical opportunities: (1) We may benchmark baseline, per-Windows-patch performance, and (2) we can benchmark post-patch performance, pre-microcode. Both will allow us to see the isolated impact from Intel’s firmware update versus Microsoft’s software update. This is important, and alone makes the endeavor worthwhile – particularly because our CPU suite is automated, anyway, so no big time loss, despite CES looming."

They've established a baseline for the things least likely to be affected. Where is the heavy multiplayer game testing? AV runs? NAS performance? More disk and network oriented use cases?
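To be concrete about the kind of test being asked for here: an AV scan or NAS-style workload is basically a storm of open/read/stat kernel calls, which is exactly what the patch taxes and exactly what a GPU-bound game benchmark never exercises. A crude, illustrative stand-in for that class of workload (not something GamersNexus ran; the 4 KiB buffer and directory argument are arbitrary):

/* scanwalk.c - open/read/stat heavy directory walk, a crude stand-in for
   an AV scan or NAS file-serving workload (illustrative only).
   Build: gcc -O2 scanwalk.c -o scanwalk   Run: time ./scanwalk /some/dir */
#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <ftw.h>
#include <stdio.h>
#include <unistd.h>

static long files, bytes;

static int visit(const char *path, const struct stat *sb, int type, struct FTW *ftw)
{
    (void)sb; (void)ftw;
    if (type == FTW_F) {                  /* regular files only */
        char buf[4096];
        int fd = open(path, O_RDONLY);
        if (fd >= 0) {
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0)
                bytes += n;               /* each 4 KiB read is a kernel call */
            close(fd);
        }
        files++;
    }
    return 0;
}

int main(int argc, char **argv)
{
    const char *root = (argc > 1) ? argv[1] : ".";
    nftw(root, visit, 64, FTW_PHYS);
    printf("scanned %ld files, %ld bytes\n", files, bytes);
    return 0;
}

Run it twice so the second pass comes out of the page cache (taking the disk itself out of the picture), then compare pre-patch and post-patch times; that isolates the per-syscall overhead the CPU/GPU benchmarks never touch.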
 
It does in the sense of getting application performance back. Parallelization may be the only way forward at this point.
That's kind of a silly assumption; the pre-execution engine on processors isn't a make-or-break CPU design element that stops progress on increasing CPU performance. Nor does this flaw mean that pre-execution is a bad concept that cannot still be utilized; it only means that the current design is flawed and needs to be refined. As I said previously, I'm no coder, so I don't know the work involved, but is the time invested worth the performance gain? I'm betting most companies will say no. If Intel hasn't identified an easy architecture fix for their next CPU release, they'll have something on the release after that.
 
Virtualize the box. CPU and memory upgrades become relatively trivial afterward.

It already IS virtualized.
Just don't like telling the boss we need to spend $20K due to a CPU bug.
 
Because coding thread-safe code is oh so easy... We have already seen the fsck-up that can occur, as Meltdown & Spectre are chip-level multi-threaded memory management bugs.

No, they are not chip bugs. All chips are working as designed and as specified. If anything, they are OS-level bugs, as the OSes were relying on functionality that wasn't specified and didn't always exist.
 
It's a useless test in a useless format. The only thing it did, besides wasting 9 minutes of my life, is show that they have no idea what the problem is about and where the potential slowdowns are.

If they had run these tests to check, but explained what the issue is and where the potential pitfalls are, it would be OK (still a useless format, though), but to run this and conclude there is no impact is lazy and disinformative.

You should see them slap an AIO onto a Titan V; it falls into the same category of useless.
 
No, they are not chip bugs. All chips are working as designed and as specified. If anything, they are OS-level bugs, as the OSes were relying on functionality that wasn't specified and didn't always exist.

It is not an OS bug.

Hardware bug or hardware design flaw is an accurate statement.

Ya done messed up A-A-Ron (Sorry, couldn't resist)

You are both correct, actually. The hardware contains a design flaw based on a flawed implementation of a pre-execution engine. The chip is executing that flawed design exactly as intended. I'd like you all to take note: all the patching that is being done doesn't actually remove the flaw completely, it just makes exploitation more difficult. The only actual, complete fix for this is replacement hardware... which doesn't yet exist from Intel.

Oh, I'd also like to note that the OS isn't fully aware this is going on. The pre-execution engine operates below the OS level and is a function of the chip.
 
It is not an OS bug.

Hardware bug or hardware design flaw is an accurate statement.

Historically, side channel issues exposed via software are considered software bugs. This is the way that it has always worked wrt crypto side channel attacks (and security in general), fyi. The hardware in all cases here is working as specified across all the different vendors and ISAs.
 
No, they claimed there is no performance impact for normal users while testing pure CPU and GPU oriented benchmarks which are known to be the least affected. Not one use case was tested that had the most potential to be impacted.

"From the perspective of advancing knowledge and building a baseline for the next round of tests – those which will, unlike today’s, factor-in microcode patches – we must eventually run the tests being run today. This will give us a baseline for performance, and will grant us two critical opportunities: (1) We may benchmark baseline, per-Windows-patch performance, and (2) we can benchmark post-patch performance, pre-microcode. Both will allow us to see the isolated impact from Intel’s firmware update versus Microsoft’s software update. This is important, and alone makes the endeavor worthwhile – particularly because our CPU suite is automated, anyway, so no big time loss, despite CES looming."

They've established a baseline for the things least likely to be affected. Where is the heavy multiplayer game testing? AV runs? NAS performance? More disk and network oriented use cases?
zzzzZZZZzzzzzz
 
A decade of a known Intel hardware design flaw; thanks, Intel. Average home users will be affected, but only by a few percent. Servers will be greatly impacted, and your online experience will drop in quality once millions of servers are updated. Online gaming, cloud, shopping, VPNs, universities, Dogecoin, etc. Billions of dollars lost! Oh the humanity!
 
A decade of a known Intel hardware design flaw; thanks, Intel. Average home users will be affected, but only by a few percent. Servers will be greatly impacted, and your online experience will drop in quality once millions of servers are updated. Online gaming, cloud, shopping, VPNs, universities, Dogecoin, etc. Billions of dollars lost! Oh the humanity!
Can you please provide the source that told you Intel have known about this for ten years, rather than the 6 months Project Zero says?

Or at least, can you confirm that you HAVE a source?

Cuz without those, I'm just gonna have to throw your post in the "Conspiracy bin".

I don't like Intel and I am WELL aware of how shitty they've been over the years, but blindly assuming that because Intel are nasty, they sat on this for 10 years, is simply way beyond any reasonable level of mistrust - unless you have some sort of source or evidence.
 
testing optane 900p after Win10 meltdown "patch"... 70% reduction in IO speed :( :( (4k random reads) and 50% reduction for 4k writes...
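For anyone wanting to reproduce that kind of 4K random read comparison without a full benchmark suite, here is a rough Linux-flavored sketch of the measurement (illustrative only, not the tool behind the Optane numbers above; the iteration count and the O_DIRECT, queue-depth-1 reads are arbitrary choices):

/* randread4k.c - rough 4K random-read latency probe (illustrative only).
   Build: gcc -O2 randread4k.c -o randread4k   Run: ./randread4k <testfile> */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/stat.h>

#define BLOCK 4096

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <testfile>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY | O_DIRECT);    /* O_DIRECT bypasses the page cache */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);
    long blocks = st.st_size / BLOCK;
    if (blocks < 1) { fprintf(stderr, "test file too small\n"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK)) return 1;   /* O_DIRECT wants aligned buffers */

    const long iters = 100000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {
        off_t off = (off_t)(rand() % blocks) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); break; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / 1e3;
    printf("avg 4K random read: %.2f us (~%.0f IOPS, queue depth 1)\n",
           us / iters, iters / (us / 1e6));
    free(buf);
    close(fd);
    return 0;
}

Every one of those preads is a kernel round trip, which is why low-queue-depth 4K numbers move so much more than sequential throughput after the patch.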
 
Can you please provide the source that told you Intel have known about this for ten years, rather than the 6 months Project Zero says?

Or at least, can you confirm that you HAVE a source?

Cuz without those, I'm just gonna have to throw your post in the "Conspiracy bin".

I don't like Intel and I am WELL aware of how shitty they've been over the years, but blindly assuming that because Intel are nasty, they sat on this for 10 years, is simply way beyond any reasonable level of mistrust - unless you have some sort of source or evidence.

I am afraid that you have to put my post in your conspiracy bin, because I honestly don't have information or sources. I wrote that recalling Intel's call gate ( https://en.wikipedia.org/wiki/Call_gate_(Intel) ) function from a course I took years ago. Please note that I am no expert; I am just a computer enthusiast. Maybe someone more knowledgeable can shed light on whether Intel have known about this for a decade or not.
 
IINM, lots of folks have known about this potential side channel for a good long while now.

It was just not considered performant, i.e. CPU designers figured one could only scan a bit every few seconds, which makes it a non-worthwhile attack vector that is also easy to spot (a malicious thread at 100% triggering cache misses like a motherf*cker).

Now we know that this attack has a read speed of ~0.5MB/s, which makes it a lot more feasible.
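The timing primitive behind that read rate is simple enough to show: a load that hits in cache takes tens of cycles, a load of a line that was just flushed takes hundreds, and that gap is the signal the attack reads out. A minimal x86 illustration (this only demonstrates the Flush+Reload cache-timing side channel, not the speculative execution part of Meltdown; the exact cycle counts are machine dependent):

/* cachetiming.c - show the cached vs. flushed access-time gap that
   Flush+Reload style side channels measure (illustrative only).
   Build: gcc -O2 cachetiming.c -o cachetiming   (x86_64) */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t probe[4096];

/* Time one load of *addr in TSC cycles. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    _mm_lfence();                        /* wait for the load to complete */
    return __rdtscp(&aux) - start;
}

int main(void)
{
    volatile uint8_t *p = probe;

    *p = 1;                              /* warm the line into cache */
    uint64_t hit = time_access(p);

    _mm_clflush((void *)p);              /* evict it */
    _mm_mfence();
    uint64_t miss = time_access(p);

    printf("cached access:  %llu cycles\n", (unsigned long long)hit);
    printf("flushed access: %llu cycles\n", (unsigned long long)miss);
    /* That gap is the "signal": with a 256-entry probe array indexed by a
       secret byte, whichever line comes back fast reveals the byte. */
    return 0;
}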
 
https://www.theverge.com/2018/1/9/16868290/microsoft-meltdown-spectre-firmware-updates-pc-slowdown

Microsoft warns Windows 7 and 8 users that a performance hit may be noticeable following the Meltdown and Spectre patches on their machines, especially on Haswell and older processors. (There is a smaller performance decrease for Windows 10 users with Haswell and older.) The article notes that Microsoft may be throwing Intel under the bus to prevent accusations of slowdown against Microsoft. I wonder if maybe this is another cynical ploy to get Windows 10 adoption up again, but as a 4690K owner, I'm somewhat annoyed. It's notable that Microsoft is acknowledging this, because it lends a lot of credibility. Also, note that without microcode/BIOS updates, the patches downloaded from Windows Update aren't the full implementation of the fixes against Meltdown and Spectre.
 
Epic's figures tie in with what I'm seeing on my VM Server, except my figures are even worse (closer to 50% performance loss).

If I don't replace the whole box, the only real option I have is to move from a SATA SSD based ZFS cache to an NVMe one, but even then the Ethernet access on the box will still hurt plenty...

Looks like I need the NVMe based ZFS cache plus a real TOE Ethernet adapter to work around this shit; the trouble is I only have one spare PCIe slot :(

NVMe will not help you, as the patches will just kill its performance; you can verify this by testing latency with a RAM disk with and without the patch.
 
NVMe will not help you, as the patches will just kill its performance; you can verify this by testing latency with a RAM disk with and without the patch.

Good thing RAM prices are nice and low. We can all max out our systems, use a small RAM drive for key... oh, RAM started going up just about 6 months ago? Sure, it's just a coincidence, right?
 