and has a PR team with graphic artists? OK, just seems creepy, like it's a product

It helps make consumers interested. Heartbleed was the same.
If this impacts SQL significantly it will cause me pain. I upgraded our SQL server last year (doubled the CPU & memory). I'd hate to have to upgrade again.
So.... Think meltdown will make dickhead publishers and or lazy devs program their software to use more than 1-4 fucking cores?...
NAHHHHHHHHHHHHHHH.

This has nothing to do with the Meltdown/Spectre vulnerabilities.
Easy to crow about the sky falling in, but GamersNexus tested and showed absolutely no performance degradation outside the margin of error for consumer tasks.
People will have their misery, doom and gloom and it will not be stopped!
Least affected use cases showed the least change? Shocked. Have they tested multiplayer games? AV scans? Network tests? Have they tested anything with more I/O and kernel calls?

Why don't you watch the video and find out, instead of trying to fearmonger while also admitting you literally don't know the content of the video?

It's a useless test in a useless format. The only thing it did, besides wasting 9 minutes of my life, is show that they have no idea what the problem is about and where the potential slowdowns are. If they had run these tests to check, but explained what the issue is and where the potential pitfalls are, it would be ok (still a useless format, though), but to run this and conclude there is no impact is lazy and disinformative.

I feel like you have somehow missed the explanation that the video is pointless because they expected these results given the workloads being tested, and his own explanation that they did it because their test suite is automated and they may as well have done, since they were clearly going to get questions about what the patch meant for users. Spoiler: For most users, absolutely nothing, just like we were told before the patch even hit. Purpose of video: Prevent misinformation spreading (like this thread) where people hear about a performance hit that doesn't apply to them, then they go screaming around the internet claiming everyone suddenly has a glorified 486 in their computer and go throw it out to replace it with AMD Ryzen. But by all means, carry on screaming around the internet claiming everyone suddenly has a glorified 486 in their computer and go throw it out to replace it with AMD Ryzen.

No, they claimed there is no performance impact for normal users while testing pure CPU and GPU oriented benchmarks which are known to be the least affected. Not one use case was tested that had the most potential to be impacted.
It does in the sense of getting application performance back. Parallelization may be the only way forward at this point.

That's kind of a silly assumption, the pre-execution engine on processors isn't a make or break CPU design that stops progress on increasing CPU performance. Nor does this flaw mean that pre-execution is a bad concept that cannot still be utilized; it only means that the current design is flawed and needs to be refined. As I said previously, I'm no coder so I don't know the work involved, but is the time invested worth the performance gain? I'm betting most companies will say no. If Intel hasn't identified an easy architecture fix for their next CPU release, they'll have something on the release after that.
Virtualize the box. CPU and memory upgrades become relatively trivial afterward.
Because coding thread-safe code is oh so easy... We have already seen the fsckup that can occur as meltdown&spectre are chip level multi-threaded memory management bugs
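To make that concrete: below is a minimal sketch of the classic data race, the simplest illustration of why "just use more cores" is easier said than done. It's plain C with pthreads; the file name, counter, thread count, and iteration count are all arbitrary choices for the demo, not anything from the thread.

```c
/* race.c -- minimal data-race sketch (hypothetical demo).
   Two threads bump a shared counter with no synchronization.
   Build: gcc race.c -o race -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;              /* shared, deliberately unprotected */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* the two threads' read-modify-write cycles interleave and
       overwrite each other, so this is almost never 2000000 */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

The fix here is one mutex or one atomic; in a real game engine the shared state is everywhere, which is the point being made above.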
No they are not chip bugs. All chips are working as designed and as specified. If anything they are OS level bugs as the OSes were relying on functionality that wasn't specified and didn't always exist.

It is not an OS bug.
Hardware bug or hardware design flaw is an accurate statement.
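For what it's worth, the "working as designed, still a design flaw" tension is easiest to see in the bounds-check-bypass pattern from the Spectre paper (variant 1). The sketch below shows only the access pattern, using the array names from the paper; a real proof of concept would also need cache flushing and timing (Flush+Reload), which is omitted here.

```c
/* spectre_v1.c -- the canonical bounds-check-bypass pattern (sketch only).
   Architecturally this never reads out of bounds: the check is always
   honored in the final result. Microarchitecturally, the CPU may run the
   body speculatively with an out-of-range x, and the dependent load into
   array2 leaves a cache footprint indexed by the byte it speculatively read. */
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];
size_t  array1_size = 16;

uint8_t victim_function(size_t x) {
    if (x < array1_size)                  /* the check the ISA spec requires */
        return array2[array1[x] * 4096];  /* speculated load: the side channel */
    return 0;
}

int main(void) {
    /* harmless in-bounds call; the leak only matters under speculation,
       and recovering data additionally needs a Flush+Reload timer */
    array1[0] = 1;
    return victim_function(0);
}
```

The chip faithfully implements its spec; the spec just never said speculation must be free of observable side effects. That gap is why both sides of this exchange have a point.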
zzzzZZZZzzzzzz
"From the perspective of advancing knowledge and building a baseline for the next round of tests – those which will, unlike today’s, factor-in microcode patches – we must eventually run the tests being run today. This will give us a baseline for performance, and will grant us two critical opportunities: (1) We may benchmark baseline, per-Windows-patch performance, and (2) we can benchmark post-patch performance, pre-microcode. Both will allow us to see the isolated impact from Intel’s firmware update versus Microsoft’s software update. This is important, and alone makes the endeavor worthwhile – particularly because our CPU suite is automated, anyway, so no big time loss, despite CES looming."
They've established a baseline for the things that are least likely to be affected. Where is heavy multiplayer game testing? AV runs? NAS performance? More disk and network oriented use cases?
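A cheap way to probe exactly that, without a full game or NAS rig, is to time raw kernel entries: the Meltdown fix (KPTI on Linux, KVA Shadow on Windows) pays its cost on every user/kernel transition. A rough sketch, assuming Linux and gcc; the iteration count and the choice of SYS_getpid are arbitrary:

```c
/* syscall_bench.c -- contrast a syscall-bound loop with a CPU-bound one.
   Run it before and after the patch; the user-space loop should barely
   move while the syscall loop shows the per-transition overhead.
   Build: gcc -O2 syscall_bench.c -o syscall_bench */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    const long N = 1000000;
    volatile long sink = 0;            /* keeps the loops from being optimized out */

    double t0 = now_sec();
    for (long i = 0; i < N; i++)
        sink += syscall(SYS_getpid);   /* a real kernel round trip every iteration */
    double t1 = now_sec();

    for (long i = 0; i < N; i++)
        sink += i * i;                 /* never leaves user space */
    double t2 = now_sec();

    printf("syscall loop: %.1f ns/iter\n", (t1 - t0) / N * 1e9);
    printf("user loop:    %.2f ns/iter\n", (t2 - t1) / N * 1e9);
    return 0;
}
```

Pure CPU and GPU benchmarks live almost entirely in the second loop, which is why they were always going to show the least change.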
Intel's decade of known hardware design flaw, thanks Intel. Average home users will be affected but only by a few percentage points. Servers will be greatly impacted, and your online experience will drop in quality once millions of servers are updated. Online gaming, cloud, shopping, VPNs, universities, Dogecoin, etc. Billions of dollars lost! Oh the humanity!

Can you please provide the source that told you Intel have known about this for ten years, rather than the 6 months Project Zero says?
Or at least, can you confirm that you HAVE a source?
Cuz without those, I'm just gonna have to throw your post in the "Conspiracy bin".
I don't like Intel and I am WELL aware of how shitty they've been over the years, but blindly assuming that because Intel are nasty, they sat on this for 10 years, is simply way beyond any reasonable level of mistrust - unless you have some sort of source or evidence.
Not having an Intel system, I haven't read much on Optane. Is it NVMe?
Epic's figures tie in with what I'm seeing on my VM Server, except my figures are even worse (closer to 50% performance loss).
If I don't replace the whole box, the only real option I have is to move the ZFS SSD cache from a SATA SSD to an NVMe one, but even then the Ethernet access on the box will still hurt plenty...
Looks like I need the NVMe based ZFS cache + a real TOE Ethernet adapter to work around this shit; the trouble is I only have one spare PCIe slot!
NVMe will not help you, as the patches will just kill its performance; you can verify this by testing latency with a RAM disk, with and without the patch.
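A sketch of that RAM disk test, assuming Linux with tmpfs mounted at /dev/shm (the file name and sizes are made up for the demo). Every pread() is a kernel round trip, and with the data in RAM there is no device latency to hide the patch overhead:

```c
/* ramdisk_lat.c -- time small reads from a tmpfs-backed file.
   Compare the ns/read figure with and without the patch applied.
   Build: gcc -O2 ramdisk_lat.c -o ramdisk_lat */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const char *path = "/dev/shm/latency_test.bin";  /* assumes tmpfs here */
    const long N = 200000;
    char buf[4096];

    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }

    memset(buf, 0xAB, sizeof buf);
    for (int i = 0; i < 256; i++)                    /* 1 MiB test file */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write"); return 1;
        }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        if (pread(fd, buf, sizeof buf, (i % 256) * 4096L) < 0) {
            perror("pread"); return 1;
        }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.0f ns per 4 KiB pread\n", ns / N);

    unlink(path);
    close(fd);
    return 0;
}
```

Since the data never touches a device, any change in the ns/read number between runs is almost entirely the cost the patch adds to each syscall.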