New Speculative Execution Bug Allegedly Affects Intel CPUs

"Because it's hard to do" doesn't sound like a reasonable excuse to ignore the seriousness of this design choice.
I'm surprised the consequences have been so insignificant for Intel so far.
All purchases of Intel hardware SHOULD be frozen by most entities. I'm surprised no government has announced such a move; maybe they will.
No rush, Intel, no rush. /s
 
I wouldn't go sticking my head in the sand.
Do you wear a hard hat every time you walk down the street? No? Then you're sticking your head in the sand! A tile might fall off a building and hit you on the head. Actually, that's more likely to happen than someone using speculative execution to gain access to bits of your computer's RAM.
 
Up until this, you needed local admin permissions to do any real damage, so it was more of a home-user issue. In the enterprise, needing admin privileges makes it almost a non-issue, as it would take a foolish senior engineer or high muck-a-muck to propagate it. So it could be lethal in an environment it was not likely to be effective against.

This is much different, from what I've gleaned so far. This is simple user-space code that can pull data from the hardware layer directly. It doesn't really matter at that point WHAT data they can pull, because it could be ANY data. As noted, it's not just useful as a direct payload, but as a recon package to make already nasty, known payloads more effective.
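To make that concrete, the primitive underneath is nothing exotic: ordinary user-space code timing a memory access to tell a cache hit from a miss. Here's a minimal sketch of just that measurement step, not an exploit; it assumes x86-64 with gcc or clang, and the helper name is mine:

```c
/* Sketch of the cache-timing primitive these attacks rest on:
 * plain user-space code telling a cached line from an uncached one.
 * Assumes x86-64 with gcc or clang; time_access() is my own name. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>      /* __rdtscp, _mm_clflush, _mm_mfence */

static uint64_t time_access(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                       /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    static uint8_t probe[64];

    probe[0] = 1;                           /* warm the line */
    uint64_t hot = time_access(&probe[0]);  /* cache hit */

    _mm_clflush((void *)&probe[0]);         /* evict the line */
    _mm_mfence();
    uint64_t cold = time_access(&probe[0]); /* cache miss */

    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```

On most parts a hit comes back in tens of cycles and a flushed miss in hundreds, and speculative-execution attacks encode secret bits into exactly that difference.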

And this may just be JavaScript for now, but it's a hardware exploit, so it's only a matter of time before it can be leveraged through other vectors. It requires hardware, or at least microcode, remediation.
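If you're wondering what remediation your own box already has, recent Linux kernels report per-vulnerability mitigation status in sysfs; a quick sketch that just dumps those files (the directory is the kernel's standard interface, the rest is illustrative):

```c
/* Print the kernel's reported mitigation status for each known CPU
 * vulnerability. The sysfs directory is Linux's standard interface
 * (kernel 4.15+); nothing here is specific to any one exploit. */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    const char *dir = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dir);
    if (!d) { perror(dir); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;                     /* skip "." and ".." */
        char path[512], line[256];
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(line, sizeof line, f))  /* e.g. "Mitigation: PTI" */
            printf("%-16s %s", e->d_name, line);
        fclose(f);
    }
    closedir(d);
    return 0;
}
```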

Yay, job security. See ya, gotta go patch some hypervisors... again.
 
Further, consider that at first Intel _flat out denied_ the security issue.

No bones about it, a lot of work will need to go into silicon to solve this, because this issue isn't going away otherwise. And honestly, we've been given ZERO indication that Intel is actually going to solve this at the silicon level.

On the one hand I agree with you: this is unacceptable, and they should have dumped everything they had into a hardware-level fix immediately. But on the other hand, I think the reality is this would take a redesign that will take years, if not decades, to implement. In the meantime, should they just shut down the fabs and produce nothing? When a solution is finally revealed that completely abandons speculative execution (the only real way to protect against this), it will likely be many years from now, perform much worse than today's chips, and cost a fortune.
 
I have one and it sucks balls for what I do most, book publishing. Perhaps the next-gen 7nm parts will give me a Woodrow... I'm not holding my breath, but I'm hopeful.

I'm just curious: which AMD CPU do you have, in what configuration, and what software and workflow are you using that system for? Thank you.
 
Do you wear a hard hat every time you walk down the street? No? Then you're sticking your head in the sand! A tile might fall off a building and hit you on the head. Actually, that's more likely to happen than someone using speculative execution to gain access to bits of your computer's RAM.
 
Until they get hit with an exploit...

Well, if you look at Intel's in-silicon security bugs, they're all performance-related. Spectre affects Intel, AMD, and ARM CPUs; probably anything that is a CPU and does speculative execution. It's a conceptual error, meaning it's how engineers are taught in school (Computer Science), and then they probably also copied each other. Because when you design CPUs for a living, it's not an enthusiast's hobby but a chore. It's your day job, and your boss, or bosses, will all demand, yes, you guessed it, more performance. Now Meltdown, that one is downright embarrassing, and all you need to do is look around on the Web to gauge the dissatisfaction of folks who suffered a massive downgrade in server performance after applying the mitigation patches. Now, if AMD had this bug, I would say screw them, they just copied Intel's design. These days that would be hard to do given how complex CPUs are, but still. However, AMD CPUs don't suffer from the Meltdown security bug. I guess more careful engineering went into their CPUs.
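For anyone who hasn't seen it, the "conceptual error" has a very small shape. This is the outline of the classic Spectre v1 gadget, with illustrative names and sizes, not a working exploit; the bounds check is architecturally correct, yet the speculated load still leaves a cache footprint:

```c
/* Outline of the classic Spectre v1 "bounds check bypass" gadget.
 * Architecturally, an out-of-range x never reads array1; speculatively,
 * the CPU can run ahead of the check and leave a cache footprint.
 * Array names and sizes are illustrative. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];  /* probe array: one page per byte value */
size_t  array1_size = 16;
uint8_t temp;                /* keeps the load from being optimized away */

void victim(size_t x)
{
    if (x < array1_size) {            /* branch trained to predict taken */
        uint8_t secret = array1[x];   /* speculative out-of-bounds read */
        temp &= array2[secret * 4096];/* caches a line indexed by secret */
    }
    /* The attacker then times array2[i * 4096] for each i (as in the
     * timing sketch earlier in the thread) to recover which line became
     * cached, i.e. the value of 'secret'. */
}
```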

I'm not defending Intel or AMD or any company; they're all out to make a profit. Like many here, I've been around a lot of computer hardware. I run both Intel and AMD systems, Skylake-X and Threadripper, so I have a pretty good understanding of how reviewers get those Cinebench R15 scores, especially the high single-threaded scores on Intel. The problem is that those high turbo clocks, especially when you enable MultiCore Enhancement, are not sustainable. They will get you through the benchmark, and that's it. It may be a bit better on the so-called 9th-gen Skylake-X with the soldered heat spreader, but on the 7900X that I run, it's not.
The problem with the "toothpaste" is obvious. Still, I can run all cores at 4.0GHz all day long at comfortable temperatures under 100% load, with AVX-512 set to 3.3GHz and AVX to 3.6GHz. I also have it set to turbo to 4.7GHz on four cores, 4.5GHz on six cores, and 4.3GHz on eight cores. I admit this isn't something I can do on the 1950X, and that's how I get those nice single-threaded Cinebench scores of over 190 points. What's even funnier about X299 motherboards is that they all want to overclock your CPU right out of the box; I actually had to learn a few settings to get my CPU to run at stock (Intel-specified) settings. Now, if you do that, it will never hit that 4.5GHz turbo on two cores, because there is always some process running in Windows that will prevent it, so benchmarks look a lot different. Yep, most reviewers compare overclocked Intel vs. stock AMD, because it's easy, and because they probably get perks from Intel. And yes, I believe Intel silicon is superior because it can easily clock higher with less voltage than AMD's, but keeping it cool is a different story. Not that AMD is easy to cool either, but now I'm getting way off topic.
 
I'm just curious: which AMD CPU do you have, in what configuration, and what software and workflow are you using that system for? Thank you.
It's in my signature: Ryzen 1700 at 3.9GHz. Single-thread performance sucks compared to an Intel processor at any clock speed. However, an Intel 9600 at 5GHz is pretty damn fast for the older apps I am working with. I have had a number of processors over the years as well, constantly building PCs for someone or myself. I upgrade my main CPU (Intel) every generation; I skipped Ryzen's move to 12nm, which was more of a clock bump than anything.

The Ryzen 1700 would choke in my desktop publishing applications across the board. Gaming was also noticeably impacted: even though the differential at 4K is small, it tended to dip more in frames per second than Intel in my experience. At the time I was running it, I was using a GTX 1080, then a 1080 Ti, as my graphics card. Most of my issues disappeared on the Intel platform (even my i5 7600 laptop ran my desktop applications faster). I have also used the Ryzen 2200 processor, and it performed admirably for a friend I built it for. However, she doesn't use it for what I do.

There's nothing wrong with AMD processors if you're using them for the right application. I suspect this will change with the next-gen Ryzen 2 on 7nm. I hope they can maintain their lead; we need a strong competitor to Intel in the marketplace.
 
Well, if you look at Intel's in-silicon security bugs, they're all performance-related. Spectre affects Intel, AMD, and ARM CPUs; probably anything that is a CPU and does speculative execution. It's a conceptual error, meaning it's how engineers are taught in school (Computer Science), and then they probably also copied each other. Because when you design CPUs for a living, it's not an enthusiast's hobby but a chore. It's your day job, and your boss, or bosses, will all demand, yes, you guessed it, more performance. Now Meltdown, that one is downright embarrassing, and all you need to do is look around on the Web to gauge the dissatisfaction of folks who suffered a massive downgrade in server performance after applying the mitigation patches. Now, if AMD had this bug, I would say screw them, they just copied Intel's design. These days that would be hard to do given how complex CPUs are, but still. However, AMD CPUs don't suffer from the Meltdown security bug. I guess more careful engineering went into their CPUs.

I'm not defending Intel or AMD or any company; they're all out to make a profit. Like many here, I've been around a lot of computer hardware. I run both Intel and AMD systems, Skylake-X and Threadripper, so I have a pretty good understanding of how reviewers get those Cinebench R15 scores, especially the high single-threaded scores on Intel. The problem is that those high turbo clocks, especially when you enable MultiCore Enhancement, are not sustainable. They will get you through the benchmark, and that's it. It may be a bit better on the so-called 9th-gen Skylake-X with the soldered heat spreader, but on the 7900X that I run, it's not.
The problem with the "toothpaste" is obvious. Still, I can run all cores at 4.0GHz all day long at comfortable temperatures under 100% load, with AVX-512 set to 3.3GHz and AVX to 3.6GHz. I also have it set to turbo to 4.7GHz on four cores, 4.5GHz on six cores, and 4.3GHz on eight cores. I admit this isn't something I can do on the 1950X, and that's how I get those nice single-threaded Cinebench scores of over 190 points. What's even funnier about X299 motherboards is that they all want to overclock your CPU right out of the box; I actually had to learn a few settings to get my CPU to run at stock (Intel-specified) settings. Now, if you do that, it will never hit that 4.5GHz turbo on two cores, because there is always some process running in Windows that will prevent it, so benchmarks look a lot different. Yep, most reviewers compare overclocked Intel vs. stock AMD, because it's easy, and because they probably get perks from Intel. And yes, I believe Intel silicon is superior because it can easily clock higher with less voltage than AMD's, but keeping it cool is a different story. Not that AMD is easy to cool either, but now I'm getting way off topic.

Yeah, my point was: it's all vulnerable, or will be at some point. Act accordingly.
 
Someone on HardOCP have a suggestion? Aside from going AMD?
Either buy a bottle of vodka or don't read scary news. Rowhammer is old stuff; this vulnerability has been there for a long time.

If you're a datacenter, you can run around in circles screaming "why didn't we buy Atom?", then look at the performance-per-watt-per-installed-CPU figures and the lack of ECC support, and you'd know why not.
 
It's in my signature: Ryzen 1700 at 3.9GHz. Single-thread performance sucks compared to an Intel processor at any clock speed. However, an Intel 9600 at 5GHz is pretty damn fast for the older apps I am working with. I have had a number of processors over the years as well, constantly building PCs for someone or myself. I upgrade my main CPU (Intel) every generation; I skipped Ryzen's move to 12nm, which was more of a clock bump than anything.

The Ryzen 1700 would choke in my desktop publishing applications across the board. Gaming was also noticeably impacted: even though the differential at 4K is small, it tended to dip more in frames per second than Intel in my experience. At the time I was running it, I was using a GTX 1080, then a 1080 Ti, as my graphics card. Most of my issues disappeared on the Intel platform (even my i5 7600 laptop ran my desktop applications faster). I have also used the Ryzen 2200 processor, and it performed admirably for a friend I built it for. However, she doesn't use it for what I do.

There's nothing wrong with AMD processors if you're using them for the right application. I suspect this will change with the next-gen Ryzen 2 on 7nm. I hope they can maintain their lead; we need a strong competitor to Intel in the marketplace.

I can game on my 1950X at stock speed (I don't even have Ryzen Master installed) at 1440p with maxed-out settings without any issues. I have a 1080 Ti SLI setup. IIRC Doom is capped at 200FPS and doesn't use SLI; that's what I get with it. I had my fair share of issues with this system, mostly due to AGESA, but AMD finally updated it, and now RAM compatibility is good. Overclocking isn't great, but with so many cores, I don't need to. My point is that something might be wrong with your AMD system; you have to find out what it is. I wonder what motherboard you're running. What I've seen is that different motherboard manufacturers update AGESA at their own pace, if ever. Gigabyte updated their X399 Aorus Xtreme to 1.1.0.2 back in October, but the other X399 boards they sell aren't updated to this day. ASUS just updated my Zenith Extreme to AGESA 1.1.0.2 the other day, and I could see an improvement in RAM compatibility right away. So it all depends. AMD isn't a priority for any motherboard manufacturer; Intel is. That's where they make their fat stacks, especially on the mainstream platforms. Just look how many Z390 motherboards are out there, but if you had to cherry-pick, there are maybe a couple worth buying.
 
Nothing is wrong with my AMD system. It is a fact that single-thread performance on Ryzen lags well behind Intel processors. The applications I am using were optimized for Intel processors, and that further impacts the performance of my Ryzen. Then there's the fact that my Intel processor is running at 5GHz, while the Ryzen 1700 I had could only maintain 3.9GHz stable with 2933MHz memory (I once had it running at 4.1GHz and 3200MHz, but it was wildly unstable). I don't game at 1440p; I game at 4K max settings, always. So any dips in performance on my 60Hz display can be instantly felt. The Ryzen is plainly and simply not as good for anything I would use it for. Intel is the right processor for ME. I have had this discussion many times: it's all about finding the CORRECT processor for your needs, and the Ryzen I have is not the correct CPU for my current requirements.
 
The above being said, I'm currently building up an Athlon 200GE as a file server with an Adaptec RAID controller and 16GB of RAM; it will host a 20TB array. If that CPU isn't up to my needs, I have that 1700 to drop into it and undervolt the hell out of it for a really ballsy file server with 8 cores / 16 threads if I don't like 2 / 4.

I like AMD, especially the price. Then again, I'm not rocking a 9900 on Intel either. I try to keep my requirements affordable because I upgrade often. Perhaps next-gen AMD will be my next one; I'm gonna wait and see for a good while before I jump on that bandwagon and verify it actually fits my needs.
 
You game at 4K at max settings; right there you knew you needed top performance. I only do 1440p, but I have a G-Sync 165Hz monitor, so I know how you feel; my main rig is Intel. I am STILL limited by single-thread performance in many of my work activities (CAD, dealing with PDF files, etc.). I agree that the Ryzen system has its place, and it has made great strides. I put a 1770X in my Plex server because it can make far greater use of the multiple threads. My girlfriend's PC is a Ryzen 2600 because it was an exceptional value for an occasional 1080p gamer.
 
Yeah, I may discover that the 200GE isn't up to the task in the file server / Plex role. However, I'm just one guy, and I only need the media capability for watching my digital library. If I need more threads, I've got that 1700.

I have high hopes for next-gen Ryzen. I would be amazed if they manage to match Intel's single-thread performance. It's incredible how many applications are still designed to run that way... you would think software devs would be using at least two cores these days.
 

For the Ryzen 7 1700, 1700X, and 1800X, which for all intents and purposes are the same CPU, AMD used the Intel Core i7-6900K as the benchmark. AMD wanted to match its performance while selling the Ryzen at half the price. And looking back now, the $1000 Core i7-6900K wasn't a great CPU to begin with. At the beginning of 2017 it was still the most "affordable" HEDT CPU, but by the end of 2017 you couldn't get a chicken for it if you had one for sale. So comparing the 1700 to the latest Coffee Lake CPUs is a bit meh.

Also, Intel managed, mostly via influencers, to drill this entire "single-threaded performance" BS into people's heads. It is only partly true, and it really depends on how you use your computer. If you only game on your PC and you have a bare-minimum Windows install, then an 8700K overclocked to 5.0GHz is the best option for you. Given how wide cores are now and how short the pipelines are, Hyper-Threading is kind of a must; otherwise, a lot of resources in that CPU go unused. Of course, your thermal load will also be lower without Hyper-Threading.

Now, if you have $500 or so to spend on a CPU and you like to multitask as well as play games, then a 9900K is a bad investment; the 1950X is a no-brainer in that case. Having cores sitting idle that you can use to play a game while you allocate half of your CPU to other tasks is not the same as sharing your entire CPU for everything. So for someone who likes to do multiple things at the same time, including running VMs, a 9900K is not a good investment.
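If you want that partitioning explicitly instead of trusting the scheduler, you can pin a workload to a subset of cores. A minimal Linux sketch, assuming a 16-thread CPU and an arbitrary 0-7 split:

```c
/* Pin the current process to the first 8 logical CPUs, leaving the
 * rest free for everything else: the core-partitioning idea above.
 * Linux-specific (sched_setaffinity); the 0-7 choice is arbitrary. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 8; cpu++)  /* half of a 16-thread part */
        CPU_SET(cpu, &set);

    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPUs 0-7\n");
    /* ...launch the workload here; children inherit the mask. */
    return 0;
}
```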

I also worked in web hosting, and I can tell you that back in those days we ran only Intel servers. Quad-core Xeons at over 3GHz always performed better hosting a handful of VMs than 12-core dual-socket Xeons at 2.1GHz hosting a ton of VMs, so clock speed has its advantages. But Threadripper is fast enough to perform really well as a multi-purpose CPU, and it's sure as hell a better value than anything on X299/socket 2066. Now, if the 9900K were $300, Intel would have annihilated AMD's mid-range and forced them into lowering their prices even more. I would love myself a 9900K for $300 :D But Intel just can't help themselves... being greedy.
 
The researchers claim they were able to get successful attacks within seconds, though. If it were taking weeks to perform a single successful attack, then you'd have a point.


Not really.

A fast-executing attack means it can be re-run over and over, quickly and without issue, and "eventually" (in scare quotes because we're probably talking about a time frame of minutes, not hours or days or weeks) the attacker will find what they want.

I think you need to dig more into what it takes to get that "successful attack" and what they potentially get out of it; they comment on this in their research. There is a lot that goes into these speculative attacks: you need access to the system and need to be able to run software on it. They note this in their paper. They also note that even though they can achieve this through JavaScript, it requires very specific conditions, and it's hard even under those conditions. Also, what you get from the attack is potentially information from physical RAM. You only get whatever happens to be stored in that physical RAM at the time, and you have no idea what that information is.

As for the fast-executing attack running over and over quickly, I think you're forgetting that systems have methods for detecting this kind of activity. This is why we have behavioral anomaly detection: to see if something is abnormally accessing resources or consistently running an unfamiliar program. This also requires the person to maintain some connection to the site that is hosting the JavaScript. So you don't really have the ability to run it over and over continuously for long.

Like I said, there is a lot that goes into these attacks, and you aren't even sure that anything you get will be worth anything. The risk of detection is also high for little gain. At this time it's not really worth it, when there are far better vulnerabilities and methods for getting data and access that are far easier to pull off with less chance of detection.
 