Another Intel vulnerability!

If history is a predictor of the future, I suspect we'll find AMD has hardware vulnerabilities as well, just not the same ones Intel has. Given the number of identified Intel exploits, and AMD's price/performance advantage in the server market, I'd definitely recommend buying AMD; still, security problems happen in hardware and software all the time.
 
True, AMD has had some, but when you say security problems happen all the time, for Intel it literally does seem to be all the time. :)
 
Side-channel exploits really need to be considered in context. Not all of them stem from lax security standards the way many of Intel's Spectre/Meltdown-type vulnerabilities do.

Example: monitoring the electrical use of your house to determine what you're typing on your keyboard (typing causes tiny changes in that electrical use) is a side-channel attack. Most of these will only yield information in extremely controlled laboratory situations, or where a massive amount of baseline data has been accumulated and significant information about the victim's system is already known.

Now granted, the one in the article does something you shouldn't be able to do: it gets information about secure processes from an insecure one, because the insecure one has direct access to an L3 cache shared with the secure process. Still, it's probably not as damning as something like Meltdown, which lets you directly read another process's protected memory; here the information is very indirect.
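To make the shared-cache leak concrete, here's a toy prime-and-probe simulation in Python. Everything here (the cache model, the set/way sizes, the victim) is invented for illustration; real attacks infer the same thing by timing memory accesses on actual hardware. The attacker fills every cache set with its own lines, lets the victim run, then re-touches its lines; any miss means the victim evicted it from that set, revealing which set the victim's secret-dependent access mapped to.

```python
# Toy prime+probe model: the cache layout and victim below are
# invented for this sketch, not taken from any real exploit.
NUM_SETS = 8   # cache sets
WAYS = 2       # lines per set

class SharedCache:
    def __init__(self):
        self.sets = [[] for _ in range(NUM_SETS)]  # LRU order, oldest first

    def access(self, owner, addr):
        """Touch a line; returns True on hit, False on miss."""
        lines = self.sets[addr % NUM_SETS]
        if (owner, addr) in lines:
            lines.remove((owner, addr))
            lines.append((owner, addr))   # refresh LRU position
            return True
        if len(lines) >= WAYS:
            lines.pop(0)                  # evict least recently used
        lines.append((owner, addr))
        return False

def victim(cache, secret):
    # the victim's memory access depends on its secret
    cache.access("victim", secret)

def prime_probe(cache, run_victim):
    # prime: fill every set with attacker-owned lines
    for addr in range(NUM_SETS * WAYS):
        cache.access("attacker", addr)
    run_victim(cache)
    # probe: a miss now means the victim evicted us from that set
    return {addr % NUM_SETS
            for addr in range(NUM_SETS * WAYS)
            if not cache.access("attacker", addr)}

leaked = prime_probe(SharedCache(), lambda c: victim(c, secret=5))
print(leaked)  # {5}: the attacker learns which set the secret mapped to
```

Note that the attacker never reads the victim's data directly; it only observes which of its own lines got evicted, which is exactly the kind of indirect information these shared-cache attacks extract.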

Don't think for a second that Intel doesn't pay a team of people to search for these kinds of security flaws in its competitors' products, in addition to all the free, university, third-party, and government-funded teams looking at various hardware platforms.

The prevalence of Intel being the faulty party has more to do with Intel taking shortcuts to win the fastest-CPU race than with nobody taking the time to look at AMD.
 
If history is a predictor of the future, I suspect we'll find AMD has hardware vulnerabilities as well, just not the same ones Intel has. Given the number of identified Intel exploits, and AMD's price/performance advantage in the server market, I'd definitely recommend buying AMD; still, security problems happen in hardware and software all the time.

It will be a lot harder, though. If you've read a bit about them, their EPYC processors came out of the gate with mitigations built in, like memory encryption where the user never has access to the encryption keys; they're locked away in the hardware. That should cut the effectiveness of a great deal of the attacks used on Intel by orders of magnitude.
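As a rough sketch of that idea (the classes and the XOR "cipher" below are toys invented for illustration, not how AMD's memory encryption actually works): the memory controller encrypts on write and decrypts on read with a key it holds internally and never exposes, so anything reading raw memory, such as a snooping device or another tenant, sees only ciphertext.

```python
import os

# Toy hardware-managed memory encryption, loosely inspired by the idea
# behind AMD SME/SEV; every detail below is invented for illustration.
class MemoryController:
    def __init__(self, size):
        self.__key = os.urandom(16)   # key never leaves the "hardware"
        self._ram = bytearray(size)   # physical memory holds ciphertext

    def _keystream(self, addr, n):
        # toy keystream mixing the key with the address (NOT a real cipher)
        return bytes(self.__key[(addr + i) % 16] ^ ((addr + i) & 0xFF)
                     for i in range(n))

    def write(self, addr, data):
        # CPU-side write: plaintext is encrypted before hitting RAM
        ks = self._keystream(addr, len(data))
        self._ram[addr:addr + len(data)] = bytes(
            d ^ k for d, k in zip(data, ks))

    def read(self, addr, n):
        # CPU-side read: decrypted transparently for the owning context
        ks = self._keystream(addr, n)
        return bytes(c ^ k for c, k in zip(self._ram[addr:addr + n], ks))

    def snoop(self, addr, n):
        # what anything outside the trust boundary sees: raw ciphertext
        return bytes(self._ram[addr:addr + n])

mc = MemoryController(64)
mc.write(0, b"tenant secret")
print(mc.read(0, 13))   # b'tenant secret' (legitimate, decrypting access)
print(mc.snoop(0, 13))  # ciphertext only; there is no key getter at all
```

The design point is that no interface exposes the key, so even full read access to RAM contents yields nothing directly usable.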
 
For the home user, most of these (as I understand them) require elevated permissions, so it's a moot point.

This particular one is specific to certain Xeons. The regular Core chips don't have DDIO, so they aren't vulnerable.
 
Side-channel exploits really need to be considered in context. Not all of them stem from lax security standards the way many of Intel's Spectre/Meltdown-type vulnerabilities do.

Example: monitoring the electrical use of your house to determine what you're typing on your keyboard (typing causes tiny changes in that electrical use) is a side-channel attack. Most of these will only yield information in extremely controlled laboratory situations, or where a massive amount of baseline data has been accumulated and significant information about the victim's system is already known.

Now granted, the one in the article does something you shouldn't be able to do: it gets information about secure processes from an insecure one, because the insecure one has direct access to an L3 cache shared with the secure process. Still, it's probably not as damning as something like Meltdown, which lets you directly read another process's protected memory; here the information is very indirect.

Don't think for a second that Intel doesn't pay a team of people to search for these kinds of security flaws in its competitors' products, in addition to all the free, university, third-party, and government-funded teams looking at various hardware platforms.

The prevalence of Intel being the faulty party has more to do with Intel taking shortcuts to win the fastest-CPU race than with nobody taking the time to look at AMD.
These types of speedups come from sharing cache between processes in a way that lets them observe each other, and AMD has stated that their cache design does not allow this behavior. They've made similar statements about other exploits: they avoided speedups that could lead to security issues. They aren't perfect, but at least it seems they aren't blatantly trading security for performance the way Intel has been. With some of these, it feels like Intel just went, "nobody will probably notice this, let's go with it."
 

They probably also didn't expect cloud computing to blow up the way it did, where multiple unrelated administrative users can legitimately be using a system at the same time.
 
For sure, plenty of what Intel has done in their CPUs related to these exploits comes down to cutting corners. But every side-channel exploit needs to be fully read about and understood, because you can call anything a side-channel exploit if you can guess what's going on in B by looking at A, and A can be anything: from measuring the time it takes a certain opcode to execute, to measuring the power draw from outside your home. Sites post these "findings" to draw traffic far more than because they're important and meaningful.

There's a massive difference for the end user between remote code pulling protected data via a website and someone guessing protected data in a controlled laboratory setting after hours of prep and brute-force attempts. There's a huge difference between issues that impact servers running virtual machines for people other than the owner and issues that would impact anyone using the web.

They shouldn't be lumped together even if the way you resolve them is the same. I'm not excusing Intel; I'm just saying, be critical of each of these exploits, because every layer of security carries some overhead, and not all of it is necessary for everyone.
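The software end of that spectrum is easy to demonstrate. A hypothetical sketch: a byte-by-byte comparison that stops at the first mismatch leaks how much of a guess is correct through how long it runs. Step counts stand in for wall-clock time here so the result is deterministic; constant-time comparisons such as Python's hmac.compare_digest exist precisely to close this channel.

```python
import hmac

def naive_compare(secret, guess):
    """Early-exit comparison; returns (equal?, steps taken).
    Step count is a deterministic stand-in for elapsed time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return secret == guess, steps

def recover(secret):
    # rebuild the secret one byte at a time by maximizing the "time"
    # the comparison takes (ties broken by the success flag)
    known = b""
    for i in range(len(secret)):
        pad = b"\x00" * (len(secret) - i - 1)

        def leak(b):
            ok, steps = naive_compare(secret, known + bytes([b]) + pad)
            return (steps, ok)   # more steps => longer matching prefix

        known += bytes([max(range(256), key=leak)])
    return known

print(recover(b"hunter2"))  # b'hunter2', rebuilt purely from step counts

# the fix: compare_digest inspects every byte regardless of mismatches
assert hmac.compare_digest(b"hunter2", b"hunter2")
```

Turning the same idea into a remote attack over a noisy network takes enormous amounts of averaging and prep, which is exactly the context point above.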


They probably also didn't expect cloud computing to blow up the way it did, where multiple unrelated administrative users can legitimately be using a system at the same time.

Servers have been around forever, and in an even less secure fashion. Cloud computing was intended to be more secure by sandboxing tenants at the hardware level. So no, that's not what happened.
 

Actually, I disagree. Servers have indeed been around for a long time, but hypervisors on Intel hardware have not; that's pretty new.

The fact that someone can pose as a business, get accounts in regional Amazon cloud centers, then start sniffing the memory of hosts to see what's out there, closing and reopening accounts as they learn which requirements land them on different hosts... it is a threat. Sure, it's a fishing expedition for the criminals who do this, but it's cheap to free for them, and if they get some pay data, it's a win-win.

To say virtualization with hypervisors on server-class hardware has been around a long time is inaccurate.

Now, if you're talking about larger, heavier-I/O systems (RS/6000, AS/400, mainframes and such), sure, those have been around much longer.
 
Nobody said virtualization has been around a long time. I said that its advent was supposed to be even more secure than the previous method for servers, which was everyone getting a user account on the host operating system and being chrooted into their own environment.

Believing that hosting businesses on a single server is tied to virtualization, and is therefore new, ignores how the internet worked before hypervisors were a thing.

edit: The point being that virtualization's main focus was security from the beginning. So the idea that Intel made decisions that damaged the security-focused model of the hypervisor and virtual computing because they weren't prepared for its popularity is extremely unlikely. Being a security-focused feature was the entire point.
 
Ok, I got what you were saying in your first line, and you're correct depending on the environment.

Now, on your second line starting with "Believing...", I really don't understand what you're trying to say.

Was it:
a. Businesses have been selling access to servers to various other companies for a long time, so virtualization is just a newer, better way of doing that, with resource caps and such.
or...
b. Small businesses often ran everything for the company on a single server and were prone to tragedy whenever those servers failed...
or
c. THE CLOWNS THE CLOWNS OH SWEET MOTHER OF GOD THE CLOWNS?!

I'm presuming it's point a, more or less.

Sure, that was being done in a very limited fashion with web-page hosting and such. I get you on that.

BUT... in that time, data wasn't the new oil. Data is the most valuable commodity on the planet today, so its security and viability are of utmost importance to the businesses and citizens of the world. When a single CPU manufacturer dominates a market for decades, and then its advantageous compute methods prove to be rife with vulnerabilities... for over a decade... that's a problem.
 
The fact that data is so much more valuable now, with a huge explosion of shared servers compared to before, wouldn't change the design of a CPU feature whose entire purpose was sandboxing and security. You could say "oh, it wasn't a high priority," but that would only be true for consumer PC CPUs. They have a dedicated line of server CPUs that share the same vulnerabilities, where that argument isn't valid at all.

The issues due to architectural design found only in Intel CPUs are almost certainly there because of intentional decisions to prioritize speed and/or reduced complexity (which usually rolls back into speed) over security, not because Intel was unaware that what it was doing undermined security. CPU engineers are well aware of the trade-offs one decision carries over another before it gets implemented. Every design choice is a conscious one where the needs on one side outweigh the drawbacks on the other.

For the server CPUs, this is inexcusable: they have always been intended for shared environments where features like virtualization were purposely added and advertised for security, yet various other design decisions were allowed to undermine that security.

And these issues extend to other aspects of the CPU even if you ignore virtualization (which, I could agree, would be a lower priority to keep absolutely secure in non-server CPUs): they impact the very basic access control that all software depends on.

I'm fully confident these issues and concerns were brought up in early design meetings by the engineers tasked with designing these CPUs, and that they were intentionally sidelined just to gain that much more of an edge over the competition. They may not have imagined a working exploit, but within the confines of whatever you're designing, you're well aware of why you do something one way versus another, as evidenced by how many of these issues are specifically avoided in equivalent hardware from their competitors.
 

I agree 100%. I'd agree 200% if there were more than one of me!
 
They probably also didn't expect cloud computing to blow up the way it did, where multiple unrelated administrative users can legitimately be using a system at the same time.

Intel's highest-profit segment has always been server chips. Cloud or no cloud, thousands of people running software on the same hardware has been the norm for those chips since the '70s.

The cloud may have exacerbated the issue a bit. Still, IBM and the honestly even smaller player in that ring, AMD, designed their cache systems with far better security. Don't get me wrong, both IBM and AMD needed firmware changes to mitigate the side-channel stuff; it's just that their chips only needed minor changes, whereas Intel's chips weren't even bothering to check program permissions before speculative execution, which was both a major speed advantage and a major security black hole that no halfway decent team of engineers puts into silicon without some idea of how stupid it is. IBM lost about 4-5% in performance after the multiple side-channel fixes, and AMD around the same. Intel has pretty much had to redesign; some of the worst examples of their cheating cache chips lost 20-30% in performance, as the only really safe fixes have involved complete cache flushes and the like.

There are companies with hundreds of thousands of Intel server chips only 2-3 years old that have frankly had to turn their machines into Celerons, basically disabling cache features, because they just can't risk not doing it.
 
So I went checking various articles here and there, and it's true: full mitigation can cost as much as a 40% performance loss. I can't imagine how happy some people are if they're in that boat.
I only replied because I've read claims similar to yours and always think "meh, it's probably less than what the poster is saying." But no, it CAN be that bad... Impressive, Intel. Are people getting fired yet for buying Intel? Probably still not.
 

The biggest customers, the ones that really do have hundreds of thousands of Intel server chips, get some token 20-30% discounts on new chips and everyone is happy. I'm sure plenty of customers replaced tons of chips and came away pleased that they got discounted upgrades 2-3 years in. lol. Intel has had to deal with shortages; what they don't like to talk about is that those shortages are self-made, not through sales but through discount "upgrade" deals to keep the big fish happy.

As crazy as it seems, I'm sure in many ways Intel has cemented relationships with some of its whale clients. If you're Intel, I guess selling a bunch of server chips at close to production cost when needed isn't that big a deal short term. It would be funny to see some of those PO numbers, with top-of-the-line Xeons going to the big banks and the like for 10-20% of retail.
 
They probably also didn't expect cloud computing to blow up the way it did, where multiple unrelated administrative users can legitimately be using a system at the same time.
If you don't think these performance enhancements, some of which exist only in these server-class processors, were made with this in mind, you haven't been keeping up. You think the direct link between the network card and the cache (DDIO) on Xeon processors exists because they didn't know it would be used in cloud computing? I'm sure not all of them were known issues at the time, but some of them are completely dubious. Honestly, for me there are just too many shortcuts being discovered for it to be a coincidence; and then to hear another manufacturer say they don't do that stuff due to security concerns (true or not) sure gives an impression. Like I said, some (the majority, even) were oversights, but you're telling me this many people and engineers didn't see any of these items as issues and question anything?
 