cageymaru
A study by the U.S. Government Accountability Office (GAO) has shown how vulnerable U.S. weapon systems under the control of the Department of Defense (DoD) have become. This is because weapon systems are increasingly networked and software dependent, while the DoD is still in the early stages of prioritizing weapon cybersecurity. The DoD has typically concentrated its efforts on keeping its networks secure, but not the actual weapon systems.

The GAO found mission-critical systems with various vulnerabilities, but program officials brushed the threats off as unrealistic, adamant that their systems were secure. Passwords that took only 9 seconds to guess and unencrypted communications allowed testers to commandeer weapon systems undetected. One test team reported that "they caused a pop-up message to appear on users' terminals instructing them to insert two quarters to continue operating." Test teams reportedly looked up the default passwords for open-source software on the internet to take control of weapon systems. The GAO report didn't even consider related issues such as the Internet of Things, contractor facilities, microelectronics, contracting, and industrial control systems. The 50-page document is quite interesting to read.
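For a sense of why a 9-second guess is plausible: the worst-case time to exhaust a password keyspace is just keyspace size divided by guess rate. A back-of-the-envelope sketch; the guess rates below are illustrative assumptions, not figures from the report:

```python
# Worst-case time to try every password of a given length.
# Guess rates used here are illustrative assumptions, not from the GAO report.
def seconds_to_exhaust(charset_size: int, length: int, guesses_per_sec: float) -> float:
    """Keyspace size divided by guess rate."""
    return charset_size ** length / guesses_per_sec

# A default-style password: 4 lowercase letters against a modest guess rate.
print(seconds_to_exhaust(26, 4, 50_000))   # ~9 seconds
# A 12-character password drawn from 72 symbols against a fast offline rig.
print(seconds_to_exhaust(72, 12, 1e10))    # on the order of tens of thousands of years
```

The point isn't the exact rate; it's that short, guessable passwords collapse the exponent that the security depends on.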

Further complicating matters, weapon systems are dependent on external systems, such as positioning and navigation systems and command and control systems in order to carry out their missions--and their missions can be compromised by attacks on those other systems. A successful attack on one of the systems the weapon depends on can potentially limit the weapon's effectiveness, prevent it from achieving its mission, or even cause physical damage and loss of life. Due to this lack of focus on weapon systems cybersecurity, DOD likely has an entire generation of systems that were designed and built without adequately considering cybersecurity.
 
Yikes, at least change the default passwords.

The theory is largely that physical access to the systems themselves is secure, so default passwords make the systems easy to access for configuration/diagnostic reasons. The real issue is when you connect these systems to other, external systems, opening up an access point to your previously secured network.
 
... A successful attack on one of the systems the weapon depends on can ... cause physical damage and loss of life...
Isn't "causing physical damage and loss of life" what the weapons are meant to do when fully functional?
If anything attacking the systems can prevent that from happening...
 
Isn't "causing physical damage and loss of life" what the weapons are meant to do when fully functional?
If anything attacking the systems can prevent that from happening...
Well say that a bomb was loaded and ready to go. Then a foreign adversary hacked the weapon system and set it off while it was still on American soil. It would cause physical damage and a loss of life.
 
I wonder how many people here commenting actually worked in the industry...my guess? Not many.
 
Isn't "causing physical damage and loss of life" what the weapons are meant to do when fully functional?
If anything attacking the systems can prevent that from happening...

Imagine a missile defense system installed somewhere in California. Its purpose would be to prevent physical damage and loss of life by taking out a missile coming over the Pacific before it could reach its destination. Normally that system would sit idle for 99.9% of its life. But if it were compromised, someone would be able to launch missiles into the heart of Los Angeles, causing a lot of the physical damage and loss of life it was intended to prevent.
 
I always wondered how the "Lowest price, technically acceptable" philosophy the military adopted during the early years of the Obama administration would work out. Now we know!
 
I always wondered how the "Lowest price, technically acceptable" philosophy the military adopted during the early years of the Obama administration would work out. Now we know!

You forgot lack of accountability because in government there is no accountability.
 
There is no cyberwarfare threat. And if there were, there's clearly no need to defend against it. Also, we need to partner with other countries to update our information infrastructure.
 
Skynet, is that you!?
You'd figure that, spending as much on the military as we do, still outspending the next 4-6 countries combined, we'd be a wee little bit more secure.
 
Soooo, those third-world countries don't need to manufacture their own weapons; they can just take control of ours and use them as they wish. Brilliant!
Yes and No.

The report is talking about how networked the systems within a weapon are. In other words, the system that regulates flight can hack the system that controls detonation. It isn't that the weapon is vulnerable to external attack; it would need to be a Trojan horse piece of hardware.

This would be less of a problem if we didn't purchase foreign components or have massive visa programs in critical industries.
 
I always wondered how the "Lowest price, technically acceptable" philosophy the military adopted during the early years of the Obama administration would work out. Now we know!

I would like to point out that it was the House, under Republican control at the time, who made "Lowest Technically Acceptable Bid" law. Thankfully, we've moved heavily to Firm Fixed Price, as well as a push towards Commercial Off The Shelf equipment when possible.
 
The theory is largely that physical access to the systems themselves is secure, so default passwords make the systems easy to access for configuration/diagnostic reasons. The real issue is when you connect these systems to other, external systems, opening up an access point to your previously secured network.

Yes and No.

The report is talking about how networked the systems within a weapon are. In other words, the system that regulates flight can hack the system that controls detonation. It isn't that the weapon is vulnerable to external attack; it would need to be a Trojan horse piece of hardware.

This would be less of a problem if we didn't purchase foreign components or have massive visa programs in critical industries.

So part of the problem is what is vulnerable and how. Most of these issues involve systems that were never networked with anything outside the system but are now being forced to network with other systems. So the initial design wasn't flagged, because these systems were not significantly vulnerable until the mandate to connect them to other devices and networks.
 
So part of the problem is what is vulnerable and how. Most of these issues involve systems that were never networked with anything outside the system but are now being forced to network with other systems. So the initial design wasn't flagged, because these systems were not significantly vulnerable until the mandate to connect them to other devices and networks.

Pretty much. Nobody cares about the user/password combination if there's no way to obtain physical access. It's when you connect that system to a system that is itself exposed that you end up with problems.
 
The theory is largely that access to the systems themselves are secure, so default passwords make the systems easy to access for configuration/diagnostic reasons. The real issue is when you connect these systems to other external systems, opening up an access point to your previously secured network.


Yes and no.

Truth is, without specifics I can't say one way or the other. There is a huge range of possibilities depending on what you are actually talking about. Obviously we are talking about weapon systems, and you have to keep that as your scope 'cause that's the topic. Default passwords are not kept for easy access. As soon as you start telling Joe Snuffy he can change them, you wind up completely locked out of systems, and sometimes they have to be sent back to the manufacturer to unlock. This is a no-shit issue where risk must be correctly weighed against the impact of compromise/fuck-ups. If I am the commander of an M1A2 tank brigade, I am not going to risk having my tanks' combat computers locked out because someone is worried that someone will steal one and try to use the weapon systems. In garrison inside the USA these tanks don't have ammo loaded; same on any base even overseas. They don't take on a load until they roll out to use them, and at that point they are under positive control of the crew. This is just one example, but it highlights what I am saying.

I am sure that in some situations there could be improvements, but in others, there is no problem. Each one must be addressed individually and with a strong dose of common sense and a solid risk analysis performed.
 
Imagine a missile defense system installed somewhere in California. Its purpose would be to prevent physical damage and loss of life by taking out a missile coming over the Pacific before it could reach its destination. Normally that system would sit idle for 99.9% of its life. But if it were compromised, someone would be able to launch missiles into the heart of Los Angeles, causing a lot of the physical damage and loss of life it was intended to prevent.


I know this is just an example, but it's mostly an unrealistic one. The missiles for most air-defense systems can't be employed in an air-to-ground role; they are air-to-air only. You can't take an anti-aircraft missile system, which is designed to receive target tracking data, close with an airborne target, acquire it, and guide itself into terminal attack, and make it target a ground location. If you launched one, it would at worst find another aircraft to kill, and at best just fly through the air until it ran out of fuel, probably self-detonating harmlessly as it fell to earth. Next issue: the warheads are pretty small; it doesn't take much to kill a plane. Bigger ones for anti-ballistic-missile targets, but still, wrong guidance/target profile.

And they don't ever sit idle. Such systems are always on and get weekly maintenance checks to ensure serviceability. Just because they are not actively being used to engage targets doesn't mean they are idle, or that they sit around unused and forgotten. There are people who have a job of maintaining them and keeping them ready for the day they are needed and must function. Or at least, that is how it's supposed to work.
 
Skynet, is that you!?
You'd figure that, spending as much on the military as we do, still outspending the next 4-6 countries combined, we'd be a wee little bit more secure.
20 countries.
And yeah, government IT is a balkanized joke outside the NSA. And maybe even inside it.
 
You forgot lack of accountability because in government there is no accountability.

If you are talking about the military, then you are wrong. Not saying no one ever gets away with anything, but people get shit hung on them all the time too.
 
So part of the problem is what is vulnerable and how. And most of these have to do with systems that were not networked with anything outside the system, but now are being forced to be networked with other systems. So the initial design wasn't flagged because they were not significantly vulnerable until the mandate to connect to other devices and networks.


Um, let me put it into perspective in a loosely related way.

I am a sysad on three different software development networks, and we must follow all the same security guidance everyone else in DoD does. Chasing down and managing vulnerabilities is a never-ending job, as most of you know. Therefore we have vulnerabilities, and if the GAO did a serious assessment they would come away with a nice list of issues; there would be numerous meetings, and maybe even a newspaper article would be written on how the networks and systems used for developing military intelligence software are full of vulnerabilities...

Our networks are not connected to anything outside the building, and the building is TEMPESTed, and it's a SCIF. No external lines, no stray signals, controlled access, etc.

Sure we have vulnerabilities, but a real risk assessment wouldn't put anyone in actual trouble, the risks are too low because reality is not blind.
 
Um, let me put it into perspective in a loosely related way.

I am a sysad on three different software development networks, and we must follow all the same security guidance everyone else in DoD does. Chasing down and managing vulnerabilities is a never-ending job, as most of you know. Therefore we have vulnerabilities, and if the GAO did a serious assessment they would come away with a nice list of issues; there would be numerous meetings, and maybe even a newspaper article would be written on how the networks and systems used for developing military intelligence software are full of vulnerabilities...

Our networks are not connected to anything outside the building, and the building is TEMPESTed, and it's a SCIF. No external lines, no stray signals, controlled access, etc.

Sure we have vulnerabilities, but a real risk assessment wouldn't put anyone in actual trouble, the risks are too low because reality is not blind.

Yes, but that is not true of all systems. Starting a number of years back, there was a mandate to start connecting weapon systems to the information grid. There are a number of reasons for this that I will not get into here. The problem with mandates is they usually come in the "one size fits all" form. This is where bureaucracy takes over, and some programs are pressured into connecting to networks they were never meant to connect to, thus opening up new possible vulnerabilities. Add in that some of these programs have been around for a long time, and none of the current people know the original code they were written in, so they cannot properly assess all the dangers or correct them before connecting. Fortunately, that typically means not many people on the outside truly understand how to take advantage of these vulnerabilities either.

Note also, while many people outside may not be able to get access initially, that doesn't mean it will always be the case. Insider threat, and even more importantly what I call "Insider Stupidity," is a real thing. I can't express how many times I have had to tell someone they couldn't connect something a certain way because they were breaking fundamental security rules and creating a far broader attack vector into the system.
 
Um, let me put it into perspective in a loosely related way.

I am a sysad on three different software development networks, and we must follow all the same security guidance everyone else in DoD does. Chasing down and managing vulnerabilities is a never-ending job, as most of you know. Therefore we have vulnerabilities, and if the GAO did a serious assessment they would come away with a nice list of issues; there would be numerous meetings, and maybe even a newspaper article would be written on how the networks and systems used for developing military intelligence software are full of vulnerabilities...

Our networks are not connected to anything outside the building, and the building is TEMPESTed, and it's a SCIF. No external lines, no stray signals, controlled access, etc.

Sure we have vulnerabilities, but a real risk assessment wouldn't put anyone in actual trouble, the risks are too low because reality is not blind.

Is that how one of our Predator drone systems was infected with a virus? (I'm being a little sarcastic, but to say there aren't some serious issues is just ludicrous.)

https://www.wired.com/2011/10/virus-hits-drone-fleet/
 
Yes, but that is not true of all systems. Starting a number of years back, there was a mandate to start connecting weapon systems to the information grid. There are a number of reasons for this that I will not get into here. The problem with mandates is they usually come in the "one size fits all" form. This is where bureaucracy takes over, and some programs are pressured into connecting to networks they were never meant to connect to, thus opening up new possible vulnerabilities. Add in that some of these programs have been around for a long time, and none of the current people know the original code they were written in, so they cannot properly assess all the dangers or correct them before connecting. Fortunately, that typically means not many people on the outside truly understand how to take advantage of these vulnerabilities either.

Note also, while many people outside may not be able to get access initially, that doesn't mean it will always be the case. Insider threat, and even more importantly what I call "Insider Stupidity," is a real thing. I can't express how many times I have had to tell someone they couldn't connect something a certain way because they were breaking fundamental security rules and creating a far broader attack vector into the system.


Not saying such cases don't exist, just that the assessment process frequently uncovers numerous vulnerabilities which do not translate into actual risks.
 
Is that how one of our Predator drone systems was infected with a virus? (I'm being a little sarcastic, but to say there aren't some serious issues is just ludicrous.)

https://www.wired.com/2011/10/virus-hits-drone-fleet/


The system was infected by a fuck up, no doubt about it. Someone screwed up. Someone somewhere in the chain didn't follow proper procedures or there is a malicious actor involved.

That being said, because of the nature of the DoD Communications network security, they can continue to operate the systems. If the infection keeps returning then it's resident somewhere and they haven't located it yet, or they aren't getting rid of all of it.

It's a keylogger, right, one that can't phone home. Far more important is how the system was infected, and that was either an insider threat or, as I said, someone fucked up and did something they shouldn't have. That is the real threat here, not the virus. No system is immune to such things, is it? I mean really, do you know of any secure system that can't become compromised due to user stupidity or an insider threat?
 
The system was infected by a fuck up, no doubt about it. Someone screwed up. Someone somewhere in the chain didn't follow proper procedures or there is a malicious actor involved.

That being said, because of the nature of the DoD Communications network security, they can continue to operate the systems. If the infection keeps returning then it's resident somewhere and they haven't located it yet, or they aren't getting rid of all of it.

It's a keylogger, right, one that can't phone home. Far more important is how the system was infected, and that was either an insider threat or, as I said, someone fucked up and did something they shouldn't have. That is the real threat here, not the virus. No system is immune to such things, is it? I mean really, do you know of any secure system that can't become compromised due to user stupidity or an insider threat?

Well, technically, if they used PROMs, hardware validation, and allowed no other software/code modifications after deployment, that would be a lot more secure. It would also cost more and be harder to maintain.
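The PROM idea above can be sketched in software terms: boot-time validation that refuses any image whose digest doesn't match a known-good value held in write-protected storage. A minimal sketch; the image bytes and digest here are made up for illustration:

```python
import hashlib

# Known-good digest, imagined as burned into write-protected storage at build
# time. The "firmware" content is a made-up placeholder.
FACTORY_IMAGE = b"factory firmware image v1.0"
KNOWN_GOOD_SHA256 = hashlib.sha256(FACTORY_IMAGE).hexdigest()

def firmware_is_untampered(image: bytes) -> bool:
    """Accept the image only if its digest matches the factory value exactly."""
    return hashlib.sha256(image).hexdigest() == KNOWN_GOOD_SHA256

print(firmware_is_untampered(FACTORY_IMAGE))                    # True
print(firmware_is_untampered(FACTORY_IMAGE + b" + keylogger"))  # False
```

Real systems would use a signed digest and a hardware root of trust, but the trade-off is the one noted above: every legitimate update now has to go through the same heavyweight path.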
 
Well, technically, if they used PROMs, hardware validation, and allowed no other software/code modifications after deployment, that would be a lot more secure. It would also cost more and be harder to maintain.


Say this phrase for me... "continuous monitoring" :D
 
Say this phrase for me... "continuous monitoring" :D

Sure, but how? You cannot continuously monitor something that is not in your presence to monitor. And those that have it do not always have the tools or training necessary.
 
Sure, but how? You cannot continuously monitor something that is not in your presence to monitor. And those that have it do not always have the tools or training necessary.

Not sure what you mean. The Army's RMF process for cybersecurity requires "continuous monitoring," which doesn't really mean what it sounds like it means, but I digress.

A computer virus has infected the cockpits of America's Predator and Reaper drones, logging pilots' every keystroke as they remotely fly missions over Afghanistan and other warzones.
It's not the drones themselves that were infected but the control systems back in the US, at that Air Force base outside Vegas where the drones are operated from. So we are talking commercial off-the-shelf hardware and a standard operating system, probably based on an Army Gold Master build for starters. The OS has to be STIG'd, then the custom apps and other software loaded, STIG'd, etc., then configured for the networks and put online.

Now they have to do the continuous monitoring part where they are subjected to HBSS and ACAS, and whatever else, patches must be applied, and the system must be kept in compliance with the STIG even as it changes over time.

Someone intentionally, unknowingly, or incompetently fucked up and introduced that virus as part of this process. Maybe it was someone using infected media to move patches, or an application update delivered from the life-cycle support team for the special applications used in the cockpits was infected because someone screwed up. As an example, say a software developer needs an updated version of some simple little developer tool; Notepad++ sounds good. I'll slander their product for a moment and say this developer decided he wanted the latest version of Notepad++, and the guy who downloads it for him doesn't get it from an official repository and decides to download it from CNet... and now this developer has a virus on his workstation, and he burns their newest update to optical media for delivery, and now the original source media for the application update has an infected file on it.
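One cheap defense against exactly this infected-media chain is verifying every download against the vendor's published digest before it gets burned to delivery media. A minimal sketch, assuming the vendor publishes a SHA-256; the function names here are my own, not from any real tool:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Hash a file in chunks so large installers don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare the local file against the digest published by the vendor."""
    return sha256_of_file(path) == published_digest.lower()
```

This doesn't help if the official source itself is compromised, but it kills the grab-it-from-a-random-mirror failure mode described above.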

Wipe your infected systems, reinstall, OMG the virus is back, how?

This process sounds really good, and it is, I mean it makes sense: keep your shit up to date, and continue to run it through the STIG. But it also means you must keep introducing changes via updates and patches, and you cannot get away from the math on this. The more you introduce new media to your systems, the more likely you'll get burned.

Human error will not be denied.
 
Not sure what you mean. The Army's RMF process for cyber security requires "continuous monitoring" which doesn't really mean what it sounds like it means, but I digress.

First off, it isn't the Army's RMF process, it is the DoD's, who got it from NIST. And I am well acquainted with both Army and DoD requirements, especially in regard to these specific situations... Let me put it another way. How do you monitor something that is out in the field? More importantly, how do you monitor something that is out somewhere in the field but is no longer being tracked properly? For instance, let's just go with some more common systems, like specialized laptops that connect to weapon systems. You go out to a forward base; on your list there are supposed to be 10 laptops there, but when you arrive, you find 8, and of those 8, only 6 are actually the laptops you sent. They now have 2 laptops from another site, and they gave 2 of their laptops to a crew on a mission. Out in the field, soldiers will do what they need to do to get the job done, regardless of your operating procedures.
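The laptop drift described above is the kind of thing a periodic reconciliation pass is meant to surface: diff what the tracking system says against what's physically on site. A toy sketch with made-up serial numbers:

```python
# What the property book says should be at the site vs. what was found there.
# Serial numbers are invented for illustration.
expected = {"LT-001", "LT-002", "LT-003", "LT-004", "LT-005"}
found    = {"LT-001", "LT-002", "LT-003", "LT-900", "LT-901"}

missing    = expected - found   # sent here but not present (loaned out?)
unexpected = found - expected   # present but belongs to another site

print(sorted(missing))      # ['LT-004', 'LT-005']
print(sorted(unexpected))   # ['LT-900', 'LT-901']
```

The hard part isn't the set arithmetic, of course; it's getting honest, timely "found" data from the field in the first place.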
 
First off, it isn't the Army's RMF process, it is the DoD's, who got it from NIST. And I am well acquainted with both Army and DoD requirements, especially in regard to these specific situations... Let me put it another way. How do you monitor something that is out in the field? More importantly, how do you monitor something that is out somewhere in the field but is no longer being tracked properly? For instance, let's just go with some more common systems, like specialized laptops that connect to weapon systems. You go out to a forward base; on your list there are supposed to be 10 laptops there, but when you arrive, you find 8, and of those 8, only 6 are actually the laptops you sent. They now have 2 laptops from another site, and they gave 2 of their laptops to a crew on a mission. Out in the field, soldiers will do what they need to do to get the job done, regardless of your operating procedures.


Hold on now, if we are talking about this article about the UAVs being infected with a virus, then we are not talking about "out in the field".
https://www.wired.com/2011/10/virus-hits-drone-fleet/


These shelters are set up at Nellis AFB, just outside of Las Vegas, NV.



If you are talking in a more generic sense as in the generalized statements from the source article of this topic, then yes, there can be instances where some weapon systems may not receive any software maintenance attention.

So I think it's important here to clarify which you are talking about, to make sure we don't have a disconnect going.

Oh, and of course you are right: it's not just the USA, it's all of DoD, headed up and controlled by DISA.
 