Why Don't They Just Update?

Your phone works great. Then comes a system update, and wow, my phone is sluggish and yelling about low memory... time for a new phone.

Apple's evil laughter can be heard in the background

Fixed that for you :D

My Note 4 worked better and faster after I updated it to Android 6.0.
Same with other family members' Google phones. They seemed faster after the update.

My only complaint is vendors (even Google) abandoning their older phones, and having to install unofficial versions of Android if you want a newer release.
They argue that the older phones are not fast enough for the new OS, yet both the older phone and the older tablet I rooted and updated ran better with the newer OS.
 
Why don't people update? Well, I'm no IT expert, but here's my crappy story.

I recently wanted an extra laptop, so I found something used on craigslist. Lenovo, Haswell i3, win 8.1, works great, and just $125. Even had the original box. It only had 4 GB of RAM, but I found a matching stick on ebay so now it's up to 8GB. 4GB is too low in my opinion.
The guy selling said he did a system restore so it was ready to go.

Naturally I'm paranoid and don't trust that, so I did a restore myself. Lenovo has that one-touch recovery key thing, so it's easy. It took a while, but it worked. A fresh Lenovo stock factory install of win 8.1 is ready and working.

It has all been downhill since then. I got it connected to the internet and tried to get updates. It said it was searching for updates for 24 hours, no progress, nothing, so I knew something was wrong. I rebooted a few times and tried searching again each time, no dice. Wired or wireless, no difference.

I decided, well, fuck it, I'll just update to win10 directly. I did that, and after updating it blue-screened and I was stuck. After trying a few things to get into Windows, I gave up and did the restore again so I was back to the factory win8.1.

Now that I know going directly to win10 is going to fail, I try to update 8.1 again. The process is the same: searching for updates for a day, no progress. I go to Google and find one video on YouTube that describes a way to deal with this. I have to download a standalone update for Windows Update itself, disconnect from the internet, go to Task Manager and stop the Windows Update service, then install that standalone update. It installs and demands a reboot, so I reboot. After that, success! It says there are about 197 updates, so I let it download all of those. These are only the critical ones, not all updates; I tried to go easy on it.
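For anyone stuck in the same place: the workaround above boils down to stopping the Windows Update service and installing the standalone package by hand. Here is a rough Python sketch of those two steps, not a guaranteed fix; the .msu path is a placeholder rather than a specific KB, and it needs an elevated prompt on Windows.

```python
# Sketch of the "stop Windows Update, then install the servicing update
# manually" workaround described above. Assumes Windows, an elevated prompt,
# and that the standalone .msu has already been downloaded; the path below
# is a placeholder, not a specific KB recommendation.
# (The video also has you disconnect from the internet first so the broken
# update scan doesn't restart in the background.)
import subprocess

MSU_PATH = r"C:\Downloads\windows-update-client-update.msu"  # hypothetical path

# Stop the Windows Update service so the standalone installer isn't blocked
# by the endless "searching for updates" scan.
subprocess.run(["net", "stop", "wuauserv"], check=True)

# Install the standalone package with the Windows Update Standalone Installer.
# /norestart lets you reboot on your own schedule.
subprocess.run(["wusa.exe", MSU_PATH, "/quiet", "/norestart"], check=True)

print("Installed. Reboot, then re-run Windows Update.")
```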

After downloading all the updates, and then even installing them, I felt like I was home free. It's time for the reboot, so I reboot. It has now been stuck at "Updating your system (0%)" for about 12 hours.
So now what the fuck do I do?
I've had insane patience, and this is only possible since it's an extra computer. If this was my main computer I would have lost my shit by now.
 
It also depends on the software they are running. Even updating some components in Windows can break how applications work. ... a vulnerability patch can break certain applications (especially if it's not the most current version).
That's reality at my work, within the healthcare sector.
Even having "the most current version" doesn't help much when that version is several years old.
All patches and other changes to the OS must be tested against a battery of critical programs before being applied to "live" computers. That testing can take months...
 
Because IT jobs have, most of the time, been lazy, overpaid jobs. Why fix something that isn't broken?
 
Question: You are in the hospital for a life threatening condition. You are informed that you can't be treated "for a couple of days" because a machine necessary for the procedure you need has been infected with ransomware.

You OK with this situation? I think not. So perspective please.
Mission-critical machines like this will never be connected to the internet. I know this from experience, as I used to install the servers and networks that tied all that together in hospitals. Patches necessary for the software are sent on USB sticks and deployed on the hospital network after being tested on standalone machines. Any business that has mission-critical machines unpatched and connected to the internet is asking to get screwed.
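As a side note (not part of the commenter's described process): when patches travel by USB stick onto an isolated network, a cheap extra step is verifying each file against the hash recorded when it was downloaded and tested. A minimal Python sketch, with a placeholder path and digest:

```python
# Illustrative only. Check a patch file carried in by USB against the hash
# recorded when it was originally downloaded and tested on a standalone box.
import hashlib

EXPECTED_SHA256 = "0123abcd..."                 # hypothetical: noted on the test machine
PATCH_FILE = r"E:\staging\security-patch.msu"   # hypothetical: file on the USB stick

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of(PATCH_FILE)
if digest == EXPECTED_SHA256:
    print("Hash matches the tested copy; OK to deploy.")
else:
    print(f"Hash mismatch ({digest}); do not deploy this file.")
```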
 
Where I work, patching servers means requesting downtime from production.
And that downtime is hard to get; we're lucky if we get all four of the quarterly maintenance windows we request every year.

Just today we had a meeting with our production users. Thanks to the urgency of this ransomware, they finally agreed to downtime this week despite the potential loss to production.
Under normal circumstances, they would never have agreed.

Of course, now we are all crossing our fingers that our application will not break after this emergency patching.
 
The author suggests regulation as one answer: require organizations key to healthcare and key infrastructure to keep long-term service contracts on their operating systems and software implementations, and better educate non-critical organizations so that they understand the implications and factor them into the financial calculations associated with keeping software updated. With a little luck, this outbreak will serve as part of that education and convince organizations to better prioritize and budget for security updates, but I'm not convinced it will. We humans have a long history of conveniently disregarding the unpopular and costly, and instead letting the pendulum swing between non-action and crisis mode. Personally I have very little faith this will change anything, but I guess it doesn't hurt to be hopeful?

Having worked with a fair number of massive healthcare companies... this will NEVER happen. We had a hard enough time convincing them to abandon their mainframes (no joke) and were forced to support ie8 on at least one webapp because one company refused to update past that (in 2016). I wouldn't be shocked if more than 50% of healthcare companies are still rocking XP. And regulations won't change anything, they ALWAYS have perpetual grace periods or workarounds for the big players.
 
Exactly. If you cannot secure it, close off every door into it as best you can, or just disconnect it entirely. Space industry stuff, like antenna controllers, ground station equipment, etc., mostly cannot be patched. We have been asking/begging vendors to put some type of patching ability into the equipment they sell, and it still is not prevalent in the market. So you build the systems, get them to work, then close all the doors, because once they work you really don't touch them again...

I really feel like this hasn't been discussed enough. Is there a reason that all your servers and desktop PCs can communicate with each other, and particularly via SMB or RDP? If you design around default deny, it dramatically reduces the potential impact of something like this.
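As an illustration of that default-deny point (not something the commenter posted), here is a quick Python sketch that checks, from an ordinary workstation, which peers will even accept an SMB or RDP connection. The subnet is a placeholder; in a default-deny design most of these attempts should be refused.

```python
# Rough audit sketch: from a workstation, see which hosts accept inbound
# SMB (445) or RDP (3389) connections. The subnet below is a placeholder
# for your own environment.
import socket

HOSTS = [f"192.168.1.{i}" for i in range(1, 255)]  # hypothetical subnet
PORTS = {445: "SMB", 3389: "RDP"}

for host in HOSTS:
    for port, name in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)  # keep the sweep quick
            if s.connect_ex((host, port)) == 0:
                print(f"{host} accepts {name} ({port}) from this workstation")
```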
 
This is the classic no-win argument for anyone in network security. If you bust your ass and ensure all your servers are patched, your workstations are updated, and everything is configured properly, you'll never have a security problem.

Then next year your budget will get cut, because the bosses will wonder why they have to spend so much on IA when the company has never had a security problem.

Everything works: "What are we even paying you for?"

Everything breaks: "What are we even paying you for?!"
 
I work IT for a major overpriced-rock retailer, and I can tell you that every other month we have at least one major crippling issue or another with our production servers after standard maintenance Windows patching. Our applications simply can't keep up with the frequent updates. (Or maybe we can't keep up... that's totally possible too.) This usually involves rolling back the update and waiting for our application vendors to figure out what the issue is and how to fix it within our environment. It takes way too much time, and by the time we reach a resolution and the update can be applied, it's sometimes months out, which puts us behind.

It's not so simple to keep up to date in the workplace especially when this particular exploit was only patched less than two months ago.

On a side note.....I'm super guilty of hitting the 'remind me in 4 hours' box for a straight month after updates get downloaded to my work machine. I ain't got time to wait for the 30 minutes my shitty work PC takes to restart, install updates, and get back to the desktop for me to get back to work.

I do that too, but at the end of the day before I leave I let it click the restart and install updates. Is there a reason you can't do that?
 
Because patching costs time, and time is money; that's why businesses don't want to do it unless they have to. Unless you can shit money on demand? If not, there you go.
 
In my experience, the better educated the users (doctors are the worst, with lawyers and, oddly, accountants coming in a close second), the more likely they are to be self-taught "experts" ("ex" being a has-been, and "spurt" being a drip under pressure) who refuse to listen to the advice and carefully worded emails from the lowly IT department.
 
I do that too, but at the end of the day before I leave I let it click the restart and install updates. Is there a reason you can't do that?

Given how busy I usually get (and my terrible short-term memory due to my crazier college days...), I forget that the updater nagged me twice earlier in the day.

So when I leave I just get up and go. Then it begins a remind-me-later Groundhog Day for the next month until the reminder catches me in a situation where I'm about to leave for lunch or for the day at which point I do it.
 
Having worked with a fair number of massive healthcare companies... this will NEVER happen. We had a hard enough time convincing them to abandon their mainframes (no joke) and were forced to support ie8 on at least one webapp because one company refused to update past that (in 2016). I wouldn't be shocked if more than 50% of healthcare companies are still rocking XP. And regulations won't change anything, they ALWAYS have perpetual grace periods or workarounds for the big players.

The FDA is surprisingly uncompromising with manufacturers, so if they were given the mandate, I feel pretty strongly that they could make this enforcement happen.

There would certainly be a phase-in period though.
 
Coming from the healthcare industry: could you guys who want to force healthcare companies to upgrade "overnight" please explain how they're going to pay for the hundreds of millions of dollars' worth of new medical equipment, when the existing equipment has no manufacturer-supported way to replace or upgrade the OS?

You can (sometimes) replace the servers quickly, but that five-year-old ultrasound machine isn't getting touched until it's at least ten years old, and then it's likely just passed down to a less important department or facility.

I'm all for regulations, but when you push them down, force a hospital to get up to standard, and then take away their funding if they don't, you create a situation where they are forced to make huge cuts... and they start with employees.
 
In my experience, the better educated the users (doctors are the worst, with lawyers and, oddly, accountants coming in a close second), the more likely they are to be self-taught "experts" ("ex" being a has-been, and "spurt" being a drip under pressure) who refuse to listen to the advice and carefully worded emails from the lowly IT department.

It's just so weird. When I read through these threads, I think about how what I hear is the antithesis of what we do. Our business folks have input, but they simply aren't allowed to control IT functions; they aren't even allowed to develop their own applications, they have to hand that off to the technology partner aligned with their business function.

As much as it can be annoying and bureaucratic, I've come to appreciate just how much better that process is compared to environments that don't have even basic stuff like routine update procedures. We deployed this patch the week after it came out because it was a critical remote execution flaw. And there's nothing that patch could have broken that would be anywhere close to as bad as getting hit with this crap.
 
Mission-critical machines like this will never be connected to the internet. I know this from experience, as I used to install the servers and networks that tied all that together in hospitals. Patches necessary for the software are sent on USB sticks and deployed on the hospital network after being tested on standalone machines. Any business that has mission-critical machines unpatched and connected to the internet is asking to get screwed.

I have to wonder if the British NHS was so hard hit by this because of some top-down information-availability requirement or other IT mandate. I've worked in hospital IT administration assessing individual systems' vulnerability risks for HCA; they practice a lot of the network insulation you mention, and that seems to have paid off in this instance.
 
Coming from the healthcare industry: could you guys who want to force healthcare companies to upgrade "overnight" please explain how they're going to pay for the hundreds of millions of dollars' worth of new medical equipment, when the existing equipment has no manufacturer-supported way to replace or upgrade the OS?

You can (sometimes) replace the servers quickly, but that five-year-old ultrasound machine isn't getting touched until it's at least ten years old, and then it's likely just passed down to a less important department or facility.

I'm all for regulations, but when you push them down, force a hospital to get up to standard, and then take away their funding if they don't, you create a situation where they are forced to make huge cuts... and they start with employees.


If there is a regulatory requirement to stay updated, it will factor into both purchasing decisions (purchase systems that are upgradeable and have extended service contracts) and manufacturer commitments to manufacture and support such systems.

You are correct. It would create a nightmare if enforced all of a sudden overnight, but if it were to be made a requirement for new systems, it would be phased in over time and would result in a more secure future state.
 
but if it were to be made a requirement for new systems, it would be phased in over time and would result in a more secure future state.

EMR regulations/reimbursement requirements are getting pushed through now and the hardware we have is certainly not ready. So between now and 10-15 years when everything is compliant, we spend a few million to get a company to pull the data with a dozen different dongles and half-working methods. yay.
 
It's just so weird. When I read through these threads, I think about how what I hear is the antithesis of what we do. Our business folks have input, but they simply aren't allowed to control IT functions; they aren't even allowed to develop their own applications, they have to hand that off to the technology partner aligned with their business function.

As much as it can be annoying and bureaucratic, I've come to appreciate just how much better that process is compared to environments that don't have even basic stuff like routine update procedures. We deployed this patch the week after it came out because it was a critical remote execution flaw. And there's nothing that patch could have broken that would be anywhere close to as bad as getting hit with this crap.

+1. We dropped it yesterday. Forced. Couldn't get out of it. It's absolutely upper management, and to some extent it can be IT too. IT needs to be able to push, in an environment, for a seat at the table. A lot of times I just see IT folks roll with the punches because it's easier to do. I can't work that way; I have to have some basic structure where IT has some say in what happens. When I didn't get that at my last environment, I left. I can understand specialized equipment, and we have some here, but it's totally isolated from the rest of the network and has no internet access. It really all comes down to this: the business needs to take IT seriously and not allow 'expert home users' to control a business environment. They are not equal.
 
Because IT jobs have, most of the time, been lazy, overpaid jobs. Why fix something that isn't broken?

WHAT the FUDGE ARE YOU TALKING ABOUT? Do you know HOW OFTEN IT HAS TO SCREAM FOR RESOURCES? You want to know why IT doesn't jump to take care of a damn issue? Because they have been browbeaten by the lack of any real budget to get something done that works right. So they are forced to settle for... "well, it works."

I've worked for a company that can't get resources at all.

I now work for a critical part of the company and we spend what we should to keep our systems current both hardware and software wise. We don't go crazy but our stuff has to be rock solid so it is. We are all working as a team and work hard to make sure our stuff is the best it can be. We are able to do this because we are critical to our company. And the company recognizes it.

When you're some poor outsourced, disconnected talking butthole forced to act as a contractor with no actual pull in the organization and given the bare minimum of resources to get a job done... yeah, you're going to be sitting on your hands because you can't execute. Blindly blaming the IT staff is just what an exec who doesn't understand that it's their own damn fault would do.
 
Where I work, patching servers means requesting downtime from production.
And that downtime is hard to get; we're lucky if we get all four of the quarterly maintenance windows we request every year.

Just today we had a meeting with our production users. Thanks to the urgency of this ransomware, they finally agreed to downtime this week despite the potential loss to production.
Under normal circumstances, they would never have agreed.

Of course, now we are all crossing our fingers that our application will not break after this emergency patching.


Because these critical systems are clearly hampered by a lack of budget. In a real environment you would have A and B pairs of servers for each server role (or whatever number you needed), and then you would be able to stay current by patching the offline side. Oh, and don't forget a full set of servers in development so you can test your deployments before blindly thrusting them into PROD. So it's clear to me that either your IT department doesn't have the resources to do the job right, or the systems you support are actually not critical and the rest of the company has to suck an egg while you do your needed work.

It's bullshit.
 
I just make my case to the accounting department and show them with clear charts and numbers that continuing to keep the old systems going is more expensive over a five-year period than the upgrade and replacement. If you can show an accountant an ROI after three years on something, they will generally find the money to make it a feasible project. Then, once I have accounting approval, we can work with managers to schedule outages accordingly, bring in outside help to lessen downtime during key hours, or even work with the local union to rework hours so the work can be done outside normal hours and remove almost all downtime. Yes, it means I have to get my hands dirty with a lot of non-IT back-scratching and some off-hours research, but it also means that my Windows NT-only software is now mysteriously running on Server 2016, our Windows XP-only software is running on Enterprise 2016 LTSB, and those Vista and Win 8 machines that nobody liked are either gone or also replaced with LTSB (which is very nice). Point is, I don't really remember my point any more; this whole thing shouldn't have affected businesses to begin with. It should have only been a problem for poorly cracked XP and Win 7 installs.
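To be concrete about the kind of pitch that works with accounting (made-up numbers, not the commenter's actual figures), here is a tiny Python sketch of the break-even math behind that ROI argument:

```python
# Hypothetical numbers only: cumulative cost of keeping the old systems
# running versus replacing them, to find the break-even year.
OLD_ANNUAL_COST = 40_000   # assumed: support contracts, downtime, emergency fixes
NEW_UPFRONT_COST = 85_000  # assumed: hardware + licenses + migration labor
NEW_ANNUAL_COST = 10_000   # assumed: routine maintenance after the upgrade

for year in range(1, 6):
    keep_old = OLD_ANNUAL_COST * year
    upgrade = NEW_UPFRONT_COST + NEW_ANNUAL_COST * year
    marker = "  <-- upgrade is now cheaper" if upgrade < keep_old else ""
    print(f"Year {year}: keep old ${keep_old:,} vs upgrade ${upgrade:,}{marker}")
```

With these invented figures the upgrade pulls ahead around year three, which is the kind of chart an accountant can act on.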
 
Because these critical systems are clearly hampered by a lack of budget. In a real environment you would have A and B pairs of servers for each server role (or whatever number you needed), and then you would be able to stay current by patching the offline side. Oh, and don't forget a full set of servers in development so you can test your deployments before blindly thrusting them into PROD. So it's clear to me that either your IT department doesn't have the resources to do the job right, or the systems you support are actually not critical and the rest of the company has to suck an egg while you do your needed work.

It's bullshit.

While bullshit it may be, you're not quite accurate about this. I'm in a 'real' environment. When we have a production run, that data has to be captured for compliance reasons; if you don't capture the production run data, you can fall out of compliance. In our case runs take a few hours, so we're in better shape than some places. But the PLCs normally only want to talk to one serial bus while a run is going, so you're not taking it offline for a patch. I suppose with a hell of a lot of effort you could split out each pin from the PLC and have two computers capturing the data, but that's going overboard for a failure that may effectively never happen, and the dual-computer setup would introduce more issues as well. Ours do get patched, though, since we have mandatory downtime just for these kinds of issues.
 
While bullshit it may be, you're not quite accurate about this. I'm in a 'real' environment. When we have a production run, that data has to be captured for compliance reasons; if you don't capture the production run data, you can fall out of compliance. In our case runs take a few hours, so we're in better shape than some places. But the PLCs normally only want to talk to one serial bus while a run is going, so you're not taking it offline for a patch. I suppose with a hell of a lot of effort you could split out each pin from the PLC and have two computers capturing the data, but that's going overboard for a failure that may effectively never happen, and the dual-computer setup would introduce more issues as well. Ours do get patched, though, since we have mandatory downtime just for these kinds of issues.


If you had A/B pairs you could update the offline side. Let a run finish while all new runs are directed to the previously offline side; once the last run finishes on the newly offline side, patch it. I don't see the issue. It's a bit of work for whoever is sending the signals over serial, but not impossible. We deal with this all the time where I work. Damn serial concentrators. ;)
 
If you had A/B pairs you could update the offline side. Let a run finish while all new runs are directed to the previously offline side; once the last run finishes on the newly offline side, patch it. I don't see the issue. It's a bit of work for whoever is sending the signals over serial, but not impossible. We deal with this all the time where I work. Damn serial concentrators. ;)

Right? We may need to in the near future; they may want double redundancy. It might not be too long before we have production runs for medical things, so I don't know what we may have to record for that. Right now it doesn't make sense, given how much that would take. This is not an easy undertaking because of where all the equipment is. Still, the mandatory downtime is when patching should happen.
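To make the A/B idea from this exchange concrete, here is a toy Python sketch of the drain-and-patch pattern; the class and run bookkeeping are invented for illustration, not anyone's actual setup:

```python
# Toy sketch of the A/B "drain and patch" idea: new runs go to the online
# side, the other side drains its in-flight runs and is then safe to patch.
class Side:
    def __init__(self, name):
        self.name = name
        self.active_runs = 0
        self.patched = False

a, b = Side("A"), Side("B")
online, draining = a, b          # B starts out offline/patchable

def start_run():
    online.active_runs += 1      # new runs always land on the online side

def finish_run(side):
    side.active_runs -= 1

def patch(side):
    assert side.active_runs == 0, "let in-flight runs finish before patching"
    side.patched = True
    print(f"Side {side.name} patched")

def switch():
    global online, draining
    online, draining = draining, online  # flip roles; old online side drains

start_run()        # a run is being captured on side A
patch(draining)    # side B is idle, patch it first
switch()           # B goes online; A stops taking new runs
start_run()        # new run lands on B
finish_run(a)      # the last in-flight run on A completes
patch(a)           # A is drained, patch it during the same window
```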
 