Admin Deletes His Entire Company With A Single Command

Not only is this the screw-up of all screw-ups, it's a career-ending blunder. I'm not sure how, but not only did it wipe all the servers, it got all the offsite backups too. :eek: Thanks to Bill M. for the link.

A man appears to have deleted his entire company with one mistaken piece of code. The problem command was "rm -rf": a basic piece of code that will delete everything it is told to. The “rm” tells the computer to remove; the r deletes everything within a given directory; and the f stands for “force”, telling the computer to ignore the usual warnings that come when deleting files.
 
Seems like there's a high chance of this being a hoax just from googling around. Also, if there were no backups of the data, was this company really going to last anyway? What was their plan for a power surge that takes out a hard drive?
 
This isn't believable because it's not technically feasible. The one command won't travel through the entire network, delete offline backups, or anything like that. In a larger infrastructure this just can't be done. I've also never seen a larger infrastructure that's entirely Windows or entirely Linux or any one OS. In a small environment I suppose it's possible, but I'm not even a Linux guy and I know that command and what it does. I can't imagine anyone with the authority to run it ever would where it could do any real damage. You could fuck up a single server that way, or even a few of them if done from a virtual machine host, but it would have to be a small company that makes very poor decisions to leave themselves that vulnerable.
 
I don't buy his story. For one thing, the rm -rf command requires a --no-preserve-root switch in order to do what he's describing even on a single system. In other words, you'd have to know what you were doing in order to do it.

For another thing, Ansible is generally secure and well-designed enough to prevent someone from doing that in the first place. There's protection at the shell level and at the Ansible level. I mean, do you really think business-class software that orchestrates actions over multiple nodes has no built-in protection from screw-ups? No one would use it if it didn't.

Maybe a hacker that obtained the root password could do something like this on purpose, but no one is going to do it by accident. I have to wonder if he just wanted an excuse to shut down his business...
 
This is obviously a hoax, but even so it would be a great chance to show why you spend so much money on backups.
 
The closest I've come to this in my world was running diskpart (clean, then format) and wiping my D: drive full of dev stuff... that hurt. I was trying to wipe a USB drive. DOH!
 
This is just stupid.

If a command can delete your backups, then by definition they're not really "off-line", are they?

Even if this command worked the way they claim, what stopped him from just unplugging the computer or hitting CTRL-C to save the majority of the documents?
 
Just for the sake of completeness, that is not actually an accurate description of the command rm -rf.

rm removes files or directories
-r (recursive: includes/follows subdirectories)
-f (force: ignores nonexistent files and suppresses prompts for confirmation)

Calling a command "code" is somewhat inaccurate, but what's more important is that this command will not run without a target.

If you only type rm -rf, nothing gets removed; you have to tell the command what to remove. To delete a file named test, you would type rm -rf test.

Wildcards can be used; the most famous (or infamous) is rm -rf * (or *.*), which deletes every file and folder in the current directory.
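
If it helps to see the flags in action, here's a harmless walk-through in a scratch directory (the directory and file names below are made up for illustration):

Code:
# a safe sketch of the flag behavior; everything stays under /tmp/rmdemo
mkdir -p /tmp/rmdemo/logs
touch /tmp/rmdemo/note.txt /tmp/rmdemo/logs/old.log
cd /tmp/rmdemo
rm note.txt      # plain rm removes a single file
rm logs          # refuses: without -r, rm will not remove a directory
rm -r logs       # -r recurses into logs/ and removes it and its contents
rm -rf nothere   # -f: no error even though "nothere" does not exist
cd / && rm -r /tmp/rmdemo   # clean up the now-empty demo directory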

This isn't believable because it's not technically feasible. The one command won't travel through the entire network, delete offline backups, or anything like that. In a larger infrastructure this just can't be done. I've also never seen a larger infrastructure that's entirely Windows or entirely Linux or any one OS. In a small environment I suppose it's possible, but I'm not even a Linux guy and I know that command and what it does. I can't imagine anyone with the authority to run it ever would where it could do any real damage. You could fuck up a single server that way, or even a few of them if done from a virtual machine host, but it would have to be a small company that makes very poor decisions to leave themselves that vulnerable.

Are you sure?

This depends entirely on how things are set up. It is not likely, in fact, it is completely unlikely, but it is not impossible because you could engineer an entire environment so that you could in fact wipe out everything with a single rm command. But you would have to engineer it that way on purpose.
 
This is just stupid.

If a command can delete your backups, then by definition they're not really "off-line", are they?

Even if this command worked the way they claim, what stopped him from just unplugging the computer or hitting CTRL-C to save the majority of the documents?

If you are in the root directory of a file system and do rm -rf *, then even Ctrl+D (Unix/Linux; on Windows it's Ctrl+C) wouldn't save much; it would wipe out the file structure very fast. In Unix/Linux, directories are just files that list the contents and addresses of other files. Wipe out the primaries under / and the rest is pretty much toast.
 
I did this to my own computer once about 12 years ago. I had Linux and Windows set up in a dual-boot configuration, and I was messing around in Linux and decided to remove the whole system for fun/learning purposes, figuring I could just boot back into Windows and reinstall Linux later. I forgot I had mounted the Windows partition though. :D

Good times.
 
So in the article it states he ran a Bash script that contained an rm -rf command to automate some process. Depending on what that script was doing, this story holds a little more plausibility. If the script was auto-mounting certain volumes, and doing so with root privileges (which is a big no-no), then yes, it could have wiped all data on any drive it mounted. In my years in IT I have seen some stupid scripting mistakes end up creating some serious clusterfucks, but never this bad.

I highly doubt this story is real, though, because it would take a serious lack of following best practices to achieve this outcome.
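
To make that failure mode concrete, here's a purely hypothetical sketch of the kind of backup script described above; the device, mount point, and variable names are invented, not taken from the article:

Code:
#!/bin/bash
# hypothetical backup/cleanup script running as root (all names made up)
BACKUP_DEVICE="/dev/sdb1"
MOUNT_POINT="/mnt/client-backups"
mount "$BACKUP_DEVICE" "$MOUNT_POINT"   # mounted with root privileges, so everything on it is writable
rm -rf "$MOUNT_POINT/$CLIENT_DIR/"*     # prune old data; but if CLIENT_DIR is empty or unset,
                                        # this expands to /mnt/client-backups//* and empties the
                                        # whole mount, and an empty MOUNT_POINT would make it /*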
 
I don't think companies agree with you.

I have to agree with you on this one. Can't say who I work for, or exactly what's happening (legal issues), but... with us, issues of missing data seem to be a bigger deal than leaking the identity of tens of thousands of people. Whoops...
 
I don't buy his story. For one thing, the rm -rf command requires a --no-preserve-root switch in order to do what he's describing even on a single system. In other words, you'd have to know what you were doing in order to do it.

For another thing, Ansible is generally secure and well-designed enough to prevent someone from doing that in the first place. There's protection at the shell level and at the Ansible level. I mean, do you really think business-class software that orchestrates actions over multiple nodes has no built-in protection from screw-ups? No one would use it if it didn't.

Maybe a hacker that obtained the root password could do something like this on purpose, but no one is going to do it by accident. I have to wonder if he just wanted an excuse to shut down his business...

Exactly. No way it could have happened, much less by accident.
 
There was a Reddit thread on this a couple of days ago, and one guy (name_censored_) said this:
Don't be so hasty - it could have been

rm -rf {foo}/{bar}/*

which would resolve to

rm -rf //*

which enumerates the contents of /, and does not require --no-preserve-root. And I can entirely see someone failing to mention the asterisk - the null variables were important; the wildcards to sidestep a recent protection mechanism, not so much.

Anyone have a Linux VM to test this out on?
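
You don't even need a VM to see what that expansion does; you can echo the command instead of running it (foo and bar are just placeholder names here):

Code:
# harmless: echo prints what would run instead of running it
foo=""; bar=""
echo rm -rf "$foo/$bar/"*
# prints something like: rm -rf //bin //boot //dev //etc //home ...
# the empty variables collapse the path to "//", the glob expands to everything
# directly under /, and since no operand is literally "/", --preserve-root never kicks in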
 
Had a DBA delete an entire database at work on Tuesday. We have backups but even with those it still hasn't been restored to working order at this time. Big fucking mess costing lots of $$$.
 
I have to agree with you on this one. Can't say who I work for, or exactly what's happening (legal issues), but... with us, issues of missing data seem to be a bigger deal than leaking the identity of tens of thousands of people. Whoops...


Got to love SOX compliance. "Hey, don't delete that file that was mistakenly created due to a scripting conflict in an unrelated job and only contains every file in the directory (which were already archived), making it consume every spare byte of disk space allotted to your application, because it might look like you're trying to hide evidence of wrongdoing."
 
The closest I've come to this in my world was running diskpart (clean, then format) and wiping my D: drive full of dev stuff... that hurt. I was trying to wipe a USB drive. DOH!

I had something almost exactly like this happen yesterday, but the opposite way around. I was getting ready to reload some Hyper-V servers, and a coworker plugged in a USB drive with all of our ISOs and software for building the system just as I was starting the OS install; apparently the installer decided the USB drive was drive 0 and erased everything on it. Fortunately for him, something in the back of his mind had made him copy everything off the drive to his laptop the day before.
 
FFS. When I read the thing I thought, how effed up can a company be that one rm -rf command can recursively destroy everything?
I knew it had to be either 1) way overblown or 2) a hoax.

I know this isn't Slashdot, but seriously, giving any kind of attention to this is lame.
 
A better question would be why he had write permission to everything he was hosting... that means he could change anything on every site he hosted without logging into their accounts. Makes you wonder if he was trying to cover up having illegal stuff on one of the sites, with no EULA to say it's not his fault if his users uploaded it. The customers should have copies on their home machines if nowhere else... though if he used a WYSIWYG setup and wiped out parts of the back end, then their websites would no longer work either. I have at least seen all the back-end code that supports my website; I even helped write some of it, but if someone wiped out the site I would still have to put back together my SQL database and a dozen other things so that it looks right on everyone's machines.

I have to think that people might want to go back to renting hosting shares on machines, and other people might want to go back to renting whole machines and just using a proxy server for the majority of the traffic. That way, if some idiot manages to run this on shared storage, fewer people lose money while their sites are down. It means hosting gets more expensive, but most people hosting commercial sites are on their own blade in a rack of servers anyway... which is why it's so much more money a month. They basically act like network storage, but the computer doing the computations is not running code from someone else's domain on your particular rack... well, it is running code from the web server's domain, but not from another person hosting a site.

My guess is that people are going off the assumption that, since in the one case where the company did not keep its emails for seven years the CEO did not go to prison, companies will simply shift the blame around until they are sipping cocktails on a beach in a country with no extradition treaty. Kinda like Enron... So back up your site, find out how your stuff is hosted, and maybe consider that deltree works on Windows and rm can be run with wildcards. Maybe there needs to be an ethics component for CS, IT, and anyone storing people's personal data, much like you have to pass an ethics exam to sell insurance or be a lawyer or an attorney at law. Though technically paralegals may have never passed the bar exam, only learned how to do research on legal cases. I passed two of them, one a golden-ticket one; I just get yelled at whenever I help anyone, so I don't post legal advice anymore.
 


Or he was running the backup scripts as root...
 
But that would still be the entire cluster of machines; 1,500 sites had to be spread across more than one machine... Wouldn't you notice site after site going dark? Backing up that many sites would take hours, even if they just said "hello world".
 
"Hmm.. it seems it is missing sudo in front for the rm -rf on the scheduled backup script. No wonder it takes so long to backup. Stupid programmer, lemma fix eet."
 
As someone else already said, this ain't technically feasible; you could in theory do it with a script, but only if it's done on purpose.
 
Probably a hoax. However, to everyone wondering how it's not feasible: it is, at least for all the online systems. Ansible is a config management system; if he updated a core playbook for his whole infrastructure with that bad piece of code, it would theoretically run rm -rf //* on every single host and pretty much wipe everything out. Although there are far more bad things that would need to already be in place for him to lose every piece of data.
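
That fan-out is easy to picture even without the playbook from the article; here's a hedged sketch using an ad-hoc command instead (the inventory group and variable names are placeholders, and the variables are assumed to be defined as empty strings on some hosts):

Code:
# one bad templated command pushed to every host in inventory
ansible all --become -m shell -a 'rm -rf {{ app_root }}/{{ release_dir }}/*'
# on any host where app_root and release_dir resolve to empty strings, the
# argument renders as "rm -rf //*", and Ansible will cheerfully run it there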
 
When I started reading this, I was thinking of something like this:

[image: exploits_of_a_mom.png (xkcd's "Exploits of a Mom")]
 
Are you sure?

This depends entirely on how things are set up. It is not likely, in fact, it is completely unlikely, but it is not impossible because you could engineer an entire environment so that you could in fact wipe out everything with a single rm command. But you would have to engineer it that way on purpose.

Fair enough. You could engineer it that way on purpose but it's highly unlikely that anyone would.
 
Code:
$ docker run -it ubuntu:14.04 bash
root@cfe41821029a:/# rm -rf $a/$b
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe

Ubuntu is a nanny-ist *nix distro with training-wheels if there ever was one.
 
You get the same result on CentOS. Alpine will go a wreckin', though.

Guess you'd better stick with Ubuntu.

LOL... we were laughing on the Arch forums when Ubuntu first instituted PEBKAC protections like this. I want to say that was 7 years ago now.
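
If anyone wants to see which rm they're actually dealing with before experimenting, here's a non-destructive way to compare implementations across images, assuming Docker is handy (Alpine's rm comes from busybox rather than GNU coreutils, which is presumably why it goes a-wreckin'):

Code:
# look for the preserve-root failsafe in each image's rm without deleting anything
docker run --rm ubuntu:14.04 bash -c 'rm --help | grep -i preserve-root'
docker run --rm alpine sh -c 'rm --help 2>&1 | grep -i preserve-root || echo "no preserve-root option here"'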
 