Damage From Atlanta's Huge Cyberattack Worse Than First Thought

DooKey

Back in March we reported that Atlanta had been hit by a ransomware attack that crippled some of its systems. Well, it appears the damage done was worse than first thought, and at least a third of its systems remain offline. Of these systems, 30 percent are considered mission critical. Considering how widespread the damage really is, it might have made sense for them to just pay the $50,000 in bitcoin the hackers asked for. Anyway, the IT head is asking for an additional $9.5M to clean up the mess. Yep, that's a pretty big number. Who knows what the final cost is going to be?

Atlanta’s administration has disclosed little about the financial impact or scope of the March 22 ransomware hack, but information released at the budget briefings confirms concerns that it may be the worst cyber assault on any U.S. city.
 
I know restoring from backups isn't the fastest/easiest thing to do, but they didn't have backups of all this stuff? Even if it is a week or two back, that's still something.
 
I know restoring from backups isn't the fastest/easiest thing to do, but they didn't have backups of all this stuff? Even if it is a week or two back, that's still something.

Some stuff is hard to back up, for example dashcam/body cam footage. It generates a lot of data, it costs money to store, and given the volume and the time it takes to rotate everything, backing ALL of it up to something cheap like tape is problematic: it's both a data-volume issue and a transfer-bandwidth issue.

Most of the money is probably less about cleaning up the mess than about funding the ability to prevent it the next time. If it's like government in general, a lot of their cost-cutting strategy was maximal neglect. The problem with that is that when something catastrophic happens, very often you can't just repair and you can't replace in kind, so you must both replace and upgrade.

As for data and lawyers... it sucks, especially in the public sector. At BEST you are dealing with a mixed environment where the new stuff is handled ok-ish, but underneath is a layer cake of suck with at least 7 years of previous bad practices baked in.
 
I'm sure lost bodycam footage isn't affecting city services... silly that they even brought it up. They didn't talk about which critical services are actually offline and to what extent; it's pretty vague. Hopefully it's a wake-up call to others to get their **** together.
 
I know restoring from backups isn't the fastest/easiest thing to do, but they didn't have backups of all this stuff? Even if it is a week or two back, that's still something.

Windows machines, relying on antivirus, no backups - the classic trifecta of mistakes.
 
Their backups were encrypted first.

Sadly this is true for some of the ransomware attacks. For setups that do backups of the VMs, data, etc., a lot of them apparently don't put anything offsite, so when ransomware is running wild in their environment, the backup server gets encrypted along with everything else. We can't afford offsite backups where I work, but I have a duplicate server with enough storage to hold the backups, and I keep it unplugged from the network. Every Friday I plug it in for a few hours, copy over the week's worth of backups, and then unplug it from the network again (not even iLO access is enabled).
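For anyone wanting to script that Friday copy, here's a minimal sketch of the idea under assumed paths (the share path, destination drive, and folder layout below are hypothetical; robocopy or rsync would do the same job):

```python
# Minimal sketch of the weekly "plug in, copy, unplug" routine described above.
# The source share and destination drive are made-up examples for illustration.
import shutil
from pathlib import Path

SOURCE = Path(r"\\backupserver\weekly")   # live backup repository (assumed share)
DEST = Path(r"E:\offline-copies")         # disk on the normally-unplugged server

def copy_new_backups(source: Path, dest: Path) -> None:
    """Copy any backup file that is missing or newer than the offline copy."""
    dest.mkdir(parents=True, exist_ok=True)
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        target = dest / src_file.relative_to(source)
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists() or src_file.stat().st_mtime > target.stat().st_mtime:
            shutil.copy2(src_file, target)  # copy2 preserves timestamps

if __name__ == "__main__":
    copy_new_backups(SOURCE, DEST)
```

The important part isn't the script, it's that the destination machine is unreachable for the rest of the week.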
 
I interviewed for a senior position with the city of Atlanta some years back. I can attest that what they previewed for me showed it was a disorganized mess even then, and given what they offered me, it doesn't surprise me that it never got fixed.
 
Their backups were encrypted first.
Backups to tape are multi-version and accessed via password with any credible program. Unless the attackers got hold of that password (highly unlikely), ruining the backups is unlikely. A good backup plan takes the tapes and stores them offsite in a vault, to be pulled for emergencies like this (and natural disasters like fire).

The IT admin in charge of this should have been fired, or the contractor sued.
 
Backups to tape are multi-version and accessed via password with any credible program. Unless the attackers got hold of that password (highly unlikely), ruining the backups is unlikely. A good backup plan takes the tapes and stores them offsite in a vault, to be pulled for emergencies like this (and natural disasters like fire).

The IT admin in charge of this should have been fired, or the contractor sued.

People hired because of who they know, and managers who would rather spend money on pay increases/benefits. What could go wrong?

A proper backup should not be crippled by a ransomware attack.
Unless your backup server is behind on security updates, or someone opens an infected email/link on the backup server, it should not get infected.
Also, if the backup server is locked down securely, there should be no shares to get infected.

Backup frequency should be based on how much of your data you can afford to lose.
Some of my servers are backed up once a week, due to the large amount of data and how little it changes.
Other servers are more mission critical, like database servers. I take an incremental backup of these servers multiple times per day.
Everything is backed up to disk, and then copied off to tape (all 30-40TB).

The backup servers are secured, and no regular logins have access to the backup drives, so it's very unlikely a ransomware attack would hit the backup server.
Even if it did, I still have 2 sets of tapes off site to fall back on.
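A throwaway sketch of what "backup frequency should be based on how much data you can afford to lose" looks like in practice: check the age of the newest backup for each server against its allowed window and alert when it's stale. The server names, paths, and windows below are made-up examples, not the poster's actual setup:

```python
# Sketch of an RPO check: flag any server whose newest backup is older than the
# amount of data loss you can tolerate. Names, paths, and windows are hypothetical.
import time
from pathlib import Path

# Allowed age of the newest backup, in hours (i.e., the per-server RPO).
RPO_HOURS = {
    "fileserver": 7 * 24,   # weekly is fine: large, slow-changing data
    "database":   4,        # incrementals several times a day
}
BACKUP_ROOT = Path("/backups")  # assumed layout: /backups/<server>/<backup files>

def newest_backup_age_hours(server: str) -> float | None:
    files = [p for p in (BACKUP_ROOT / server).glob("*") if p.is_file()]
    if not files:
        return None
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 3600

for server, rpo in RPO_HOURS.items():
    age = newest_backup_age_hours(server)
    status = "missing" if age is None else f"{age:.1f} hours old"
    if age is None or age > rpo:
        print(f"ALERT: {server} backup is {status} (RPO is {rpo} hours)")
```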
 
People hired because of who they know, and managers who would rather spend money on pay increases/benefits. What could go wrong?

A proper backup should not be crippled by a ransomware attack.
Unless your backup server is behind on security updates, or someone opens an infected email/link on the backup server, it should not get infected.
Also, if the backup server is locked down securely, there should be no shares to get infected.

Backup frequency should be based on how much of your data you can afford to lose.
Some of my servers are backed up once a week, due to the large amount of data and how little it changes.
Other servers are more mission critical, like database servers. I take an incremental backup of these servers multiple times per day.
Everything is backed up to disk, and then copied off to tape (all 30-40TB).

The backup servers are secured, and no regular logins have access to the backup drives, so it's very unlikely a ransomware attack would hit the backup server.
Even if it did, I still have 2 sets of tapes off site to fall back on.

You win my golden seal approval award.
 
Sadly this is true for some of the ransomware attacks. For setups that do backups of the VMs, data, etc., a lot of them apparently don't put anything offsite, so when ransomware is running wild in their environment, the backup server gets encrypted along with everything else. We can't afford offsite backups where I work, but I have a duplicate server with enough storage to hold the backups, and I keep it unplugged from the network. Every Friday I plug it in for a few hours, copy over the week's worth of backups, and then unplug it from the network again (not even iLO access is enabled).

While your way is very much the best way (SHTF backups need to be airgapped), none of the enterprise backup solutions I've ever used had direct disk/share access from a server or workstation (admittedly, my career is fairly short). The only way to get data on or off was through the agent: the backup software would contact the machine, tell it to initiate the backup, and pull the data over. Unless I'm misunderstanding how ransomware works, it either needs some kind of SMB/NFS share it can attach to before it can begin the encryption process, or it needs to actually infect the machine so it can encrypt things itself. If our org were hit with ransomware, I'm fairly confident we would be dead in the water for a couple of days while we get the servers reimaged/VMs back in place and reimage workstations, and we'd lose maybe 2-3 days' worth of work.

Our workstation backups are much softer targets (wouldn't be surprised if we lost most if not everything...), but we've informed all the employees and management that workstations are best effort only, and they need to keep important docs on the share.

If it takes weeks to encrypt all the data and show the ransom message, I'd understand that, but otherwise, I still would expect them to have something.

I won't say it's impossible to get the backups caught up in a ransomware situation, but for us, it would have to be specifically targeted.
 
While your way is very much the best way (SHTF backups need to be airgapped), none of the enterprise backup solutions I've ever used had direct disk/share access from a server or workstation (admittedly, my career is fairly short). The only way to get data on or off was through the agent: the backup software would contact the machine, tell it to initiate the backup, and pull the data over. Unless I'm misunderstanding how ransomware works, it either needs some kind of SMB/NFS share it can attach to before it can begin the encryption process, or it needs to actually infect the machine so it can encrypt things itself. If our org were hit with ransomware, I'm fairly confident we would be dead in the water for a couple of days while we get the servers reimaged/VMs back in place and reimage workstations, and we'd lose maybe 2-3 days' worth of work.

Our workstation backups are much softer targets (wouldn't be surprised if we lost most if not everything...), but we've informed all the employees and management that workstations are best effort only, and they need to keep important docs on the share.

If it takes weeks to encrypt all the data and show the ransom message, I'd understand that, but otherwise, I still would expect them to have something.

I won't say it's impossible to get the backups caught up in a ransomware situation, but for us, it would have to be specifically targeted.

The proper way to do it: critical documents should be stored on something like a SharePoint server, which requires credentials to upload/download. If they are code files, then a code repository like TFS, a local git server, or even VSS. For mail, mail servers have backups. All of those servers are on RAID, with failover if they're super mission critical. Personal and non-mission-critical files get copied from the client to a network drive; the network drive is RAID and backed up behind password-protected access. Running the backup on Linux, if you're a Wintel organization, would be preferable, because mixing OSes makes the attack vectors more difficult. But even a Windows server acting as the backup would do if it was properly isolated.
 
I know restoring from backups isn't the fastest/easiest thing to do, but they didn't have backups of all this stuff? Even if it is a week or two back, that's still something.

"We're going to address backups tomorrow," they said every day until the attack.

Then it was:

"What happened to all of our backups!?"
 
Backups to tape are multi-version and accessed via password with any credible program. Unless the attackers got hold of that password (highly unlikely), ruining the backups is unlikely. A good backup plan takes the tapes and stores them offsite in a vault, to be pulled for emergencies like this (and natural disasters like fire).

The IT admin in charge of this should have been fired, or the contractor sued.

Someone with Domain Admin was compromised. That's how this started.
 
The proper way to do it: critical documents should be stored on something like a SharePoint server, which requires credentials to upload/download. If they are code files, then a code repository like TFS, a local git server, or even VSS. For mail, mail servers have backups. All of those servers are on RAID, with failover if they're super mission critical. Personal and non-mission-critical files get copied from the client to a network drive; the network drive is RAID and backed up behind password-protected access. Running the backup on Linux, if you're a Wintel organization, would be preferable, because mixing OSes makes the attack vectors more difficult. But even a Windows server acting as the backup would do if it was properly isolated.

I've seen white papers on server rooms that require not only a thumbprint but also an RFID access card. Once you are inside, one part of a two-part password is kept in a lockbox that you need a key from the admin to open. The second password is your own, and it gets logged to a physical print file the second you access the room. That print file is locked away.

These rooms have only one port open. Its sole purpose is to expose a limited set of command APIs, encrypted and validated with codes, for sending and receiving data for storage. And no, this is not government classified. The machine on the outside then determines where the data goes or how it's used, but the data itself is protected. Even if the outside machine is somehow compromised and they manage to call the APIs, they can't corrupt the data because of the backup nature of the storage.

Snowden should never have been given credential management the way he was, even in a support position. That's handing over the keys to the kingdom. And as soon as a supervisor credential starts downloading a ton of documents, red alerts should have been going off.

Even in my organization, everybody has to ask permission for a new branch. And we aren't classified.
 
Backups to tape are multi-version and accessed via password with any credible program. Unless the attackers got hold of that password (highly unlikely), ruining the backups is unlikely. A good backup plan takes the tapes and stores them offsite in a vault, to be pulled for emergencies like this (and natural disasters like fire).

The IT admin in charge of this should have been fired, or the contractor sued.

It's government: IT admins likely had cost restrictions, personnel shortages, and/or antiquated systems. The admins also likely did not have the authority to make decisions on a wide scale, since that is the responsibility of an agency or department usually headed by non-IT people. At one time it was probably set up properly (at original installation) and then never maintained due to cost cutting.
 
Now see, that is why you don't use the same password on mission-critical machines.

Well, at the same time, even if you don't use the same one, try being responsible for 50+ devices on a rolling 60-day window with mandatory changes. You will develop a method that is crackable, because it is impossible for a person to remember truly random passwords that change every 60 days on multiple systems. PKI cards and PIN codes help, but still.

I knew one guy who simply remembered one starting key for any password. He traces a physical pattern on the keyboard from whatever key that is. The result looks random as hell, but it's a pattern when seen in the aggregate.
 
Well, at the same time, even if you don't use the same one, try being responsible for 50+ devices on a rolling 60-day window with mandatory changes. You will develop a method that is crackable, because it is impossible for a person to remember truly random passwords that change every 60 days on multiple systems. PKI cards and PIN codes help, but still.

I knew one guy who simply remembered one starting key for any password. He traces a physical pattern on the keyboard from whatever key that is. The result looks random as hell, but it's a pattern when seen in the aggregate.

The password is kept in a lockbox. The key is yours, and you are responsible for it. The lockbox is physically attached to the machine. When you need to roll updates, you unlock the box and pull out the password. Old-school security (physical keys) works.
 
Someone with Domain Admin was compromised. That's how this started.

More likely multiple people had full domain admin access on their regular login accounts.
One of them clicked on something they shouldn't have, or logged into a system that was already infected.
 
"We're going to address backups tomorrow," they said every day until the attack.

Then it was:

"What happened to all of our backups!?"

When I started my current job, one of the first things I looked at was the backup.
The IT person they had before me was worthless. Nothing documented, and I was on my own to figure out the network.

Imagine my surprise when I checked the backup software and found the last completed full backup was over 3 months old.
Turned out the tape drive had stopped working.
The former IT person claimed that wasn't possible because he had just checked last week :rolleyes:
(I think he just set up the backup jobs, put the tapes in the changer, and never checked them or took anything off site)

Even worse, the backup jobs he had set up only backed up about 1/3 of the servers.

Had to buy a large USB drive to start backing up the most important data while getting the tape drive repaired.

That was 12 years ago, and the changer held (7) 20GB tapes.

The last few years I've been using LTO-6 tapes (2.6TB uncompressed) in a changer that holds 24 tapes. I'm getting ready to upgrade to LTO-7 (6TB per tape uncompressed), since the weekend backup now takes 10 tapes and it takes most of the weekend to write them. LTO-7 is almost twice as fast, assuming my backup server can keep up.
 
How long does tape last?

Depends on the type of tape.

Most common large tapes use the LTO format, either LTO-6, LTO-7 or LTO-8.
Uncompressed that's 2.6TB, 6.0TB, or 12TB per tape. Up to 2.5x more with compression.

An LTO tape can be fully written/read about 200 to 300 times. So if you run a full backup every week, and have 3 sets of tapes, you should get around 15 years of usage on each set.
Drive will be obsolete before the tapes wear out.

Shelf life is 15-30 years.
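For the curious, the "around 15 years" figure falls straight out of that pass rating. A quick sanity check of the arithmetic, using only the numbers quoted in the post above (a 200-300 full-pass rating and a weekly full spread across 3 rotating sets):

```python
# Rough arithmetic behind the "~15 years per set" estimate above.
full_passes_rating = (200, 300)   # end-to-end write/read passes an LTO tape is rated for
fulls_per_year_per_set = 52 / 3   # weekly fulls rotated across 3 sets ~= 17 per set per year

for rating in full_passes_rating:
    years = rating / fulls_per_year_per_set
    print(f"{rating} passes -> about {years:.0f} years per set")
# Prints roughly 12 and 17 years, which brackets the "around 15 years" estimate.
```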
 
The drives typically break before the tapes do. But tapes are rotated out, so if one fails you have a backup (albeit a bit older).
 
How can any professional who uses a computer at their job not have a backup? Too busy doing their nails, surfing for porn, sleeping on the job (I know because I saw it for myself, and no, not the porn; the clock milking).
 
Depends on the type of tape.

Most common large tapes use the LTO format, either LTO-6, LTO-7 or LTO-8.
Uncompressed that's 2.6TB, 6.0TB, or 12TB per tape. Up to 2.5x more with compression.

An LTO tape can be fully written/read about 200 to 300 times. So if you run a full backup every week, and have 3 sets of tapes, you should get around 15 years of usage on each set.
Drive will be obsolete before the tapes wear out.

Shelf life is 15-30 years.


Damn.

I thought tape drives were horribly outdated tech.
 
Damn.

I thought tape drives were horribly outdated tech.

It's the only way to do weekly backups of multiple TBs of data.
Couldn't imagine trying to back up 40TB over the internet, even with a 1Gb connection.

Biggest issue is that the tape drives are getting so fast, it's difficult to stream the data fast enough to keep the drive from stopping.
It's bad for the drive to stop, because it has to rewind a little and start the tape forward again. This can shorten tape life and cause excessive wear on the tape drive.

LTO-6 (2.6TB) has a max speed of 160MB/second and a minimum speed of 20MB/second uncompressed; with 2:1 compression the speeds are 320MB/second and 40MB/second.
LTO-7 (6.0TB) is 300MB/second & 40MB/second uncompressed, and 600MB/second & 80MB/second compressed.
LTO-8 (12.0TB) is 360MB/second uncompressed and 720MB/second compressed.

Calculate that out. 720MB/sec is 43GB/minute, or 2.6TB/hour. That's about 16 hours to backup 40TB's at full speed on 4 LTO-8 tapes. Wish I could afford LTO-8 drives/tapes :(
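As a sanity check on that math, here's the same calculation for the compressed streaming speeds listed above (a sketch only; real throughput depends on how compressible the data is and whether the source disks can keep the drive streaming):

```python
# Time to write 40TB at each LTO generation's compressed streaming speed,
# using the speeds quoted above (decimal units throughout).
DATA_TB = 40
COMPRESSED_MB_PER_S = {"LTO-6": 320, "LTO-7": 600, "LTO-8": 720}

for gen, mb_s in COMPRESSED_MB_PER_S.items():
    tb_per_hour = mb_s * 3600 / 1_000_000   # MB/s -> TB/hour
    hours = DATA_TB / tb_per_hour
    print(f"{gen}: {tb_per_hour:.2f} TB/hour -> {hours:.1f} hours for {DATA_TB} TB")
# LTO-8 works out to about 2.6 TB/hour, i.e. a bit over 15 hours for 40TB,
# in line with the "about 16 hours at full speed" figure above.
```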

They are already working on LTO-9, 10, 11 & 12, although LTO-12 will probably be out around 2026.
LTO-12 is supposed to hold 192TB per tape :eek:

The only way to keep up with the drives is D2D2T (disk to disk to tape), so you are reading from local drives and writing to tape.
Even a 6-drive SATA RAID has trouble sustaining 600MB/second reads.
Plus you can't get bogged down with small files.
Almost all my servers are virtualized, and it's much faster to back up a single VHDX virtual drive than hundreds of thousands of small files.
I have a dozen 6TB drives in a pair of RAID 5s on my backup server.
RAID 5 is OK in this case, since I would never bother with a rebuild. It's faster to just kill the RAID and back up all the servers again. :D
Plus it's easier to manage a stack of barcoded tapes than to deal with dozens of USB drives. :p

FYI: If it wasn't for deduplication, and the ability to back up volumes in deduplicated format, I'd probably be trying to back up over 80TB of data by now :eek:
 
Depends on the type of tape.

Most common large tapes use the LTO format, either LTO-6, LTO-7 or LTO-8.
Uncompressed that's 2.6TB, 6.0TB, or 12TB per tape. Up to 2.5x more with compression.

An LTO tape can be fully written/read about 200 to 300 times. So if you run a full backup every week, and have 3 sets of tapes, you should get around 15 years of usage on each set.
Drive will be obsolete before the tapes wear out.

Shelf life is 15-30 years.
Thanks, not an IT pro and was curious.
 
While your way is very much the best way (SHTF backups need to be airgapped), none of the enterprise backup solutions I've ever used had direct disk/share access from a server or workstation (admittedly, my career is fairly short). The only way to get data on or off was through the agent: the backup software would contact the machine, tell it to initiate the backup, and pull the data over. Unless I'm misunderstanding how ransomware works, it either needs some kind of SMB/NFS share it can attach to before it can begin the encryption process, or it needs to actually infect the machine so it can encrypt things itself. If our org were hit with ransomware, I'm fairly confident we would be dead in the water for a couple of days while we get the servers reimaged/VMs back in place and reimage workstations, and we'd lose maybe 2-3 days' worth of work.

Our workstation backups are much softer targets (wouldn't be surprised if we lost most if not everything...), but we've informed all the employees and management that workstations are best effort only, and they need to keep important docs on the share.

If it takes weeks to encrypt all the data and show the ransom message, I'd understand that, but otherwise, I still would expect them to have something.

I won't say it's impossible to get the backups caught up in a ransomware situation, but for us, it would have to be specifically targeted.


We use Veeam to back up our VMware VMs (we are 95% virtual; only a few physical boxes for really powerful terminal servers). The backups are taken through the Veeam software, proxy, and agents, but in the end the backup archive files are just dumped on a Windows box. So my worry is that Veeam repository getting compromised by ransomware. It does have different credentials, etc., but you never know in this day and age. So every Friday I plug that extra server into the network (10Gb networking is an amazing thing for file transfers), copy the week's worth of data to it, and then unplug it from the network again. I also copy over my full and incremental database backups, network configuration backups, etc.

We are a small shop and this is more than enough. If we were compromised, it would be a long weekend, but I am confident that with this strategy everything can be restored.
 
Well, at the same time, even if you don't use the same one, try being responsible for 50+ devices on a rolling 60-day window with mandatory changes. You will develop a method that is crackable, because it is impossible for a person to remember truly random passwords that change every 60 days on multiple systems. PKI cards and PIN codes help, but still.

I knew one guy who simply remembered one starting key for any password. He traces a physical pattern on the keyboard from whatever key that is. The result looks random as hell, but it's a pattern when seen in the aggregate.

I post my passwords publicly on paper, in a way that is only meaningful to me: a 20x20 grid of random characters. I remember a connect-the-dots-style pattern through it. When a password refresh is due, the grid is re-randomized, printed, and the password changed. I keep the same pattern so it's easy to remember, but it would never be meaningful to anyone who stole it.
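For illustration, here's a minimal sketch of that grid scheme; the alphabet, grid size, and the hard-coded pattern below are made-up examples (in practice the pattern lives only in your head and is never stored anywhere):

```python
# Sketch of the printed-grid idea above: a fresh 20x20 grid of random characters
# is generated at each refresh, and the password is whatever falls along a
# memorized "connect-the-dots" pattern of coordinates.
import secrets
import string

SIZE = 20
ALPHABET = string.ascii_letters + string.digits + string.punctuation
# Example pattern only; a real user memorizes their own and never writes it down.
PATTERN = [(0, 0), (1, 1), (2, 2), (3, 3), (3, 7), (7, 7), (12, 7), (12, 12)]

def new_grid(size: int = SIZE) -> list[list[str]]:
    return [[secrets.choice(ALPHABET) for _ in range(size)] for _ in range(size)]

def read_password(grid: list[list[str]], pattern: list[tuple[int, int]]) -> str:
    return "".join(grid[row][col] for row, col in pattern)

if __name__ == "__main__":
    grid = new_grid()
    print("\n".join(" ".join(row) for row in grid))   # this is the sheet you print/post
    print("password:", read_password(grid, PATTERN))  # what only you can derive from it
```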
 