What's your backup strategy?

Coldblackice

[H]ard|Gawd
Joined
Aug 14, 2010
Messages
1,152
Enough is enough -- after 20+ years of failing to do so, I'm twisting my own arm to finally get a functional, recurring backup strategy implemented. So for the non-professional/business home userbase out there --


What's your backup strategy?


Things like,

Methodology:
Imaging? File/folder backup? Incremental/cumulative/full backup?

Software:
Acronis? Macrium? Paragon?

Medium:
Cloud? DVD/Bluray? Tape? Separate hard drives?

Interval:
Periodic? Recurring? One-off? Automated/manual?


...and any other particulars on your backup process and how you go about it would be much appreciated.



As for me (up until now):

Methodology:
Crossed-fingers

Software:
Crossed-fingers

Medium:
Crossed-fingers

Interval:
Bi-yearly "Let's worry about this some other time"
 
Files

Methodology
One-way sync

Software
GoodSync and rsync

Medium
From one raidz2 to another raidz2 array

Interval
Nightly

ESX Hosts

Methodology
Imaging, nightly incremental, monthly full

Software
Veeam

Medium
raidz2 dataset
 
15 years myself, and I happened to learn my lesson by accident: I decided to play around with Paragon exactly 7 days before my 3-year-old RAID0 array took a dump. Been a believer ever since (2012, mind you, lol).

Methodology
Imaging

Software
Paragon Backup and Restore (Free), superseded by Macrium Reflect (Free), which I personally think is a more solid product.

Medium
USB 3.0 External HDD

Interval
Weekly (script) +3 month long-term rotating backup (at my personal discretion)

It could be better, but at the moment this suits my needs. Down the road I'll eventually move to something more optimal as well as a mass storage solution, but those funds aren't available and really it isn't needed.

Since recovering for the first time in 2012, I've had to re-image 3 times in a year for various reasons. I think the digital gods were trying to speak to me. I have heard you.
 
Methodology:
Belt and suspenders

Software:
Windows Server 2012 R2 Essentials, Cloudberry

Medium:
Home server in closet (with a mix of a hardware-based RAID6 array, Windows Storage Spaces, and individual disks)
External drives
Cloud

Interval:
Nightly for backing up individual PCs to server. Fresh backups (i.e., complete not incremental) on a monthly basis.
Nightly server backup to non-pooled storage for backing up centrally stored files. Fresh backups (i.e., complete not incremental) on a monthly basis.
Twice daily backup to Amazon S3 for relatively small amount of "could not possibly be replaced" items (documents and pictures, largely)
Rotating monthly backup of "annoying to replace" items, along with "could not possibly be replaced" items to an external drive that then lives in our fireproof safe.
 
Methodology:
Copy operation: MainPC > USB drives > HTPC (three copies of everything)

Software:
Truecrypt mounted volumes (dismounted obviously when copying)

Medium:
1TB + 2TB (soon to be 2TB + 4TB)

Interval:
Monthly

Been doing this for ten years, never lost a thing.
First with 320GB, then 320GB + 1TB, then 1TB + 2TB (I just cycle disks from the high-capacity slot down to the low-capacity slot as I upgrade).

I also have an Owncloud box connected via a static IP, but that is more of a mobile convenience than a backup strategy.
 
Went from Zip disks, to CD-Rs, to DVD-R, then to a 500GB HDD.
I also use an 8GB USB drive and 4GB SD cards.
Am getting me a 1TB HDD and a 16GB USB drive.

Perhaps even a BD burner.
 
I have done reasonably well with putting things of different importance into the filesystems in a way a backup script can recognize. I push to a snapshotted filesystem on a different machine.

The main struggle is snapshot depth versus completeness. Not only do I have to make the basic tradeoff (do I spend my space on more stuff or on more snapshots?), it's worse: it's very important to keep data that changes a lot out of the snapshots unless you are willing to spend a lot of disk space on them. That gets you into a decision mess of excluding things from backup or from snapshots along two dimensions: base importance and how noisy the data is.

To make things worse, ZFS snapshots are immutable. After you identify a piece of garbage you can not get it out of the old snapshots without recreating them which is very time-consuming. I am seriously tempted to switch to rsnapshot because of this.
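For anyone weighing the same switch: rsnapshot keeps its rotating snapshots as plain hard-linked directories, so unlike immutable ZFS snapshots, you can delete a piece of garbage out of old snapshots with a simple rm. A minimal config sketch, with hypothetical paths (note that rsnapshot.conf fields must be separated by tabs, not spaces):

```
# /etc/rsnapshot.conf sketch -- fields below must be TAB-separated
snapshot_root   /backup/snapshots/

# How many snapshots of each interval to keep; rotation is automatic
retain  daily   7
retain  weekly  4
retain  monthly 6

# Keep noisy, churn-heavy data out so it doesn't bloat every snapshot
exclude /home/*/cache/
exclude *.tmp

backup  /home/  localhost/
```

The retain lines address the depth-versus-completeness tradeoff directly: each interval's count caps how much history you pay for.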
 
Methodology:
Imaging, full backup, along with localized backups of files on flash drives.

Software:
Apricorn EZGig

Medium:
One HDD per computer along with various flash drives.

Interval:
Manually periodic.
 
Methodology:
My desktop keeps no important files on it, so I actually don't back it up. I keep all my files on my file server. My wife's laptop backs up to the server about daily with Windows Backup--but I'd like to add another file-based backup for her stuff. My file server backs itself up to external drives regularly.

Currently, my backups are stored in a Sentry fire chest that is stored inside a UL-listed "fireproof" file cabinet. When the new bank branch opens in town (currently none for our bank), I'll be keeping a copy there.

I back up my entire current storage pool, but as I am archiving more things digitally my backup strategies will change a little to differentiate from replaceable and non-replaceable items.

Software:
Windows Backup & rsync

Medium:
Network ZFS storage server and USB single-drive ZFS pools

Interval:

Daily: Wife's backup to server, standard rsync from server to USB ZFS drive (this only takes a few minutes).

Weekly: Server uses rsync with checksum once a week from the server to USB drive pools (this generally takes several hours).

Monthly (about): I swap out the USB drives. I currently have three single drives in rotation. It doesn't always happen on schedule, since some months I don't have much data change.


I have rsync make a log every time it runs, so I can review changes when I need to.
 
I run a Server 2012 R2 box that backs up my desktop and laptop nightly (almost nightly with the laptop). The backups go to a RAID5 setup where I keep all my important data. The RAID5 gets backed up to a 3TB external nightly.
 
Because I use a Mac, I do Time Machine backups to a 4-bay NAS plus cloud.
For my media, all data is on two NASes and then duplicated onto a system with RAID 5 + hot spare.
 
Methodology:
Monthly full image with incremental images nightly.

Software:
Symantec System Recovery 2013 x 23, for Windows servers running in an HA Hyper-V cluster.

Medium:
4 x 3TB RAIDZ1 on FreeNAS + several 3TB external drives

Interval:
Monthly full image with incremental images nightly. Biannually full images taken to safe deposit box. Monthly full images taken offsite.

The images can be drilled down into to recover individual files. Backups fail regularly on Windows Server 2003 machines, so it takes some micromanagement on those boxes.
 
Personally, I prefer several different backup methods, mainly because there's less chance of a total failure, and because the different methods target different issues.

I do file-level backups of important items on each computer, like the Users folder on Windows and home folders on Unix.
I also do a full image backup of each machine, in case I need to restore it from a total failure.

I backup these to a local system, and then I have that system also backup those backups offsite to my colocation center.
 
Methodology:
C: drive backup of my main PC. Not a whole lot else.

Software:
Windows Home Server 2011.

Medium:
6x2TB plugin (DrivePool) based pool with redundancy on selected folders

Interval:
nightly incremental updates

I should probably be doing something more for the files that would be annoying or impossible to replace, but aren't "critical" or sentimental
 
Backup processes are usually highly dependent on how you have your stuff setup. For myself I store all important documents on a file server so the actual desktops don't matter to me, as I can always get the programs back I had before relatively quickly (usually if the computer wipes out it probably needed a fresh install anyways).

Methodology:
OS and VMs: Windows server backup nightly to a raid 1 storage spaces partition which then gets backed up every once in a while onto an external.
Stored files: Backed up at my discretion (whenever I get enough new stuff for it to matter) from a raid 5 array to a 4TB external drive (will add more when needed but I'm apparently not as voracious about storage as some).

Software:
OS and VMs: Windows server stuff works very well, particularly when restoring itself. Bare metal restores work fantastically in my experience (been more than once).
Files: Just a simple robocopy batch file that mirrors the folders exactly.

Medium:
External hard drives. Between that and RAID I feel relatively secure (watch me come home to a burning apartment tonight after saying that). I don't keep anything that I couldn't live without on storage like that, but I'd be depressed if I lost my media library.

Interval:
OS and VMs once a day (at night) incremental
Files: whenever it seems like enough has been added for it to be necessary.
 
Methodology:
Full-copy backup of all files.
No versioning or keeping of deleted files. It's all media-type stuff so there are no versions.

Software:
NTFS for the backup drives. I like the low overhead of NTFS, it wastes very little space on metadata. Far less than say EXT4 for example. Also, I like that it does not care if you fill it all the way up with only like 1GB free on a 4TB disk.

I also like the good cross-platform support for NTFS via ntfs-3g. NTFS has never done me wrong, I have always had the best luck recovering data from NTFS drives in cases of accidental deletion or formatting, etc.

For example, I have done a "full" format on an NTFS disk, or accidentally run a recursive rm -rf on an NTFS drive, and I was always able to easily restore 100% of the files; even their filenames and file paths were still intact. No lost+found unnamed and unorganized stuff.

Updating the backups is done mostly manually by me. When I update my backups I take a snapshot in ZFS and then take another one when I do the next backup so I can use "zfs diff" or rsync -n "dry run" to see the changes and see what files I need to delete or add to my backup drives.

Medium:
Offline individual hard drives (I use an eSATA/USB3 dock). Stored in a padded case like this: http://getprostorage.com/shop/images/ProStorage-18-03.jpg
I keep the drives off-site.

Interval:
I take the drives home at least once a month or more often depending on how much data I change and if I want it backed up sooner or not.
 
Methodology:
Full-copy backup of all files. No versioning/snapshots configured.

Software:
rsync + cron

Medium:
My NAS has 7x3TB in raidz2. This is backed up to another NAS with 6x3TB in raidz.

Interval:
It used to be daily, but I haven't set the job up in cron since my last reconfiguration. Right now, it's manual, and happens about three times a month.
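For reference, re-arming that nightly job is a one-line crontab entry; the paths, hostname, and time below are hypothetical:

```
# m h dom mon dow  command  -- run the sync nightly at 03:30
30 3 * * *  rsync -a --delete /mnt/tank/ backupnas:/mnt/tank-backup/ >> /var/log/nightly-rsync.log 2>&1
```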

In the future, I'd like to start using ZFS' snapshots. Unfortunately, I'm at about 300GB free right now - There's just not enough room, and I can't buy more drives for another year or two. (Not to mention, when I do, I need a new case!)
 
I'm confused. Do you mean low on space on the sending machine, not the receiving one? If so, you don't need to keep the snapshot(s) after doing the send...
 
me personally I like to turn my head back just far enough + look down to make sure there isn't anyone behind me; then I activate my right leg thigh muscles, lift, and use my tendons to angle my leg to coordinates topview-south; then I relax my muscles just enough to let gravity flatten my right foot against the ground; and finally I slide my left foot toward me to synchronize in parallel alignment with my right foot.

:D

EDIT: oh, oops, this isn't genmay

Methodology:
Monthly full backup "disaster recovery" image
File/folder/system state daily incrementals

Software:
NovaSTOR NovaBACKUP

Medium:
External USB 4TB HDD

Interval:
See Methodology

Would I recommend or buy NovaSTOR products again? No. Acronis FTW.
 
Methodology:
Imaging (full backups)

Software:
Macrium Reflect

Medium:
External USB 3.0 hard disks

Interval:
OS drive: Automatically backed up weekly.
Data drives: Manually backed up monthly.

Process:
The server has a dedicated scratch disk for temporarily holding backups. When I back up one of the data drives (by imaging it with Macrium), I write the image file to this scratch disk and verify it.
Once the backup is verified, I go ahead and plug in the corresponding external backup drive. I delete last month's backup from the external drive and copy/paste the new backup in its place. When the data has been copied, I unplug the external drive and store it in a safe place.

This process ensures that there are always (at least) two intact copies of my data.
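Macrium does its own image verification, but the verify-then-rotate idea generalizes to any backup file via checksums. A generic sketch with hypothetical paths (/tmp stands in for the scratch disk and external drive):

```shell
#!/bin/sh
# Verify-then-rotate sketch: never delete the old copy until the new one checks out.
set -e
SCRATCH=/tmp/scratch
EXTERNAL=/tmp/external
mkdir -p "$SCRATCH" "$EXTERNAL"
echo "disk image contents" > "$SCRATCH/data-drive.img"   # stand-in for the Macrium image

# 1. Record a checksum next to the fresh image, then verify the scratch copy
( cd "$SCRATCH" && sha256sum data-drive.img > data-drive.img.sha256 \
    && sha256sum -c data-drive.img.sha256 )

# 2. Only after verification succeeds, replace last month's copy on the external
rm -f "$EXTERNAL/data-drive.img" "$EXTERNAL/data-drive.img.sha256"
cp "$SCRATCH/data-drive.img" "$SCRATCH/data-drive.img.sha256" "$EXTERNAL/"

# 3. Re-verify the external copy before unplugging and storing the drive
( cd "$EXTERNAL" && sha256sum -c data-drive.img.sha256 )
```

Because the old external copy is only deleted after the scratch copy verifies, there are always at least two intact copies at every step, matching the process above.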

Future Plans:
I currently have 3 data drives that need to be backed up. If I grow my storage much further, my current backup process will become cumbersome. I might switch to using something like a Drobo as a large backup disk instead.
 
I actually have four 2TB drives passed through to a VM in a RAID0; then I run a robocopy script around 3 AM to copy all changes over from our main file server, which has a RAID5 array.
 
Methodology:
Imaging! File/folder backup! Incremental/full backup!

Software:
Aomei Backupper

Medium:
Cloud, NAS, external HDD

Interval:
Usually monthly

A bootable CD is necessary.
 
Methodology:

Back up only the irreplaceable, such as family photos and video, and personal documents such as tax returns. If my gigs of pr0n are lost, oh well.

Software:
Microsoft

Medium:
Hard drive on a separate PC in the house, and Google cloud.

Interval:
Twice a year.
 
I'm confused. Do you mean low on space on the sending machine, not the receiving one? If so, you don't need to keep the snapshot(s) after doing the send...
I have low space on both machines, since they're identical. And the whole point to the snapshots would be to keep them. I'd be using them for versioning.

Suppose you have a video of your kid's first steps in /pool/video. You back up that data religiously, because it's very important to you. Every day, it's backed up automatically to your second NAS. You go to watch the video one day, only to discover that an undetected virus (or whatever) has corrupted it. You check your second NAS. That copy is corrupted as well, since the backup was overwritten the day the virus got ya. With snapshots, you can go back to the last time that file was changed, and find your copy from before the virus hit you.

I need to get that set up. But unfortunately, snapshotting does take a chunk of disk space. ZFS snapshots are copy-on-write, so a snapshot only consumes extra space for the blocks that change after it's taken: rewrite a whole file once and you're storing two copies of it, rewrite it again and you're storing three. 99% of my data is static, so I don't need a ton of space, but I definitely need more than the 300GB I have available.
 
At home:

Crashplan. Done. Fuck it.

Work:

EqualLogic group SyncReps to secondary units across the street in a secondary server room; Backup Exec backs that up to a dedupe volume on an MD1220 with daily incrementals and weekly fulls, and the dedupe DB/folder is then backed up to LTO6 libraries and sent offsite.
 
I occasionally put all my important files on a USB key stick, and then go drop it in some random heavily-traveled area. A mall, a park, whatever. My assumption being at least some % of them will end up plugged into some guy's computer and uploaded to the internet in some sort of "look at all this stuff I found on some guy's USB key" thread or torrent. If I ever need to recover, I just need some search skills and a torrent client, most likely.
 
My wife and I currently use Macs as our main computers. Our backup strategy on the most basic level involves the use of Time Machine. Basically, our Macs are backed up to our own external 1TB drives; we each have one. I would like to get a second drive for each of us and store them offsite.

On my own Mac, along with the main external 1TB drive, I back up the important files to another external 1TB drive. The most important daily files that I use, such as school and work projects, are backed up on a flash drive, so I actually have my most important files in three different places. My wife thinks that is overkill, but I believe you can never be too safe. I have only had one or two hard drives fail on me, but when they did, I could not recover any data off of them, and unfortunately they weren't backed up either.

A long time ago, before external hard drives were affordable, I used to burn CDs, then DVDs, and use Zip drives. I did not back up as often then because the media got pricey after a while, and it was not as convenient as now backing up to an external drive.

Since my Mac was upgraded from a standard 500GB drive to an SSD, a complete bootable system is on that standard 500GB drive. Unfortunately that has not been updated recently, so I don't have a current full bootable backup. I wouldn't really consider it a backup, just something else that I have for a just-in-case. If I could afford to, I would have another SSD with everything on it, so if my current SSD fails, I can just replace it and be back up and running instantly. I could go back to that standard 500GB drive, but I don't think I would like the performance decrease.
 
Currently:

Methodology:
File/Folder Backup

Software:
Windows Explorer lol
Dropbox

Medium:
External Hard drives
Dropbox (important school work mostly)

Interval:
Yearly for Ext HDDs
Daily for Dropbox

Horrible, I know. That's why I am in the process of building a NAS for the sole purpose of backing up. It will be:

Methodology:
Images

Software:
Crashplan
Veeam for the ESXi server's VMs
Dropbox for really important stuff I need on the go

Medium:
NAS box with 12TB capacity
Cloud (dropbox)

Interval:
Weekly for Crashplan
Weekly for Veeam
Daily for Dropbox
 
Methodology:
Image of whole machine - disk space is cheap.

Software:
Acronis 9 was my favorite imaging backup so far. It made one file, a .tib, that you could take anywhere and load up or restore. After version 9 they started doing metadata stuff, and while I don't mind the idea of metadata, I want a single friggin' file I can port anywhere and recover from without worrying about metadata.

Medium:
For professional things tape and online (local disk)
For home my NAS box

Interval:
Full image the first Saturday of the month, incrementals each Saturday after.
 
I wish I had the time to set up a clean backup system... it's not like I have terabytes' worth of files waiting to be lost.
 
Methodology
ZFS snapshots and ZFS send and receive. Also Time Machine

Software
OmniOS + napp-it

Medium
My important files from my large array are backed up to a smaller Microserver. I also have a VPN tunnel with my brother and I send my snapshots to him as well.

Interval
Weekly onsite, monthly offsite (to my brother's NAS)
 