Which method do you currently use to back up your data?

For a typical home user with files such as media and documents:

1. Which software do you use to back up?
2. Why this software?
3. Method of backing up (e.g. network, USB drives, etc.) and why this route?
4. Easiest and most effective solution to back up files, in your opinion?
 
1. I use Ghost, Brightsparks SyncBack, and MS SyncToy.

2. Despite complaints about how Symantec ruined Ghost after 9.0, it's always worked reliably for me, and the functionality lost back then has been regained in later versions. The other two are just good freeware to supplement Ghost (which I only use for image backups), and they've worked reliably for years.

3. I only have a netbook and a desktop to back up, so I have an external USB 3.0 drive hooked to the desktop and an SD card inside the netbook for data backups. Occasionally I hook the external up to the netbook to re-image the OS.

4. I do bi-monthly image backups of the desktop's OS drive (SSD) to the external drive, and I mirror my data HDD to the external twice a week.

I use Dropbox (got 22GB on a free account) for school work and other stuff I'm currently working on (also mirrored to the external) so I'm not terribly worried about keeping multiple versions of my in-use data and whatnot, since Dropbox does that for me. Once I'm done actively working on something I take it out of Dropbox.

If I wanted to be super careful I wouldn't simply sync my HDD with the external, but most of the stuff on it isn't changing, so it doesn't worry me much; there's not much danger of something getting overwritten or accidentally deleted.

I back up my music to Amazon's cloud service ($20/year for unlimited music storage and 20GB of other stuff) along with some personal/important files. The music is just convenient to have there since I can stream and re-download it anywhere, though my CDs are ripped to FLAC (on the HDD, backed up to the external, and also stored as MP3s on the HDD/cloud).

My phone and tablet are set to back up pictures and a couple of other things to Dropbox, though there are also NANDROID backups (sort of like an image backup) through CWM and app/data backups with Titanium.

Outside of the Amazon cloud stuff and some shuffling through Dropbox, most of this is fully automated and I never really think about it. The image backups are good redundancy so I don't have to re-install and re-configure EVERYTHING if my OS ever gets screwed up or the SSD dies. The file backups are pretty basic, no versioning or anything, but between that and using Dropbox for in-progress stuff this system works for me.
 
Wow, that's very cool. I'm glad you posted that; it gives me something to think about now. Thanks!
 
I use SpiderOak (like Dropbox, but better!) to back up work data, documents, and other kinds of small but important files. For bigger things like movies, music and games I use Crashplan, but because I'm cheap and New Zealand internet is crap I only back it up to a local hard drive. No RAID, just a single drive; not because I'm cheap, but because I'm poor. I also do C:\ drive image dumps to said HDD whenever I can remember, usually once every week or two.

I've been looking into getting a NAS or building a server/HTPC kind of thing, but NASes seem really expensive for what you're buying, and I don't even have the money to buy another hard drive, let alone build a whole system plus hard drives from scratch.
 
No problem. If I had a laptop I used heavily, or multiple systems I was responsible for in the house, I'd probably set up a NAS or WHS, but for now I keep it relatively simple. I do have one other external drive that I back up really important documents to, and I keep it at a friend's place. I refresh it once a year or so.

I might eventually go down the WHS route even if I'm still working off the desktop/netbook mostly, so that I can archive my movie collection permanently. That and/or multiple systems are the primary motivations for centralized network backups imo.
 
I'm self-employed, as is my wife. I occasionally work from home, whereas she always does, and her work is centered around large deliverable documents that take weeks and sometimes months to create, so backups of her work are very important.

1) I use Acronis True Image Home 2011 and Microsoft's SyncToy. Years ago I was using Ghost, but one of my pet peeves is software that won't completely uninstall, and Norton seems unable to create software that will actually, completely uninstall, so I've been avoiding anything Norton for a while.

2) I tried Acronis out when I got a license for a much earlier version off Newegg for about $5 USD some 3-4 years ago. I liked it well enough that I kept my eye open for additional deals and eventually got a license for all the Windows PCs in our house: my desktop, my laptop, my wife's desktop and laptop, and the HTPC. I think most of those licenses came out to <$10 each. I skipped upgrading them all to the '09 or 2010 versions but did upgrade them all to 2011. I've not upgraded any of them to 2012 yet.

I had some good success using Acronis to restore a backup of an OS to a new drive and then swap out the drives without issue. I've done this with dual-boot drives, boot partitions on striped drives, upgrading to SSDs, etc. The only hiccups I've had are a few times I've had to repair the MBR after restoring to a new drive.

Some of its current features I really like: real-time backup (I use this for my wife's work docs), which gives you a full history of a file; custom backup schemes with automated cleanup; converting Acronis backup images to Windows backup formats; fairly easy retrieval of individual files from a backup image; and regular updates/patches. If you hunt, Acronis deals can be had; no need to pay full price for a license.

Some things I don't like: they need to patch often. Their interface change for 2011 kind of sucked; it was definitely dumbed down to the lowest-common-denominator user and made clunky for the more advanced user. Dealing with the MBR on a restore is less than obvious at first. Their clone-drive function could use more options and be a bit easier to understand. The real-time backup must be to a local drive.

3) Backup method - I use multiple methods in an attempt to not have all my eggs in one basket. Daily or close-to-daily backups of the HTPC and my wife's desktop to internal local drives. Weekly backups of all desktops and laptops to an unRAID server across the network. Every month or two I'll run manual backups to external drives I keep in a safe. Most of my work is done within virtual machines, and I use SyncToy between my desktop/laptop/external drives/unRAID server.

"4. Easiest and most effective solution to back up files, in your opinion?" Anything; just whatever you choose, use it!!! If you glance at forums on occasion, you'll see all kinds of amateurs posting about their dead or dying drive, having no backups and losing their stuff. Don't be one of them. It is OK to not have backups if you don't need them (you don't mind reinstalling Windows and your favorite Steam games, for example). If you really are an [H]'er, you have backups. Also, whatever you choose, prove that you can retrieve your data and prove that you can restore an OS and get back up and running. A backup solution is not a solution unless you know for a fact that it works, which means you have restored from those backups before.
 
Those are some good tips; the best backup solution is definitely the one you'll actually use (fortunately it's easy to fully automate the more rudimentary backups these days). And if you're doing any sort of backup to compressed/packaged formats, you need to verify it, check that you know how to boot off the restore media, etc.
 
1. If you have a desktop and can keep a USB drive attached, Microsoft image backup is pretty damn awesome.

2. If you have a couple of machines and can be "bothered" to run backups...something like Acronis with a USB drive is pretty good too.

3. If you have absolutely no patience or have many laptops...WHS is pretty sweet.

4. I would always recommend a cloud-based system (many free/cheap ones out there) for critical data. Your 2000 Linux ISOs are not critical data.

----
Our household

WHS for daily machine backup
Crashplan for critical data
Small FreeNAS box for server data backup.
 
The easiest way is to have an internal hard drive to back up to and to have an automated process.

I have an internal hard drive with the most recent 16-18 months of monthly backups and daily changes. I also have external hard drives with the same type of data. The external drives go off-site from time to time. Old data hopefully never comes back from off-site.

I use a batch file that zips up the daily changes.

I use a batch file that does an "XCopy" for the monthly changes.

Both of the batch files are run by the Windows scheduler every night.

---

The only work I need to do is delete backups when the internal drive gets too full and replace the external drives from time to time as they get migrated off-site.
 
Been going over this post. I have used Windows 7 backup to handle all my machines, backing up to my server; I have one XP machine that this will not work for. The Windows solution saved me once when the RAID 0 on my gaming machine went down, so I have some trust in it.
I recently installed WHS 2011 as a VM to play with, and it would be nice to be able to use it to handle all machine backups, but unless I pass through my RAID I don't see a way to allow backups to save to it.
If I do pass the RAID through to the WHS VM, do I still have full control over it on Server 2008?
Still trying to find that one-stop backup solution for all my computers.
 
A) I have Acronis TIH 2011 to image backup my OS drive to my data drive.

B) I use Crashplan to back up my data drive (MINUS the Acronis images) to the following FOUR locations:
1. Cloud via Crashplan subscription
2. To a dedicated internal backup drive, near-continuously
3. To an external hard drive every day or two, which remains powered down when not in use.
4. To the OpenIndiana NAS at my workplace via the Solaris Crashplan client, using the backup "friend" location via my VPN connection there.

All my most critical data backups are in 4 locations, 2 of which are separate offsite locations (cloud and workplace). Even so, I still think there is room for improvement here, as I feel too reliant on Crashplan. I still want to set up an alternate backup using Acronis to give some software resiliency. I may switch my dedicated internal drive backup to being Acronis-based and leave the other three via Crashplan.
 
I use a laptop at work (128GB SSD) and a desktop at home (64GB SSD). All the data for my current projects is stored on Dropbox (automatically synced to both machines). Older stuff and larger files are stored on my 6TB ZFS NAS at home. No other data is on the desktop. All important data on the NAS is backed up to Crashplan (cheap unlimited backup) and periodically to a USB external drive.

My laptop at work has a copy of my work data that is too big for Dropbox (about 40GB); it is kept in sync with my NAS at home using unison (OSX->Solaris).

Since there is no Dropbox client for Solaris, I periodically copy my Dropbox folder to my NAS so that it can be backed up to Crashplan.
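
The copy itself is trivial to automate with cron and rsync over ssh; something like this would do it (hostname and paths are made up for illustration, not my actual setup):

# crontab entry: push the Dropbox folder to the NAS every night at 02:00
# -a preserves times/permissions, --delete keeps the NAS copy an exact mirror
0 2 * * * rsync -a --delete ~/Dropbox/ nas:/tank/dropbox-mirror/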

My desktop drive is imaged to the NAS weekly with the Microsoft image backup tool in Win7.
 
I use a WHS2011 VM to back up all of my machines every night (2 desktops, 1 VM server, 2 laptops, HTPC); this handles the client images. From there I back up the WHS2011 box (the server and client images, critical data, and regular data shares) to an external drive 1-2 times a month, which is taken offsite.

My critical data sits in my SkyDrive folder, on a mirror, in the server. The server gets backed up, see above. I also have it backed up onto a TrueCrypt-encrypted 32GB USB stick in the safe.

For my regular data (docs, pics, home vids, music, business stuff/software) I have two 2TB drives in a mirror. This gets backed up with the server.

For my Video shares I just run a RAID5 in the server and store the original discs offsite.

For sync I use Dropbox between my desktop and laptop. My WP7 and my son's back up to SkyDrive; my wife's iPhone and iPad to iCloud.

I think that's everything....???
 
For regular backups, I use WHS (nightly backup of each machine - currently 4 on the network most of the time). I also do nightly backups to Amazon's S3 service for documents/photos/purchased music/other items that would be difficult or impossible to replace. I use Cloudberry on WHS for that.

Large media files (ripped DVDs, Blu-rays, and recorded TV I want to keep) are periodically backed up to an external hard drive or two, and reside on a RAID6 array in the WHS box so there's some fault tolerance as well. If my building burns down, I'm out of luck on the media files (but I'd have saved to the cloud the document that spells out what the files are and the corresponding burned discs, for when I file my insurance claim). A separate RAID6 array handles the storage of the music/documents/pictures/etc. that is (partially, for the irreplaceable items) backed up to the cloud.
 
WHS2011 does backups of 4 clients.

Crashplan Family Unlimited plan (up to 10 computers, $120/yr., unlimited space) backs up all 5 to cloud.
 
I have a ZFS RAID on my PC with 8 disks. I have an on/off switch on one molex power cable; this molex powers all 8 disks (via spread-out cables). Normally the RAID is shut down. When I need to back up, I turn on the RAID via the switch and copy data, then shut the RAID down again when done. So I do have one 8-disk storage server, but the disks are normally powered off.

I have one 3TB disk that is used as a cache. When it is full, I turn on the RAID and copy the data over.
 
I use WHS 2011 with StableBit running my disks as one large drive pool to house all my data. I don't have backups of individual PCs yet, if I even go down that route. My critical data is spread across 4 different computers in the house (including the server), and I'm in the process of FTPing it all up to my web host (unlimited data and bandwidth). I've been using my own domain name for email for the past decade; may as well get some more use out of it.
I'll be upset if I lose my movie and music collection, but my pictures are my most important files. Can't replace those.
 
ZFS server here.

I have a 5x2TB raidz. I also have an external enclosure with 2x3TB in ZFS. I take a snapshot of the raidz pool, attach the enclosure, do a zfs send to the enclosure's pool, then disconnect it and store it in a remote location.
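
In rough terms the procedure is just a snapshot plus a send/receive; a sketch with made-up pool names (tank for the raidz, backup for the enclosure):

# snapshot the main pool
zfs snapshot -r tank@2012-05-01

# first run: full send to the enclosure's pool
zfs send -R tank@2012-05-01 | zfs receive -F backup/tank

# later runs: only send the changes since the previous snapshot
zfs snapshot -r tank@2012-06-01
zfs send -R -i tank@2012-05-01 tank@2012-06-01 | zfs receive -F backup/tank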

The enclosure could be kept remotely permanently and the send done across the Internet, but I have not done so yet. There is no redundancy on the enclosure, but that is an acceptable risk for me until I buy a 4-bay enclosure.
 
Config backup:
I have several Linux servers running, both physical and virtual. Every night the configuration is backed up by the fileserver using an rsync script. Only edited non-default files are backed up. This also serves a documentation and versioning purpose.

Storage backup:
Everyone stores everything on the Debian fileserver in our house. This server has server-grade hardware and runs RAID6 with 8x2TB disks. Every night I back up to the backup server standing on my desk at my work office, which also has 8x2TB in RAID6. The backup is done over the internet. I keep a daily snapshot of the server for a year before it is deleted.

Everything is fully automatic using a script running rsync. Rsync only backs up files that have changed, and only the changed parts themselves. I also use the hard-linking feature for the almost 5TB of backed-up data on the fileserver, which makes it possible to keep snapshots for a long time without running out of space.
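
The heart of the script is rsync's --link-dest option; here is a stripped-down sketch of the idea (paths are made up, and the real script also handles the logging and mail reporting):

# nightly: unchanged files are hard-linked against the previous snapshot,
# so each new snapshot only costs the space of the files that changed
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
rsync -a --delete \
    --link-dest=/backup/$YESTERDAY \
    fileserver:/srv/data/ /backup/$TODAY/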

Reporting:
I get a mail with a backup report every day, so I will know immediately if a backup has any kind of difficulty.

Client backup?
I have several Windows machines running as clients, but as everything important is stored on the fileserver, and bookmarks etc. are synced using the browsers' own mechanisms, they have absolutely nothing worth keeping a backup of. A backup would only serve a time-saving purpose. A complete disk failure on these machines happens so rarely that it is no problem for me to reinstall the OS and the base applications; hell, it is probably long overdue anyway. I did image-based backups for a while, but I found they were only a waste of space without a real purpose, given the way the systems are used at home.

Come to think of it, I _do_ actually have one backup on one of the clients. I use a Windows 7 robocopy script on my gaming rig to sync a backup of all my installed games to the fileserver. Once the backup is on the fileserver it is automatically part of the nightly snapshot system, so I can basically restore games I had one year ago.

Backup of backup?
Every good backup system has backups of the backup. I use a fully encrypted 2TB external USB disk for this. About once a week I connect it and sync the latest snapshot on the backup server to the disk. As it is only 2TB, and I have almost 5TB of backed-up data, I exclude DVD ISOs and TV shows from this backup.
After the sync I disconnect the disk completely. It only serves as a backup in case some freak power issue destroys the online machines completely, or if something happens to both my servers / disk arrays.
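
The weekly job itself boils down to one rsync once the encrypted disk is mounted; roughly (mount point and directory names made up):

# mirror the newest snapshot to the offline disk, skipping the big media
rsync -a --delete \
    --exclude='DVD-isos/' --exclude='TV-shows/' \
    /backup/latest/ /mnt/offline-disk/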


So, my data has the following protection:
1. Double redundant (RAID6) storage
2. Daily snapshot to offsite backup server with one year retention
3. The backup server also has double redundancy (RAID6)
4. Weekly offline backup of the latest snapshot to external drive

Basically, only a nuke destroying a big part of Norway could make me lose data. And should that happen, I suspect my digital data would be the least of my worries.
The motivation for setting this up was mainly taking good care of all the photos and movies of our kids, and my large, painstakingly sorted and tagged collection of FLAC music. I also think it is a lot of fun to plan, script, and learn about all the potential issues in creating a backup system optimized for my setup, without obvious flaws or administrative overhead. I have mostly used leftover/old hardware from my fileserver builds, so it didn't cost me much to create a robust backup system.
 

Drag and drop (USB 3.0). I've never imaged my OS; I only care about files.
I have one drive mirror the other.
 
I use O & O's Disk Image Server Edition.

Image off to a shared drive/partition on the server every few weeks. I'm going to have to move to a dedicated NAS real soon though; I'm getting to the point where 4+TB is necessary.
 
Config backup:
I have several Linux servers running, both physical and virtual. Every night the configuration is backed up by the fileserver using an rsync script. Only edited non-default files are backed up. This also serves a documentation and versioning purpose.
How do you protect your data against bit rot? Do you checksum all files regularly to see if some bits have been altered randomly?
 
How do you protect your data against bit rot? Do you checksum all files regularly to see if some bits have been altered randomly?

In a RAID6 setup the monthly mdadm RAID scrubbing will find and resolve most bit rot caused by hard drives, unless it is of massive proportions. I also run continuous SMART monitoring with daily self-tests of every drive and weekly long self-tests to catch any errors that appear.
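
Both are stock tools, nothing custom; roughly, the two pieces look like this (device names made up):

# trigger a scrub: md re-reads every block in the array and verifies/repairs parity
echo check > /sys/block/md0/md/sync_action

# smartd.conf line for one member drive: short self-test daily at 02:00,
# long self-test Sundays at 03:00, mail root on problems
/dev/sda -a -o on -S on -s (S/../.././02|L/../../7/03) -m root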

This will only catch hard-drive errors. To be sure that other kinds of unauthorized change don't happen undetected, I added MD5 checking of all the files to my backup scripts and stored the checksums with every backup. I had it running for a couple of months, but didn't bother to optimize it, so it did a complete MD5 check of all the files in the backup every day. If I remember correctly, it took almost 2 hours on 2.5TB of data.
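
There was nothing fancy to it; the daily check amounted to something like this (paths made up):

# after each backup: record a checksum for every file in the snapshot
cd /backup/2012-05-01 && find . -type f -exec md5sum {} + > /backup/2012-05-01.md5

# later: re-read everything and report any file whose checksum no longer matches
cd /backup/2012-05-01 && md5sum -c --quiet /backup/2012-05-01.md5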

After half a year I recalculated all the checksums and compared them with the ones from the original backups, and not a single checksum failed on 2.5TB of data.

This convinced me that with my setup and RAID6 scrubbing, bit rot was not a big enough issue that I needed to be worried about my data. I turned MD5 checksumming off to ease the load on the hard drives in the backup server, as they are only Seagate 2TB LP disks not meant for 24/7 usage, and two hours of pedal-to-the-metal every night didn't feel necessary.
This of course means that I will _not_ be able to guarantee that all files are unchanged if I someday have to restore them, but because I did some real testing on my data I am not worried about corruption of my files, even though it may happen to a file someday. I plan to migrate to btrfs when I trust it and let it do the heavy lifting in this area in the future, at least on the storage level.

If this guarantee were more important than personal experience, e.g. because I did scientific work, I would of course still have it enabled.
 
Come to think of it, I could calculate MD5 sums only once a week, for example. As I use rsync with --link-dest, all unchanged files are hardlinked anyway, so they point to the same area on disk. If the check from last week validates, it is safe to presume that the unchanged files from the last nightly backup are also OK. Only the changed files will not have been checked, but as this is normally just a small amount of newly written data, it is very unlikely to be affected by bit rot.

It could be a good compromise, but I need to modify my script a bit. But that's the fun stuff :D
 
Most of my PCs have regular Windows backups scheduled to my local ZFS fileserver. I use a central fileserver for most "Documents", and cloud services for bookmarks (Xmarks) and passwords (LastPass).

When I do take images of systems, I use Acronis TrueImage.

Nothing fancy; it's certainly lacking in off-site protection (looking into that) and there's room for improvement.
 
In a RAID6 setup the monthly mdadm RAID scrubbing will find and resolve most bit rot caused by hard drives,
No RAID, in particular RAID5 or RAID6, is designed to discover bit rot. They don't have checksums to do that. Sure, they do XOR calculations, but that is to repair the array if a disk breaks; bit rot they cannot catch. You need a modern solution for bit rot, such as ZFS. I have lots of research papers on this; I can show you if you wish.

Thus, you cannot trust RAID6; you need to do manual checksumming, and SHA256 or MD5 come to mind. I think it is good that you do a checksum now and then. Enable the checksums again and run them regularly, and it seems you have covered your bases.
 
No RAID, in particular RAID5 or RAID6, is designed to discover bit rot. They don't have checksums to do that. Sure, they do XOR calculations, but that is to repair the array if a disk breaks; bit rot they cannot catch. You need a modern solution for bit rot, such as ZFS. I have lots of research papers on this; I can show you if you wish.

Thus, you cannot trust RAID6; you need to do manual checksumming, and SHA256 or MD5 come to mind. I think it is good that you do a checksum now and then. Enable the checksums again and run them regularly, and it seems you have covered your bases.

This research was the reason I implemented the MD5 checksumming in the first place.
But I do not completely agree with you regarding RAID, nor does the research. In normal day-to-day usage RAID won't do anything good, as parity is not checked on every read, but during scrubbing of a parity-based RAID all blocks are reread and the parity recalculated. Many, if not most, small errors will get caught by this. So RAID6 plus scrubbing is indeed effective against the most common forms of bit corruption. This is also stated by the research I have found (from CERN and "Silent data corruption in SATA arrays: A solution").

But as you correctly state, there are some errors that escape RAID and hard-drive detection (13% of the silent errors were not found by scrubbing according to research NetApp has done, but this also means that 87% _are_ detected and hopefully corrected). This is mostly when a write goes bad or lands in the wrong place on its way to the drive, perhaps because of some hardware error in the path. For these special cases ZFS shines, as it does checksumming all the way to the disk. I will install btrfs when I find it ready to take care of these cases, even though my testing has shown that for me it is no issue at all. But better safe than sorry.
 
My server is backed up weekly to internal backup drives. I then back that up to cloud storage (Crashplan), around 7TB.

So I find a local backup plus a cloud backup, or an offsite backup at a friend's, works well.
 
The best backup option for you is already installed on your Windows computer:

open up a command prompt

ROBOCOPY C:\SourceFolder D:\DestinationFolder /MIR

Robocopy is built into Windows from Vista onwards, and exists for Win2000 and WinXP if you look for it, if you're still old school.

The /MIR switch means MIRROR; it will make an exact copy of the source directory in the destination you specify. Be aware that /MIR will delete any files in the destination that aren't in the source (it truly does MIRROR the source).

I run this command as a batch file with a scheduled task, mirroring my important directories to an external 3TB HDD (run weekly).

On top of that, I'm running a ZFS file server in RAIDZ2 configuration for additional redundancy + snapshot capability.
 
SyncBack with FTP to the backup location. Seems pretty reliable.

At home, a combination of Dropbox and, for my media, mostly manual copies between a server I keep in the server room at work and a server at home. On UPS at both locations. No RAID.
 
Robocopy is a very nice tool, but it has some serious flaws as a backup system compared to, e.g., rsync.

- It doesn't support hardlinking, which means you have to make complete copies of all your data if you want things like daily snapshots of your system.
- If it detects a change in a file it has to copy the entire file, while rsync can copy only the changed parts.

Because of this, Robocopy is best suited for synced copies, not a complete backup system. A synced copy is of course much better than no backup, but because you are also syncing things like accidental deletions and corrupted files, you may discover later that things you thought were in your backup aren't. Then you need some kind of versioned backup, which is very impractical without hardlinking support.
 
But I do not completely agree with you regarding RAID, nor does the research. In normal day-to-day usage RAID won't do anything good, as parity is not checked on every read, but during scrubbing of a parity-based RAID all blocks are reread and the parity recalculated. Many, if not most, small errors will get caught by this. So RAID6 plus scrubbing is indeed effective against the most common forms of bit corruption. This is also stated by the research I have found (from CERN and "Silent data corruption in SATA arrays: A solution").
Here you say that RAID6 and scrubbing are effective against bit corruption.

Below you say 13% of all errors are not caught by scrubbing:
But as you correctly state, there are some errors that escape RAID and hard-drive detection (13% of the silent errors were not found by scrubbing according to research NetApp has done, but this also means that 87% _are_ detected and hopefully corrected).

So, the conclusion is: there is a small chance that RAID6 and scrubbing cannot detect all bit rot errors. Therefore you need to use manual checksumming, such as MD5 or SHA256. Hence, RAID6 is not safe against bit rot. Use manual checksumming, or automatic checksumming (ZFS).
 
I am not sure encrypting is ever a good idea.

I've been running all of my machines with fully encrypted disks for many years. I don't encrypt the disks in my gaming box but every other disk is encrypted including removables. I haven't had one bit of trouble.

My favorite advantage is not having to worry about erasing failed disks before RMA. I just pop the disk out and mail it off, no worries.
 
I am not sure encrypting is ever a good idea.

Because I don't trust encryption not to create problems when I really need the files, I don't use it for my main backups. The only reason for using it on the external is that the disk sits in a SATA dock in my office and could easily be snatched by anyone coming into our office.

Regarding RAID6 and silent corruption:
My point was not that you are completely protected using RAID6 and scrubbing, just that 87% is much better than nothing for an already rare issue, and good enough for my usage until btrfs is ready.
I didn't get a single corruption in half a year on several TB of data using this setup, so the last 13% will certainly not cause big problems.

It is, however, not an issue anymore at all. I did some hard thinking and was able to create incremental checksumming for my rsync snapshots. I keep a complete checksum file for every snapshot, but for the existing unchanged files the checksums are just copied from the previous checksum run, so only the changed files need to be recalculated.

In other words, the checksums are only calculated when no existing checksum exists or when a file has changed.

I then have a weekly job that verifies the files from the latest backup and warns me by mail if anything doesn't verify OK.
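
In sketch form the incremental part looks something like this (file and variable names are made up; the point is just that the old checksums are reused for the hard-linked, unchanged files):

# start from the previous snapshot's checksum file: unchanged files are hard links,
# so their old checksums are still valid; only the files rsync changed get redone
cd /backup/$TODAY
cp /backup/$YESTERDAY.md5 md5sums
while read -r f; do
    grep -vF "  $f" md5sums > md5sums.tmp   # drop the stale entry for this file, if any
    md5sum "$f" >> md5sums.tmp
    mv md5sums.tmp md5sums
done < changed-files.txt                    # changed paths, written by the rsync wrapper

# weekly job: verify the newest snapshot and mail a warning on any mismatch
md5sum -c --quiet md5sums || echo "checksum mismatch in $TODAY" | mail -s "Backup verification FAILED" root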
 