Fileserver Backup Suggestions?

JamesP

My Nas4Free ZFS server seems pretty stable now (after replacing two failed WD Red drives out of six :(), and I'd like to create a backup solution for it.

I have about 8.5TB on the server now, of which only about 3.2TB is 'important' data. Max capacity is about 12TB.

Online backup does not seem to be a possibility, since it would take a full year to upload just the 'important' data over my 1 Mbps upload connection (3.2TB at 1 Mbps works out to roughly 300 days of continuous uploading). Some online backup services offer the option of sending them a hard drive for the initial backup, but that costs about $120 per TB!

What do most people do to back up such large amounts of data? I'm thinking about building a second fileserver and initially hooking it up to my LAN to handle the initial seed. Then I'd mail the server to my brother's house and use it to perform incremental backups going forward.

Thanks for any tips,
James
 
Your plan is more or less what I do.

I use a removable hard drive connected over USB rather than a whole second server. 2TB drives seem reasonable in size and cost. We have USB enclosures as well as a dock we just plug a bare drive into.

I am not sure mail is the proper way to ship the drives (we are close enough that my daughter carries them back and forth), but flat-rate boxes are cheap.
 
I guess that you periodically perform full backups and then send the drives to your daughter? I was hoping to send only the initial seed backup and then perform subsequent incremental backups over the internet.

I'm starting to wonder if a Raspberry Pi with a bunch of disks in a RAID configuration could be used as a remote, incremental backup server? Hmm....

James
 
If you build a remote backup system and put ZFS on it, look into snapshots plus zfs send/receive. There is very little overhead with this, much better than rsync for example. You can send snapshots very frequently, or just do them nightly when no one will be bothered by the limited bandwidth being used.
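As a rough illustration of how little there is to it, here is a minimal nightly-send sketch. The dataset, remote pool, and hostname (tank/data, backup/data, backuphost) are placeholders, not anything from an actual setup:

```python
#!/usr/bin/env python3
# Minimal sketch: snapshot a dataset nightly and send only the delta
# to a remote ZFS box over SSH. All names below are placeholders.
import subprocess
from datetime import date

DATASET = "tank/data"       # local dataset to protect (hypothetical)
REMOTE = "backuphost"       # SSH-reachable backup server (hypothetical)
REMOTE_DS = "backup/data"   # receiving dataset on the remote pool

def sh(cmd):
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

today = f"{DATASET}@auto-{date.today():%Y%m%d}"
sh(f"zfs snapshot {today}")

# Ask the remote side which snapshots it already has, oldest first.
have = sh(f"ssh {REMOTE} zfs list -H -t snapshot -o name -s creation "
          f"-r {REMOTE_DS}").strip().splitlines()

if have:
    # Incremental: only the blocks changed since the last common snapshot move.
    last = have[-1].split("@")[1]
    sh(f"zfs send -i {DATASET}@{last} {today} | ssh {REMOTE} zfs receive -F {REMOTE_DS}")
else:
    # First run: full send to seed the remote (best done over the LAN).
    sh(f"zfs send {today} | ssh {REMOTE} zfs receive {REMOTE_DS}")
```

Since an incremental send transfers only changed blocks, a nightly run over a slow uplink is usually fine once the initial seed has been done locally.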

Personally I just use old HDDs as backups. I've got like 20TB or whatever of data, and then a pile of old 500GB-1.5TB drives. I've got a script that gets a full list of files to back up (all 20TB or whatever) and then decides which files should go on which drives. Previously I would then plug in the old pile of drives one at a time, like inserting tapes, run rsync with the list of files that goes to that drive, and when it was done pop in the next drive. It was basically automated except for me physically switching which drive was plugged in.
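The "decide which files go on which drives" part is basically bin packing. This isn't my actual script, just a greedy first-fit sketch of the same idea, with made-up paths and drive sizes:

```python
#!/usr/bin/env python3
# Sketch: assign a file tree to a pile of mismatched old drives,
# first-fit by remaining capacity. Paths and sizes are made up.
import os

SOURCE = "/mnt/tank"  # hypothetical tree to back up

# Free capacity of each drive in the pile, in bytes (hypothetical).
free = {"old500gb": 500 * 10**9, "old1tb": 1000 * 10**9, "old1_5tb": 1500 * 10**9}
plan = {name: [] for name in free}

files = []
for root, _, names in os.walk(SOURCE):
    for n in names:
        path = os.path.join(root, n)
        files.append((os.path.getsize(path), path))

# Placing the largest files first makes first-fit pack noticeably tighter.
for size, path in sorted(files, reverse=True):
    for name in free:
        if free[name] >= size:
            free[name] -= size
            plan[name].append(path)
            break
    else:
        print(f"no drive has room for {path}")

# One file list per drive, ready for rsync --files-from= when that
# drive gets plugged in, tape-style.
for name, paths in plan.items():
    with open(f"{name}.list", "w") as fh:
        fh.write("\n".join(paths) + "\n")
```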

More recently I got one of these for $300 to do backups to:
http://www.avsforum.com/t/1412640/are-you-looking-for-a-less-expensive-norco-4220-4224-alternative

It is loud as shit and not something I want to leave running, but now I won't have to physically swap drives. Instead I'm going to put it in the garage. I'll have to physically plug the server in, go upstairs and run one command that backs everything up, and then go unplug the server. Annoyingly, even when the server is plugged in and off, the PSU's fans stay on, and they're loud as hell, lol. I'm watching the e-waste at work for an RPC strip I can repurpose for powering the garage server on and off, lol. But in any case, it's the same idea: a bunch of individual drives. Since they're all old drives of different sizes, I don't want to use RAID arrays and end up with a failed array. As soon as I have a few extra drives I'll also run SnapRAID across these so I can lose one and be okay.

But I mean, if the house burns down, hopefully the garage is fine. And if the house burns down and a couple of the old HDDs fail too, oh well, I lost some data, but not all of it.
 
There are three options I've looked at:
1: tape, reliable but expensive
2: external HDDs
3: a bunch of DVD(+/-)R/BD-R media.

I've basically gone with option 3.

However, your initial idea is definitely possible. Set up an rsync server (or something else that will only send the new/changed files) on the master end and have the backup set to duplicate it once every $time_period.
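For instance, the master end could just run a cron job like the following; the hostname, paths, and bandwidth cap are all placeholders:

```python
#!/usr/bin/env python3
# Sketch: push only new/changed files to the offsite box over SSH.
# Schedule from cron, e.g.:  0 3 * * * /usr/local/bin/offsite_sync.py
# All names and the bandwidth cap below are placeholders.
import subprocess

SOURCE = "/mnt/tank/important/"                 # trailing slash: sync contents
DEST = "backup@offsite-box:/backups/important/"

subprocess.run(
    ["rsync", "-az", "--delete", "--partial",
     "--bwlimit=100",  # KiB/s; keeps a slow uplink usable during the day
     SOURCE, DEST],
    check=True,
)
```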
 
I kept digging around, and there does not seem to be a reasonable solution for backing up several TB of data online. :(

So, for now, I'm going to back up my data to removable drives while I plan out a build for a backup server. I'll perform the initial sync with the backup server locally, then move it off-site and perform incremental syncs over the net.

Thanks,
James
 
I use EaseUS Todo Backup. I have two 4TB drives that I use exclusively for backing up my most important directories. I just have Todo run an incremental backup on the 1st of the month. I keep all of this housed in the same fileserver where the data is.
 
Another thing that can be done is to use Crashplan. Note you can use the software for free and send the backups to a remote site, like your daughter's.

You can even seed it first on a local USB drive, then take that drive to your daughter's and use it for incremental backups.

The Crashplan software runs on Windows, Mac, Linux, and Solaris. I use it in a similar setup on my OmniOS ZFS box.
 
Another thing that can be done is to use Crashplan. Note you can use the software for free and send the backups to a remote site, like your daughter's.

I actually did install Crashplan and really liked the software, and the fact that I could use it on both the Windows side and the Unix side. The thing is, it can only back up to an attached drive (not over the net) unless you pay a monthly fee or take advantage of a loophole in the software. The loophole might be closed sometime in the future, and hence I didn't really want to rely on it. :( :( :(

James
 
I actually did install Crashplan and really liked the software, and the fact that I could use it on both the Windows side and the Unix side. The thing is, it can only back up to an attached drive (not over the net) unless you pay a monthly fee or take advantage of a loophole in the software. The loophole might be closed sometime in the future, and hence I didn't really want to rely on it. :( :( :(

By design it backs up over the WAN to multiple destinations. I'm not really sure what you're talking about.

I have eight users backing up to my Crashplan server (it's just a PC with storage) over the net. This is 100% free.
 
By design it backs up over the WAN to multiple destinations. I'm not really sure what you're talking about.

I have eight users backing up to my Crashplan server (it's just a PC with storage) over the net. This is 100% free.

I guess I should have been more detailed. I'm using NAS4FREE as my NAS and I try to treat it as much like a network appliance as possible. It's the embedded version and I'd prefer not to install any other software on it. Makes for easy upgrades.

Crashplan will let you back up to other computers on the network, as long as Crashplan is running on those machines. I'd rather not install Crashplan on the server (big hassle, apparently), and hence I can't use this route for local backups.

Ideally, I'd like to point Crashplan at a network folder (hosted on the NAS) and just tell it to place all the backups there. It won't let you do this by default, but there is a loophole to get it to work:
http://virtualj.net/20120126/use-crashplan-automatically-backup-network-drive-nas

So, for my local backups (i.e., LAN machines performing hourly backups to the NAS4FREE server), I went with Duplicati. It works fairly well right now backing up to a NAS folder, and it seems that v2.0 of Duplicati is going to be even better.

Using Crashplan to backup the NAS4FREE server itself to another machine would still require installing Crashplan on the NAS...

James
 
I actually did install Crashplan and really liked the software, and the fact that I could use it on both the Windows side and the Unix side. The thing is, it can only back up to an attached drive (not over the net) unless you pay a monthly fee or take advantage of a loophole in the software. The loophole might be closed sometime in the future, and hence I didn't really want to rely on it. :( :( :(

James

Well, if you do not want to run Crashplan on the machine itself, then yes, it won't really work well for you. Note I do not even think the paid-for version will allow this.
 
I use my own offsite backup fileserver with incremental rsync snapshots, checksum verification, and mail reporting (just a Python script I have made). I back up 6 servers on two separate sites with this. I started with about 1TB over 0.35 Mbit DSL, but now I have about 17 TiB of data over symmetric 100 Mbit, with everything from 5 years to a week of retention depending on the content. Everything is self-driven and automatic. I initially had the backup server at home and preseeded it. It has seven 4TB drives in md RAID6, just in case.
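Boiled down (a simplified sketch, not my real script; hosts, paths, and mail addresses are made up), the pattern is rsync with --link-dest for hard-linked incremental snapshots, plus a mail report:

```python
#!/usr/bin/env python3
# Sketch: hard-linked incremental rsync snapshots plus a mail report.
# Hosts, paths, and addresses are placeholders. A periodic run with
# rsync --checksum added covers the checksum-verification part.
import subprocess
import smtplib
from datetime import date
from email.message import EmailMessage

SOURCE = "root@fileserver:/tank/data/"   # hypothetical source
SNAPDIR = "/backups/fileserver"          # hypothetical snapshot root
today = f"{SNAPDIR}/{date.today():%Y-%m-%d}"

# Unchanged files become hard links into the previous snapshot, so each
# nightly snapshot only costs the space of what actually changed.
# (On the very first run 'latest' doesn't exist; rsync warns and falls
# back to a full copy.)
result = subprocess.run(
    ["rsync", "-az", "--delete", "--link-dest", f"{SNAPDIR}/latest",
     SOURCE, today],
    capture_output=True, text=True,
)

if result.returncode == 0:
    # Repoint the 'latest' symlink at the snapshot we just made.
    subprocess.run(["ln", "-sfn", today, f"{SNAPDIR}/latest"], check=True)

msg = EmailMessage()
msg["Subject"] = f"backup {date.today()}: {'OK' if result.returncode == 0 else 'FAILED'}"
msg["From"] = "backup@example.net"
msg["To"] = "admin@example.net"
msg.set_content(result.stderr or "rsync completed cleanly")
with smtplib.SMTP("localhost") as s:
    s.send_message(msg)
```

Retention is then just a matter of deleting old dated snapshot directories on whatever schedule fits the content.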

Crashplan is way too slow on large amounts of data because of their man-in-the-middle approach to avoiding firewall issues, but it is very usable and a good match for most home scenarios. I have an actual transfer rate of 100 Mbit to my backup server via IPsec/GRE, which makes recovery of terabytes of data reasonably fast.
 
At work I use Backup Exec to back up and dedupe 14TB to a SAS array, then back that up to LTO6 libraries.

Backup is the most expensive part of the operation after the data storage itself.
 