CrashPlan Exiting Consumer Backup Market

Zarathustra[H]

Hey guys.

I just got this email:

Hello,

Thank you for being a CrashPlan® for Home customer. We're honored that you’ve trusted us to protect your data.

We want you to know that we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires. We are committed to providing you with an easy and efficient transition.
WHAT DOES THIS MEAN TO YOU
We will honor your existing CrashPlan for Home subscription, keeping your data safe, as always, until your current subscription expires.

To allow you time to transition to a new backup solution, we've extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 03/09/2018.

YOUR CHOICES
Your first step is to consider the options below, available exclusively for CrashPlan for Home customers. Once you make your selection, no further action is required until your new expiration date. We will send you reminders well before your CrashPlan for Home subscription ends.

The options are to either switch to CrashPlan for Small Business, or to switch to Carbonite.

I had been souring on CrashPlan over the last year anyway, so I'm not too upset about this, but now I have to figure out what's going to replace it.


Maybe I'll just build a second system with old drives and stash it under my desk at work, and tell it to sync my home base after business hours and on weekends?

What are you guys doing for off-site backups for home NAS these days?
 
Maybe I'll just build a second system with old drives and stash it under my desk at work, and tell it to sync my home base after business hours and on weekends?
I did pretty much that. After I checked backup services and the cost of server colocation, I decided to upgrade my bandwidth at home instead and zfs send to a system built out of old parts that I placed at work.
 
I did pretty much that. After I checked backup services and the cost of server colocation, I decided to upgrade my bandwidth at home instead and zfs send to a system built out of old parts that I placed at work.

I'd be curious how you implemented that.

I was thinking of writing a script that does something like this:

1.) Use rsync to copy new files to backup server
2.) Use find to delete files removed since last sync (rsync's --delete option doesn't always seem to work for me)
3.) Snapshot the file system using zfs.

I could then have the script run once a day via cron, overnight.

I'd wind up with a very large number of snapshots in a hurry, but I could manually delete old ones. Maybe keep dailies for like two weeks, weeklies for a few months, then monthlies after that.

I still have to figure out the details of the script, but if there is a better way to do something like this with ZFS send, I'm interested.
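For what it's worth, the three steps above can be sketched in a few lines. All of the paths, the hostname, and the dataset name below are invented, and the function prints its commands instead of running them. (One common fix when --delete acts up is --delete-after, which postpones deletions until the copy itself has succeeded, folding steps 1 and 2 into one pass.)

```shell
#!/bin/sh
# Sketch of the nightly job: rsync the data over, then snapshot the backup
# dataset. All names are placeholders; commands are printed, not executed.
nightly_backup() {
    SRC="/tank/data/"             # local source (hypothetical)
    DEST="backup:/backup/data/"   # rsync target over SSH (hypothetical)
    SNAP="backup/data@$(date +%Y-%m-%d)"

    # Steps 1 and 2 in one pass: --delete-after removes files that vanished
    # from the source, but only after the transfer itself succeeded.
    echo rsync -a --delete-after "$SRC" "$DEST"

    # Step 3: snapshot the backup dataset on the remote box.
    echo ssh backup zfs snapshot "$SNAP"
}
nightly_backup
```

A single crontab line pointed at a script like this covers the once-a-day part.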
 
I use syncoid to send (actually pull) the newest snapshots created by zfs autosnap. I occasionally run this script to delete unwanted snapshots on the backup. No idea why you'd have to work with rsync when you have zfs :confused:
 
I use syncoid to send (actually pull) the newest snapshots created by zfs autosnap. I occasionally run this script to delete unwanted snapshots on the backup. No idea why you'd have to work with rsync when you have zfs :confused:


Well, I've never played with ZFS send or receive. I've only used ZFS like a local file system, so I am not familiar with how to best do these things.

How does ZFS send work? Does it only send differential data, or would I be trying to transfer the entire damned thing, every time?
 
I am now using Duplicati to back up to an unlimited-storage Google Drive, available from G Suite for Business at $10/month. Yes, they say you need 5 users for unlimited storage, but they don't enforce that. Works great.

If you don't want to roll your own, Wirecutter recommends Backblaze (and I agree), but note that they remove files 30 days after they were deleted, a major downside to the service.

https://www.duplicati.com/
https://gsuite.google.com/solutions/
https://www.backblaze.com/
 
Well, I've never played with ZFS send or receive. I've only used ZFS like a local file system, so I am not familiar with how to best do these things.

How does ZFS send work? Does it only send differential data, or would I be trying to transfer the entire damned thing, every time?
Once a full initial snapshot has been transferred, all further sends are block-level incremental, so it is the most efficient way possible. To use zfs send/receive, make sure at least one of the machines is reachable by the other via SSH.
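In command form, the full-then-incremental flow looks roughly like this. Pool, dataset, and host names are invented, and the function just prints the pipelines so nothing runs by accident:

```shell
#!/bin/sh
# Shape of zfs send/receive replication. All names are placeholders; the
# function prints the pipelines instead of executing them.
show_flow() {
    # One-time seeding: ship a complete initial snapshot to the backup box.
    echo "zfs snapshot tank/data@base"
    echo "zfs send tank/data@base | ssh backup zfs receive backup/data"
    # Every later run moves only the blocks changed between two snapshots.
    echo "zfs snapshot tank/data@next"
    echo "zfs send -i tank/data@base tank/data@next | ssh backup zfs receive backup/data"
}
show_flow
```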

As for Backblaze, unfortunately they don't support Linux under their flat fee service, only under the metered B2 business plan.
 
Yes, that's right. Crashplan had a crappy java client, but it was the only all-in-one unlimited storage offsite backup provider to support linux.

But if you're a sophisticated user running Linux and talking about rolling your own ZFS offsite backup solution rather than just using syncthing like a non-crazy person, you can probably figure out Duplicati and do a lot more with an unlimited google drive than just backing up to it. I know I do.
 
Once a full initial snapshot has been transferred, all further sends are block-level incremental, so it is the most efficient way possible. To use zfs send/receive, make sure at least one of the machines is reachable by the other via SSH.

As for Backblaze, unfortunately they don't support Linux under their flat fee service, only under the metered B2 business plan.


That's very cool, thank you.

I'd be curious how you have scripted it all, or do you do it manually?

A few more questions:

1.) Does using Send/Receive require a snapshot to be created on the local machine before sending it to backup, or can you send the current state?

2.) What happens if the transfer is interrupted, such that you don't get a complete transfer of the snapshot? Does it still keep the differential data so that next time it can resume, or the next time you initiate a zfs send/receive action does it start over?

The reason I ask is that the way I had imagined setting this up, I'd have a predefined window of time every day (let's say between 2am and 6am on business days, and maybe 4am and 10am on weekends) when I let it run. I had planned to cron it all, and then have another cron job that kills the running process when the time window is up, if it is still running.

My worry is that if I create a change in one day that's too large to complete in one overnight window, and transfers can't resume, I'd wind up in a situation where it eternally starts a new sync every night and can never complete it.

Appreciate any thoughts you may have on the matter.
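The window-plus-kill idea maps onto cron fairly directly. A hypothetical crontab (the backup-sync script name is invented, and pkill -f matches by name, so a distinctive script name matters):

```crontab
# Start the sync when the window opens...
0 2  * * 1-5   /usr/local/bin/backup-sync     # weekdays 02:00
0 4  * * 6,0   /usr/local/bin/backup-sync     # weekends 04:00
# ...and kill it when the window closes, if it is still running.
0 6  * * 1-5   pkill -f backup-sync
0 10 * * 6,0   pkill -f backup-sync
```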
 
Yes, you need at least one snapshot to send data, or at least two to perform an incremental send. Interrupted sends lose the transferred data after the last completed snapshot. Say you have snaps A, B, and C, and you are performing an incremental send with the -I parameter from snap A to C, and the transfer is interrupted after A-B completed but not B-C: then you only lose that last part of the transfer. However, there's now the bookmark feature, which lets you resume interrupted sends, but I can't tell you anything about it as I've never used it.

I, like most people using ZFS, have zfs-auto-snap set up to create rolling snapshots at monthly, weekly, daily, hourly, and frequent intervals. I currently use syncoid on a weekly basis to automatically and incrementally sync the accumulated snaps to the backup machine. This is of course cron-able. Syncoid's author is working on integrating the bookmark feature, AFAIK, btw.

As for having large daily increments that could be too much for 24h worth of transfer time / your available transfer window:
If you have frequent auto-snaps, the individual increments should be small enough that even an interrupted transfer wouldn't lose you much data. Subsequent runs should catch up eventually, as long as your total data churn doesn't exceed your transfer capacity. Just make sure auto-snap doesn't delete the rolling snaps before you've managed to transfer them.
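In cron terms, the setup described here looks roughly like this, assuming the stock zfs-auto-snapshot script and syncoid; the labels, dataset names, and the "nas" hostname are invented:

```crontab
# On the source box: rolling snapshots via zfs-auto-snapshot
*/15 * * * *  zfs-auto-snapshot --label=frequent --keep=4  tank/data
0    * * * *  zfs-auto-snapshot --label=hourly   --keep=24 tank/data
# On the backup box: weekly incremental pull of everything new
0 3  * * 0    syncoid root@nas:tank/data backup/data
```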
 
Has anyone found any replacements for CrashPlan in terms of local networked backups?
I'm having a real tough time finding something that doesn't require a CS degree in networking to get set up.
 
Has anyone found any replacements for CrashPlan in terms of local networked backups?
I'm having a real tough time finding something that doesn't require a CS degree in networking to get set up.


For local backups, I'm assuming (but have nothing to base it on) that the CrashPlan client won't just stop working.

I could be wrong though.

It likely won't be supported or patched, but I'm guessing it will continue working until a Windows update breaks it.


There are a number of open source GUI alternatives that try to mimic Apple Time Machine style backups, and some do run on Windows, but most don't.

I researched this a while back but can't remember the details. I'll do some poking around and see what I can find.
 
For local backups, I'm assuming (but have nothing to base it on) that the CrashPlan client won't just stop working.

I could be wrong though.

It likely won't be supported or patched, but I'm guessing it will continue working until a Windows update breaks it.


There are a number of open source GUI alternatives that try to mimic Apple Time Machine style backups, and some do run on Windows, but most don't.

I researched this a while back but can't remember the details. I'll do some poking around and see what I can find.
That's the problem.
CrashPlan local requires a login, which will be shut off as part of this change. So local is also being shuttered.

Did some research and I think I'm going with a Drobo box at this point.
 
I've been using SyncToy for... a long time now for my periodic backups. I have daily, weekly, and monthly syncs and they all run on a schedule. First sync is long, but after that they are usually pretty speedy, only syncing files that have changed. Makes sure I can restore shit if something gets fucked up down the road.
 
I've been using SyncToy for... a long time now for my periodic backups.
I used SyncToy in the past. As basically a Windows-based rsync with a GUI, it's great for LAN or external disk backups. Ofc, there are many more options for the non-linux-handicapped ;)
 
I used SyncToy in the past. As basically a Windows-based rsync with a GUI, it's great for LAN or external disk backups. Ofc, there are many more options for the non-linux-handicapped ;)

Eh, why mess with what works?

It backs up to my RockStor NAS running BTRFS. So much for being linux handicapped.
 
Has anyone found any replacements for CrashPlan in terms of local networked backups?
I'm having a real tough time finding something that doesn't require a CS degree in networking to get set up.

What I have been using for quite a while now is:
Cobian backup - to backup everything to one computer
Backblaze - to cloud backup everything from the local backup

That way I only have to pay for a single subscription which is about $50 a year for unlimited storage.
 
What I have been using for quite a while now is:
Cobian backup - to backup everything to one computer
Backblaze - to cloud backup everything from the local backup

That way I only have to pay for a single subscription which is about $50 a year for unlimited storage.
Thanks for the info. I have seen Cobian before, but to be honest the nature of the company behind it kind of worried me. I know it has changed hands a few times and is developed by only a small number of people, if not just one guy.
 
Yes, you need at least one snapshot to send data, or at least two to perform an incremental send. Interrupted sends lose the transferred data after the last completed snapshot. Say you have snaps A, B, and C, and you are performing an incremental send with the -I parameter from snap A to C, and the transfer is interrupted after A-B completed but not B-C: then you only lose that last part of the transfer. However, there's now the bookmark feature, which lets you resume interrupted sends, but I can't tell you anything about it as I've never used it.

I, like most people using ZFS, have zfs-auto-snap set up to create rolling snapshots at monthly, weekly, daily, hourly, and frequent intervals. I currently use syncoid on a weekly basis to automatically and incrementally sync the accumulated snaps to the backup machine. This is of course cron-able. Syncoid's author is working on integrating the bookmark feature, AFAIK, btw.

As for having large daily increments that could be too much for 24h worth of transfer time / your available transfer window:
If you have frequent auto-snaps, the individual increments should be small enough that even an interrupted transfer wouldn't lose you much data. Subsequent runs should catch up eventually, as long as your total data churn doesn't exceed your transfer capacity. Just make sure auto-snap doesn't delete the rolling snaps before you've managed to transfer them.


That is cool.

A couple of more questions if you don't mind.

1.) Does the target pool have to have the same physical configuration as the source, or could I - say - ZFS send a snapshot from a pool consisting of 2x RAIDz2 vdevs to a pool consisting of a single RAIDz3 vdev?

2.) Can snapshots from different pools be sent to the same pool? How do they then appear on the target pool? As different datasets?

3.) Can I have different pool/dataset settings on the target pool than on the source pool? For instance, my current pool does not use deduplication and uses light LZ4 compression. Could I have the target pool do deduplication and heavier compression on the data?

Thanks again!
 
1.) Does the target pool have to have the same physical configuration as the source, or could I - say - ZFS send a snapshot from a pool consisting of 2x RAIDz2 vdevs to a pool consisting of a single RAIDz3 vdev?

2.) Can snapshots from different pools be sent to the same pool? How do they then appear on the target pool? As different datasets?

3.) Can I have different pool/dataset settings on the target pool than on the source pool? For instance, my current pool does not use deduplication and uses light LZ4 compression. Could I have the target pool do deduplication and heavier compression on the data?
Send/recv are zfs - not zpool - commands, so they operate on datasets. Therefore...

1) Pool layout doesn't matter.
2) Yes & yes, exactly.
3) I believe so but haven't tested it. There is a dedupe option in zfs send.

My AIO has a pool with mirrored SSDs & another with HDs in Z1. The SSDs' NFS dataset automatically syncs to a backup on the HDs every hour. Works great.
 
So, not to necro, but some here might be specifically interested.
If you're looking for a crash plan replacement for local networked backups, consider a Drobo plus ResilioSync for client backups.
It runs Plex, auto-backs-up clients, has built-in redundancy, drive health testing and monitoring, and auto-rebuilding when drives are swapped. The desktop app is also top notch.



I've had it in place since November, and I am super happy.
 
Another suggestion is FreeFileSync

"FreeFileSync is a folder comparison and synchronization software that creates and manages backup copies of all your important files. Instead of copying every file every time, FreeFileSync determines the differences between a source and a target folder and transfers only the minimum amount of data needed. FreeFileSync is Open Source software, available for Windows, Linux and macOS."

It is not a full system backup but for folders it works wonders and supports cloud drives too. In fact it can connect through FTP/SFTP to any server you like. And it is 100% FREE.
 
I ended up going with iDrive for everything excluding movies. They were doing a Christmas thing that gave you a physical network hard drive in addition to online, so I have both local and online backup.

For movies I ended up buying/using old 2-3TB drives and spreading backups out across many drives. I wrote a perl script to keep track of everything.
 
Send/recv are zfs - not zpool - commands, so they operate on datasets. Therefore...

1) Pool layout doesn't matter.
2) Yes & yes, exactly.
3) I believe so but haven't tested it. There is a dedupe option in zfs send.

My AIO has a pool with mirrored SSDs & another with HDs in Z1. The SSDs' NFS dataset automatically syncs to a backup on the HDs every hour. Works great.


Thank you for all of this information.

I currently have my main NAS backing up to my backup server locally, using a scripted and cron'ed auto-backup doing a nightly snapshot and transfer, utilizing modified versions of this guy's script I found.

Once it has been running for a while, and I am comfortable with its reliability, I am going to move the server to my remote location.

Only concern I have now is, what happens if one day I have LOTS of writes, and wind up with a large enough diff between snapshots that the zfs send/recv operation does not complete before the next one triggers, 24 hours later?

Maybe I should include some sort of check for already running instances of the script and just skip a day if it isn't done.

Thoughts?
 
Beats me. We know the destination appears unchanged to send/recv until the first sync finishes, so I suspect either an error or two almost-identical syncs (the second would have the additional, new data).

You might want to calculate the amount of data required for this to be a problem, based on the bandwidth to the remote, etc.

And how about creating a small filesystem for testing & running syncs way sooner than they can finish? If you can set a speed limit on the xfer, this should be easy to test. Worst case, you could force a port to 10 or 100Mbps.
 
Maybe this is a stupid question, but... couldn't you store the snapshot locally, then send it to the remote destination? You'd have step one done at local or LAN speed, and then the transfer of the snapshot can happen without causing stack-up issues.
 
That would work but is impractical for large amounts of data, because you need a local duplicate of the entire filesystem (not just the 1 snap). If your source pool runs SSDs, you could bounce it to spinners before remote. But if you start with, say, 8TB of client backups on rust...
 
Thank you for all of this information.
I currently have my main NAS backing up to my backup server locally, using a scripted and cron'ed auto-backup doing a nightly snapshot and transfer, utilizing modified versions of this guy's script I found.
Once it has been running for a while, and I am comfortable with its reliability, I am going to move the server to my remote location.
Only concern I have now is, what happens if one day I have LOTS of writes, and wind up with a large enough diff between snapshots that the zfs send/recv operation does not complete before the next one triggers, 24 hours later?
Maybe I should include some sort of check for already running instances of the script and just skip a day if it isn't done.
Thoughts?

That works. I've also used lock files, but running ps and grepping for your process is better, since it's possible that the backup crashed and the lock file wasn't removed.
I guess either way you'd want to send a mail for a long backup and/or crashed process.
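Another option alongside ps|grep: flock(1) ties the lock to the process itself, so a crashed backup can't leave a stale lock behind. A rough sketch, with an invented lock path and messages:

```shell
#!/bin/sh
# Guard the nightly job so a still-running sync makes the next invocation
# bail out instead of stacking up. Lock path is a placeholder.
LOCK=/tmp/zfs-backup.lock

backup_job() {
    # -n: fail immediately if another instance already holds the lock
    flock -n "$LOCK" -c "echo 'lock held: backup running'; sleep 2" ||
        echo "previous run still active: skipping"
}

backup_job &    # first instance takes the lock
sleep 1
backup_job      # second instance notices and skips
wait
```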
 