CrashPlan Exiting Consumer Backup Market

Discussion in 'SSDs & Data Storage' started by Zarathustra[H], Aug 25, 2017.

  1. Zarathustra[H]

    Zarathustra[H] Pick your own.....you deserve it.

    Messages:
    23,508
    Joined:
    Oct 29, 2000
    Hey guys.

    I just got this email:

    The options are to either switch to CrashPlan for Small Business, or to switch to Carbonite.

    I had been souring on CrashPlan over the last year anyway, so I'm not too upset about this, but now I have to figure out what's going to replace it.


    Maybe I'll just build a second system with old drives and stash it under my desk at work, and tell it to sync with my home NAS after business hours and on weekends?

    What are you guys doing for off-site backups for home NAS these days?
     
  2. zrav

    zrav Limp Gawd

    Messages:
    159
    Joined:
    Sep 22, 2011
    I did pretty much that. After I checked backup services and the cost of server colocation, I decided to upgrade my bandwidth at home instead and zfs send to a system built out of old parts that I placed at work.
     
  3. Zarathustra[H]

    Zarathustra[H] Pick your own.....you deserve it.

    Messages:
    23,508
    Joined:
    Oct 29, 2000
    I'd be curious how you implemented that.

    I was thinking of writing a script that does something like this:

    1.) Use rsync to copy new files to backup server
    2.) Use find to delete files removed since the last sync (rsync's --delete option doesn't seem to always work for me)
    3.) Snapshot the file system using zfs.

    I could then have the script run once a day in Cron, overnight.

    I'd wind up with a very large number of snapshots in a hurry, but I could manually delete old ones. Maybe keep dailies for like two weeks, weeklies for a few months, then monthlies after that.

    I still have to figure out the details of the script, but if there is a better way to do something like this with ZFS send, I'm interested.
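
    A minimal sketch of those three steps (hostnames, paths, and the dataset name here are placeholders, and --delete is used for step 2 on the assumption it behaves; a separate find pass could substitute if it doesn't):

    ```shell
    #!/bin/sh
    # Nightly backup sketch: rsync to the backup box, then snapshot
    # the receiving ZFS dataset. All names below are made up.
    set -e
    SRC="/tank/data/"
    DEST="backupbox:/backup/data/"
    DATASET="backup/data"

    # Steps 1+2: copy new/changed files, remove ones deleted at the source
    rsync -aH --delete "$SRC" "$DEST"

    # Step 3: snapshot the receiving dataset, named by date, over SSH
    ssh backupbox zfs snapshot "${DATASET}@$(date +%Y-%m-%d)"
    ```

    Cron could then run this once a day overnight, and pruning old snapshots stays a separate manual (or scripted) task.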
     
  4. zrav

    zrav Limp Gawd

    Messages:
    159
    Joined:
    Sep 22, 2011
    I use syncoid to send (actually pull) the newest snapshots created by zfs-auto-snapshot. I occasionally run a script to delete unwanted snapshots on the backup. No idea why you'd have to work with rsync when you have ZFS :confused:
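
    For reference, the pull itself is a one-liner (host and dataset names here are invented):

    ```shell
    # Pull the latest snapshots from the primary box into a local dataset;
    # syncoid wraps the incremental "zfs send | ssh | zfs receive" pipeline.
    syncoid root@primary:tank/data backup/tank-data
    ```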
     
  5. Zarathustra[H]

    Zarathustra[H] Pick your own.....you deserve it.

    Messages:
    23,508
    Joined:
    Oct 29, 2000

    Well, I've never played with ZFS send or receive. I've only used ZFS like a local file system, so I am not familiar with how to best do these things.

    How does ZFS send work? Does it only send differential data, or would I be trying to transfer the entire damned thing, every time?
     
  6. schizo

    schizo [H]ard|Gawd

    Messages:
    1,515
    Joined:
    Nov 6, 2004
    I am now using Duplicati to backup to an unlimited storage Google Drive, available from G Suite for Business at $10/month. Yes they say you need 5 users for unlimited storage but they don't enforce that. Works great.

    If you don't want to roll your own Wirecutter recommends Backblaze (and I agree), but note that they remove files 30 days after they were deleted, a major downside to the service.

    https://www.duplicati.com/
    https://gsuite.google.com/solutions/
    https://www.backblaze.com/
     
  7. zrav

    zrav Limp Gawd

    Messages:
    159
    Joined:
    Sep 22, 2011
    Once a full initial snapshot has been transferred, all further sends are block-level incremental, so it is the most efficient way possible. To use zfs send/receive, make sure at least one of the machines is reachable by the other via SSH.
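
    A sketch of the flow, with made-up pool and host names:

    ```shell
    # One-time full send seeds the backup (placeholder names throughout).
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh backupbox zfs receive backup/data

    # Every send after that only carries blocks changed between snapshots.
    zfs snapshot tank/data@day1
    zfs send -i tank/data@base tank/data@day1 | ssh backupbox zfs receive backup/data
    ```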

    As for Backblaze, unfortunately they don't support Linux under their flat fee service, only under the metered B2 business plan.
     
    Zarathustra[H] likes this.
  8. schizo

    schizo [H]ard|Gawd

    Messages:
    1,515
    Joined:
    Nov 6, 2004
    Yes, that's right. CrashPlan had a crappy Java client, but it was the only all-in-one unlimited storage offsite backup provider to support Linux.

    But if you're a sophisticated user running Linux and talking about rolling your own ZFS offsite backup solution rather than just using syncthing like a non-crazy person, you can probably figure out Duplicati and do a lot more with an unlimited google drive than just backing up to it. I know I do.
     
  9. Zarathustra[H]

    Zarathustra[H] Pick your own.....you deserve it.

    Messages:
    23,508
    Joined:
    Oct 29, 2000

    That's very cool, thank you.

    I'd be curious how you have scripted it all, or do you do it manually?

    A few more questions:

    1.) Does using Send/Receive require a snapshot to be created on the local machine before sending it to backup, or can you send the current state?

    2.) What happens if the transfer is interrupted, such that you don't get a complete transfer of the snapshot? Does it still keep the differential data so that next time it can resume, or the next time you initiate a zfs send/receive action does it start over?

    The reason I ask is that the way I had imagined setting this up, I'd have a predefined window of time every day (let's say between 2am and 6am on business days, and maybe 4am and 10am on weekends) when I let it run. I had planned to cron it all, and then have another cron job that kills the running process when the time window is up, if it is still running.

    My worry is that if I make a change in one day that is too large to complete in one overnight window, and transfers can't resume, I'd be caught in a situation where it eternally starts a new sync every night and can never complete it.

    Appreciate any thoughts you may have on the matter.
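
    Something like this crontab is what I have in mind (script name made up; timeout(1) would replace the second kill cron):

    ```shell
    # m h dom mon dow   command
    # Weekdays: start at 2am, hard-stop after 4 hours.
    0 2 * * 1-5   timeout 4h /usr/local/bin/offsite-sync.sh
    # Weekends: start at 4am, hard-stop after 6 hours.
    0 4 * * 6,0   timeout 6h /usr/local/bin/offsite-sync.sh
    ```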
     
  10. zrav

    zrav Limp Gawd

    Messages:
    159
    Joined:
    Sep 22, 2011
    Yes, you need at least one snapshot to send data, or at least two to perform an incremental send. Interrupted sends lose the data transferred after the last completed snapshot. Say you have snaps A, B and C, and you are performing an incremental send with the -I parameter from snap A to C: if the transfer is interrupted after A-B completed but before B-C did, you only lose that last part of the transfer. However, newer ZFS can also resume interrupted sends (the resume token / bookmark machinery), but I can't tell you anything about it as I have never used it.

    I, like most people using ZFS, have zfs-auto-snapshot set up to create rolling monthly, weekly, daily, hourly and frequent snapshots. I currently use syncoid on a weekly basis to automatically and incrementally sync the accumulated snaps to the backup machine, which is of course cron-able. Syncoid's author is working on integrating the resume feature, AFAIK, btw.

    As for having large daily increments that could be too much for 24h worth of transfer time / your available transfer window:
    If you have frequent auto-snaps, the individual increments should be small enough that even an interrupted transfer wouldn't lose you much data. Subsequent runs should catch up eventually, as long as your total data churn doesn't exceed your transfer capacity. Just make sure the auto-snap rotation doesn't delete the rolling snaps before you've managed to transfer them.
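
    To make the A/B/C example concrete (names invented; the resume part is newer ZFS and I haven't run it myself):

    ```shell
    # Send every increment between @A and @C in one stream; if it dies
    # after @B lands on the receiver, a rerun can start from @B, not @A.
    zfs send -I tank/data@A tank/data@C | ssh backupbox zfs receive backup/data

    # Resumable variant: receive -s saves partial state, and the
    # receive_resume_token property lets send -t restart the stream.
    zfs send -I tank/data@A tank/data@C | ssh backupbox zfs receive -s backup/data
    token=$(ssh backupbox zfs get -H -o value receive_resume_token backup/data)
    zfs send -t "$token" | ssh backupbox zfs receive -s backup/data
    ```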
     
  11. Modred189

    Modred189 I'm Smarter Than You

    Messages:
    16,717
    Joined:
    May 24, 2006
    Has anyone found any replacements for CrashPlan in terms of local networked backups?
    I'm having a real tough time finding something that doesn't require a CS degree in networking to get set up.
     
  12. Zarathustra[H]

    Zarathustra[H] Pick your own.....you deserve it.

    Messages:
    23,508
    Joined:
    Oct 29, 2000

    For local backups, I'm assuming (but have nothing to base it on) that the CrashPlan client won't just stop working.

    I could be wrong though.

    It likely won't be supported or patched, but I'm guessing it will continue working until a Windows update breaks it.


    There are a number of open source GUI alternatives that try to mimic Apple Time Machine style backups, and some do run on Windows, but most don't.

    I researched this a while back but can't remember the details. I'll do some poking around and see what I can find.
     
  13. Modred189

    Modred189 I'm Smarter Than You

    Messages:
    16,717
    Joined:
    May 24, 2006
    That's the problem.
    CrashPlan's local backup requires a login, which will be shut off as part of this change, so local is also being shuttered.

    Did some research and I think I'm going with a Drobo box at this point.
     
  14. Jorona

    Jorona 2[H]4U

    Messages:
    3,091
    Joined:
    Nov 6, 2011
    I've been using SyncToy since... a long time ago to do my periodic backups. I have daily, weekly, and monthly syncs, and they all run on a schedule. The first sync is long, but after that they're usually pretty speedy, only syncing files that have changed. Makes sure I can restore shit if something gets fucked up down the road.
     
  15. zrav

    zrav Limp Gawd

    Messages:
    159
    Joined:
    Sep 22, 2011
    I used SyncToy in the past. As basically a Windows-based rsync with a GUI, it's great for LAN or external disk backups. Ofc, there are many more options for the non-linux-handicapped ;)
     
  16. Jorona

    Jorona 2[H]4U

    Messages:
    3,091
    Joined:
    Nov 6, 2011
    Eh, why mess with what works?

    It backs up to my RockStor NAS running BTRFS. So much for being linux handicapped.
     
  17. cyclone3d

    cyclone3d [H]ardForum Junkie

    Messages:
    11,907
    Joined:
    Aug 16, 2004
    What I have been using for quite a while now is:
    Cobian backup - to backup everything to one computer
    Backblaze - to cloud backup everything from the local backup

    That way I only have to pay for a single subscription which is about $50 a year for unlimited storage.
     
    Modred189 likes this.
  18. Modred189

    Modred189 I'm Smarter Than You

    Messages:
    16,717
    Joined:
    May 24, 2006
    Thanks for the info. I have seen Cobian before, but to be honest the nature of the company behind it kind of worried me. I know it has changed hands a few times and is developed by only a small number of people if not only one guy.
     
  19. Zarathustra[H]

    Zarathustra[H] Pick your own.....you deserve it.

    Messages:
    23,508
    Joined:
    Oct 29, 2000

    That is cool.

    A couple of more questions if you don't mind.

    1.) Does the target pool have to have the same physical configuration as the source, or could I - say - ZFS send a snapshot from a pool consisting of 2x RAIDz2 vdevs to a pool consisting of a single RAIDz3 vdev?

    2.) Can snapshots from different pools be sent to the same pool? How do they then appear on the target pool. As different datasets?

    3.) Can I have different pool/dataset settings on the target pool than on the source pool? For instance, my current pool does not use deduplication and uses light LZ4 compression. Could I have the target pool do deduplication and heavier compression on the data?

    Thanks again!
     
  20. HammerSandwich

    HammerSandwich Gawd

    Messages:
    1,005
    Joined:
    Nov 18, 2004
    Send/recv are zfs - not zpool - commands, so they operate on datasets. Therefore...

    1) Pool layout doesn't matter.
    2) Yes & yes, exactly.
    3) I believe so but haven't tested it. There is a dedupe option in zfs send.

    My AIO has a pool with mirrored SSDs & another with HDs in Z1. The SSDs' NFS dataset automatically syncs to a backup on the HDs every hour. Works great.
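
    Untested sketch of what point 3 might look like (dataset names invented, and dedup on the receiver will want plenty of RAM):

    ```shell
    # Override properties on the receiving side so the backup copy is
    # compressed and deduped differently from the source dataset.
    zfs send tank/data@snap | \
        ssh backupbox zfs receive -o compression=gzip-9 -o dedup=on backup/data
    ```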
     
    Zarathustra[H] likes this.
  21. Modred189

    Modred189 I'm Smarter Than You

    Messages:
    16,717
    Joined:
    May 24, 2006
    So, not to necro, but some here might be specifically interested.
    If you're looking for a crash plan replacement for local networked backups, consider a Drobo plus ResilioSync for client backups.
    It runs Plex, auto-backs-up clients, and has built-in redundancy, drive health testing and monitoring, and auto-rebuilding when drives are swapped. The desktop app is also top notch.


    I've had it in place since November, and I am super happy with it.
     
    JargonGR likes this.
  22. JargonGR

    JargonGR Limp Gawd

    Messages:
    225
    Joined:
    Dec 16, 2006
    Another suggestion is FreeFileSync

    "FreeFileSync is a folder comparison and synchronization software that creates and manages backup copies of all your important files. Instead of copying every file every time, FreeFileSync determines the differences between a source and a target folder and transfers only the minimum amount of data needed. FreeFileSync is Open Source software, available for Windows, Linux and macOS."

    It is not a full system backup but for folders it works wonders and supports cloud drives too. In fact it can connect through FTP/SFTP to any server you like. And it is 100% FREE.
     
  23. westrock2000

    westrock2000 [H]ardForum Junkie

    Messages:
    10,229
    Joined:
    Jun 3, 2005
    I ended up going with iDrive for everything excluding movies. They were doing a Christmas promotion that gave you a physical network hard drive in addition to the online storage, so I have both local and online backup.

    For movies I ended up buying/using old 2-3TB drives and spreading backups out across many drives. I wrote a Perl script to keep track of everything.