The Ceph Blog

Incremental Snapshots with RBD


While Ceph has a wide range of use cases, the most frequent application we are seeing is that of block devices as the data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means we frequently get questions about things like geographic replication, backup, and disaster recovery (or some combination thereof, given how much these topics overlap). While a full-featured, robust solution to geo-replication is still being hammered out, there are a number of different approaches already being tinkered with (like Sebastien Han’s setup with DRBD or the upcoming work using RGW).

However, since one of the primary focuses in managing a cloud is the manipulation of images, the solution to disaster recovery and general backup can often be quite simple. Incremental snapshots can fill this role, and several others, quite well. To that end I wanted to share a few thoughts from RBD developer Josh Durgin for those of you who may have missed his great talk at the OpenStack Developer Summit a few weeks ago.

For the purposes of disaster recovery, the idea is that you could run two simultaneous Ceph clusters in different geographic locations and instead of copying a new snapshot each time, you could simply generate and transfer a delta. The incantation would look something like this:

rbd export-diff --from-snap snap1 pool/image@snap2 pool_image_snap1_to_snap2.diff

This creates a simple binary file that stores the following information:

  • original snapshot name (if applicable)
  • end snapshot name
  • size of the image at ending snapshot
  • the diff between snapshots

The format of this file can be seen in the RBD doc.
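As a sketch of what that binary file contains, the snippet below reads just the leading metadata records of a diff stream, following the v1 layout described in the rbd diff format documentation (a `rbd diff v1\n` magic string followed by single-byte tagged records). The function name is mine and this is illustrative, not a supported API:

```python
import struct

MAGIC = b"rbd diff v1\n"

def read_diff_metadata(f):
    """Read the leading metadata records of an rbd diff v1 stream.

    Returns a dict with any of: from_snap, to_snap, size.
    Stops at the first data ('w'), zero ('z'), or end ('e') record.
    """
    if f.read(len(MAGIC)) != MAGIC:
        raise ValueError("not an rbd diff v1 stream")
    meta = {}
    while True:
        tag = f.read(1)
        if tag in (b"w", b"z", b"e", b""):
            break
        if tag in (b"f", b"t"):
            # snapshot name record: little-endian u32 length, then the name
            (n,) = struct.unpack("<I", f.read(4))
            name = f.read(n).decode()
            meta["from_snap" if tag == b"f" else "to_snap"] = name
        elif tag == b"s":
            # image size at the ending snapshot: little-endian u64
            meta["size"] = struct.unpack("<Q", f.read(8))[0]
        else:
            raise ValueError("unknown record tag %r" % tag)
    return meta
```

This is also one way around the inspection limitation mentioned later: the starting and ending snapshot names live right at the front of the file.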

After exporting a diff you could either simply back up the file somewhere offsite or import the diff on top of the existing image on a remote Ceph cluster.

rbd import-diff /path/to/diff backup_image

This will write the contents of the differential to the backup image and create a snapshot with the same name as the original ending snapshot. It will fail and do nothing if a snapshot with this name already exists. Since overwriting the same data is idempotent, it’s safe to have an import-diff interrupted in the middle.

These commands can work with stdin and stdout as well, so you could do something like:

rbd export-diff --from-snap snap1 pool/image@snap2 - | ssh user@second_cluster rbd import-diff - pool2/image
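If you script this for a series of snapshots, the work is mostly string assembly around `rbd export-diff` and `rbd import-diff`. A minimal sketch of building that pipeline (the helper name and its parameters are hypothetical; the command is only constructed here, not executed):

```python
import shlex

def incremental_backup_cmd(image, from_snap, to_snap, remote_host, remote_image):
    """Build the shell pipeline that ships a snapshot delta to a second
    cluster over ssh. The argument names are illustrative placeholders,
    not rbd options."""
    target = shlex.quote(f"{image}@{to_snap}")
    export = f"rbd export-diff --from-snap {shlex.quote(from_snap)} {target} -"
    remote = f"rbd import-diff - {remote_image}"
    return f"{export} | ssh {shlex.quote(remote_host)} {shlex.quote(remote)}"
```

Quoting the remote side as a single argument keeps ssh from splitting it on the local shell's rules.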

You can see which extents changed (in plain text, JSON, or XML) via:

rbd diff --from-snap snap1 pool/image@snap2 --format plain
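With `--format json`, the output is a list of extents that a script can consume directly. A small sketch that totals the changed bytes, assuming each entry carries `offset`, `length`, and `exists` (where `exists` may be a boolean or the strings "true"/"false" depending on the rbd version):

```python
import json

def changed_bytes(diff_json):
    """Sum the bytes touched between two snapshots, given the JSON
    output of `rbd diff --from-snap <a> <image>@<b> --format json`."""
    extents = json.loads(diff_json)
    return sum(e["length"] for e in extents
               if e.get("exists") in (True, "true"))
```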

There are a couple of limitations in the current implementation, however.

  1. There’s no guarantee you’re importing a diff onto an image in the right state (i.e. the same image at the same snapshot as the diff was exported from).
  2. There’s no way to inspect the diff files to see what snapshots they refer to, so you’d have to depend on the filename containing that information.

While the implementation is still relatively simple, you can see how this could be quite useful in managing not only cloud images, but any of your Ceph block devices. This functionality hit the streets with the recent ‘cuttlefish’ stable release, but if you have questions or enhancement requests please let us know.

To learn more about some of the new things coming in future versions of Ceph you can check out the current published roadmap of work Inktank is planning on contributing. Also, if you missed the virtual Ceph Developer Summit, the videos have been posted for review. In the meantime, if you have questions, comments, or anything for the good of the cause feel free to stop by our IRC channel or drop a note to one of the mailing lists.

scuttlemonkey out

Comments: Incremental Snapshots with RBD

  1. How do you create the original backup image before you can import the diffs? Anything I tried ends up like this:

    $ rbd snap create rbd/foo@snap1
    $ rbd snap create rbd/foo@snap2

    $ rbd export rbd/foo@snap1 - | pv | rbd import - rbd/foo-bak

    $ rbd ls -l
    foo 4096M 1
    foo@snap1 4096M 1
    foo@snap2 4096M 1
    foo-bak 4096M 1

    See above: there is no “foo-bak@snap1”.

    $ rbd export-diff --from-snap snap1 rbd/foo@snap2 - | rbd import-diff - rbd/foo-bak
    start snapshot 'snap1' does not exist in the image, aborting
    Importing image diff: 0% complete...failed.
    rbd: import-diff failed: (22) Invalid argument
    rbd: export-diff error: (32) Broken pipe

    And then, as you can see, it fails because snap1 doesn’t exist.

    I have scripted this for ZFS, which took a snapshot as an argument to send, and then the destination had all snapshots in between sends (including between original and the first incremental send).

    Posted by Peter
    October 11, 2013 at 5:11 pm
  2. Hi Peter,

    If you’ve just exported and then imported the full image at snap1 to a second cluster, you can just create the snapshot on the second cluster yourself, i.e.:

    $ rbd snap create rbd/foo-bak@snap1

    Then importing a diff from snap1 to snap2 will work.

    Starting from scratch, you can create an empty image as the target, and import the entire contents as a diff (by not specifying --from-snap); this will often be faster, and it will create the snapshot for you as well. This looks like:

    $ rbd create rbd/foo-bak -s 1
    $ rbd export-diff rbd/foo@snap1 - | rbd import-diff - rbd/foo-bak

    So the first step is different, but all subsequent backups would be since the last snapshot:

    $ rbd export-diff --from-snap snap1 rbd/foo@snap2 - | rbd import-diff - rbd/foo-bak

    Posted by Josh
    October 14, 2013 at 11:41 pm
  3. Hi Josh,

    Inspired by this post I created a little script. It needs some extra features but might be interesting for admins.

    Posted by Rens Reinders
    December 17, 2013 at 10:15 pm
  4. Corrected link:

    Posted by Thijs
    December 19, 2013 at 8:27 am
    I tried this out; while it works fine, I’m seeing my original images (and snapshots) shown as format 2 (rbd ls -l), but the imported ones are format 1. Is the format number not exported?

    Posted by Mark Kirkwood
    January 16, 2014 at 1:26 am
  6. Actually – if I follow the method outlined in the 2nd comment above, then specifying format 2 when creating the empty image will sort it.

    Posted by Mark Kirkwood
    January 16, 2014 at 3:14 am
  7. I’ve written a simple tool in C to patch exported image and diff files together:

    It’s not been tested outside my very basic system.

    Posted by mda
    September 1, 2014 at 8:57 am
    These incremental snapshot backups work very well between two RBDs on the same cluster. I will now move this over from one RBD on cluster 1 to another RBD on cluster 2.
    Thanks Josh for comment 2.

    Posted by Ruchika
    January 23, 2015 at 9:55 pm


© 2016, Red Hat, Inc. All rights reserved.