The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • July 8, 2013
    XenServer Support for RBD

    Ceph has been enjoying tremendous success in the OpenStack and CloudStack communities, and a large part of this is due to the ease of using the Ceph block device (RBD) with KVM or the open-source Xen hypervisor. This was achieved through patches that were made by Josh Durgin, an Inktank engineer, to Qemu […]

  • June 25, 2013
    v0.65 released

    Our next development release v0.65 is out, with a few big changes.  First and foremost, this release includes a complete revamp of the architecture for the command line interface in order to lay the groundwork for our ongoing REST management API work.  The ‘ceph’ command line tool is now a thin python wrapper around librados. […]
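
    To illustrate the idea (this is not the actual CLI code), here is a rough sketch of what a thin Python wrapper around librados can look like using the python-rados bindings; the config path and the command sent are purely illustrative:

    import json
    import rados

    # Connect to the cluster using the local configuration (path is illustrative).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # The reworked 'ceph' CLI sends commands to the monitors as JSON via librados;
    # here we ask for the cluster status.
    cmd = json.dumps({'prefix': 'status', 'format': 'json'})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(outbuf.decode())

    cluster.shutdown()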

  • June 23, 2013
    What I think about CephFS in OpenStack

    I recently had some really interesting questions that led to some nice discussions.
    Since I received the same question twice, I thought it might be good to share the matter with the community.

    The question was pretty simple and obviously the context…

  • June 20, 2013
    v0.61.4 released

    We have resolved a number of issues that v0.61.x Cuttlefish users have been hitting and have prepared another point release, v0.61.4.  This release fixes a rare data corruption during power cycle when using the XFS file system, a few monitor sync problems, several issues with ceph-disk and ceph-deploy on RHEL/CentOS, and a problem with OSD […]

  • June 12, 2013
    v0.64 released

    A new development release of Ceph is out. Notable changes include: osd: monitor both front and back interfaces; osd: verify both front and back network are working before rejoining cluster; osd: fix memory/network inefficiency during deep scrub; osd: fix incorrect mark-down of osds; mon: fix start fork behavior; mon: fix election timeout; mon: better trim/compaction […]

  • June 11, 2013
    New Ceph Backend to Lower Disk Requirements

    I get a fair number of questions on the current Ceph blueprints, especially those coming from the community. Loic Dachary, one of the owners of the Erasure Encoding blueprint, has done a great job taking a look at some of the issues at hand. When evaluating Ceph to run a new storage service, the replication factor […]

  • June 11, 2013
    Ceph RBD Online Resize

    Extend an RBD drive with libvirt and XFS

    First, resize the device on the physical host.

    Get the current size:

    $ qemu-img info -f rbd "rbd:rbd/myrbd"

    Be careful: you must specify a larger size, since shrinking a volume is destructive for the filesystem.

    $ qemu-img resize -f rbd "rbd:rbd/myrbd" 600G

    List the devices defined for myVM:

    $ virsh domblklist myVM

    Resize the libvirt block device, using the target name reported by domblklist (here vdb):

    $ virsh blockresize --domain myVM --path vdb --size 600G
    $ rbd info rbd/myrbd

    Extend XFS on the guest:

    $ xfs_growfs /mnt/rbd/myrbd

    Extend RBD with the kernel module

    You need at least kernel 3.10 on the Ceph client to support resizing.
    For earlier versions, look at http://dachary.org/?p=2179
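
    The commands below assume the image is already mapped with the kernel RBD client and mounted at /mnt/rbd/myrbd. A minimal sketch of that setup (the udev device path may differ on your system):

    $ rbd map rbd/myrbd
    $ mkdir -p /mnt/rbd/myrbd
    $ mount /dev/rbd/rbd/myrbd /mnt/rbd/myrbd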

    Get the current size:

    $ rbd info rbd/myrbd

    Just do:

    $ rbd resize rbd/myrbd --size 600000
    $ xfs_growfs /mnt/rbd/myrbd

    Also, since Cuttlefish you can’t shrink a block device without specifying an additional option (--allow-shrink).
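
    For example, an explicit shrink (destructive for the filesystem, as noted above) would look something like this; the size value is purely illustrative:

    $ rbd resize rbd/myrbd --size 500000 --allow-shrink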

  • June 6, 2013
    v0.61.3 released

    This is a much-anticipated point release for the v0.61 Cuttlefish stable series.  It resolves a number of issues, primarily with monitor stability and leveldb trimming.  All v0.61.x users are encouraged to upgrade. Upgrading from bobtail: There is one known problem with mon upgrades from bobtail.  If the ceph-mon conversion on startup is aborted or fails […]

  • June 3, 2013
    Ceph integration in OpenStack: Grizzly update and roadmap for Havana

    What a perfect picture, a Cephalopod smoking a cigar! Updates!
    The OpenStack developer summit was great, and obviously one of the most exciting sessions was the one about the Ceph integration with OpenStack.
    I had the great pleasure to attend this sess…

  • May 29, 2013
    v0.63 released

    Another sprint, and v0.63 is here.  This release features librbd improvements, mon fixes, osd robustness, and packaging fixes. Notable features in this release include: librbd: parallelize delete, rollback, flatten, copy, resize; librbd: ability to read from local replicas; osd: resurrect partially deleted PGs; osd: prioritize recovery for degraded PGs; osd: fix internal heartbeat timeouts when scrubbing very […]

  • May 23, 2013
    State of the union: Ceph and Citrix

    Since last month saw huge amounts of OpenStack news coming out of the Developer Summit in Portland, I thought it might be worth spending some time on CloudStack and its ecosystem this month. With the Citrix Synergy event in full swing, a ‘State of the Union’ with respect to Ceph and Citrix is probably the […]

  • May 16, 2013
    ViPR: A software-defined storage mullet?

    Almost every few weeks, new storage products are announced by competitors and I generally avoid commenting on them. But EMC’s ViPR announcement contains attempts to perform both marketing and technical sleight of hand around software-defined storage that potentially do much to slow down the inevitable change that is coming to the storage market. While EMC […]
