The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • June 11, 2013
    New Ceph Backend to Lower Disk Requirements

    I get a fair number of questions on the current Ceph blueprints, especially those coming from the community. Loic Dachary, one of the owners of the Erasure Encoding blueprint, has done a great job taking a look at some of issues at hand. When evaluating Ceph to run a new storage service, the replication factor […]
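
    To see why an erasure-coded backend lowers raw disk requirements, here is a rough back-of-the-envelope comparison; the k=8/m=4 parameters are illustrative assumptions, not taken from the blueprint.

    # 3x replication: 1 TB of data occupies 3 TB of raw disk.
    # Erasure coding with k=8 data chunks and m=4 coding chunks:
    $ echo "scale=2; (8+4)/8" | bc
    1.50
    # i.e. the same 1 TB of data occupies roughly 1.5 TB of raw disk.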

  • June 11, 2013
    Ceph RBD Online Resize

    Extend an RBD drive with libvirt and XFS

    First, resize the device on the physical host.

    Get the current size:

    $ qemu-img info -f rbd "rbd:rbd/myrbd"

    Be careful: you must specify a larger size, since shrinking a volume is destructive for the filesystem.

    $ qemu-img resize -f rbd "rbd:rbd/myrbd" 600G

    List the devices defined for myVM:

    $ virsh domblklist myVM

    Resize the libvirt block device:

    $ virsh blockresize --domain myVM --path vdb --size 600G
    $ rbd info rbd/myrbd

    Extend XFS on the guest:

    $ xfs_growfs /mnt/rbd/myrbd
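
    To confirm the guest filesystem picked up the new size, a quick check (assuming the same mount point as above):

    $ df -h /mnt/rbd/myrbd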

    Extend RBD with the kernel module

    You need at least kernel 3.10 on the Ceph client to support online resizing.
    For previous versions, see http://dachary.org/?p=2179
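
    A quick way to check the client side (a minimal sketch; the rbd kernel module must also be available):

    $ uname -r
    $ modinfo rbd | head -n 3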

    Get the current size:

    $ rbd info rbd/myrbd

    Just do:

    $ rbd resize rbd/myrbd --size 600000
    $ xfs_growfs /mnt/rbd/myrbd

    Also, since Cuttlefish you can't shrink a block device without specifying an additional option (--allow-shrink).
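
    For completeness, a shrink would then look something like this (destructive for the filesystem, as noted above; the target size here is just an example):

    $ rbd resize rbd/myrbd --size 400000 --allow-shrink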

  • June 6, 2013
    v0.61.3 released

    This is a much-anticipated point release for the v0.61 Cuttlefish stable series.  It resolves a number of issues, primarily with monitor stability and leveldb trimming.  All v0.61.x users are encouraged to upgrade. Upgrading from bobtail: There is one known problem with mon upgrades from bobtail.  If the ceph-mon conversion on startup is aborted or fails […]

  • June 3, 2013
    Ceph integration in OpenStack: Grizzly update and roadmap for Havana

    What a perfect picture, a Cephalopod smoking a cigar! Updates!
    The OpenStack developer summit was great and obviously one of the most exciting sessions was the one about the Ceph integration with OpenStack.
    I had the great pleasure to attend this sess…

  • May 29, 2013
    v0.63 released

    Another sprint, and v0.63 is here.  This release features librbd improvements, mon fixes, osd robustness, and packaging fixes. Notable features in this release include: librbd: parallelize delete, rollback, flatten, copy, resize; librbd: ability to read from local replicas; osd: resurrect partially deleted PGs; osd: prioritize recovery for degraded PGs; osd: fix internal heartbeat timeouts when scrubbing very […]

  • May 23, 2013
    State of the union: Ceph and Citrix

    Since last month saw huge amounts of OpenStack news coming out of the Developer Summit in Portland, I thought it might be worth spending some time on CloudStack and its ecosystem this month. With the Citrix Synergy event in full swing, a ‘State of the Union’ with respect to Ceph and Citrix is probably the […]

  • May 16, 2013
    ViPR: A software-defined storage mullet?

    Almost every few weeks, new storage products are announced by competitors and I generally avoid commenting on them. But EMC’s ViPR announcement contains attempts to perform both marketing and technical sleight of hand around software-defined storage that potentially do much to slow down the inevitable change that is coming to the storage market. While EMC […]

  • May 16, 2013
    Deploying Ceph with ceph-deploy

    If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs) that relies only on ssh, sudo, and some Python to get the job done. […]
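
    As a rough sketch of what a minimal ceph-deploy run can look like (hostnames and the disk path are hypothetical, and the exact subcommands vary between versions):

    $ ceph-deploy new mon1
    $ ceph-deploy install mon1 osd1
    $ ceph-deploy mon create mon1
    $ ceph-deploy gatherkeys mon1
    $ ceph-deploy osd create osd1:/dev/sdb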

  • May 14, 2013
    v0.62 released

    This is the first release after cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which are backported to v0.61). v0.63 is due out in 2 weeks and will have more goodness. mon: fix validation of mds ids from CLI commands […]

  • May 14, 2013
    Incremental Snapshots with RBD

    While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some […]
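
    One possible sketch of an incremental-snapshot workflow with a recent rbd client (image and snapshot names are hypothetical, and this is not necessarily the exact approach described in the post):

    $ rbd snap create rbd/myimage@snap1
    # ... time passes, writes happen ...
    $ rbd snap create rbd/myimage@snap2
    $ rbd export-diff --from-snap snap1 rbd/myimage@snap2 incremental.diff
    # replay the delta onto a copy of the image, e.g. at another site
    $ rbd import-diff incremental.diff rbd/myimage-backup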

  • May 14, 2013
    v0.61.2 released

    This release has only two changes: it disables a debug log by default that consumes disk space on the monitor, and fixes a bug with upgrading bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks. mon: fix conversion of stores with duplicated GV values; mon: disable […]

  • May 13, 2013
    Deploy a Ceph MDS server

    How to quickly deploy an MDS server.

    Assuming that /var/lib/ceph/mds/mds is the MDS data directory.

    Edit ceph.conf and add an MDS section like so:

    [mds]
    mds data = /var/lib/ceph/mds/mds.$id
    keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring

    [md…
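
    As a loose sketch of the steps that typically follow (the MDS id, data directory, and capability flags here are illustrative assumptions, not necessarily the post's exact values):

    $ mkdir -p /var/lib/ceph/mds/mds.0
    $ ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow rwx' \
          -o /var/lib/ceph/mds/mds.0/mds.0.keyring
    $ service ceph start mds.0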
