The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • July 11, 2013
    Ceph Cuttlefish VS Bobtail Part 3: 128K RBD Performance

    Contents Introduction Sequential Writes Random Writes Sequential Reads Random Reads Conclusion INTRODUCTION I’m amazed you are still here! You must be a glutton for punishment (or you haven’t read part 1 and part 2 yet!) If you haven’t run away in fear yet, then prepare yourself! Today we will be looking at how the Ceph […]

  • July 10, 2013
    Ceph Cuttlefish VS Bobtail Part 2: 4K RBD Performance

    Contents Introduction Sequential Writes Random Writes Sequential Reads Random Reads Conclusion INTRODUCTION Welcome back! If you haven’t gotten a chance to read part 1 of our Ceph Cuttlefish VS Bobtail comparison, right now is a great time. Today, we will be looking at how the Ceph Kernel and QEMU/KVM RBD implementations perform with 4K IOs […]

  • July 9, 2013
    v0.66 released

    Our last development release before dumpling is here! The main improvements here are with monitor performance and OSD pg log rewrites to speed up peering. In other news, the dumpling feature freeze is upon us. Over the next month we will be focusing entirely on stabilization and testing. There will be a release candidate out in a week or […]

  • July 9, 2013
    Ceph Cuttlefish VS Bobtail Part 1: Introduction and RADOS Bench

    Contents Introduction System Setup Test Setup 4KB Results 128KB Results 4MB Results Conclusion INTRODUCTION Hello readers! You probably thought you had finally gotten rid of me after the strict radio silence I’ve been keeping since the last performance tuning article. No such luck I’m afraid! In reality we here at Inktank have been busy busy […]

  • July 8, 2013
    XenServer Support for RBD

    Ceph has been enjoying tremendous success in the OpenStack and CloudStack communities, and a large part of this is due to the ease of using the Ceph block device (RBD) with KVM or the open-source Xen hypervisor. This was achieved through patches made by Josh Durgin, an Inktank engineer, to Qemu […]

  • June 25, 2013
    v0.65 released

    Our next development release v0.65 is out, with a few big changes.  First and foremost, this release includes a complete revamp of the architecture for the command line interface in order to lay the groundwork for our ongoing REST management API work.  The ‘ceph’ command line tool is now a thin python wrapper around librados. […]

  • June 23, 2013
    What I think about CephFS in OpenStack

    I recently had some really interesting questions that led to some nice discussions.
    Since I received the same question twice, I thought it might be good to share the matter with the community.

    The question was pretty simple and obviously the context…

  • June 20, 2013
    v0.61.4 released

    We have resolved a number of issues that v0.61.x Cuttlefish users have been hitting and have prepared another point release, v0.61.4.  This release fixes a rare data corruption during power cycle when using the XFS file system, a few monitor sync problems, several issues with ceph-disk and ceph-deploy on RHEL/CentOS, and a problem with OSD […]

  • June 12, 2013
    v0.64 released

    A new development release of Ceph is out. Notable changes include: osd: monitor both front and back interfaces osd: verify both front and back network are working before rejoining cluster osd: fix memory/network inefficiency during deep scrub osd: fix incorrect mark-down of osds mon: fix start fork behavior mon: fix election timeout mon: better trim/compaction […]

  • June 11, 2013
    New Ceph Backend to Lower Disk Requirements

    I get a fair number of questions on the current Ceph blueprints, especially those coming from the community. Loic Dachary, one of the owners of the Erasure Encoding blueprint, has done a great job taking a look at some of the issues at hand. When evaluating Ceph to run a new storage service, the replication factor […]

  • June 11, 2013
    Ceph RBD Online Resize

    Extend an RBD drive with libvirt and XFS

    First, resize the device on the physical host.

    Get the current size:

    $ qemu-img info -f rbd "rbd:rbd/myrbd"

    Be careful: you must specify a bigger size; shrinking a volume is destructive for the filesystem.

    $ qemu-img resize -f rbd "rbd:rbd/myrbd" 600G

    List the devices defined for myVM:

    $ virsh domblklist myVM

    Resize the libvirt block device:

    $ virsh blockresize --domain myVM --path vdb --size 600G
    $ rbd info rbd/myrbd

    Extend XFS on the guest:

    $ xfs_growfs /mnt/rbd/myrbd
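
    To verify that the guest really sees the new capacity, a quick check such as the following should do (the mount point /mnt/rbd/myrbd is simply the one used in this example):

    $ df -h /mnt/rbd/myrbd   # the reported size should now reflect the enlarged device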

    Extend an RBD device with the kernel module

    You need at least kernel 3.10 on the Ceph client to support online resizing.
    For previous versions, look at http://dachary.org/?p=2179
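
    The steps below assume the image is already mapped with the kernel RBD client and mounted. If it is not, a minimal setup looks roughly like this (the device name /dev/rbd0 and the mount point are assumptions for this example):

    $ rbd map rbd/myrbd               # map the image; the first mapped image typically appears as /dev/rbd0
    $ mkdir -p /mnt/rbd/myrbd
    $ mount /dev/rbd0 /mnt/rbd/myrbd  # mount the existing XFS filesystem at the path used below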

    Get the current size:

    $ rbd info rbd/myrbd

    Just do:

    $ rbd resize rbd/myrbd --size 600000
    $ xfs_growfs /mnt/rbd/myrbd

    Also, since Cuttlefish you can’t shrink a block device without specifying an additional option (--allow-shrink).
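
    For completeness, a shrink would therefore look something like this (the target size is arbitrary here, and shrinking remains destructive for the filesystem on the device):

    $ rbd resize rbd/myrbd --size 500000 --allow-shrink   # the flag is required to confirm the shrink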

  • June 6, 2013
    v0.61.3 released

    This is a much-anticipated point release for the v0.61 Cuttlefish stable series.  It resolves a number of issues, primarily with monitor stability and leveldb trimming.  All v0.61.x users are encouraged to upgrade. Upgrading from bobtail: There is one known problem with mon upgrades from bobtail.  If the ceph-mon conversion on startup is aborted or fails […]
