The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • May 16, 2013
    Deploying Ceph with ceph-deploy

    If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs) that relies only on ssh, sudo, and some Python to get the job done. […]

  • May 14, 2013
    v0.62 released

    This is the first release after cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which are backported to v0.61). v0.63 is due out in 2 weeks and will have more goodness. mon: fix validation of mds ids from CLI commands […]

  • May 14, 2013
    Incremental Snapshots with RBD

    While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some […]

  • May 14, 2013
    v0.61.2 released

    This release has only two changes: it disables a debug log by default that consumes disk space on the monitor, and fixes a bug with upgrading bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks. mon: fix conversion of stores with duplicated GV values mon: disable […]

  • May 13, 2013
    Deploy a Ceph MDS server

How to quickly deploy an MDS server.

Assuming that /var/lib/ceph/mds/mds is the MDS data directory.

Edit ceph.conf and add an MDS section like so:

    [mds]
    mds data = /var/lib/ceph/mds/mds.$id
    keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring

    [md…
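
    With the [mds] section in place, the remaining steps are typically to create the data directory, generate a keyring at the path the config expects, and start the daemon. A minimal sketch of those steps is below; the daemon id `0` and the capability set are illustrative examples, not taken from the post, so adjust them to your cluster's policy:

    ```shell
    # Create the data directory that the "mds data" setting points at
    # (id "0" is an example; $id expands to the daemon's id)
    mkdir -p /var/lib/ceph/mds/mds.0

    # Generate an auth key for the daemon and write it where the [mds]
    # section's "keyring" setting expects it; the capabilities shown are
    # a common example set, not the only valid one
    ceph auth get-or-create mds.0 \
        mds 'allow' osd 'allow rwx' mon 'allow rwx' \
        -o /var/lib/ceph/mds/mds.0/mds.0.keyring

    # Start the daemon (sysvinit-style, as in the Cuttlefish era)
    service ceph start mds.0
    ```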

  • May 11, 2013
    What a Year 1!

Back in January I posted about how Inktank’s momentum was accelerating. Well, to say this trend is continuing would be a gross understatement. The Inktank team continues to execute at a blinding pace and the world keeps on noticing. For example: the Community and Marketing (we are hiring) teams killed it at the OpenStack Summit. There […]

  • May 9, 2013
    Ceph Developer Summit – Summary and Session Videos

    Contents Introduction Opening Remarks Morning Sessions Ceph Management API Erasure Encoding as a Storage Backend RGW Geo-Replication and Disaster Recovery Afternoon Sessions RADOS Gateway refactor into library, internal APIs Chef Cookbook Consolidation & ceph-deploy Improvements Enforced bucket-level quotas in RGW Testing, build/release & Teuthology Client Security for CephFS RADOS namespaces, CRUSH language extension, CRUSH library […]

  • May 9, 2013
    v0.61.1 released

This release is a small update to Cuttlefish that fixes a problem when upgrading a bobtail cluster that had snapshots. Please use this instead of v0.61 if you are upgrading to avoid possible ceph-osd daemon crashes. There is also a fix for a problem deploying monitors and generating new authentication keys. Notable changes: osd: handle upgrade […]

  • May 7, 2013
    Use existing RBD images and put it into Glance

The title of the article is not that explicit; actually, I had trouble finding a proper one, so let me clarify a bit. Here is the context: I was wondering whether Glance was capable of converting images within its store. The quick answer is no, but I thin…

  • May 7, 2013
    Ceph Cuttlefish Release has Arrived!

Today marks another milestone for Ceph with the release of Cuttlefish (v0.61), the third stable release of Ceph. Inktank’s development efforts for the Cuttlefish release have been focused around Red Hat support and making it easier to install and configure Ceph while improving the operational ease of integrating with 3rd party tools, such as provisioning […]

  • May 7, 2013
    v0.61 “Cuttlefish” released

Spring has arrived (at least for some of us), and a new stable release of Ceph is ready. Thank you to everyone who has contributed to this release! Bigger ticket items since v0.56.x “Bobtail”: ceph-deploy, our new deployment tool to replace ‘mkcephfs’; robust RHEL/CentOS support; ceph-disk: many improvements to support hot-plugging devices via chef and […]

  • May 3, 2013
    v0.56.5 released

    Behold, another Bobtail update!  This one serves three main purposes: it fixes a small issue with monitor features that is important when upgrading from argonaut -> bobtail -> cuttlefish, it backports many changes to the ceph-disk helper scripts that allow bobtail clusters to be deployed with the new ceph-deploy tool or our chef cookbooks, and it […]