The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • June 25, 2013
    v0.65 released

    Our next development release v0.65 is out, with a few big changes.  First and foremost, this release includes a complete revamp of the command line interface architecture to lay the groundwork for our ongoing REST management API work.  The ‘ceph’ command line tool is now a thin Python wrapper around librados. […]
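
    To give a rough sense of what “thin Python wrapper around librados” means: the python-rados bindings expose a mon_command() call that sends a JSON-formatted command to the monitors, which is essentially what the new CLI does. The sketch below is a minimal illustration, assuming a local ceph.conf and keyring; the ‘status’ command is just an example.

        import json
        import rados  # python-rados bindings shipped with Ceph

        # Connect using a local ceph.conf/keyring (the path is an example).
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()

        # Commands are JSON objects; 'prefix' names the command to run.
        cmd = json.dumps({'prefix': 'status', 'format': 'json'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        print(ret, outbuf)

        cluster.shutdown()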

  • June 20, 2013
    v0.61.4 released

    We have resolved a number of issues that v0.61.x Cuttlefish users have been hitting and have prepared another point release, v0.61.4.  This release fixes a rare data corruption during power cycle when using the XFS file system, a few monitor sync problems, several issues with ceph-disk and ceph-deploy on RHEL/CentOS, and a problem with OSD […]

  • June 12, 2013
    v0.64 released

    A new development release of Ceph is out. Notable changes include:

      - osd: monitor both front and back interfaces
      - osd: verify both front and back network are working before rejoining cluster
      - osd: fix memory/network inefficiency during deep scrub
      - osd: fix incorrect mark-down of osds
      - mon: fix start fork behavior
      - mon: fix election timeout
      - mon: better trim/compaction […]

  • June 11, 2013
    New Ceph Backend to Lower Disk Requirements

    I get a fair number of questions on the current Ceph blueprints, especially those coming from the community. Loic Dachary, one of the owners of the Erasure Encoding blueprint, has done a great job taking a look at some of the issues at hand. When evaluating Ceph to run a new storage service, the replication factor […]
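
    The disk-requirements argument comes down to simple arithmetic: with N-way replication the cluster stores N raw bytes per usable byte, while an erasure code with k data chunks and m coding chunks stores (k+m)/k. A small sketch of that comparison, with purely illustrative parameters:

        # Back-of-envelope raw-to-usable overhead: replication vs. erasure coding.
        # The parameters below are illustrative, not a recommended Ceph layout.

        def replication_overhead(copies):
            """Raw bytes stored per usable byte with N full copies."""
            return float(copies)

        def erasure_overhead(k, m):
            """Raw bytes per usable byte with k data chunks and m coding chunks."""
            return (k + m) / float(k)

        print(replication_overhead(3))   # 3.0 -> 200% extra raw capacity
        print(erasure_overhead(10, 4))   # 1.4 ->  40% extra raw capacity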

  • June 6, 2013
    v0.61.3 released

    This is a much-anticipated point release for the v0.61 Cuttlefish stable series.  It resolves a number of issues, primarily with monitor stability and leveldb trimming.  All v0.61.x users are encouraged to upgrade. Upgrading from bobtail: There is one known problem with mon upgrades from bobtail.  If the ceph-mon conversion on startup is aborted or fails […]

  • May 29, 2013
    v0.63 released

    Another sprint, and v0.63 is here.  This release features librbd improvements, mon fixes, osd robustness, and packaging fixes. Notable features in this release include:

      - librbd: parallelize delete, rollback, flatten, copy, resize
      - librbd: ability to read from local replicas
      - osd: resurrect partially deleted PGs
      - osd: prioritize recovery for degraded PGs
      - osd: fix internal heartbeat timeouts when scrubbing very […]

  • May 23, 2013
    State of the union: Ceph and Citrix

    Since last month saw huge amounts of OpenStack news coming out of the Developer Summit in Portland, I thought it might be worth spending some time on CloudStack and its ecosystem this month. With the Citrix Synergy event in full swing, a ‘State of the Union’ with respect to Ceph and Citrix is probably the […]

  • May 16, 2013
    Deploying Ceph with ceph-deploy

    If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju, you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs) that relies only on ssh, sudo, and some Python to get the job done. […]
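
    For readers who have not tried it yet, the typical flow is only a handful of ceph-deploy invocations run from an admin node. The sketch below drives them from Python for consistency with the other examples on this page; the hostnames and disk path are placeholders, and the exact subcommands may vary between ceph-deploy versions.

        import subprocess

        # Placeholder hosts and disk; the admin node needs ssh + sudo access to them.
        mon_host = 'mon1'
        osd_host = 'osd1'
        osd_disk = '/dev/sdb'

        steps = [
            ['ceph-deploy', 'new', mon_host],                # write an initial ceph.conf
            ['ceph-deploy', 'install', mon_host, osd_host],  # install packages over ssh
            ['ceph-deploy', 'mon', 'create', mon_host],      # bootstrap the monitor
            ['ceph-deploy', 'gatherkeys', mon_host],         # collect bootstrap keys
            ['ceph-deploy', 'osd', 'prepare', '%s:%s' % (osd_host, osd_disk)],
            ['ceph-deploy', 'osd', 'activate', '%s:%s' % (osd_host, osd_disk)],
        ]

        for step in steps:
            subprocess.check_call(step)  # stop at the first failing step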

  • May 14, 2013
    v0.62 released

    This is the first release after Cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which are backported to v0.61). v0.63 is due out in 2 weeks and will have more goodness.

      - mon: fix validation of mds ids from CLI commands […]

  • May 14, 2013
    Incremental Snapshots with RBD

    While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as a data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some […]
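
    Snapshots themselves can be taken with the rbd CLI or through the python-rbd bindings, while the incremental transfer is handled by the CLI’s export-diff / import-diff commands. A minimal snapshot sketch, assuming a pool named ‘rbd’ and an image named ‘vm-disk-1’ (both placeholders):

        import rados
        import rbd  # python-rbd bindings

        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        ioctx = cluster.open_ioctx('rbd')        # pool name is an example

        image = rbd.Image(ioctx, 'vm-disk-1')    # image name is an example
        image.create_snap('backup-2013-05-14')   # point-in-time snapshot
        print([s['name'] for s in image.list_snaps()])
        image.close()

        ioctx.close()
        cluster.shutdown()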

  • May 14, 2013
    v0.61.2 released

    This release has only two changes: it disables a debug log by default that consumes disk space on the monitor, and fixes a bug with upgrading bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks.

      - mon: fix conversion of stores with duplicated GV values
      - mon: disable […]

  • May 9, 2013
    Ceph Developer Summit – Summary and Session Videos

    Contents:
      - Introduction
      - Opening Remarks
      - Morning Sessions
        - Ceph Management API
        - Erasure Encoding as a Storage Backend
        - RGW Geo-Replication and Disaster Recovery
      - Afternoon Sessions
        - RADOS Gateway refactor into library, internal APIs
        - Chef Cookbook Consolidation & ceph-deploy Improvements
        - Enforced bucket-level quotas in RGW
        - Testing, build/release & Teuthology
        - Client Security for CephFS
        - RADOS namespaces, CRUSH language extension, CRUSH library […]