Planet Ceph

Aggregated news from external sources

  • June 3, 2013
    Ceph integration in OpenStack: Grizzly update and roadmap for Havana

    What a perfect picture, a Cephalopod smoking a cigar! Updates!
    The OpenStack developer summit was great, and obviously one of the most exciting sessions was the one about the Ceph integration with OpenStack.
    I had the great pleasure to attend this sess…

  • May 29, 2013
    v0.63 released

    Another sprint, and v0.63 is here. This release features librbd improvements, mon fixes, osd robustness, and packaging fixes. Notable features in this release include:

      librbd: parallelize delete, rollback, flatten, copy, resize
      librbd: ability to read from local replicas
      osd: resurrect partially deleted PGs
      osd: prioritize recovery for degraded PGs
      osd: fix internal heartbeat timeouts when scrubbing very […]

  • May 23, 2013
    State of the union: Ceph and Citrix

    Since last month saw huge amounts of OpenStack news coming out of the Developer Summit in Portland, I thought it might be worth spending some time on CloudStack and its ecosystem this month. With the Citrix Synergy event in full swing, a ‘State of the Union’ with respect to Ceph and Citrix is probably the […]

  • May 16, 2013
    ViPR: A software-defined storage mullet?

    Every few weeks, new storage products are announced by competitors, and I generally avoid commenting on them. But EMC’s ViPR announcement attempts both marketing and technical sleight of hand around software-defined storage that could do much to slow down the inevitable change that is coming to the storage market. While EMC […]

  • May 16, 2013
    Deploying Ceph with ceph-deploy

    If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs) that relies only on ssh, sudo, and some Python to get the job done. […]
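
    A minimal sketch of what a typical ceph-deploy run looks like from the admin host; the hostnames (node1, node2, node3) and the data disk (/dev/sdb) are placeholders, so substitute your own:

    ceph-deploy new node1                   # write an initial ceph.conf and monitor keyring
    ceph-deploy install node1 node2 node3   # install the Ceph packages over ssh/sudo
    ceph-deploy mon create node1            # create and start the first monitor
    ceph-deploy gatherkeys node1            # fetch the bootstrap keys back to the admin host
    ceph-deploy osd create node2:/dev/sdb   # prepare and activate an OSD on a data disk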

  • May 14, 2013
    v0.62 released

    This is the first release after cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which are backported to v0.61). v0.63 is due out in 2 weeks and will have more goodness. Notable changes include:

      mon: fix validation of mds ids from CLI commands […]

  • May 14, 2013
    Incremental Snapshots with RBD

    While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as a data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some […]
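
    One way incremental snapshots are typically shipped between clusters is with rbd export-diff and import-diff; a minimal sketch with hypothetical pool and image names, assuming the destination image already exists and holds the first snapshot:

    rbd snap create rbd/vm-disk@snap1                                   # first snapshot
    rbd snap create rbd/vm-disk@snap2                                   # later snapshot, after more writes
    rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 delta-1-2.diff  # export only the changes between the two snapshots
    rbd import-diff delta-1-2.diff backup/vm-disk                       # replay the delta onto the copy at the backup site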

  • May 14, 2013
    v0.61.2 released

    This release has only two changes: it disables, by default, a debug log that consumes disk space on the monitor, and it fixes a bug when upgrading bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks. Notable changes:

      mon: fix conversion of stores with duplicated GV values
      mon: disable […]

  • May 13, 2013
    Deploy a Ceph MDS server

    How to quickly deploy an MDS server.

    Assuming that /var/lib/ceph/mds/mds is the MDS data directory.

    Edit ceph.conf and add an MDS section like so:

    [mds]
    mds data = /var/lib/ceph/mds/mds.$id
    keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring

    [md…
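
    Beyond the ceph.conf entry, bringing the daemon up typically means creating the data directory, generating a key for it, and starting it. A minimal sketch, assuming a hypothetical MDS id of 0 and the usual MDS auth caps (adjust both for your cluster):

    mkdir -p /var/lib/ceph/mds/mds.0
    ceph auth get-or-create mds.0 mds 'allow' osd 'allow *' mon 'allow rwx' \
        -o /var/lib/ceph/mds/mds.0/mds.0.keyring   # write the key to the keyring path set in ceph.conf
    service ceph start mds.0                       # sysvinit-style start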

  • May 11, 2013
    What a Year 1!

    Back in January I posted about how Inktank’s momentum was accelerating. Well, to say this trend is continuing would be a gross understatement. The Inktank team continues to execute at a blinding pace and the world keeps on noticing. For example:

      • The Community and Marketing (we are hiring) teams killed it at the OpenStack Summit. There […]

  • May 9, 2013
    Ceph Developer Summit – Summary and Session Videos

    Contents:
      Introduction
      Opening Remarks
      Morning Sessions: Ceph Management API; Erasure Encoding as a Storage Backend; RGW Geo-Replication and Disaster Recovery
      Afternoon Sessions: RADOS Gateway refactor into library, internal APIs; Chef Cookbook Consolidation & ceph-deploy Improvements; Enforced bucket-level quotas in RGW; Testing, build/release & Teuthology; Client Security for CephFS; RADOS namespaces, CRUSH language extension, CRUSH library […]

  • May 9, 2013
    v0.61.1 released

    This release is a small update to Cuttlefish that fixes a problem when upgrading a bobtail cluster that had snapshots. Please use this instead of v0.61 if you are upgrading, to avoid possible ceph-osd daemon crashes. There is also a fix for a problem deploying monitors and generating new authentication keys. Notable changes:

      osd: handle upgrade […]
