The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • March 5, 2013
    CephFS MDS Status Discussion

    There have been a lot of questions lately about the current status of the Ceph MDS and when to expect a stable release. Inktank has been having some internal discussions around CephFS release development, and I’d like to share them with you and ask for feedback! A couple quick notes: first, this blog post is […]

  • March 5, 2013
    v0.58 released

It’s been two weeks and v0.58 is baked. Notable changes since v0.57 include: mon: rearchitected to utilize single instance of paxos and a key/value store (Joao Luis); librbd: fixed some locking issues with flatten (Josh Durgin); rbd: udevadm settle on map/unmap to avoid various races (Dan Mick); osd: move pg info, log into leveldb (== […]

  • March 1, 2013
    Results From the Ceph Census

    Hi! From February 13-18, we ran our very first Ceph Census. The purpose of this survey was to get a sense of how many Ceph clusters are out there, what they are used for, and which technologies they are used alongside. The Census was announced on the ceph-devel and ceph-users mailing lists and a link […]

  • February 21, 2013
    Deploying Ceph with Juju

    NOTE: This guide is out of date. Please see the included documentation on the more recent charms in the charmstore. The last few weeks have been very exciting for Inktank and Ceph. There have been a number of community examples of how people are deploying or using Ceph in the wild. From the ComodIT orchestration […]

  • February 19, 2013
    v0.57 released

We’ve been spending a lot of time working on bobtail-related stabilization and bug fixes, but our next development release v0.57 is finally here! Notable changes include: osd: default to libaio for the journal (some performance boost); osd: validate snap collections on startup; osd: ceph-filestore-dump tool for debugging; osd: deep-scrub omap keys/values; ceph tool: some CLI […]

  • December 13, 2012
    v0.55.1 released

There were some packaging and init script issues with v0.55, so a small point release is out. It fixes a few odds and ends: init-ceph: typo (new ‘fs type’ stuff was broken); debian: fixed conflicting upstart and sysvinit scripts; auth: fixed default auth settings; osd: dropped some broken asserts; librbd: fix locking bug, race with […]

  • December 13, 2012
    Deploying Ceph with a Crowbar

We have seen users deploying Ceph in a number of different ways, which is just plain awesome! I have spoken with people deploying with mkcephfs, ceph-deploy, Juju, Chef, and even the beginnings of some Puppet work. However, thanks to collaboration between Inktank and Dell there is a really solid deployment pathway using Dell’s Crowbar tool […]

  • December 6, 2012
    Monitoring a Ceph Cluster

    Ok, so you have gone through the five minute quickstart guide, learned a bit about Ceph, and stood a pre-production server up to test real data and operations…now what? Over the past couple of weeks we have gotten quite a few questions about monitoring and troubleshooting a Ceph cluster once you have one. Thankfully, our […]

  • December 4, 2012
    v0.55 released

We had originally planned to make v0.55 a long-term stable release, but a lot of last-minute changes and fixes went into this cycle, so we are going to wait another cycle and make v0.56 bobtail. A lot of work went into v0.55, however. If you aren’t running argonaut (v0.48.*), please give v0.55 a try […]

  • November 19, 2012
    Getting Involved with Ceph

    The Ceph community is made up of many individuals with a wide variety of backgrounds, from FOSS hacker to corporate architect. We feel very fortunate to have such a great, and active, community. Even more so lately, as we have been fielding a number of questions on how best to become a more active participant […]

  • November 14, 2012
    v0.54 released

The v0.54 development release is ready! This will be the last development release before v0.55 “bobtail,” our next long-term stable release, is ready. Notable changes this time around include: osd: use entire device if journal is a block device; osd: new caps structure (see below); osd: backfill target reservations (improve performance during recovery); ceph-fuse: many […]

  • November 9, 2012
    Ceph Performance Part 2: Write Throughput Without SSD Journals

INTRODUCTION Hello again! If you are new around these parts you may want to start out by reading the first article in this series, available here. For the rest of you, I am sure you are aware by now of the epic battle that Mark Shuttleworth and I are waging over who can […]
