The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • March 21, 2013
    Adding Support for RBD to stgt

    tgt, the Linux SCSI target framework (well, one of them), is an iSCSI target implementation whose goals include implementing a large portion of the SCSI emulation code in userland. tgt can provide iSCSI over Ethernet or iSER (iSCSI Extensions for RDMA) over InfiniBand. It can emulate various SCSI target types (really “command sets”): SBC (normal […]
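
    Since the post itself is truncated here, a minimal sketch of exposing an RBD image through tgt may help. Everything below is illustrative: the target name, pool/image name, and target id are hypothetical, and it assumes a tgt build that includes the rbd backing-store type (--bstype rbd) described in the post.

        import subprocess

        def run(cmd):
            """Run a tgtadm command, echoing it first; raise on failure."""
            print(" ".join(cmd))
            subprocess.check_call(cmd)

        TID = "1"                            # target id (hypothetical)
        IQN = "iqn.2013-03.com.example:rbd"  # target name (hypothetical)
        IMAGE = "rbd/myimage"                # pool/image (hypothetical)

        # Create an empty iSCSI target.
        run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "target",
             "--tid", TID, "-T", IQN])

        # Attach the RBD image as LUN 1 via the rbd backing-store type.
        run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "logicalunit",
             "--tid", TID, "--lun", "1", "--bstype", "rbd",
             "--backing-store", IMAGE])

        # Accept initiators from any address (fine for a quick local test).
        run(["tgtadm", "--lld", "iscsi", "--op", "bind", "--mode", "target",
             "--tid", TID, "-I", "ALL"])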

  • March 14, 2013
    Ceph “Office Hours” Announced

    If you watch the mailing lists closely, you will already have seen the new “Office Hours” announcement that went out yesterday. We are very excited about the opportunities this provides for our larger community to get directly involved. While Inktank may be the first, we’re hoping that many other organizations will help to “man the rails” […]

  • March 12, 2013
    Ceph Settles in to Aggressive Release Cadence

    Since its inception, Ceph has always had a fast-paced rolling release tempo. However, with the amount of adoption Ceph has seen over the last year (including continued integration work with several other open source projects), we wanted to move to a more reliable and predictable release schedule. The schedule can be divided into […]

  • March 7, 2013
    Ceph’s New Monitor Changes

    Back in May 2012, after numerous hours confined to a couple of planes since departing Lisbon, I arrived in Los Angeles to meet most of the folks from Inktank. During my stay I had the chance to meet everybody on the team, attend the company’s launch party, and start a major and well-deserved rework […]

  • March 5, 2013
    CephFS MDS Status Discussion

    There have been a lot of questions lately about the current status of the Ceph MDS and when to expect a stable release. Inktank has been having some internal discussions around CephFS release development, and I’d like to share them with you and ask for feedback! A couple quick notes: first, this blog post is […]

  • March 5, 2013
    v0.58 released

    It’s been two weeks and v0.58 is baked. Notable changes since v0.57 include:

    • mon: rearchitected to utilize a single instance of paxos and a key/value store (Joao Luis)
    • librbd: fixed some locking issues with flatten (Josh Durgin)
    • rbd: udevadm settle on map/unmap to avoid various races (Dan Mick)
    • osd: move pg info, log into leveldb (== […]

  • March 1, 2013
    Results From the Ceph Census

    Hi! From February 13-18, we ran our very first Ceph Census. The purpose of this survey was to get a sense of how many Ceph clusters are out there, what they are used for, and which technologies they are used alongside. The Census was announced on the ceph-devel and ceph-users mailing lists and a link […]

  • February 21, 2013
    Deploying Ceph with Juju

    NOTE: This guide is out of date. Please see the included documentation on the more recent charms in the charmstore. The last few weeks have been very exciting for Inktank and Ceph. There have been a number of community examples of how people are deploying or using Ceph in the wild. From the ComodIT orchestration […]

  • February 19, 2013
    v0.57 released

    We’ve been spending a lot of time working on bobtail-related stabilization and bug fixes, but our next development release v0.57 is finally here! Notable changes include:

    • osd: default to libaio for the journal (some performance boost)
    • osd: validate snap collections on startup
    • osd: ceph-filestore-dump tool for debugging
    • osd: deep-scrub omap keys/values
    • ceph tool: some CLI […]

  • February 14, 2013
    Deploying Ceph with ComodIT

    At this year’s Cloud Expo Europe I had a nice chat with the guys from ComodIT, who are making some interesting deployment and orchestration tools. They were kind enough to include their work in a blog post earlier this week and give me permission to replicate it here for your consumption. As always, if any […]

  • February 14, 2013
    v0.56.3 released

    We’ve fixed an important bug that a few users were hitting with unresponsive OSDs and internal heartbeat timeouts. This, along with a range of less critical fixes, was sufficient to justify another point release. Any production users should upgrade. Notable changes include:

    • osd: flush peering work queue prior to start
    • osd: persist osdmap epoch for […]

  • February 13, 2013
    Ceph – The CRUSH Difference!

    So many people have been asking us for more details on CRUSH that I thought it would be worthwhile to share more about it on our blog. If you have not heard of CRUSH, it stands for “Controlled Replication Under Scalable Hashing”. CRUSH is the pseudo-random data placement algorithm that efficiently distributes object replicas across a […]
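
    CRUSH itself is far more sophisticated than this teaser can show (hierarchical buckets, weights, failure domains), but its core idea, that any client can compute an object's placement from a hash with no central lookup table, can be sketched briefly. The Python below is purely illustrative and is not Ceph's actual algorithm; the names and hash choice are assumptions.

        import hashlib

        def straw(object_name, replica, osd):
            """Deterministic 'straw length' for an (object, replica, osd) triple."""
            key = f"{object_name}:{replica}:{osd}".encode()
            return int.from_bytes(hashlib.md5(key).digest()[:8], "big")

        def place(object_name, osds, replicas=3):
            """Pick distinct OSDs for each replica: for every replica rank,
            each OSD draws a straw and the longest straw wins. Any client
            with the same OSD list computes the same placement."""
            chosen, available = [], list(osds)
            for r in range(replicas):
                winner = max(available, key=lambda o: straw(object_name, r, o))
                chosen.append(winner)
                available.remove(winner)  # replicas land on distinct OSDs
            return chosen

        osds = ["osd.%d" % i for i in range(10)]
        print(place("my_object", osds))   # same output on every client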
