Planet Ceph

Aggregated news from external sources

  • March 2, 2012
    The git repositories have moved

    We’ve renamed our ‘org’ on github.com, so the new URLs are

    github.com/ceph
    github.com/ceph/ceph.git
    github.com/ceph/ceph-client.git

    and so forth (instead of github.com/NewDreamNetwork).  You can update your git remote with som…
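    For example, assuming your remote is named ‘origin’ and tracks the main ceph repository, something like this should do it:

        git remote set-url origin git://github.com/ceph/ceph.git

    (or the git@github.com:ceph/ceph.git form if you push over ssh).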

  • January 23, 2012
    SCALE 10x slides

    My SCALE 10x talk went well.  For those who are interested, here are the slides in .odp format. Here is the PDF without the pretty and very effective builds.

  • October 11, 2011
    RBD Status Update

    Just a quick update on the current status of RBD. The main recent development is that librbd (the userspace library) can ack writes immediately (instead of waiting for them to actually commit), to better mimic the behavior of a normal disk. Why do this? A long long time ago, when you issued a write to […]

  • September 3, 2011
    Roadmap update

    We spent some time this week working on our technical roadmap for the next few months. It’s all been mostly translated into issues and priorities in the tracker (here’s a sorted priority list), but from that level of gory detail it’s hard to see the forest for the trees. At a high level, the basic […]

  • February 10, 2011
    SCALE 9x

    I’ll be giving a talk at SCALE 9x targeted toward system administrators and users.  It’ll be Sunday, February 27th at 4:30pm in the Century AB room.  Hope to see you there!
    UPDATE: Here are the slides, as ODF and PDF.

  • December 22, 2010
    RBD upstream updates

    QEMU-RBD
    The QEMU-RBD block device has been merged upstream into the QEMU project. QEMU-RBD was originally created by Christian Brunner, and is binary compatible with the Linux native RBD driver. It allows the creation of QEMU block devices that are striped over objects in RADOS — the Ceph distributed object store. As with the corresponding […]
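    For the curious, usage looks roughly like this (pool and image names here are placeholders):

        # create a 10GB image in the ‘data’ pool
        qemu-img create -f rbd rbd:data/myimage 10G

        # attach it to a guest
        qemu -drive format=rbd,file=rbd:data/myimage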

  • October 29, 2010
    RADOS Block Device merged for 2.6.37

    The Linux kernel merge window is open for v2.6.37, and RBD (RADOS block device) has finally been merged.  RBD lets you create a block device in Linux that is striped over objects stored in a Ceph distributed object store.  This basic approach gives you some nice features: “thin provisioning” — space isn’t used in the […]
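    With a recent rbd command-line tool, creating and mapping an image looks something like this (the image name is a placeholder):

        # create a 1GB image and map it as a local block device
        rbd create myimage --size 1024
        rbd map myimage

    The kernel then exposes it as /dev/rbd0 (or similar), usable like any other disk.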

  • March 19, 2010
    RBD: RADOS block driver

    Christian Brunner sent an initial implementation of ‘rbd’, a librados-based block driver for qemu/KVM, to the ceph-devel list last week.   A few minor nits aside, it looks pretty good and works well.  The basic idea is to stripe a VM block device over (by default) 4MB objects stored in the Ceph distributed object store.  This […]
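    To make the striping concrete: with the default 4MB objects, byte offset 10MB of the virtual disk falls in object 2 (10 / 4, rounded down), and a 1GB image spans 256 objects, each of which RADOS places and replicates independently.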

  • March 19, 2010
    Client merged for 2.6.34

    Linus merged the Ceph client for 2.6.34 this morning, which means the next kernel release will be able to mount a Ceph file system without any additional patches or modifications. This is a pretty big milestone for us, and we’re excited! The next few weeks will be spent hammering out client bugs and polishing the v0.20 […]
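    Once you’re running a 2.6.34 kernel, mounting should be as simple as something like this (the monitor address is a placeholder):

        mount -t ceph 1.2.3.4:/ /mnt/ceph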

  • September 22, 2009
    Ceph talk at LCA2010

    I’ll be giving a talk on Ceph at linux.conf.au 2010!  (Oddly enough, it’s in New Zealand this year, but I’m not complaining.)  I’ve heard great things about LCA, and am looking forward to being there. The talk will cover two general areas: Ceph’s RADOS object storage architecture, including some of its data processing features, and […]

  • March 12, 2009
    More configuration improvements

    We’ve updated the configuration framework (again) so that only a single configuration file is needed for the entire cluster. The ceph.conf file consists of a global section, a section for each daemon type (e.g., mon, mds, osd), and a section for each daemon instance (e.g., mon0, mds.foo, osd12).  This allows you to specify options in […]
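    A minimal sketch of what such a file might look like (hosts and paths are invented):

        [global]
            ; options common to every daemon go here

        [mon]
            mon data = /data/mon$id

        [mon0]
            host = alpha
            mon addr = 192.168.0.10:6789

        [osd]
            osd data = /data/osd$id

        [osd12]
            host = beta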

  • January 30, 2009
    Some performance comparisons

    I did a few basic tests comparing Ceph to NFS on a simple benchmark, a Linux kernel untar.  I tried to get as close as possible to an “apples to apples” comparison.  The same client machine is used for NFS and Ceph; another machine is either the NFS server or the Ceph MDS.  The same […]
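    The benchmark itself is nothing fancy; on each mount it amounts to something like this (the kernel version is arbitrary):

        cd /mnt/test
        time tar xjf ~/linux-2.6.28.tar.bz2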
