Planet Ceph

Aggregated news from external sources

  • August 5, 2009
    v0.12 released

    I’ve just tagged a v0.12 release, and sent the kernel client patchset off to the Linux kernel and fsdevel lists again.  There was a v0.11 a week ago as well that incorporated some earlier feedback from the kernel lists. Changes since v0.11: mapping_set_error on failed writepage; document correct debugfs mount point; simplify layout/striping ioctls; removed […]

  • July 16, 2009
    v0.10 released

    We’ve released v0.10.  The big items this time around: kernel client: some cleanup, unaligned memory access fixes; much debugging of MDS recovery: the kernel client will now correctly untar and compile a kernel with the MDS server running in a 60-second restart loop; a few misc MDS fixes; OSD recovery fixes; userspace client: many bug fixes, now quite […]

  • June 6, 2009
    RADOS snapshots

    Some interesting issues came up when we started considering how to expose the RADOS snapshot functionality to librados users.  The object store exposes a pretty low-level interface to control when objects are cloned (i.e. when an object snapshot is taken via the btrfs copy-on-write ioctls).  The basic design in Ceph is that the client provides […]
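
    For a concrete sense of what “exposing snapshots to librados users” ends up looking like from the client side, here is a minimal sketch using the librados C API as it exists today (which post-dates this post); the pool, object, and snapshot names are placeholders and error checking is omitted:

      /* Sketch: take a pool snapshot, write new data, then drop the snapshot.
       * Objects are cloned copy-on-write only when they are next written. */
      #include <rados/librados.h>

      int main(void)
      {
          rados_t cluster;
          rados_ioctx_t io;

          rados_create(&cluster, NULL);             /* connect as the default client */
          rados_conf_read_file(cluster, NULL);      /* default ceph.conf search path */
          rados_connect(cluster);

          rados_ioctx_create(cluster, "data", &io); /* "data" pool is a placeholder */

          rados_ioctx_snap_create(io, "before-change");
          rados_write(io, "someobject", "new contents", 12, 0);
          rados_ioctx_snap_remove(io, "before-change");

          rados_ioctx_destroy(io);
          rados_shutdown(cluster);
          return 0;
      }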

  • May 19, 2009
    The RADOS distributed object store

    The Ceph architecture can be pretty neatly broken into two key layers.  The first is RADOS, a reliable autonomic distributed object store, which provides an extremely scalable storage service for variably sized objects.  The Ceph file system is built on top of that underlying abstraction: file data is striped over objects, and the MDS (metadata […]
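
    To make the striping idea concrete: under the simplifying assumptions of one fixed-size object per stripe and a 4 MB object size (both assumptions for illustration; real layouts are configurable and also involve a stripe unit and count), mapping a file offset to an object is just integer arithmetic:

      /* Simplified sketch: map a file byte offset to (object index, offset
       * within that object).  4 MB objects and plain sequential striping are
       * assumptions for illustration, not Ceph's actual layout logic. */
      #include <stdint.h>
      #include <stdio.h>

      #define OBJECT_SIZE (4ULL * 1024 * 1024)

      int main(void)
      {
          uint64_t file_offset = 10ULL * 1024 * 1024;          /* byte 10 MB of a file */
          uint64_t object_index = file_offset / OBJECT_SIZE;   /* -> object 2 */
          uint64_t object_offset = file_offset % OBJECT_SIZE;  /* -> 2 MB into it */

          printf("object %llu, offset %llu\n",
                 (unsigned long long)object_index,
                 (unsigned long long)object_offset);
          return 0;
      }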

  • May 19, 2009
    v0.8 released

    Ceph v0.8 has been released.  Debian packages for amd64 and i386 have been built and there is a tarball, or you can pull the ‘master’ branch from Git.  This update has a lot of important protocol changes and corresponding performance improvements: client/MDS protocol simplification (faster, less fragile); online adjustment of data and/or […]

  • March 12, 2009
    More configuration improvements

    We’ve updated the configuration framework (again) so that only a single configuration file is needed for the entire cluster. The ceph.conf file consists of a global section, a section for each daemon type (e.g., mon, mds, osd), and a section for each daemon instance (e.g., mon0, mds.foo, osd12).  This allows you to specify options in […]
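
    A sketch of the layout being described; only the section structure follows the post, and the hosts, paths, and option names below are made-up placeholders rather than recommended values:

      ; hypothetical ceph.conf: one global section, one section per daemon
      ; type, and one section per daemon instance
      [global]
              pid file = /var/run/ceph/$name.pid

      [mon]
              mon data = /data/mon$id
      [mon0]
              host = alpha

      [mds]
              ; options shared by every MDS
      [mds.foo]
              host = beta

      [osd]
              osd data = /data/osd$id
      [osd12]
              host = gamma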

  • March 11, 2009
    dbench performance

    Yehuda and I did some performance tuning with dbench a couple weeks back and made some significant improvements.  Here are the rough numbers, before I forget.  We were testing on a simple client/server setup to make a reasonable comparison with NFS: single server on a single SATA disk, and a single client. Since we were […]

  • March 10, 2009
    v0.7 release

    I’ve tagged a v0.7 release.  Probably the biggest change in this release (aside from the usual bug fixes and performance improvements) is the new start/stop and configuration framework.  Notably, the entire cluster configuration can be described by a single cluster.conf file that is shared by all nodes (distributed via scp or NFS or whatever) and used […]

  • March 6, 2009
    New configuration and startup framework

    Yehuda and I spent last week polishing his configuration framework and reworking the way everything is configured and started up.  I think the end result is pretty slick: There are now two configuration files.  The first, cluster.conf, defines which hosts participate in the cluster, which daemons run on which hosts, and what paths are used […]
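
    The excerpt doesn’t show the cluster.conf syntax itself, so the sketch below is purely hypothetical; it only illustrates the three things the file is said to capture (which hosts participate, which daemons each host runs, and which paths they use):

      ; hypothetical cluster.conf sketch -- not the real syntax
      [host alpha]
              mon0    /data/mon0
              mds0    /data/mds0

      [host beta]
              osd0    /data/osd0
              osd1    /data/osd1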

  • February 24, 2009
    Debian packages

    I’ve built some Debian packages for both the userspace daemons and the kernel module source.  Trying things out is now as simple as adding a few lines to your apt sources file and doing an apt-get install!  More info in the wiki.
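
    A sketch of the two steps being described; the repository URL below is a placeholder (the real lines are in the wiki), and the exact package names may differ:

      # 1. add the repository (placeholder URL; see the wiki for the real line)
      echo "deb http://ceph.example.com/debian unstable main" >> /etc/apt/sources.list

      # 2. install the userspace daemons; the kernel module source ships as a
      #    separate package, also named in the wiki
      apt-get update
      apt-get install ceph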

  • January 30, 2009
    Some performance comparisons

    I did a few basic tests comparing Ceph to NFS on a simple benchmark, a Linux kernel untar.  I tried to get as close as possible to an “apples to apples” comparison.  The same client machine is used for NFS and Ceph; another machine is either the NFS server or the Ceph MDS.  The same […]

  • January 20, 2009
    POSIX file system test suite

    The unstable client (with all of the async metadata changes) is passing the full POSIX file system test suite again (modulo the question of whether chown -1,-1 should be a no-op or update ctime).  We’re also surviving long dbench runs.  Progress!  I hope to push this all into the master branch after a bit more […]
