Planet Ceph

Aggregated news from external sources

  • March 12, 2009
    More configuration improvements

    We’ve updated the configuration framework (again) so that only a single configuration file is needed for the entire cluster. The ceph.conf file consists of a global section, a section for each daemon type (e.g., mon, mds, osd), and a section for each daemon instance (e.g., mon0, mds.foo, osd12).  This allows you to specify options in […]
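
    To make the layout concrete, here is a minimal sketch of what such a file might look like.  The section names follow the examples in the post; the option names, hosts, and paths are illustrative assumptions rather than the exact 2009 syntax.

        ; illustrative ceph.conf sketch -- option names, hosts, and paths are assumptions
        [global]
            ; options shared by every daemon in the cluster
        [mon]
            ; options for all monitors
        [mon0]
            host = node0            ; options for one particular monitor
        [mds]
            ; options for all metadata servers
        [mds.foo]
            host = node1
        [osd]
            ; options for all object storage daemons
        [osd12]
            host = node2
            osd data = /data/osd12  ; assumed option name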

  • March 11, 2009
    dbench performance

    Yehuda and I did some performance tuning with dbench a couple weeks back and made some significant improvements.  Here are the rough numbers, before I forget.  We were testing on a simple client/server setup to make a reasonable comparison with NFS: single server on a single SATA disk, and a single client. Since we were […]
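
    For reference, a run of this sort is typically launched with the target directory and client count on the dbench command line; the mount points and duration below are assumptions, not the exact parameters from these tests.

        # mount points and duration are assumptions
        dbench -D /mnt/ceph -t 60 1     # one client process against the Ceph mount
        dbench -D /mnt/nfs  -t 60 1     # same run against the NFS export for comparison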

  • March 10, 2009
    v0.7 release

    I’ve tagged a v0.7 release.  Probably the biggest change in this release (aside from the usual bug fixes and performance improvements) is the new start/stop and configuration framework.  Notably, the entire cluster configuration can be described by a single cluster.conf file that is shared by all nodes (distributed via scp or NFS or whatever) and used […]

  • March 6, 2009
    New configuration and startup framework

    Yehuda and I spent last week polishing his configuration framework and reworking the way everything is configured and started up.  I think the end result is pretty slick: There are now two configuration files.  The first, cluster.conf, defines which hosts participate in the cluster, which daemons run on which hosts, and what paths are used […]

  • February 24, 2009
    Debian packages

    I’ve built some Debian packages for both the userspace daemons and the kernel module source.  Trying things out is now as simple as adding a few lines to your apt sources file and doing an apt-get install!  More info in the wiki.
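
    In practice that looks something like the lines below; the repository URL, suite, and package name are placeholders (the real values are in the wiki).

        # /etc/apt/sources.list -- URL, suite, and package name are placeholders
        deb http://<repository-url-from-the-wiki>/debian unstable main

        apt-get update
        apt-get install ceph            # userspace daemons; kernel module source is packaged separately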

  • January 30, 2009
    Some performance comparisons

    I did a few basic tests comparing Ceph to NFS on a simple benchmark, a Linux kernel untar.  I tried to get as close as possible to an “apples to apples” comparison.  The same client machine is used for NFS and Ceph; another machine is either the NFS server or the Ceph MDS.  The same […]
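
    The benchmark itself is easy to reproduce; the kernel version and mount points below are stand-ins for whatever was actually used.

        # kernel version and mount points are stand-ins
        cd /mnt/ceph && time tar xjf ~/linux-2.6.28.tar.bz2     # Ceph mount
        cd /mnt/nfs  && time tar xjf ~/linux-2.6.28.tar.bz2     # NFS mount, same client machine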

  • January 20, 2009
    POSIX file system test suite

    The unstable client (with all of the async metadata changes) is passing the full POSIX file system test suite again (modulo the question of whether chown -1,-1 should be a no-op or update ctime).  We’re also surviving long dbench runs.  Progress!  I hope to push this all into the master branch after a bit more […]

  • January 12, 2009
    Asynchronous metadata operations

    The focus for the last few weeks has been on speeding up metadata operations.  The problem is that the priority so far has been first and foremost reliability and recoverability.  Each metadata operation was performed by the MDS and journaled safely to the OSDs before being applied.  This meant that every metadata operation went […]

  • December 16, 2008
    Scrubbing

    The last month has seen a lot of work on the storage cluster, fixing recovery-related bugs, improving threading, and working out a mechanism for online scrubbing.  In this case, scrubbing is basically a low-level fsck of the object storage layer.  For each PG being scrubbed, the primary and any replica nodes generate a catalog […]
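
    The catalog idea can be illustrated with a small self-contained sketch: each node lists its objects with a size and checksum, and the primary's catalog is checked against a replica's.  This is a toy illustration of the concept, not Ceph's actual scrub code.

        /* Toy illustration of catalog-based scrubbing -- not Ceph's actual scrub code. */
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>

        struct entry {              /* one object as seen by one node */
            const char *name;
            uint64_t    size;
            uint32_t    crc;        /* checksum over the object's data */
        };

        /* Compare the primary's catalog against a replica's and report inconsistencies. */
        static int compare_catalogs(const struct entry *pri, int np,
                                    const struct entry *rep, int nr)
        {
            int errors = 0;
            for (int i = 0; i < np; i++) {
                const struct entry *match = NULL;
                for (int j = 0; j < nr; j++)
                    if (strcmp(pri[i].name, rep[j].name) == 0)
                        match = &rep[j];
                if (!match) {
                    printf("missing on replica: %s\n", pri[i].name);
                    errors++;
                } else if (match->size != pri[i].size || match->crc != pri[i].crc) {
                    printf("mismatch: %s\n", pri[i].name);
                    errors++;
                }
            }
            return errors;
        }

        int main(void)
        {
            struct entry pri[] = { {"obj.0", 4096, 0xdeadbeef}, {"obj.1", 512, 0x12345678} };
            struct entry rep[] = { {"obj.0", 4096, 0xdeadbeef}, {"obj.1", 512, 0x0bad0bad} };
            printf("%d inconsistencies found\n", compare_catalogs(pri, 2, rep, 2));
            return 0;
        }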

  • November 14, 2008
    v0.5 release sent to linux-fsdevel, -kernel

    I’ve tagged a v0.5 release, and this time sent the client portion in patch form to linux-kernel and linux-fsdevel for review.  We’ll see what happens!  It weighs in at 20k lines of code, so I’ll be impressed if anyone decides to wade through it immediately. New in this release: Lots of bug fixes, especially in […]

  • November 6, 2008
    lockdep for pthreads

    Linux has a great tool called lockdep for identifying locking dependency problems.  Instead of waiting until an actual deadlock occurs (which may be extremely difficult when it is a timing-sensitive thing), lockdep keeps track of which locks are already held when any new lock is taken, and ensures that there are no cycles in the […]
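
    The core idea boils down to a few lines: wrap each lock acquisition, record a "this lock was already held when that one was taken" edge, and complain when a reversed edge shows up, even if no deadlock actually happens.  The sketch below is a minimal single-threaded illustration of that bookkeeping (a real checker tracks the held set per thread and detects longer cycles); it is not Ceph's actual implementation.

        /* Minimal sketch of lock-order checking -- not the real lockdep. */
        #include <pthread.h>
        #include <stdio.h>

        #define MAX_LOCKS 16

        static int edge[MAX_LOCKS][MAX_LOCKS];   /* edge[a][b]: lock b taken while a held */
        static int held[MAX_LOCKS];              /* lock ids currently held (one thread)  */

        static void checked_lock(pthread_mutex_t *m, int id)
        {
            for (int a = 0; a < MAX_LOCKS; a++) {
                if (!held[a])
                    continue;
                if (edge[id][a])                 /* reverse ordering seen before? */
                    printf("lockdep: order %d -> %d conflicts with earlier %d -> %d\n",
                           a, id, id, a);
                edge[a][id] = 1;                 /* remember the ordering just observed */
            }
            pthread_mutex_lock(m);
            held[id] = 1;
        }

        static void checked_unlock(pthread_mutex_t *m, int id)
        {
            held[id] = 0;
            pthread_mutex_unlock(m);
        }

        int main(void)
        {
            pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER, B = PTHREAD_MUTEX_INITIALIZER;

            checked_lock(&A, 0); checked_lock(&B, 1);      /* establishes the order A -> B */
            checked_unlock(&B, 1); checked_unlock(&A, 0);

            checked_lock(&B, 1); checked_lock(&A, 0);      /* B -> A: flagged immediately, */
            checked_unlock(&A, 0); checked_unlock(&B, 1);  /* even though nothing deadlocks */
            return 0;
        }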

  • October 7, 2008
    v0.4 Release

    I’ve tagged v0.4.  New in this release: flexible snapshots (create snapshots of _any_ subdirectory); recursive accounting for size, ctime, and file counts; and lots of client bug fixes and improvements, including asynchronous writepages, additional CRC protection of network messages, and sendpage (zero-copy writes where supported).  The main new item in this release is the snapshot support.  Unlike snapshots […]
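
    Assuming the hidden .snap directory interface Ceph uses for snapshots (the excerpt above does not spell out the mechanics), snapshotting a subdirectory looks roughly like this; the mount point and snapshot name are arbitrary.

        # mount point and snapshot name are arbitrary; assumes the .snap interface
        mkdir /mnt/ceph/projects/foo/.snap/before-cleanup    # snapshot just this subtree
        ls    /mnt/ceph/projects/foo/.snap/                  # list snapshots of the directory
        rmdir /mnt/ceph/projects/foo/.snap/before-cleanup    # remove the snapshot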
