Planet Ceph

Aggregated news from external sources

  • January 28, 2012
    v0.41 released

    v0.41 is ready!  There are a few key things in this release:
    - osd: new ‘backfill’ recovery code (less memory, faster)
    - osd: misc bug fixes (scrub, out of order replies)
    - radosgw: better logging
    - librados: improved api for compound operations
    For v0.42 we’re working on improved journal performance for the OSD, better encoding for data structures (to ease […]

  • January 14, 2012
    v0.40 released

    It’s been several weeks, but v0.40 is ready.  This has mostly been a stabilization release, so there isn’t too much new here.  Some notable additions include:
    - new and improved specfile, for RH and SuSE based distributions
    - mon: expose cluster stats via admin socket (accessible via collectd plugin)
    - simpler/generalized admin socket interface (ceph --admin-socket /path/to/sock command) […]
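
    The generalized admin socket interface above is a command-line call against a daemon’s local socket. A minimal sketch of wrapping it from Python, assuming the ceph binary is on PATH; the socket path and the “help” command here are illustrative, not taken from the release notes:

      import subprocess

      def admin_socket(sock_path, command):
          """Query a daemon's admin socket via the generalized
          'ceph --admin-socket /path/to/sock command' interface
          noted in v0.40, returning the raw output."""
          return subprocess.check_output(
              ["ceph", "--admin-socket", sock_path, command])

      # Hypothetical usage: ask a local osd daemon what it supports.
      print(admin_socket("/var/run/ceph/ceph-osd.0.asok", "help").decode())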

  • December 2, 2011
    v0.39 released

    v0.39 has been tagged and uploaded.  There was a lot of bug fixing going on that isn’t terribly exciting.  That aside, the highlights include:
    - mon: rearchitected bootstrap (mkfs)
    - mon: vastly simplified mon cluster expansion
    - config: choose daemon ip based on subnet instead of explicitly
    - hadoop: misc hadoop client fixes
    - osd: many bugs fixed
    - make: pretty […]

  • November 11, 2011
    v0.38 released

    It’s a week delayed, but v0.38 is ready.  The highlights:
    - osd: some peering refactoring
    - osd: “replay” period is per-pool (now only affects fs data pool)
    - osd: clean up old osdmaps
    - osd: allow admin to revert lost objects to prior versions (or delete)
    - mkcephfs: generate reasonable crush map based on “host” and “rack” fields in [osd.NN] […]
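
    A minimal sketch of the kind of [osd.NN] sections mkcephfs would read the “host” and “rack” fields from when generating a crush map; the host and rack names here are made up:

      [osd.0]
              host = node-a1
              rack = rack-a

      [osd.1]
              host = node-b2
              rack = rack-b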

  • October 18, 2011
    v0.37 released

    v0.37 is ready.  Notable changes this time around:
    - radosgw: backend on-disk format changes
    - radosgw: improved logging
    - radosgw: package improvements (init script, fixed deps)
    - osd: bug fixes!
    - teuthology: btrfs testing
    If you are currently storing data with radosgw, you will need to export and reimport your data as the backend storage strategy has changed to improve […]

  • October 1, 2011
    v0.36 released

    It’s been three weeks and v0.36 is ready.  The most visible change this time around is that the daemons and tools have been renamed.  Everything that used to start with ‘c’ now starts with ‘ceph-’, and libceph is now libcephfs.  Nothing earth shattering, but we’re trying to clean these things up where we can and […]

  • September 21, 2011
    v0.35 released

    WARNING: There is a disk format change in this release that requires a bit of extra care to upgrade safely.  Please see below.
    Notable changes since v0.34 include:
    - osd: large collections of objects are pre-hashed into directories
    - radosgw: pools are preallocated
    - librbd: asynchronous api for many operations
    - rbd: show progress for long-running operations
    - rados export: […]

  • August 27, 2011
    v0.34 released

    Another 2 weeks, another release. Notable changes in v0.34:
    - radosgw: atomic GET and PUT (and some groundwork for supporting versioning)
    - librados: API tests
    - mon: fix for data corruption for certain crashes
    - cfuse/libceph: many many many bug fixes
    - osd: fix for various races during pool/pg creation
    - osd: fix for a few peering crashes
    - mds: misc fixes […]

  • August 17, 2011
    v0.33 released

    v0.33 is out. Notable changes this time around:
    - osd: internal heartbeat mechanism to detect internal workqueue stalls
    - osd: rewritten heartbeat code, much more reliable
    - osd: collect/sum usage stats by user-specified object category (in addition to total)
    - mds: fixed memory leak for standby-replay mode
    - mds: many fixes with multimds subtree management vs rename
    - radosgw: multi-threaded mode […]

  • July 9, 2011
    v0.31 released

    We’ve released v0.31. Notable changes include:
    - librados, libceph: can now access multiple clusters in same process
    - osd: snapshot rollback fixes
    - osd: scrub race
    - mds: fixed lock starvation issue
    - client: cache ref counting fixes
    - client: snap writeback, umount hang, cache pressure, other fixes
    - radosgw: atomic PUT
    There is also the usual mix of bug fixes and […]
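
    The multiple-clusters note means a single process can now hold independent cluster handles side by side. A minimal sketch using the python-rados binding, assuming it is installed; the config paths, pool names, and object contents are hypothetical:

      import rados

      # Two independent cluster handles in one process, per the v0.31 note.
      cluster_a = rados.Rados(conffile="/etc/ceph/cluster-a.conf")
      cluster_b = rados.Rados(conffile="/etc/ceph/cluster-b.conf")
      cluster_a.connect()
      cluster_b.connect()
      try:
          ioctx_a = cluster_a.open_ioctx("pool-a")
          ioctx_b = cluster_b.open_ioctx("pool-b")
          try:
              # Each handle performs I/O against its own cluster.
              ioctx_a.write_full("greeting", b"hello from cluster A")
              ioctx_b.write_full("greeting", b"hello from cluster B")
          finally:
              ioctx_a.close()
              ioctx_b.close()
      finally:
          cluster_a.shutdown()
          cluster_b.shutdown()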

  • June 28, 2011
    v0.30 released

    We’re pushing out v0.30. Highlights include:
    - librbd: Fixed race/crash
    - mds: misc clustered mds fixes
    - mds: misc rename journaling/replay fixes
    - mds: fixed flock deadlock when processes die during lock wait
    - osd: snaptrimmer fixes, misc races, recovery bugs
    - auth: fixed cephx race/crash
    - librados: rados bench fix
    - librados: flush
    - radosgw: multipart uploads
    - debian: gceph moved to separate package […]

  • June 16, 2011
    v0.29.1 released

    We’ve released v0.29.1 with a few fixes. The main thing is a fix for a
    race condition in librbd that was biting people using rbd with qemu/kvm.
    We also fixed a memory leak in the OSD. The shortlog is below.

    Relevant URLs:
    * Direct download at: http://ceph.newdream.net/downloads/ceph-0.29.1.tar.gz
    * Debian/Ubuntu packages: see http://ceph.newdream.net/wiki/Debian

    Greg Farnum (1):
    man: update cosd man page to include info on flush-journal option.

    Josh Durgin (3):
    librbd: fix AioCompletion race condition
    librbd: add AioCompletion debugging
    librbd: fix block_completion race condition

    Sage Weil (5):
    moncaps: whitespace
    mon: weaken pool creation caps check
    Makefile: remove ancient comment
    mkcephfs: fix ceph.conf reference
    v0.29.1

    Sam Lang (2):
    Fix segfault caused by invalid argument string.
    Fix typo in usage output for --num-osds

    Samuel Just (2):
    ReplicatedPG: make_writeable, use correct size for clone_size entry
    PG: clear scrub_received_maps in scrub_clear_state
