This release is intended to serve as a release candidate for firefly, which will hopefully be v0.80. No changes are being made to the code base at this point except those that fix bugs. Please test this release if you intend to make use of the new erasure-coded pools or cache tiers in firefly.
This release fixes a range of bugs found in v0.78 and streamlines the user experience when creating erasure-coded pools. There is also a raft of fixes for the MDS (multi-MDS, directory fragmentation, and large directories). The most notable new piece of functionality is a small change to allow radosgw to use an erasure-coded pool for object data.
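Erasure-coded pools store each object as k data chunks plus m coding chunks, so the object survives the loss of up to m chunks. The toy sketch below illustrates only the k=2, m=1 recovery idea with XOR parity; Ceph's real pools use pluggable erasure code libraries (such as jerasure), and none of the function names here come from any Ceph API.

```python
# Toy illustration of k=2, m=1 erasure coding via XOR parity.
# This demonstrates the recovery idea only, not Ceph's actual plugin API.

def encode(data: bytes) -> tuple:
    """Split data into two halves (k=2) and one XOR parity chunk (m=1)."""
    if len(data) % 2:
        data += b"\x00"              # pad to an even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover(a, b, parity):
    """Rebuild any single missing chunk (passed as None) from the other two."""
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b

chunks = encode(b"hello, world")
# Lose the first chunk; the object is still fully recoverable.
restored = recover(None, chunks[1], chunks[2])
```

With k=2, m=1 the storage overhead is 1.5x, versus 2x or 3x for replication, which is the main appeal of erasure-coded pools for bulk object data.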
- ceph auth export
- ceph auth get
- ceph auth get-key
- ceph auth print-key
- ceph auth list
This development release includes two key features: erasure coding and cache tiering. A huge amount of code was merged for this release and several additional weeks were spent stabilizing the code base, and it is now in a state where it is ready to be tested by a broader user base.
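Cache tiering puts a fast pool (for example, replicated on SSDs) in front of a slower base pool and migrates objects between them. The sketch below is only a conceptual stand-in for the write-back mode: in real Ceph the tiering logic lives in the OSDs and is driven by hit sets and a tiering agent, and the class here is invented for illustration.

```python
# Conceptual sketch of write-back cache tiering: writes land in a fast
# "cache pool" and are flushed to the slower "base pool" later; reads
# check the cache first and promote on a miss. Not Ceph code.

class TieredStore:
    def __init__(self):
        self.base = {}    # slow, durable pool (e.g. erasure-coded)
        self.cache = {}   # fast pool (e.g. replicated on SSDs)

    def write(self, name, data):
        self.cache[name] = data          # write-back: the cache absorbs the write

    def read(self, name):
        if name not in self.cache:       # miss: promote the object from the base tier
            self.cache[name] = self.base[name]
        return self.cache[name]

    def flush(self):
        self.base.update(self.cache)     # push dirty objects down to the base tier
        self.cache.clear()               # evict (a real agent does this lazily)

store = TieredStore()
store.write("obj1", b"payload")
store.flush()
data = store.read("obj1")               # promoted back into the cache tier
```

The key property the sketch shows is that clients only ever talk to the cache tier; promotion and flushing happen behind that single interface.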
This is not the firefly release. Firefly will be delayed for at least another sprint so that we can get some operational experience with the new code and do some additional testing before committing to long term support.
Please note that while it is possible to create and test erasure-coded pools in this release, the pools will not be usable when you upgrade to v0.79, as the OSDMap encoding will subtly change. Please do not populate your test pools with important data that can't be reloaded.
Upgrade daemons in the following order:

- ceph-mon
- ceph-osd
- MDSs and/or radosgw

If the ceph-mds daemon is restarted first, it will wait until all OSDs have been upgraded before finishing its startup sequence. If the ceph-mon daemons are not restarted prior to the ceph-osd daemons, the OSDs will not correctly register their new capabilities with the cluster, and new features may not be usable until they are restarted a second time.
You can get v0.78 from the usual locations:
The past couple of weeks have been a veritable sharknado of activity for the Ceph community, from our most successful Ceph Day yet last week in Frankfurt, Germany, to another great quarterly developer summit as work begins on the "Giant" release. It is great to see that the engagement and adoption trends are continuing, and we are definitely enjoying the fruits of a rich and productive community.
Read on for details of these Ceph community events.
There is only a little over a week left to vote for OpenStack Summit talks for the upcoming Atlanta event.
While it can be hard to narrow down the list since there are so many great talks, we thought it might be helpful to create a short list of talks that touch on Ceph or something closely related. If any of these topics interest you please stop on by and give them a vote.
While you are there you can peruse some of the other great talks and perhaps find a few others to endorse. At the very least you should book your tickets now, as the OpenStack events are always jam-packed with useful information and great people.
See you there!
This is the final development release before the Firefly feature freeze. The main items in this release include some additional refactoring work in the OSD IO path (including some locking improvements), per-user quotas for the radosgw, a switch from mongoose to civetweb for the prototype radosgw standalone mode, and a prototype leveldb-based backend for the OSD. The C librados API also got support for atomic write operations (read-side transactions will appear in v0.78).
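An atomic write operation batches several mutations against one object so they either all apply or none do. The sketch below is only a conceptual stand-in for that all-or-nothing semantics; the real feature is the compound object operations in the librados C API, and the class and method names here are invented, not librados calls.

```python
# Conceptual sketch of an atomic (all-or-nothing) write operation:
# mutations are collected first and committed in a single step.
# Invented names; not the librados API.

class WriteOp:
    """Collects mutations, then applies them atomically to one object."""
    def __init__(self):
        self.ops = []

    def write_full(self, data):
        self.ops.append(lambda obj: data)        # replace object contents

    def append(self, data):
        self.ops.append(lambda obj: obj + data)  # append to object contents

    def operate(self, store, name):
        obj = store.get(name, b"")
        for op in self.ops:          # stage every mutation in memory first;
            obj = op(obj)            # any exception aborts before the commit
        store[name] = obj            # single commit point

store = {}
op = WriteOp()
op.write_full(b"hello")
op.append(b" world")
op.operate(store, "obj")             # store["obj"] is now b"hello world"
```

Because all mutations are staged before the single commit, a failure partway through leaves the object untouched, which is the guarantee the batched write API provides.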
This Dumpling point release fixes a few critical issues in v0.67.6.
All v0.67.6 users are urgently encouraged to upgrade. We also recommend that all v0.67.5 (or older) users upgrade.
The v0.67.7 point release contains a number of important fixes for the OSD, monitor, and radosgw. Most significantly, a change that forces large object attributes to spill over into leveldb has been backported; it can prevent objects and the cluster from being damaged by large attributes (which can be induced via the radosgw). There is also a set of fixes that improve data safety and RADOS semantics when the cluster becomes full and then non-full.
Yesterday Mirantis announced their efforts towards Open Source vendor certifications for OpenStack that seek to build on and accelerate some of the great work that has been going on in the Cinder community. This is huge, and in more ways than are immediately obvious. Unfortunately, in recent history "The Cloud" has been such an overused buzzword, and encompasses so many things, that it has become almost meaningless to a wide swath of consumers.
So many people look at OpenStack and just see “software” to make “one of those cloud things” for a very specific use. They miss the point entirely that OpenStack (and others like it) are simply part of the commoditization of, and a paradigm shift in the way we think about, infrastructure.
With an open certification program we’ll be able to see advantages like:
This release includes another batch of updates for firefly functionality. Most notably, the cache pool infrastructure now supports snapshots, the OSD backfill functionality has been generalized to include multiple targets (necessary for the coming erasure pools), and there were performance improvements to the erasure code plugin on capable processors. The MDS now properly utilizes (and seamlessly migrates to) the OSD key/value interface (aka omap) for storing directory objects. There continue to be many other fixes and improvements for usability and code portability across the tree.
The “Giant” Ceph Developer Summit looms….giantly… on the horizon and our wiki is ready for blueprint authors! We want your ideas, brainstorms, plans for work, or anything else you can dream up.
If you don't already have an account on the wiki, please bear with us as we work through a few kinks in account creation that have cropped up with an upgraded plugin (it couldn't come at a worse time!). Creating an account will send you to wikilogin.ceph.com and ask for Google credentials. Once you enter them and select a username for the forum, it may dump you at wikilogin.ceph.com instead of redirecting you back to wiki.ceph.com. If that happens, please just head back to wiki.ceph.com by hand and it should let you in the door. [edit: all auth components should be working now]
If you have issues please send them to Scuttlemonkey.
Now, on with the summit details!
- 03 FEB: Blueprint submissions begin
- 21 FEB: Blueprint submissions end
- 24 FEB: Summit agenda announced
- 04 MAR: Ceph Developer Summit: Day 1
- 05 MAR: Ceph Developer Summit: Day 2
- June 2014: Giant release
If you are interested in submitting a blueprint, collaborating on an existing blueprint, or just attending to learn more about Ceph, read on!