The past couple of weeks have been a veritable sharknado of activity for the Ceph Community, from our most successful Ceph Day yet last week in Frankfurt, Germany, to another great quarterly developer summit as work begins on the “Giant” release. It is great to see the engagement and adoption trends continuing, and we are definitely enjoying the fruits of a rich and productive community.
Read on for details of these Ceph community events.
There is only a little over a week left to vote for OpenStack Summit talks for the upcoming Atlanta event.
While it can be hard to narrow down the list since there are so many great talks, we thought it might be helpful to create a short list of talks that touch on Ceph or something closely related. If any of these topics interest you please stop on by and give them a vote.
While you are there you can peruse some of the other great talks and perhaps find a few more to endorse. At the very least you should book your tickets now, as the OpenStack events are always jam-packed with useful information and great people.
See you there!
This is the final development release before the Firefly feature freeze. The main items in this release include some additional refactoring work in the OSD IO path (including some locking improvements), per-user quotas for the radosgw, a switch from mongoose to civetweb for the prototype radosgw standalone mode, and a prototype leveldb-based backend for the OSD. The C librados API also got support for atomic write operations (read side transactions will appear in v0.78).
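To give a feel for what “atomic write operations” buy you, here is a minimal Python sketch of the compound write-op idea: several mutations against one object are staged and then applied all-or-nothing. The class and method names here are illustrative only, not the real librados C API.

```python
# Toy sketch of a compound, atomic write operation: stage several
# mutations against one object, then commit them all or none.
# All names here are hypothetical, not the librados API.

class WriteOp:
    """Stage mutations for one object; commit atomically via operate()."""
    def __init__(self):
        self._steps = []

    def write(self, offset, data):
        self._steps.append(("write", offset, data))
        return self

    def setxattr(self, name, value):
        self._steps.append(("xattr", name, value))
        return self

    def operate(self, store, oid):
        # Apply every step to staged copies first; only commit at the
        # end, so a failure part-way leaves the object untouched.
        obj = store.setdefault(oid, {"data": bytearray(), "xattrs": {}})
        staged_data = bytearray(obj["data"])
        staged_xattrs = dict(obj["xattrs"])
        for step in self._steps:
            if step[0] == "write":
                _, off, data = step
                if off < 0:
                    raise ValueError("bad offset")
                if len(staged_data) < off + len(data):
                    staged_data.extend(b"\0" * (off + len(data) - len(staged_data)))
                staged_data[off:off + len(data)] = data
            else:
                _, name, value = step
                staged_xattrs[name] = value
        obj["data"], obj["xattrs"] = staged_data, staged_xattrs


store = {}
WriteOp().write(0, b"hello").setxattr("owner", "alice").operate(store, "obj1")
print(bytes(store["obj1"]["data"]))  # b'hello'
```

The real librados interface exposes this as a C API, but the all-or-nothing semantics are the point: either every staged mutation lands, or none do.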
This Dumpling point release fixes a few critical issues in v0.67.6.
All v0.67.6 users are urgently encouraged to upgrade. We also recommend that all v0.67.5 (or older) users upgrade.
The v0.67.7 point release contains a number of important fixes for the OSD, monitor, and radosgw. Most significantly, a backported change now forces large object attributes to spill over into leveldb, which prevents objects and the cluster from being damaged by oversized attributes (which can be induced via the radosgw). There is also a set of fixes that improves data safety and RADOS semantics when the cluster becomes full and then non-full again.
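The spillover idea can be sketched in a few lines: attributes under a size threshold stay inline with the object, while larger ones are redirected to a key/value backend. The threshold and names below are made up for illustration; the actual OSD logic and cutoff differ.

```python
# Illustrative sketch of "spill large attributes to leveldb":
# small attrs stay inline, big ones go to a key/value store.
# SPILL_THRESHOLD and all names are hypothetical.

SPILL_THRESHOLD = 64  # bytes; illustrative cutoff only

class ObjectAttrs:
    def __init__(self, kv_backend):
        self.inline = {}      # small attrs, stored with the object
        self.kv = kv_backend  # stand-in for the leveldb-backed store

    def set_attr(self, oid, name, value):
        if len(value) <= SPILL_THRESHOLD:
            self.inline[(oid, name)] = value
        else:
            self.kv[(oid, name)] = value  # spill over

    def get_attr(self, oid, name):
        if (oid, name) in self.inline:
            return self.inline[(oid, name)]
        return self.kv[(oid, name)]


kv = {}
attrs = ObjectAttrs(kv)
attrs.set_attr("obj", "small", b"x" * 10)
attrs.set_attr("obj", "big", b"y" * 500)
print(len(kv))  # 1: only the large attribute spilled over
```

The benefit is that an attacker (or careless client) writing huge attributes can no longer blow past the limits of the inline attribute storage, since oversized values land in the key/value store instead.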
Yesterday Mirantis announced their efforts towards Open Source vendor certifications for OpenStack that seek to build and accelerate some of the great work that has been going on in the Cinder community. This is huge, and in more ways than immediately obvious. Unfortunately, in recent history “The Cloud” has been such an overused buzzword, and encompasses so many things, that it has become almost meaningless to a wide swath of consumers.
So many people look at OpenStack and just see “software” to make “one of those cloud things” for a very specific use. They miss the point entirely that OpenStack (and others like it) are simply part of the commoditization of, and a paradigm shift in the way we think about, infrastructure.
With an open certification program we’ll be able to see advantages like:
This release includes another batch of updates for Firefly functionality. Most notably, the cache pool infrastructure now supports snapshots, the OSD backfill functionality has been generalized to include multiple targets (necessary for the coming erasure pools), and there were performance improvements to the erasure code plugin on capable processors. The MDS now properly utilizes (and seamlessly migrates to) the OSD key/value interface (aka omap) for storing directory objects. There continue to be many other fixes and improvements for usability and code portability across the tree.
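For readers new to erasure pools, a toy XOR parity code shows the core idea: k data chunks plus a parity chunk, where any one lost chunk can be rebuilt from the rest. Ceph's actual plugins implement far more general codes (e.g. Reed-Solomon via jerasure) that tolerate multiple losses; this is only the simplest possible illustration.

```python
# Toy single-parity erasure code: XOR all data chunks to get parity,
# and XOR the survivors with parity to rebuild any one lost chunk.
# This is NOT Ceph's plugin, just the underlying idea at its simplest.

def encode(chunks):
    """Return the XOR parity of equal-length data chunks."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return parity

def rebuild(surviving, parity):
    """Recover the single missing chunk from survivors plus parity."""
    missing = parity
    for c in surviving:
        missing = bytes(a ^ b for a, b in zip(missing, c))
    return missing


data = [b"abcd", b"efgh", b"ijkl"]
p = encode(data)
# lose chunk 1, rebuild it from the other chunks plus parity
print(rebuild([data[0], data[2]], p))  # b'efgh'
```

The multi-target backfill work mentioned above matters precisely because an erasure pool must place each chunk on a different OSD, so recovery and backfill have to juggle several destinations at once.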
The “Giant” Ceph Developer Summit looms….giantly… on the horizon and our wiki is ready for blueprint authors! We want your ideas, brainstorms, plans for work, or anything else you can dream up.
If you don’t already have an account on the wiki, please bear with us as we work through a few account-creation kinks that have cropped up with an upgraded plugin (it couldn’t have come at a worse time!). Creating an account will send you to wikilogin.ceph.com and ask for Google credentials. Once you enter them and select a username for the forum, it may dump you at wikilogin.ceph.com instead of redirecting you back to wiki.ceph.com. If that happens, please just head back to wiki.ceph.com by hand and it should let you in the door. [edit: all auth components should be working now]
If you have issues please send them to Scuttlemonkey.
Now, on with the summit details!
| Date | Milestone |
|-----------|-------------------------------|
| 03 FEB | Blueprint submissions begin |
| 21 FEB | Blueprint submissions end |
| 24 FEB | Summit agenda announced |
| 04 MAR | Ceph Developer Summit: Day 1 |
| 05 MAR | Ceph Developer Summit: Day 2 |
| June 2014 | Giant Release |
If you are interested in submitting a blueprint, collaborating on an existing blueprint, or just attending to learn more about Ceph, read on!
Last week Dmitry Borodaenko presented his talk on Ceph and OpenStack at the inaugural Silicon Valley Ceph User Group meeting. The meeting was well attended and also featured talks from Mellanox’s Eli Karpilovski and Inktank’s Kyle Bader. However, if you were unable to attend, the following transcript from Dmitry’s talk is a good recap just in time for the joint Mirantis / Inktank webcast on Ceph and OpenStack. [Reposted from Mirantis.com]
To understand how Ceph works as part of Mirantis OpenStack, we need to take that 20,000-foot view first. You need to know what Ceph is, what OpenStack is, and what you can do with them. And then we’ll get into the details that actually make this combination work. So, first we’ll explain how Ceph came about and what it turned out to be.
This is a big release, with lots of infrastructure landing for Firefly. The big items include a prototype standalone frontend for radosgw (which does not require apache or fastcgi), tracking of read activity on the OSDs (to inform tiering decisions), preliminary cache pool support (no snapshots yet), and lots of bug fixes and other work across the tree to get ready for the next batch of erasure coding patches.
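The read-activity tracking can be pictured with a small sketch: count reads per object and promote anything whose count crosses a threshold into the cache tier. The threshold and class below are hypothetical; the real OSD tracks hit sets (e.g. with bloom filters) rather than exact per-object counters.

```python
# Toy sketch of read-activity tracking informing tiering decisions:
# objects read often enough get promoted to a (hypothetical) cache
# tier. PROMOTE_AFTER and all names here are illustrative only.
from collections import Counter

PROMOTE_AFTER = 3  # hypothetical promotion threshold

class TieringTracker:
    def __init__(self):
        self.reads = Counter()
        self.cache_tier = set()

    def record_read(self, oid):
        self.reads[oid] += 1
        if self.reads[oid] >= PROMOTE_AFTER:
            self.cache_tier.add(oid)


t = TieringTracker()
for oid in ["hot", "hot", "cold", "hot"]:
    t.record_read(oid)
print(sorted(t.cache_tier))  # ['hot']
```

The design question the real implementation has to answer is the same one this toy dodges: how to track popularity cheaply enough that the bookkeeping doesn't eat the performance the cache tier is supposed to buy.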
For comparison, here are the diff stats for the last few versions:
v0.75: 291 files changed, 82713 insertions(+), 33495 deletions(-)
v0.74: 192 files changed, 17980 insertions(+), 1062 deletions(-)
v0.73: 148 files changed, 4464 insertions(+), 2129 deletions(-)