A new development release of Ceph is out. Notable changes include:
You can get v0.64 from the usual locations:
This is a much-anticipated point release for the v0.61 Cuttlefish stable series. It resolves a number of issues, primarily with monitor stability and leveldb trimming. All v0.61.x users are encouraged to upgrade.
Upgrading from Bobtail:
Notable changes since v0.61.2:
See the full release notes for more details.
You can get v0.61.3 from the usual places:
Another sprint, and v0.63 is here. This release features librbd improvements, monitor fixes, OSD robustness improvements, and packaging fixes.
Notable features in this release include:
You can get v0.63 from the usual places:
Since last month saw huge amounts of OpenStack news coming out of the Developer Summit in Portland, I thought it might be worth spending some time on CloudStack and its ecosystem this month. With the Citrix Synergy event in full swing, a ‘State of the Union’ with respect to Ceph and Citrix is probably the easiest way to look at all the great things going on.
There are a number of products that Ceph plugs into, and many of them are built on top of open source projects. One of the great parts about Ceph is that a single cluster can service all of your data storage needs, especially as it relates to the Citrix portfolio. Much like Linux in the datacenter, it’s only a matter of time before Open Source becomes the dominant force in this last bastion of proprietary-driven infrastructure.
If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju, you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs) that relies only on SSH, sudo, and some Python to get the job done. If you are experimenting with Ceph, or find yourself deploying and tearing down Ceph clusters a lot and don’t want the added overhead of an orchestration framework, this is probably the tool for you.
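To give a feel for how little is involved, here is a minimal sketch of a ceph-deploy session. The hostnames (node1, node2, node3) and the OSD data paths are placeholders for your own environment, and the exact subcommands may vary slightly between ceph-deploy versions.

```shell
# Work from a dedicated directory; ceph-deploy writes ceph.conf
# and keyrings into the current working directory.
mkdir my-cluster && cd my-cluster

# Generate an initial ceph.conf and monitor keyring for the new cluster.
ceph-deploy new node1

# Install the Ceph packages on each host over SSH.
ceph-deploy install node1 node2 node3

# Create the initial monitor and collect the authentication keys.
ceph-deploy mon create node1
ceph-deploy gatherkeys node1

# Prepare and activate an OSD on each storage host
# (paths here are hypothetical; a whole disk can be given instead).
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
```

From there, `ceph -s` on any node with a keyring should report cluster health. Loic’s walkthrough below goes through these steps in more detail.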
Since this tool has undergone a lot of work lately, we wanted to publish a nice, simple walkthrough to help people get up and running. However, since we also love it when brilliance comes from our community instead of us hogging the microphone all the time, we thought it would be better to republish a blog post from community contributor Loic Dachary. Read on for his ceph-deploy walkthrough and give it a shot!
This is the first release after Cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which were backported to v0.61). v0.63 is due out in two weeks and will have more goodness.
You can get v0.62 from the usual places:
While Ceph has a wide range of use cases, the most frequent application we are seeing is block devices as the data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some combination thereof, given the amount of overlap among these topics). While a full-featured, robust solution to geo-replication is currently being hammered out, there are a number of different approaches already being tinkered with (like Sebastien Han’s setup with DRBD or the upcoming work using RGW).
However, since one of the primary focuses in managing a cloud is the manipulation of images, the solution to disaster recovery and general backup can often be quite simple. Incremental snapshots can fill this role, and several others, quite well. To that end, I wanted to share a few thoughts from RBD developer Josh Durgin for those of you who may have missed his great talk at the OpenStack Developer Summit a few weeks ago.
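As a rough illustration of the incremental-snapshot approach, here is a hedged sketch using the `rbd` CLI. The pool and image names (`mypool/myimage`, `backup/myimage`) are hypothetical, and it assumes a Cuttlefish-or-later cluster where `rbd export-diff` and `rbd import-diff` are available.

```shell
# Take an initial snapshot and export everything up to it.
rbd snap create mypool/myimage@snap1
rbd export-diff mypool/myimage@snap1 snap1.diff

# ... later, after the image has changed, take a second snapshot ...
rbd snap create mypool/myimage@snap2

# Export only the blocks that changed between snap1 and snap2.
rbd export-diff --from-snap snap1 mypool/myimage@snap2 incr.diff

# On the backup side, replay the diffs onto a matching image
# (the image and its snapshots are recreated as the diffs apply).
rbd import-diff snap1.diff backup/myimage
rbd import-diff incr.diff backup/myimage
```

Because each diff contains only changed extents, shipping the diff files (or piping one command into the other over SSH) keeps periodic backups cheap relative to full image exports.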
This release has only two changes: it disables a debug log that is on by default and consumes disk space on the monitor, and it fixes a bug with upgrading Bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks.
You can get v0.61.2 from the usual places:
This week marked the very first Ceph Developer Summit where the community gathered to discuss development efforts focused on the next stable release ‘Dumpling.’ There was quite a turnout for such a boutique event! We hit over 50 concurrent participants in the live video stream and had almost 400 unique visitors to the relatively new Ceph wiki during that window. Participants included folks from all over the world:
There was a ton of work proposed by the community, and almost all of it was accepted and discussed for inclusion in Dumpling. We were incredibly pleased with both the turnout and the general caliber of the participants. Having an awesome community makes it really easy to stay excited about what we do.
Below you will find each of the session videos split out with a brief description and links to the blueprint, etherpad, and IRC logs as they appeared during the session. The original summit page has also been updated with the appropriate links for posterity. We plan to leave these pages up in order to give people the ability to look back at Ceph development as far as possible. If you have questions or feedback, please email the community team.
We will be doing a developer summit for each stable release (quarterly), so if you are interested in participating, feel free to post a blueprint on the wiki for consideration. The sessions for each developer summit are selected directly from submitted blueprints.
If you are interested in contributing to Ceph on a smaller scale, feel free to dive right in: clone our GitHub repository and submit a pull request for any changes you make.
Now, on to the summit!