If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju, you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs); it relies only on ssh, sudo, and some Python to get the job done. If you are experimenting with Ceph, or find yourself deploying and tearing down Ceph clusters a lot and don’t want the added overhead of an orchestration framework, this is probably the tool for you.
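To give a flavor of the workflow, here is a minimal ceph-deploy session. The hostnames (node1–node3) and disk names are purely illustrative, and it assumes an admin machine with passwordless ssh and sudo access to each node; consult the walkthrough below for the authoritative steps.

```shell
# Write an initial ceph.conf that names node1 as the first monitor
ceph-deploy new node1

# Install the Ceph packages on every host in the cluster
ceph-deploy install node1 node2 node3

# Bring up the monitor and fetch the authentication keys it generates
ceph-deploy mon create node1
ceph-deploy gatherkeys node1

# Prepare and activate one OSD per host, using disk sdb on each
ceph-deploy osd create node2:sdb node3:sdb
```

Because everything runs over plain ssh, tearing the cluster down and starting over is as simple as re-running the same handful of commands.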
Since this tool has undergone a lot of work lately, we wanted to publish a nice simple walkthrough to help people get up and running. However, since we also love it when brilliance comes from our community instead of us hogging the microphone all the time, we thought it would be better to republish a blog post from community contributor Loic Dachary. Read on for his ceph-deploy walkthrough and give it a shot!
This is the first development release after Cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which are backported to v0.61). v0.63 is due out in two weeks and will have more goodness.
You can get v0.62 from the usual places:
While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some combination thereof, given the amount of overlap on these topics). While a full-featured, robust solution to geo-replication is currently being hammered out, there are a number of different approaches already being tinkered with (like Sebastien Han’s setup with DRBD or the upcoming work using RGW).
However, since one of the primary focuses in managing a cloud is the manipulation of images, the solution to disaster recovery and general backup can often be quite simple. Incremental snapshots can fill this role, and several others, quite well. To that end I wanted to share a few thoughts from RBD developer Josh Durgin for those of you who may have missed his great talk at the OpenStack Developer Summit a few weeks ago.
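As a rough sketch of what an incremental-snapshot backup looks like in practice, the session below uses the `rbd` snapshot and `export-diff`/`import-diff` commands. The pool (`rbd`), image (`vm-disk`), snapshot names, and file names are all illustrative, and the second cluster is assumed to be reachable as a normal `rbd` target:

```shell
# Take a first snapshot and export everything up to it
rbd snap create rbd/vm-disk@backup1
rbd export-diff rbd/vm-disk@backup1 vm-disk.backup1

# ...later, after the image has changed, take another snapshot and
# export only the extents that changed between the two snapshots
rbd snap create rbd/vm-disk@backup2
rbd export-diff --from-snap backup1 rbd/vm-disk@backup2 vm-disk.backup1-2

# On the backup cluster: create an empty image of the same size,
# then replay the diffs in order (each import also recreates the snapshot)
rbd create --size 10240 rbd/vm-disk-copy
rbd import-diff vm-disk.backup1 rbd/vm-disk-copy
rbd import-diff vm-disk.backup1-2 rbd/vm-disk-copy
```

Because each diff only contains the blocks touched since the previous snapshot, the ongoing cost of keeping a remote copy in sync is proportional to the churn on the image rather than its total size.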
This release has only two changes: it disables a debug log by default that consumes disk space on the monitor, and fixes a bug when upgrading bobtail monitor stores that contain duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks.
You can get v0.61.2 from the usual places:
This week marked the very first Ceph Developer Summit where the community gathered to discuss development efforts focused on the next stable release ‘Dumpling.’ There was quite a turnout for such a boutique event! We hit over 50 concurrent participants in the live video stream and had almost 400 unique visitors to the relatively new Ceph wiki during that window. Participants included folks from all over the world:
There was a ton of work proposed by the community, and almost all of it was accepted and discussed for inclusion in Dumpling. We were incredibly pleased with both the turnout and the general caliber of the participants. Having an awesome community makes it really easy to stay excited about what we do.
Below you will find each of the session videos split out with a brief description and links to the blueprint, etherpad, and IRC logs as they appeared during the session. The original summit page has also been updated with the appropriate links for posterity. We plan to leave these pages up so that people can look back at the history of Ceph development. If you have questions or feedback, please email the community team.
We will be doing a developer summit for each stable release (quarterly) so if you are interested in participating feel free to post a blueprint on the wiki for consideration. The sessions for each developer summit are selected directly from submitted blueprints.
If you are interested in contributing to Ceph on a smaller scale, feel free to dive right in: clone our GitHub repository and submit a pull request for any changes you make.
Now, on to the summit!
This release is a small update to Cuttlefish that fixes a problem when upgrading a bobtail cluster that had snapshots. Please use this instead of v0.61 if you are upgrading to avoid possible ceph-osd daemon crashes. There is also a fix for a problem deploying monitors and generating new authentication keys.
You can get v0.61.1 from the usual places:
Spring has arrived (at least for some of us), and a new stable release of Ceph is ready. Thank you to everyone who has contributed to this release!
Bigger ticket items since v0.56.x “Bobtail”:
Notable changes since v0.60:
Behold, another Bobtail update! This one serves three main purposes: it fixes a small issue with monitor features that is important when upgrading from argonaut -> bobtail -> cuttlefish, it backports many changes to the ceph-disk helper scripts that allow bobtail clusters to be deployed with the new ceph-deploy tool or our chef cookbooks, and it fixes several important bugs in librbd. There is also, of course, the usual collection of important bug fixes in other parts of the system.
Notable changes include:
For more detailed information, see the complete changelog.
You can get v0.56.5 from the usual places:
The Ceph developer summit that we announced a few weeks ago is nearly upon us! Since the event is next week we wanted to make sure that everyone was aware of the details for participation and attendance.
Summit Date: 07 May @ 8a – 2p PDT (GMT -7)
The community has already been hard at work generating blueprints for evaluation and discussion. The next step is to discuss those blueprints and figure out how it will all fit together for the “Dumpling” release. Read on for details on how we plan on running the summit, and how you can be prepared to participate.
Last month Inktank launched a community help program that we called “Office Hours” in an attempt to provide specific hours when an engineer would be available to answer questions from the community. These efforts have done a lot both to get questions answered and to let our engineers focus their attention on development while not on duty. Our hope was that the community would jump in and participate in these efforts just like they have done with development efforts. We were not disappointed!
Three different non-Inktank groups have stepped up and volunteered to be resident “super-geeks” and help the community. Since we had such a great response, we are relaunching this effort as “Geek on Duty” and tweaking our help page a bit to make it easier for people to get the type of assistance that is most appropriate to their needs.