As Ceph development continues to move forward at an astonishing rate we’re working hard to share both our passion for what’s here and our vision of things to come via as many conduits as we can manage. If you are interested in hearing about the latest Ceph dev work, asking questions of some of the folks behind it, or just want to tell us the awesome things you are building with Ceph, keep an eye on our marathon event schedule and stop on by.
In Paris Ross Turk will be speaking on several panels. For those of you who don’t know Ross, any presentation from this seasoned Open Source veteran is well worth the time away from precious bits and/or internet cat pictures, so make sure you catch all of his appearances! Ross will be delivering both of the following talks:
Today, Dmitry Ukov wrote a great post on the Mirantis Blog entitled Object Storage approaches for OpenStack Cloud: Understanding Swift and Ceph. Dmitry’s overview on Ceph was a solid introduction to those needing an object store for their OpenStack deployment, and it was an interesting read. Thanks, Dmitry!
Naturally, since I spend most of my days thinking about Ceph, I couldn’t resist going a bit deeper with a few of Dmitry’s ideas. Here we go:
Ceph is a great object store. If you strip it down to its bare minimum, that’s what it is. Comparing the entire Ceph platform with Swift is apples and oranges, though, since Ceph can be much more than just an object store. Bonus points to the first person who writes a jingle that best accompanies that last part there.
The Ceph Object Store (also called RADOS) is a collection of two kinds of services: object storage daemons (ceph-osd) and monitors (ceph-mon). The monitors’ primary function is to keep track of which nodes are operational at any given time, while the OSDs perform the actual data storage and retrieval operations. A cluster can have anywhere from a handful to thousands of OSDs, but a small number of monitors – usually 3, 5, or 7 – is enough for most clusters. There’s also a client library, librados, that allows apps to store and retrieve objects.
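One reason this architecture scales is that clients compute object locations themselves rather than asking a central lookup server. The toy sketch below illustrates that idea with a simple hash – it is not Ceph’s actual CRUSH algorithm, which also handles failure domains, weights, and cluster-map changes, and the OSD count here is made up:

```python
import hashlib

def place_object(obj_name: str, num_osds: int) -> int:
    """Toy deterministic placement: hash the object name to pick an OSD.

    This only demonstrates the core idea that any client can compute
    where an object lives; Ceph's real placement uses CRUSH, which is
    considerably more sophisticated.
    """
    digest = hashlib.md5(obj_name.encode()).hexdigest()
    return int(digest, 16) % num_osds

# Every client computes the same answer for the same object name,
# so no central metadata lookup is needed on the data path.
osd_id = place_object("my-object", 12)
```

Because the mapping is deterministic, two independent clients agree on placement without any coordination.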
After several weeks of testing v0.52 is ready! This is a big release for RBD and radosgw users. Highlights include:
Another update to the stable “argonaut” series has been released. This fixes a few important bugs in rbd and radosgw and includes a series of changes to upstart- and deployment-related scripts that will allow the upcoming ‘ceph-deploy’ tool to work with the argonaut release.
Notable changes include:
You can get this release from the usual locations:
The latest development release v0.51 is ready. Notable changes include:
Full RBD cloning support will be in place in v0.52, as will a refactor of the messenger code with many bug fixes in the socket failure handling. This is available for testing now in ‘next’ for the adventurous. Improved OSD scrubbing is also coming soon. We should (finally) be building some release RPMs for v0.52 as well.
You can get v0.51 from the usual locations:
Greetings! I am a summer intern at Inktank. My summer project is to create a distributed, B-tree-like key-value store, with support for multiple writers, using librados and the Ceph object store. In my last blog post, I wrote about the single-client implementation I created to start out with. Over the last several weeks, I’ve had great fun and have learned a lot working on my project. I designed and implemented an algorithm for making my program work for an arbitrary number of clients. I still have more to do – in particular, I’ve been changing the algorithm significantly as I encounter bottlenecks during Teuthology testing – but the core of my project is complete.
I was faced with the problem of how to allow multiple concurrent operations without causing interference that could leave the system in an inconsistent state. The Ceph object store provides atomic operations on a single object, but I sometimes need to atomically change multiple objects. When splitting or merging a leaf, I have to change the leaf object and the index object without making it possible for other clients to see an in-between state.
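A common way to build multi-object atomicity out of single-object atomic operations is optimistic concurrency: prepare new objects off to the side, then commit the whole change with one conditional write on the index. The sketch below is a generic illustration of that pattern, not the actual algorithm from this project; the in-memory `VersionedStore` class stands in for versioned RADOS objects:

```python
import itertools

_names = itertools.count()

class VersionedStore:
    """In-memory stand-in for an object store with per-object atomic ops."""
    def __init__(self):
        self.objects = {}  # name -> (version, value)

    def read(self, name):
        return self.objects.get(name, (0, None))

    def write_if_version(self, name, expected_version, value):
        """Atomic compare-and-swap on a single object."""
        current, _ = self.objects.get(name, (0, None))
        if current != expected_version:
            return False  # a concurrent writer got here first
        self.objects[name] = (current + 1, value)
        return True

    def create(self, value):
        """Write a brand-new object under a fresh, unreferenced name."""
        name = "obj-%d" % next(_names)
        self.objects[name] = (1, value)
        return name

def split_leaf(store, index_name, leaf_name):
    """Split a leaf without exposing an in-between state.

    New leaves are written under fresh names first, so they are invisible
    until the single compare-and-swap on the index commits the split.
    """
    while True:
        idx_ver, index = store.read(index_name)
        _, keys = store.read(leaf_name)
        mid = len(keys) // 2
        left = store.create(keys[:mid])
        right = store.create(keys[mid:])
        new_index = [left if n == leaf_name else n for n in index]
        new_index.insert(new_index.index(left) + 1, right)
        if store.write_if_version(index_name, idx_ver, new_index):
            return left, right
        # Lost the race: another client committed first. The freshly
        # written leaves are unreferenced garbage; re-read and retry.
```

Readers following the index either see the old leaf or both new halves, never a partial split; a failed compare-and-swap just means retrying against the latest index.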
We’ve built and pushed the first update to the argonaut stable release. This branch has a range of small fixes for stability, compatibility, and performance, but no major changes in functionality. The stability fixes are particularly important for large clusters with many OSDs, and for network environments where intermittent network failures are more common.
The highlights include:
The fix for the radosgw usage trimming is incompatible with v0.48 (which was effectively broken). You now need to use the v0.48.1 version of radosgw-admin to initiate usage stats trimming.
There are a range of smaller bug fixes as well. For a complete list of what went into this release, please see the release notes and changelog.
You can get this stable update from the usual locations:
The next development release v0.50 is ready, and includes:
Right now the main development going on is RBD layering, which will hit master shortly, and OSD performance work, various bits of which are being integrated. There was also a large pile of messenger cleanups and race fixes that will be in v0.52.
You can get v0.50 from the usual locations:
One note: there was a build issue with the latest gcc that affected the Debian squeeze and wheezy builds; those packages were not built for this release.
This release is a bit less exciting than most because it is the first development release since argonaut, and much of our time has been spent working on stability. Most of those fixes have been backported and slated for the next argonaut point release (v0.48.1). I’ll include both below; see the 0.48.1 release notes (when it’s available later this week) to see what changes with argonaut.
There is also lots of work going on with RBD to get the layering working. This didn’t quite make the 0.50 cutoff, but will be testable in the 0.51 release (or sooner, for those interested in testing the release candidate). The devops deployment work with Chef and upstart is also progressing nicely, although it is still not quite ready for wide use. We’ve also been working on some OSD threading and peering improvements that will appear in v0.50.
For those of you using our Debian/Ubuntu packages, please note that the URL is now slightly different for the development release. The stable (e.g., argonaut) release will remain at the old URL (http://ceph.com/debian) while the development releases will live at http://ceph.com/testing.
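In practice this means a one-line change to your apt configuration. The distribution codename and file path below are illustrative assumptions, not exact instructions from the release notes:

```
# /etc/apt/sources.list.d/ceph.list  (codename "precise" is an assumption)
deb http://ceph.com/debian/ precise main      # stable releases (argonaut)
deb http://ceph.com/testing/ precise main     # development releases
```

Pick one of the two lines depending on whether you want to track stable or development packages, then run apt-get update as usual.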
You can get this latest development release at:
We’re pleased to announce the release of Ceph v0.48, code-named “argonaut.” This release will be the basis of our first long-term stable branch. Although we will continue to make releases every 3-4 weeks, this stable release will be maintained with bug fixes and select non-destabilizing feature additions for much longer than that. Argonaut is recommended for production users of rados and librados, rbd and librbd, and radosgw.
The upgrade to v0.48 argonaut from previous versions includes a disk-format upgrade. Please note:
The highlights for this release include: