This release is a bit less exciting than most because it is the first development release since argonaut, and much of our time has been spent working on stability. Most of those fixes have been backported and are slated for the next argonaut point release (v0.48.1). I'll include both sets of changes below; see the v0.48.1 release notes (when they're available later this week) to see what changes in argonaut.
There is also lots of work going on with RBD to get the layering working. This didn’t quite make the 0.50 cutoff, but will be testable in the 0.51 release (or sooner, for those interested in testing the release candidate). The devops deployment work with Chef and upstart is also progressing nicely, although it is still not quite ready for wide use. We’ve also been working on some OSD threading and peering improvements that will appear in v0.50.
For those of you using our Debian/Ubuntu packages, please note that the URL is now slightly different for the development release. The stable (e.g., argonaut) release will remain at the old URL (http://ceph.com/debian) while the development releases will live at http://ceph.com/testing.
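As a sketch, the corresponding APT source line for the development releases would look something like the following (the distribution codename, `precise` here, and the file location are illustrative assumptions; substitute your own):

```
# /etc/apt/sources.list.d/ceph.list -- codename 'precise' is an example
deb http://ceph.com/testing/ precise main
```

The stable line at http://ceph.com/debian is unchanged, so existing argonaut installs need no edits.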
You can get this latest development release at:
We’re pleased to announce the release of Ceph v0.48, code-named “argonaut.” This release will be the basis of our first long-term stable branch. Although we will continue to make releases every 3-4 weeks, this stable release will be maintained with bug fixes and select non-destabilizing feature additions for much longer than that. Argonaut is recommended for production users of rados and librados, rbd and librbd, and radosgw.
The upgrade to v0.48 argonaut from previous versions includes a disk-format upgrade. Please note:
The highlights for this release include:
Hi! My name is Eleanor, and I’m working on Ceph as an intern for Inktank this summer. My task for the summer is to use the Ceph API to create a lock-free, distributed key-value store suitable for storing large sets of small key-value pairs. I’ve just finished my first year at Pomona College, where I’m majoring in Computer Science. I had previously explored concurrency with a Computer Science professor at the University of Utah, but this is my first experience with file systems, my first experience with a startup, and my first experience working on an open source software project. At the beginning of the summer, I was somewhat terrified. On my first day, as Sam walked me through how to use GitHub, I worried that I was in over my head. There were so many skills and so much vocabulary that came naturally to everyone around me but with which I had little to no familiarity. But the Ceph team proved to be extraordinarily welcoming and supportive as I got up to speed.
As a warm-up exercise to gain familiarity with the API, I began the summer by creating an object map benchmarking tool. Librados objects are the basic unit of storage in Ceph. Objects have a number of properties:
This is a bugfix release with one major fix and a few small ones:
I was going to wait for v0.48, but that is still several days away. If you are using RBD in production, you should either add ‘filestore fiemap = false’ to your ceph.conf file or upgrade.
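For reference, the workaround mentioned above is a one-line ceph.conf change; a minimal sketch (placing it under `[osd]` is my assumption, since the option affects the OSD filestore; a `[global]` setting would also apply):

```
[osd]
        ; work around the RBD data-corruption bug fixed in this release
        filestore fiemap = false
```

Restart the OSDs after adding it, or simply upgrade instead.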
You can get this release from the usual places, with the exception of Debian sid and wheezy packages; the upstream repos were sufficiently broken to make pbuilder cranky so I left them out.
This point release fixes a librbd bug and a few small items:
You can get it from the usual places:
There was a problem with v0.47 that prevented new OSDs from detecting xattr support properly. We’ve released v0.47.1 to resolve this issue. If you’ve installed v0.47, and plan to create new OSDs, you should upgrade.
You can get v0.47.1 from the usual locations:
It’s been another three weeks and v0.47 is ready. The highlights include:
In truth it wasn’t the most productive sprint because of the work that went into the launch of the web sites, the launch party, and the subsequent inebriation. However, the new RBD caching feature is looking very good at this point, and patches are working their way upstream in Qemu/KVM to enable it with the generic ‘cache=writethrough’ or ‘cache=writeback’ settings.
One other noteworthy item is that I generated a new PGP key to sign releases with. The key is now in ceph.git, and has been signed by my personal key. If you are installing debs from our repositories, you’ll want to add the new key to your APT keyring to avoid annoying security warnings.
For v0.48, we are working on a ceph-osd refactor to improve threading and performance, multi-monitor and OSD hotplugging support for upstart and Chef, improvements to the OSD and monitor bootstrapping to make that possible, and RBD groundwork for the much-anticipated layering feature.
You can get v0.47 from the usual places:
Hello, everyone! I’m the new community guy for Ceph, and it’s my honor to announce the refresh of our website.
Today is a big day for all of us because Inktank has just launched. Inktank’s mission is to provide professional services and support for users of Ceph, but it goes a lot deeper; Inktank is dedicated to the success of the Ceph project and will ensure that it has the resources it needs to succeed.
I came to work at Inktank because we all believe in Ceph, we believe in the community process, and we have no interest in being a traditional software company hiding behind an open source marketing strategy. Ceph belongs to us all, and Inktank will earn its success through the quality of the service it provides.
The new Ceph.com site is a bit more modern, a lot more dynamic, and (hopefully) a bit easier to navigate.
It’s not perfect, but it gives us tons of room to grow. Very soon (within days), we’ll be releasing a major documentation rewrite to help new users get up to speed with Ceph. We’ve been looking into more advanced community metrics, project testing stats, and tighter integration between tools.
We’re just getting started! We’d love to hear from you. If you have any thoughts on the new site, let us know in the comments below.
Another sprint, and v0.46 is ready. Big items in this release include:
The biggest new item here is the new RBD (librbd) cache mode that Josh has been working on. This reuses a caching module that ceph-fuse and libcephfs have used for ages, so the cache portion of the code is well-tested, but the integration with librbd is new, and there are some (rare) failure cases that are not yet handled in this version. We recommend it for performance and failure testing at this stage, but not for production use just yet; wait for v0.47. librbd also got trim/discard support. Patches to wire it up to qemu are still working their way upstream (and won’t work for virtio until virtio gets discard support).
We’ve revamped some of the default locations for data directories and log files and incorporated a cluster name configurable. By default, the cluster name is ‘ceph’, and the config file is /etc/ceph/$cluster.conf (so ceph.conf is still the default). The $cluster substitution variable is used in the other default locations, allowing the same host to contain daemons participating in different clusters. All data defaults to /var/lib/ceph/$type/$cluster-$id (e.g., /var/lib/ceph/osd/ceph-123 for osd_data), and logs go to /var/log/ceph/$cluster.$type.$id. You can, of course, still override these with your own locations as before.
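To make the substitution concrete, a minimal ceph.conf fragment spelling out the defaults described above might look like this (these values are the defaults, shown only for illustration; you would normally omit them or replace them with your own paths):

```
; assuming a cluster named 'ceph', i.e. /etc/ceph/ceph.conf
[global]
        log file = /var/log/ceph/$cluster.$type.$id
[osd]
        ; expands to e.g. /var/lib/ceph/osd/ceph-123 for osd.123
        osd data = /var/lib/ceph/osd/$cluster-$id
```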
There is also new logging code that allows the daemons to gather debug information at a different (higher) log level than what is actually written (asynchronously) to the log. In the event of a crash (seg fault, failed assertion), the full in-memory log is dumped to the log file for our reading pleasure. The general syntax looks like:
debug foo = 1/10
where ‘foo’ is the subsystem name (e.g., “osd”, “filestore”, etc.), the first number is the debug level that is written to the log, and the second number is the level that is gathered in memory (we keep many thousands of past entries around by default). The hope is that people can gather debug information in memory with a lower performance impact and avoid eating their disk space. We’ll need some more operational experience to find out how expensive that will really be.
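For example, to keep OSD and filestore logging cheap on disk while retaining detailed history in memory for crash dumps, you might set (a sketch using the syntax above; the specific subsystems and levels are illustrative):

```
[osd]
        ; write level 1 to the log, keep level 10 in memory
        debug osd = 1/10
        debug filestore = 1/10
```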
You can get v0.46 from the usual locations:
v0.45 is ready! Notable changes include:
In short: some performance improvements and bug fixes, but no major new functionality. v0.46 will be a bit more exciting on that front.
You can get packages from the usual locations: