v10.2.3 Jewel Released

This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS.

We recommend that all v10.2.x users upgrade.

For more detailed information, see the complete changelog.


A Look at the Ceph Cookbook

“Ceph is awesome and so is its community.” Last year Packt Publishing and Karan Singh from the community produced the very first book on Ceph, titled “Learning Ceph”.

The overwhelming response to the first book, together with the growing maturity and popularity of Ceph, became the basis for the next title, “Ceph Cookbook”. The author and publisher together spent several months producing 326 pages of quality content on Ceph, including 100 ready-to-use recipes. And here is the deal: you will get a 50% discount on both of these books by using the discount code ceph-50 while purchasing the eBook online. This offer is valid until 31 December 2016.


Last week, Red Hat investigated an intrusion on the sites of both the Ceph community project (ceph.com) and Inktank (download.inktank.com), which were hosted on a computer system outside of Red Hat infrastructure. ceph.com provided Ceph community version downloads signed with a Ceph signing key (id 7EBFDD5D17ED316D). download.inktank.com provided releases of the Red Hat Ceph product for Ubuntu and CentOS operating systems signed with an Inktank signing key (id 5438C7019DCEEEAD). While the investigation into the intrusion is ongoing, our initial focus was on the integrity of the software and distribution channel for both sites.

To date, our investigation has not discovered any compromised code or binaries available for download on these sites. However, we cannot fully rule out the possibility that some compromised code or binaries were available for download at some point in the past. Further, we can no longer trust the integrity of the Ceph signing key, and therefore have created a new signing key (id E84AC2C0460F3994) for verifying downloads. This new key is committed to the ceph.git repository and is also available from download.ceph.com. All future release git tags will be signed with this new key.

This intrusion did not affect other Ceph sites such as download.ceph.com (which contained some Ceph downloads) or git.ceph.com (which mirrors various source repositories), and is not known to have affected any other Ceph community infrastructure. There is no evidence that build systems or the Ceph GitHub source repository were compromised.

New hosts for download.ceph.com and git.ceph.com have been created and the sites have been rebuilt. All content available on download.ceph.com has been verified, and all URLs for package locations now redirect there. There is still some content missing from download.ceph.com that will appear later today: source tarballs will be regenerated from git, and older release packages are being resigned with the new release key.

The download.inktank.com host has been retired, and affected Red Hat customers have been notified directly with further information.

Users of Ceph packages should take action as a precautionary measure to download the newly-signed versions.  Please see the instructions below.

The Ceph community would like to thank Kai Fabian for initially alerting us to this issue.

The following steps should be performed on all nodes with Ceph software installed.

Replace APT keys (Debian, Ubuntu)

sudo apt-key del 17ED316D
curl https://download.ceph.com/keys/release.asc | sudo apt-key add -
sudo apt-get update
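As an optional sanity check (not part of the original instructions), you can confirm the key swap took effect; the short ids below are the old (17ED316D) and new (…460F3994) release keys mentioned above:

```shell
# List trusted APT keys and search for the Ceph release key ids.
# Expect a hit for the new key (...460F3994) and none for the old 17ED316D.
# (The output format of `apt-key list` varies between apt versions.)
apt-key list | grep -E '460F3994|17ED316D'
```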

Replace RPM keys (Fedora, CentOS, SUSE, etc.)

sudo rpm -e --allmatches gpg-pubkey-17ed316d-4fb96ee8
sudo rpm --import 'https://download.ceph.com/keys/release.asc'

Reinstall packages (Fedora, CentOS, SUSE, etc.)

sudo yum clean metadata
sudo yum reinstall -y $(repoquery --disablerepo='*' --enablerepo=ceph --queryformat='%{NAME}' list '*')
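As an optional sanity check (not part of the original instructions), you can verify which GPG keys rpm now trusts; the rpm package name for a key is derived from the lowercased short key id:

```shell
# List installed GPG public keys with their summaries. The new release key
# (E84AC2C0460F3994 -> package gpg-pubkey-460f3994-*) should appear, and
# the old gpg-pubkey-17ed316d-* entry should be gone.
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n'
```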

Ceph: release RAM used by TCMalloc

{% img center Ceph release RAM used by TCMalloc %}

A quick tip to release memory that tcmalloc has allocated but that the Ceph daemon itself is not using.
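For readers who do not follow the link, the mechanism boils down to the daemons' built-in tcmalloc heap commands; a minimal sketch, assuming a running cluster and an OSD numbered 0 (the id is a placeholder):

```shell
# Inspect tcmalloc heap usage for a daemon (osd.0 is a placeholder id).
ceph tell osd.0 heap stats

# Ask tcmalloc to return freed-but-cached pages to the operating system.
ceph tell osd.0 heap release
```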


The Ceph and TCMalloc performance story

{% img center The Ceph and TCMalloc performance story %}

This article simply relays some recent discoveries made around Ceph performance.
The finding behind this story is one of the biggest improvements in Ceph performance seen in years.
So I will just highlight and summarize the study in case you do not want to read it in its entirety.


Ceph at the OpenStack Summit Tokyo 2015

{% img center OpenStack Summit Tokyo: time to vote %}

With this article, I would like to take the opportunity to thank you all for voting for our presentations.
It is always a great pleasure to bring you the latest updates on the Ceph development happening in OpenStack.

Ceph talks coverage at the next OpenStack summit:

The schedule for the next OpenStack Summit in Tokyo this year was announced some days ago. One of my submissions was accepted. The presentation “99.999% available OpenStack Cloud – A builder’s guide” is scheduled for Thursday, October 29, 09:50 – 10:30.

Other presentations from the Ceph community have also been accepted:

Check out the links or the schedule for the dates and times of the talks.

Ceph Developer Summit: Hammer

As many of you Ceph Day attendees are no doubt aware, we’re fast approaching the release date for the ‘Giant’ release of Ceph. With that, it’s time to get together at another virtual Ceph Developer Summit and chat about what development work is going into the ‘Hammer’ release. Blueprint submissions are open now, so if you have any work you would like to contribute or request of our community developers, please submit it as soon as possible to ensure it gets a CDS slot.

The rough schedule of CDS and Hammer in general should look something like this:

Date          Milestone
30 SEP        Blueprint submissions begin
17 OCT        Blueprint submissions end
21 OCT        Summit agenda announced
28 OCT        Ceph Developer Summit: Day 1
29 OCT        Ceph Developer Summit: Day 2 (if needed)
January 2015  Hammer release

If there are enough sessions we are exploring the possibility of expanding our event into three days, but that will be predicated on the blueprint workload. As always, this event will be an online event (utilizing the BlueJeans system) so that everyone can attend from their own timezone. If you are interested in submitting a blueprint or collaborating on an existing blueprint, please click the big red button below!


Submit Blueprint

scuttlemonkey out
© 2016, Red Hat, Inc. All rights reserved.