v9.2.1 Infernalis released

This Infernalis point release fixes several packaging and init script issues, enables the librbd object map feature by default, fixes a few librbd bugs, and includes a range of miscellaneous bug fixes across the system.
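
For instance, you can confirm whether the object map is active on an image with rbd info; the pool and image names below are hypothetical and the output is abridged:

$ rbd info rbd/myimage
rbd image 'myimage':
        size 1024 MB in 256 objects
        ...
        features: layering, exclusive-lock, object-map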

We recommend that all Infernalis v9.2.0 users upgrade.

For more detailed information, see the complete changelog.



v0.94.6 Hammer released

This Hammer point release fixes a range of bugs, most notably a fix for unbounded growth of the monitor’s leveldb store, and a workaround in the OSD to keep most xattrs small enough to be stored inline in XFS inodes.
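
The thresholds involved are ordinary configuration options, so you can inspect them on a running OSD. A minimal sketch, assuming the XFS inline limit is governed by the filestore_max_inline_xattr_size_xfs option and that an admin socket for osd.0 is reachable locally (the value shown is illustrative):

$ ceph daemon osd.0 config get filestore_max_inline_xattr_size_xfs
{ "filestore_max_inline_xattr_size_xfs": "65536" }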

We recommend that all Hammer v0.94.x users upgrade.

For more detailed information, see the complete changelog.



v10.0.3 released

This is the fourth development release for Jewel. Several big pieces have been added in this release, including BlueStore (a new backend for the OSD to replace FileStore), many ceph-disk fixes, a new CRUSH tunable that improves mapping stability, a new librados object enumeration API, and a whole slew of OSD and RADOS optimizations.

Note that, due to general developer busyness, we aren’t building official release packages for this dev release. You can fetch autobuilt gitbuilder packages from the usual location.



When a Ceph teuthology integration test fails (for instance a rados job), it collects core dumps, which can be downloaded from the same directory where the logs and config.yaml files are found, under the remote/mira076/coredump directory.
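
For example, assuming the job’s log directory is served over HTTP (the base URL and the run and job identifiers below are made up; substitute the directory that contains this job’s teuthology.log):

$ wget http://logs.example.com/my-run/12345/remote/mira076/coredump/1425077911.7304.core
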
The binary from which the core dump originates can be displayed with:

$ file 1425077911.7304.core
ELF 64-bit LSB  core file x86-64, version 1, from 'ceph-osd -f -i 3'

The teuthology logs contain command lines that can be used to install the corresponding binaries:

$ echo deb \
/sha1/e54834bfac3c38562987730b317cb1944a96005b trusty main | \
  sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" install \
  ceph=0.80.8-75-ge54834b-1trusty \
  ceph-dbg=0.80.8-75-ge54834b-1trusty

The ceph-dbg package contains debug symbols that will automatically be used by gdb(1):

$ gdb /usr/bin/ceph-osd 1425077911.7304.core
Reading symbols from /usr/bin/ceph-osd...
Reading symbols from /usr/lib/debug//usr/bin/ceph-osd...done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `ceph-osd -f -i 3'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f59d6e9af07 in _dl_map_object_deps at dl-deps.c:528
(gdb) bt
#0  0x00007f59d6e9af07 in _dl_map_object_deps at dl-deps.c:528
#1  0x00007f59d6ea1aab in dl_open_worker at dl-open.c:272
#2  0x00007f59d6e9cff4 in _dl_catch_error  at dl-error.c:187
#3  0x00007f59d6ea13bb in _dl_open    at dl-addr.c:61
#5  __GI__dl_addr at dl-addr.c:137
#6  0x00007f59c06dcbc0 in ?? ()
#7  0x00007f59d70b11c8 in _r_debug ()
#8  0x00007f59c06dcba0 in ?? ()
#9  0x00007f59c06dcbb0 in ?? ()
#10 0x00007f59c06dcb90 in ?? ()
#11 0x00007f59c06dca94 in ?? ()
#12 0x0000000000000000 in ?? ()
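
The crashing thread is not always the interesting one; ceph-osd is heavily multithreaded, so it often helps to dump the stack of every thread as well:

(gdb) thread apply all bt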

v0.93 Hammer release candidate released

This is the first release candidate for Hammer, and includes all of the features that will be present in the final release. We welcome and encourage any and all testing in non-production clusters to identify any problems with functionality, stability, or performance before the final Hammer release.

We suggest some caution in one area: librbd. There is a lot of new functionality around object maps and locking that is disabled by default but may still affect stability for existing images. We are continuing to shake out those bugs so that the final Hammer release (probably v0.94) will be rock solid.
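
If you do want to exercise the new code paths on a throwaway test cluster, one way is to change the feature bits applied to newly created images. A minimal sketch, assuming the rbd default features option and the usual bit values (layering=1, exclusive-lock=4, object-map=8):

$ cat >> /etc/ceph/ceph.conf <<EOF
[client]
rbd default features = 13
EOF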

The major features since Giant are described in the full release notes.

v0.87.1 Giant released

This is the first (and possibly final) point release for Giant. Our focus on stability fixes will be directed towards Hammer and Firefly.

We recommend that all v0.87 Giant users upgrade to this release.


  • Due to a change in Linux kernel version 3.18 and the limits of the FUSE interface, ceph-fuse needs to be mounted as root on at least some systems; see the example below. See issues #9997, #10277, and #10542 for details.
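
For example, to mount as root (the monitor address and mount point are illustrative):

$ sudo mkdir -p /mnt/cephfs
$ sudo ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs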



Now Showing: Learning Ceph

  A comprehensive book on software-defined storage: Ceph


Hello Ceph’ers,
The year 2014 was a pretty productive one for Ceph and the world around it. Ceph entered its tenth year of maturity with its 10th birthday. The most buzzed-about news, Red Hat’s acquisition of Inktank, was a major success for Ceph and its community, and finally Ceph Firefly, the long-term-support, production-grade version, came out in 2014, with wow features like erasure coding and cache tiering.

During the same year, somewhere in this world, I was investing several months in writing the first ever book on Ceph. Finally, on 20 January 2015, my first book, and Ceph’s first book as well, got published.
Introducing Learning Ceph: a comprehensive guide to learning the software-defined, massively scalable Ceph storage system.
I have compiled this book with all my experience of Ceph and other related technologies, and I am sure this book will give you the pure knowledge of Ceph that you have always been wondering about.

You can grab a free sample copy of this book, and the full book is available for purchase through a variety of channels and in various formats, including Safari Books and the publisher’s official website.

I am sure that if you are interested in learning Ceph to build your multi-petabyte storage system, this book is 100% a must-have resource.


CDS: Infernalis Call for Blueprints

The “Ceph Developer Summit” for the Infernalis release is on the way. The summit is planned for 3 and 4 March. The blueprint submission period started on 16 February and will end on 27 February 2015.
Is there something you miss in Ceph, or do you plan to develop a feature for the next release? It’s your chance to submit a blueprint here.

Quick and efficient Ceph DevStacking


Recently I built a little repository on github/ceph where I put two files to help you build your DevStack with Ceph.

$ git clone https://github.com/ceph/ceph-devstack.git
$ git clone https://github.com/openstack-dev/devstack.git
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./stack.sh

Happy DevStacking!
