When a Ceph teuthology integration test fails (for instance a rados job), it collects core dumps, which can be downloaded from the same directory as the logs and config.yaml files, under the remote/mira076/coredump directory.
The binary from which the core dump originates can be identified with:

$ file 1425077911.7304.core
ELF 64-bit LSB  core file x86-64, version 1, from 'ceph-osd -f -i 3'

The teuthology log contains command lines that can be used to install the corresponding binaries:

$ echo deb\
/sha1/e54834bfac3c38562987730b317cb1944a96005b trusty main | \
  sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" install \
  ceph=0.80.8-75-ge54834b-1trusty \
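
The debug symbols live in a separate ceph-dbg package that must match this version exactly; a minimal sketch of installing it, reusing the version string from the command above:

$ sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes install \
  ceph-dbg=0.80.8-75-ge54834b-1trusty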

The ceph-dbg package contains debug symbols that will automatically be used by gdb(1):

$ gdb /usr/bin/ceph-osd 1425077911.7304.core
Reading symbols from /usr/bin/ceph-osd...
Reading symbols from /usr/lib/debug//usr/bin/ceph-osd...done.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/".
Core was generated by `ceph-osd -f -i 3'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f59d6e9af07 in _dl_map_object_deps at dl-deps.c:528
(gdb) bt
#0  0x00007f59d6e9af07 in _dl_map_object_deps at dl-deps.c:528
#1  0x00007f59d6ea1aab in dl_open_worker at dl-open.c:272
#2  0x00007f59d6e9cff4 in _dl_catch_error  at dl-error.c:187
#3  0x00007f59d6ea13bb in _dl_open    at dl-addr.c:61
#5  __GI__dl_addr at dl-addr.c:137
#6  0x00007f59c06dcbc0 in ?? ()
#7  0x00007f59d70b11c8 in _r_debug ()
#8  0x00007f59c06dcba0 in ?? ()
#9  0x00007f59c06dcbb0 in ?? ()
#10 0x00007f59c06dcb90 in ?? ()
#11 0x00007f59c06dca94 in ?? ()
#12 0x0000000000000000 in ?? ()
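
Since ceph-osd runs many threads, the stack of the faulting thread alone may not tell the whole story. gdb can also list all threads and dump every backtrace (output omitted here):

(gdb) info threads
(gdb) thread apply all bt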

v0.93 Hammer release candidate released

This is the first release candidate for Hammer, and includes all of the features that will be present in the final release. We welcome and encourage any and all testing in non-production clusters to identify any problems with functionality, stability, or performance before the final Hammer release.

We suggest some caution in one area: librbd. There is a lot of new functionality around object maps and locking that is disabled by default but may still affect stability for existing images. We are continuing to shake out those bugs so that the final Hammer release (probably v0.94) will be rock solid.

Major features since Giant include:
read more…

v0.87.1 Giant released

This is the first (and possibly final) point release for Giant. Our focus on stability fixes will be directed towards Hammer and Firefly.

We recommend that all v0.87 Giant users upgrade to this release.


  • Due to a change in the Linux kernel version 3.18 and the limits of the FUSE interface, ceph-fuse needs to be mounted as root on at least some systems. See issues #9997, #10277, and #10542 for details.


read more…

Now Showing: Learning Ceph

  A comprehensive book on software-defined storage: Ceph

Hello Ceph’ers,
The year 2014 was pretty productive for Ceph and its surrounding world. Ceph reached the ten-year maturity mark with its 10th birthday. The most buzzed-about news, “Red Hat acquired Inktank”, was a major success for Ceph and its community, and finally ‘Ceph Firefly’, the long-term-support, production-grade version, came out in 2014 with wow features like erasure coding and cache tiering.

During the same year, somewhere in this world, I was investing several months in writing the first ever book on Ceph. Finally, on 20 January 2015, my first as well as Ceph’s first book got published.
Introducing Learning Ceph: a comprehensive guide to learning the software-defined, massively scalable Ceph storage system.
I have compiled this book with all my experience of Ceph and other related technologies, and I am sure it will give you the pure knowledge of Ceph that you have always been looking for.

You can grab a free sample copy of this book from:
The book is available for purchase through a variety of channels and in various formats:
Safari Books:
Publisher’s official website:

I am sure that if you are interested in learning Ceph to build up your multi-petabyte storage system, this book is 100% a must-have resource.


CDS: Infernalis Call for Blueprints

The “Ceph Developer Summit” for the Infernalis release is on the way. The summit is planned for 3 and 4 March. The blueprint submission period started on 16 February and will end on 27 February 2015.
Are you missing something in Ceph, or do you plan to develop a feature for the next release? It’s your chance to submit a blueprint here.

Quick and efficient Ceph DevStacking

Recently I built a little repository on github/ceph where I put two files to help you build your DevStack with Ceph.

$ git clone
$ git clone
$ cp ceph-devstack/local* devstack
$ cd devstack
$ ./

Happy DevStacking!

Get the Number of Placement Groups per OSD

Get the PG distribution per OSD on the command line; one way to compute the per-OSD totals is sketched after the table:

pool :  0   1   2   3   | SUM 
osd.10  6   6   6   84  | 102
osd.11  7   6   6   76  | 95
osd.12  4   4   3   56  | 67
osd.20  5   5   5   107 | 122
osd.13  3   3   3   73  | 82
osd.21  9   10  10  110 | 139
osd.14  3   3   3   85  | 94
osd.15  6   6   6   87  | 105
osd.22  6   6   5   87  | 104
osd.23  10  10  10  87  | 117
osd.16  7   7   7   102 | 123
osd.17  5   5   5   99  | 114
osd.18  4   4   4   103 | 115
osd.19  7   7   7   112 | 133
osd.0   5   5   5   72  | 87
osd.1   5   5   6   83  | 99
osd.2   3   3   3   74  | 83
osd.3   5   5   5   61  | 76
osd.4   3   3   4   76  | 86
osd.5   5   5   5   78  | 93
osd.6   3   2   2   78  | 85
osd.7   3   3   3   88  | 97
osd.8   9   9   9   91  | 118
osd.9   5   6   6   79  | 96
SUM :   128 128 128 2048    |
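
A table like the one above can be derived from the cluster's PG map. Below is a rough sketch that only computes the per-OSD totals (the SUM column), assuming the plain-text layout of ceph pg dump pgs_brief in which the acting set is the fifth column; column positions differ between Ceph releases, so adjust accordingly:

$ ceph pg dump pgs_brief 2>/dev/null | awk '
    /^[0-9]+\./ {                 # data lines start with a pgid such as 0.3f
      gsub(/[][]/, "", $5)        # strip the brackets around the acting set
      n = split($5, osds, ",")    # acting OSDs, e.g. 3,7,12
      for (i = 1; i <= n; i++)
        count[osds[i]]++
    }
    END {
      for (o in count)
        printf "osd.%s\t%d\n", o, count[o]
    }' | sort -V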

read more…

The next OpenStack summit is planned to take place in Vancouver from 18 to 22 May 2015. The “Call for Speakers” ended last week. The presentation voting period started today and will end on 23 February at 23:00 UTC (00:00 CEST, 05:00 CST).
This year I’ve submitted, together with Sage Weil, a talk to the “Cloud Security” track with the title “Storage security in a critical enterprise OpenStack environment”. The talk will provide insight into the requirements for a secure setup and into potential issues, pitfalls, and attack vectors against storage technologies used with an enterprise OpenStack cloud. We will present what Deutsche Telekom and Red Hat/Inktank, together with the community, are working on to build a security-critical cloud with OpenStack and Ceph.
If you are interested in seeing our talk at the summit, you can vote for it; every vote is highly welcome. You can find the full abstract on the voting page.
My colleague Marc Koderer submitted two talks this year [1][2]. They are unrelated to Ceph, but he will surely appreciate your votes if you are interested in them.

OpenStack summit talks: Ceph and OpenStack upgrades

Self-promotion ahead :)
For the next OpenStack summit I have submitted two talks.

The first one is about Ceph and OpenStack (yet again!); in this session Josh Durgin and I will focus on the roadmap for the integration of Ceph into OpenStack.
People might think that we are almost done; this is not true. Even though we have achieved really good coverage, many things still need to be addressed.

The next talk is about an OpenStack upgrade, a particularly challenging one that I am working on with Cisco, since it goes from Havana on Ubuntu Precise to Icehouse on RHEL 7.
Basically it’s both a migration and an upgrade. We have already started this process, so John Dewey from Cisco and I would love to share our experience so far.

Thanks a lot in advance for your votes :).
See you in Vancouver!

© 2015, Red Hat, Inc. All rights reserved.