Planet Ceph

Aggregated news from external sources

  • March 11, 2015
    New release of python-cephclient

    I’ve just drafted a new release of python-cephclient
    on PyPI: v0.1.0.5.

    After learning about the ceph-rest-api I just had
    to do something fun with it.

    In fact, it’s going to come in very handy for me, as I might start
    developing things like Nagios monitoring scripts with it.

    The changelog:


    • Add missing dependency on the requests library
    • Some PEP8 and code standardization cleanup
    • Add root “PUT” methods
    • Add mon “PUT” methods
    • Add mds “PUT” methods
    • Add auth “PUT” methods

    Donald Talton:

    • Add osd “PUT” methods

    Please try it out and let me know if you have any feedback!

    Pull requests are welcome šŸ™‚
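    As a rough illustration of what a client for the ceph-rest-api looks like, here is a minimal sketch in the spirit of python-cephclient. The class, method names, and the default endpoint are assumptions for illustration, not the library's actual API:

    ```python
    # Minimal sketch of a ceph-rest-api HTTP client (illustrative only).
    import json
    import urllib.request

    class CephRestClient:
        """Tiny wrapper around a ceph-rest-api endpoint."""

        def __init__(self, endpoint="http://localhost:5000/api/v0.1"):
            # Default port and path are assumptions about the deployment.
            self.endpoint = endpoint.rstrip("/")

        def _url(self, path):
            # Join the endpoint and a command path such as "health".
            return "%s/%s" % (self.endpoint, path.lstrip("/"))

        def _call(self, path, method="GET"):
            req = urllib.request.Request(
                self._url(path), method=method,
                headers={"Accept": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

        def get(self, path):
            # Read-only commands, e.g. get("health") or get("osd/tree").
            return self._call(path)

        def put(self, path):
            # Mutating commands, mirroring the "PUT" methods the
            # changelog above mentions.
            return self._call(path, method="PUT")
    ```

    The read/write split mirrors how the REST API distinguishes read-only commands (GET) from cluster-mutating ones (PUT).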

  • March 10, 2015
    v0.80.9 Firefly released

    This is a bugfix release for firefly. It fixes a performance regression in librbd, an important CRUSH misbehavior (see below), and several RGW bugs. We have also backported support for flock/fcntl locks to ceph-fuse and libcephfs. We recommend that all Firefly users upgrade. For more detailed information, see the complete changelog. Adjusting CRUSH maps: this point release fixes …Read more

  • March 9, 2015
    Provisioning a teuthology target with a given kernel

    When a teuthology target (i.e. machine) is provisioned with teuthology-lock for the purpose of testing Ceph, there is no way to choose the kernel. But it can be installed afterwards using the following: cat > kernel.yaml <<EOF interactive-on-error: true roles: … Continue reading
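    The yaml fragment above is cut off; as a sketch only, a kernel.yaml of the kind described might look like the following (the role layout and the kernel stanza are assumptions for illustration, not taken from the post):

    ```yaml
    interactive-on-error: true
    roles:
    - - mon.a
      - osd.0
      - client.0
    kernel:
      branch: testing
    ```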

  • March 9, 2015
    Ceph OSD uuid conversion to OSD id and vice versa

    When handling a Ceph OSD, it is convenient to assign it a symbolic name that can be chosen even before it is created. That’s what the uuid argument for ceph osd create is for. Without a uuid argument, a random … Continue reading
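    One way to go from uuid to id and back is to read the JSON form of ceph osd dump. The sketch below assumes the dump's "osds" array carries "osd" (the numeric id) and "uuid" fields; the sample document and its uuids are illustrative, not from a real cluster:

    ```python
    # Sketch: uuid <-> id mappings from parsed `ceph osd dump` JSON
    # (field names are assumptions about the dump format).
    import json

    def uuid_to_id(dump):
        """Map each OSD uuid to its numeric id."""
        return {o["uuid"]: o["osd"] for o in dump.get("osds", [])}

    def id_to_uuid(dump):
        """Map each numeric OSD id back to its uuid."""
        return {o["osd"]: o["uuid"] for o in dump.get("osds", [])}

    # Illustrative dump fragment, not real cluster output:
    sample = json.loads("""
    {"osds": [{"osd": 0, "uuid": "ee278d1e-9f7a-4a3b-b28e-121a746a7b8a"},
              {"osd": 1, "uuid": "a1b2c3d4-0000-4000-8000-123456789abc"}]}
    """)
    ```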

  • March 5, 2015
    Incomplete PGs — OH MY!

    I recently had the opportunity to work on a Firefly cluster (0.80.8) in which power outages caused a failure of two OSDs. As with lots of things in technology, that’s not the whole story. The manner in which the power outages and OSD failures occurred put the cluster into a state with 5 placement groups …Read more

  • March 3, 2015
    Re-schedule failed teuthology jobs

    The Ceph integration tests may fail because of environmental problems (network not available, packages not built, etc.). If six jobs failed out of seventy, these failed tests can be re-run instead of re-scheduling the whole suite. It can be done … Continue reading

  • February 28, 2015
    HOWTO extract a stack trace from teuthology (take 2)

    When a Ceph teuthology integration test fails (for instance a rados job), it will collect core dumps which can be downloaded from the same directory where the logs and config.yaml files can be found, under the remote/mira076/coredump directory. The binary … Continue reading

  • February 27, 2015
    v0.93 Hammer release candidate released

    This is the first release candidate for Hammer, and includes all of the features that will be present in the final release. We welcome and encourage any and all testing in non-production clusters to identify any problems with functionality, stability, or performance before the final Hammer release. We suggest some caution in one area: librbd. …Read more

  • February 27, 2015
    Analyse OpenStack guest writes and reads running on Ceph


    Analyse IO pattern of all your guest machines.

    Append the following in your ceph.conf:

    log file = /var/log/qem…
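    The stanza above is cut off; purely as a sketch, a client-side logging section of the kind the post describes might look like the following (the log path, the $pid metavariable, and the debug level are all assumptions for illustration):

    ```ini
    [client]
        log file = /var/log/qemu/qemu-guest-$pid.log
        debug rbd = 20
    ```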

  • February 26, 2015
    v0.87.1 Giant released

    This is the first (and possibly final) point release for Giant. Our focus on stability fixes will be directed towards Hammer and Firefly. We recommend that all v0.87 Giant users upgrade to this release. UPGRADING Due to a change in the Linux kernel version 3.18 and the limits of the FUSE interface, ceph-fuse needs to be …Read more

  • February 25, 2015
    Ceph Storage :: Next Big Thing

    Now Showing: Learning Ceph, a comprehensive book on Software Defined Storage: Ceph. Hello Ceph’ers, the year 2014 was pretty productive for Ceph and its surrounding world. Ceph entered the 10 year maturi…

  • February 24, 2015
    CDS: Infernalis Call for Blueprints

    The “Ceph Developer Summit” for the Infernalis release is on the way. The summit is planned for March 3 and 4. The blueprint submission period started on February 16 and will end on February 27, 2015. Do you miss something in Ceph or plan to deve…