Planet Ceph

Aggregated news from external sources

  • September 7, 2015
    The Ceph and TCMalloc performance story

    {% img center http://sebastien-han.fr/images/ceph-tcmalloc-performance.jpg The Ceph and TCMalloc performance story %}

    This article simply relays a recent discovery made around Ceph performance.
    The finding behind this story is one of the biggest im…

  • September 4, 2015
    Ceph at the OpenStack Summit Tokyo 2015

    {% img center http://sebastien-han.fr/images/openstack-summit-tokyo.jpg OpenStack Summit Tokyo: time to vote %}

    With this article, I would like to take the opportunity to thank you all for voting for our presentation.
    It is always with a great pleasur…

  • September 2, 2015
    Ceph: validate that the RBD cache is active

    {% img center http://sebastien-han.fr/images/ceph-check-rbd-cache-isenabled.jpg Ceph validate that the RBD cache is active %}

    Quick and simple test to validate if the RBD cache is enabled on your client.

    First things first: if you are running Ceph version 0.87 or newer, the cache is enabled by default.
    If not, you can simply enable it in the [client] section:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    

    Then you need to activate two flags in the [client] section of your ceph.conf:

    [client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/
    

    Both paths must be writable by the user running the RBD library, and the security context (SELinux or AppArmor) must be configured properly.
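
    A minimal sketch of that preparation, assuming the client is a libvirt/QEMU virtual machine running as the qemu user and that the default directories above are used (substitute the actual user and paths for your environment):

    $ sudo mkdir -p /var/run/ceph /var/log/ceph              # directories for the admin socket and the client log
    $ sudo chown qemu:qemu /var/run/ceph /var/log/ceph       # hypothetical client user; e.g. qemu for libvirt guests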

    Once this is done, run your application that is supposed to use librbd (a virtual machine or something else) and simply query its admin socket:

    $ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.66606.140190886662256.asok config show | grep rbd_cache
        "rbd_cache": "true",
        "rbd_cache_writethrough_until_flush": "true",
        "rbd_cache_size": "33554432",
        "rbd_cache_max_dirty": "25165824",
        "rbd_cache_target_dirty": "16777216",
        "rbd_cache_max_dirty_age": "1",
        "rbd_cache_max_dirty_object": "0",
        "rbd_cache_block_writes_upfront": "false",
    

    Verify the cache behaviour

    If you want to go further and test the performance enhancement brought by the cache, turn it off in the [client] section of your ceph.conf with rbd cache = false.
    Then run a benchmark using the following command (assuming the RBD pool exists):

    $ rbd -p rbd bench-write fio --io-size 4096 --io-threads 256 --io-total 1024000000 --io-pattern seq
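
    Note that Ceph command-line tools generally accept configuration overrides as arguments, so as a hedged alternative to editing ceph.conf you can try disabling the cache for a single run (verify via the admin socket that the override is actually picked up):

    $ rbd -p rbd bench-write fio --io-size 4096 --io-threads 256 --io-total 1024000000 --io-pattern seq --rbd-cache=false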
    

    Running this test with and without the cache should show a significant difference :).

    Enjoy!

  • September 1, 2015
    OpenStack Summit Tokyo 2015: Presentation

    The schedule for the next OpenStack Summit in Tokyo this year was announced some days ago. One of my submissions was accepted. The presentation “99.999% available OpenStack Cloud – A builder’s guide” is scheduled for Thursday, October 29, 09:50 – 10:30.

    Other presentations from the Ceph Community have also been accepted:

    Check out the links or the schedule for the dates and times of the talks.

  • August 31, 2015
    CephFS: determine a file location

    {% img center http://sebastien-han.fr/images/cephfs-file-location.jpg CephFS determine a file location %}

    Quick tip to determine the location of a file stored on CephFS.

    To achieve that we simply need the inode number of this file.
    For this we wil…
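
    The full walkthrough is truncated here, but as a hedged sketch of the general approach (hypothetical mount point /mnt/cephfs, file myfile and data pool cephfs_data): read the inode with ls -i, convert it to hexadecimal, and map the file's first object, named <hex-inode>.00000000, to its placement group and OSDs:

    $ ls -i /mnt/cephfs/myfile                        # print the file's inode number
    $ printf "%x\n" <inode-number>                    # CephFS object names use the inode in hex
    $ ceph osd map cephfs_data <hex-inode>.00000000   # show the PG and the OSDs holding the first object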

  • August 27, 2015
    v0.94.3 Hammer released

    This Hammer point release fixes a critical (though rare) data corruption bug that could be triggered when logs are rotated via SIGHUP. It also fixes a range of other important bugs in the OSD, monitor, RGW, RBD, and CephFS. All v0.94.x Hammer users are strongly encouraged to upgrade. UPGRADING The pg ls-by-{pool,primary,osd} commands and pg ls now take the argument recovering instead of recovery in …Read more
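
    For example, listing PGs that are recovering now uses the new spelling (a minimal sketch):

    $ ceph pg ls recovering          # previously: ceph pg ls recovery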

  • August 24, 2015
    v9.0.3 released

    This is the second to last batch of development work for the Infernalis cycle. The most intrusive change is an internal (non user-visible) change to the OSD’s ObjectStore interface. Many fixes and improvements elsewhere across RGW, RBD, and another big pile of CephFS scrub/repair improvements. UPGRADING The return code for librbd’s rbd_aio_read and Image::aio_read API …Read more

  • August 24, 2015
    Ceph cluster on Docker for testing

    {% img center http://sebastien-han.fr/images/ceph-docker-demo.jpg Ceph cluster on Docker for testing %}

    I haven’t really advertised this one much (even though I’ve been using it in some articles).
    Since people are still wondering how to quickly get a full…

  • August 22, 2015
    faster debugging of a teuthology workunit

    The Ceph integration tests run via teuthology rely on workunits found in the Ceph repository. For instance: the /cephtool/test.sh workunit is modified, it is pushed to a wip- branch in the official Ceph git repository, and the gitbuilder will automatically build packages … Continue reading
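
    A hedged sketch of that push step, assuming the official repository is configured as a remote named ceph and using a hypothetical branch name (the wip- prefix is what the gitbuilder watches for):

    $ git checkout -b wip-cephtool-test              # hypothetical wip- branch for the modified workunit
    $ git commit -a -m "qa: tweak cephtool/test.sh"
    $ git push ceph wip-cephtool-test                # gitbuilder builds packages for the pushed branch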

  • August 17, 2015
    Getting started with the Docker RBD volume plugin

    {% img center http://sebastien-han.fr/images/docker-ceph-rbd-volume-plugin.jpg Getting started with the Docker RBD volume plugin %}

    Docker 1.8 was just released a week ago and with it came support for volume plugins.
    Several volume plugins are avai…

  • August 13, 2015
    Downgrade LSI 9207 to P19 Firmware

    After numerous problems encountered with the P20 firmware on this card model, here are the steps I followed to flash it back to the P19 version.

    Since then, no more problems 🙂

    The model of the card is an LSI 9207-8i (SAS2308 controller) with IT firmware:

    lspci | grep LSI
    0…
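
    The flashing steps themselves are truncated here; as a hedged aside, on Linux the currently installed firmware and BIOS versions can usually be checked with LSI's sas2flash utility (assuming it is installed), both before and after the downgrade:

    $ sudo sas2flash -listall        # lists SAS2 controllers with their firmware and BIOS versions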

  • August 6, 2015
    Ceph: get the best of your SSD with primary affinity

    Using SSD drives in some parts of your cluster might be useful, especially under read-oriented workloads.
    Ceph has a mechanism called primary affinity, which allows you to put a higher affinity on some of your OSDs so they are more likely to be primary for some PGs.
    The …
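
    The post is truncated here; as a hedged sketch, primary affinity is set per OSD with a value between 0.0 and 1.0 (on clusters of that era the monitors must also be told to allow it first), shown for a hypothetical osd.2:

    $ ceph tell 'mon.*' injectargs '--mon_osd_allow_primary_affinity true'   # required before the setting takes effect
    $ ceph osd primary-affinity osd.2 0.5                                    # 1.0 is the default; lower values make osd.2 less likely to be primary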
