Planet Ceph

Aggregated news from external sources

  • October 19, 2015
    Recap: DreamObjects at Red Hat Storage Day

    We can’t stop talking about the engineering feat that is DreamObjects. We’re very proud of having started Ceph (the software that powers our cloud offering) and of open-sourcing it for the world to enjoy, and we’re proud that Red Hat has taken the lead in making Ceph even more powerful. Image source: @RedHatStorage on Twitter. The latest …Read more

  • October 7, 2015
    teuthology forensics with git, shell and paddles

    When a teuthology integration test for Ceph fails, the results are analyzed to find the source of the problem. For instance, the “upgrade suite: pool_create failed with error -4 EINTR” issue was reported early October 2015, with multiple integration job failures. The first step is to look into the teuthology log, which revealed that pools …Read more
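
    As a rough illustration of this kind of forensics (the archive path, run and job names below are hypothetical placeholders, not taken from the article):

    $ grep -n "pool_create failed with error -4" /path/to/archive/<run>/<job-id>/teuthology.log
    $ git log --oneline --since=2015-09-01 -- src/mon src/osd | head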

  • September 26, 2015
    On demand Ceph packages for teuthology

    When a teuthology job installs Ceph, it uses packages created by gitbuilder. These packages are built every time a branch is pushed to the official repository. Contributors who do not have write access to the official repository can either ask a developer with access to push a branch for them or set up a gitbuilder repository, …Read more
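
    For developers who do have write access, the “push a branch” step simply means publishing a wip- branch so that gitbuilder picks it up; a minimal sketch (remote and branch names are placeholders):

    $ git checkout -b wip-my-fix
    $ git push official wip-my-fix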

  • September 15, 2015
    OpenStack Nova: configure multiple Ceph backends on one hypervisor

    [Image: OpenStack Nova: configure multiple Ceph backends on one hypervisor (http://sebastien-han.fr/images/openstack-nova-configure-multiple-ceph-backends.jpg)]

    Configure a Nova hypervisor with more than one backend to store the instance’s root e…
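
    For context, here is a minimal sketch of the usual single-backend RBD settings in nova.conf that such a setup starts from (pool name, user and secret UUID are placeholders; the multi-backend configuration described in the article goes further than this):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>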

  • September 9, 2015
    Ceph: release RAM used by TCMalloc

    [Image: Ceph: release RAM used by TCMalloc (http://sebastien-han.fr/images/ceph-release-memory-tcmalloc.jpg)]

    Quick tip to release the memory that tcmalloc has allocated but which is not being used by the Ceph daemon itself.

    $ ceph tell osd.* heap release
    osd.0: osd.0 releasing free RAM back to system.
    osd.1: osd.1 releasing free RAM back to system.
    osd.2: osd.2 releasing free RAM back to system.
    osd.3: osd.3 releasing free RAM back to system.
    osd.4: osd.4 releasing free RAM back to system.
    osd.5: osd.5 releasing free RAM back to system.
    osd.6: osd.6 releasing free RAM back to system.
    osd.7: osd.7 releasing free RAM back to system.
    

    Et voilà!
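
    If you want to see how much memory tcmalloc is actually holding before or after releasing it, the same tell interface also exposes the allocator statistics, e.g. for a single OSD:

    $ ceph tell osd.0 heap stats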

  • September 7, 2015
    The Ceph and TCMalloc performance story

    [Image: The Ceph and TCMalloc performance story (http://sebastien-han.fr/images/ceph-tcmalloc-performance.jpg)]

    This article simply relays some recent discoveries made around Ceph performance.
    The finding behind this story is one of the biggest im…
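
    The excerpt above is cut off, but one tunable that came up repeatedly in these performance discussions is tcmalloc’s thread cache size; whether that is exactly the finding the article refers to is not visible here. As a hedged illustration only (the value and the way it is set are examples, not a recommendation from the post):

    # in the environment of the ceph-osd processes
    TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728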

  • September 4, 2015
    Ceph at the OpenStack Summit Tokyo 2015

    [Image: OpenStack Summit Tokyo: time to vote (http://sebastien-han.fr/images/openstack-summit-tokyo.jpg)]

    With this article, I would like to take the opportunity to thank you all for voting for our presentation.
    It is always with a great pleasur…

  • September 2, 2015
    Ceph: validate that the RBD cache is active

    [Image: Ceph: validate that the RBD cache is active (http://sebastien-han.fr/images/ceph-check-rbd-cache-isenabled.jpg)]

    Quick and simple test to validate if the RBD cache is enabled on your client.

    First things first: if you are running Ceph 0.87 or newer, the cache is enabled by default.
    If not, you can simply enable it in the [client] section:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    

    Then you need to set two options in the [client] section of your ceph.conf:

    [client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/
    

    Both paths must be writable by the user running the application that uses the RBD library, and the security context (SELinux or AppArmor) must be configured properly.
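
    For example, on a libvirt/qemu hypervisor this could mean something like the following (the qemu user and group are an assumption and depend on your distribution):

    $ sudo mkdir -p /var/run/ceph /var/log/ceph
    $ sudo chown qemu:qemu /var/run/ceph /var/log/ceph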

    Once this is done, run your application that is supposed to use librbd (a virtual machine or something else) and simply query the admin socket:

    $ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.66606.140190886662256.asok config show | grep rbd_cache
        "rbd_cache": "true",
        "rbd_cache_writethrough_until_flush": "true",
        "rbd_cache_size": "33554432",
        "rbd_cache_max_dirty": "25165824",
        "rbd_cache_target_dirty": "16777216",
        "rbd_cache_max_dirty_age": "1",
        "rbd_cache_max_dirty_object": "0",
        "rbd_cache_block_writes_upfront": "false",
    

    Verify the cache behaviour

    If you want to go further and test the performance enhancement brought by the cache, you can turn it off in the [client] section of your ceph.conf with rbd cache = false.
    Then run a benchmark using the following command (assuming the rbd pool exists):

    $ rbd -p rbd bench-write fio --io-size 4096 --io-threads 256 --io-total 1024000000 --io-pattern seq
    

    Running this test with and without the cache should show a significant difference :).
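
    As a minimal sketch of that comparison (same pool and image name as above), disable the cache and re-run the exact same command; the rbd CLI re-reads ceph.conf on every invocation:

    [client]
    rbd cache = false

    $ rbd -p rbd bench-write fio --io-size 4096 --io-threads 256 --io-total 1024000000 --io-pattern seq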

    Enjoy!

  • September 1, 2015
    OpenStack Summit Tokyo 2015: Presentation

    The schedule for this year’s OpenStack Summit in Tokyo was announced a few days ago. One of my submissions was accepted. The presentation “99.999% available OpenStack Cloud – A builder’s guide” is scheduled for Thursday, October 29, 09:50 – 10:30.

    Other presentations from the Ceph community have also been accepted.

    Check out the links or the schedule for the dates and times of the talks.

  • August 31, 2015
    CephFS: determine a file location

    [Image: CephFS: determine a file location (http://sebastien-han.fr/images/cephfs-file-location.jpg)]

    Quick tip to determine the location of a file stored on CephFS.

    To achieve that, we simply need the inode number of this file.
    For this we wil…
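
    The excerpt stops short, but the general technique goes roughly like this (mount point, file name and data pool name are placeholders; your data pool may not be called cephfs_data):

    $ ino=$(stat -c %i /mnt/cephfs/myfile)                       # inode number, decimal
    $ printf '%x\n' "$ino"                                       # CephFS object names use the hex inode
    $ ceph osd map cephfs_data "$(printf '%x' "$ino").00000000"  # first object of the file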

  • August 24, 2015
    Ceph cluster on Docker for testing

    [Image: Ceph cluster on Docker for testing (http://sebastien-han.fr/images/ceph-docker-demo.jpg)]

    I haven’t really advertised this one much (even though I’ve been using it in some articles).
    Since people are still wondering how to quickly get a full…
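
    The quick way, roughly as it was documented at the time (the image name is kept as-is, the IP and subnet are placeholders, and the exact environment variables may differ between versions of the image):

    $ sudo docker run -d --net=host \
        -e MON_IP=192.168.0.20 \
        -e CEPH_NETWORK=192.168.0.0/24 \
        ceph/demo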

  • August 23, 2015
    faster debugging of a teuthology workunit

    The Ceph integration tests run via teuthology rely on workunits found in the Ceph repository. For instance: the /cephtool/test.sh workunit is modified; it is pushed to a wip- branch in the official Ceph git repository; the gitbuilder will automatically build packages for all supported distributions for this wip- branch; the rados/singleton/all/cephtool suite can be run with …Read more
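
    Scheduling that suite against the wip- branch typically goes through teuthology-suite; a rough sketch (suite, filter, branch and machine type are placeholders, and the exact options depend on your teuthology setup):

    $ teuthology-suite --suite rados/singleton --filter cephtool \
          --ceph wip-my-branch --machine-type <machine-type>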
