The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • November 6, 2015
    v9.2.0 Infernalis released

    This major release will be the foundation for the next stable series. There have been some major changes since v0.94.x Hammer, and the upgrade process is non-trivial. Please read these release notes carefully. MAJOR CHANGES FROM HAMMER

  • November 6, 2015
    v0.94.5 Hammer released

    This Hammer point release fixes a critical regression in librbd that can cause Qemu/KVM to crash when caching is enabled on images that have been cloned. All v0.94.4 Hammer users are strongly encouraged to upgrade. NOTABLE CHANGES librbd: potential assertion failure during cache read (issue#13559, pr#6348, Jason Dillaman) osd: osd/ReplicatedPG: remove stray debug line (issue#13455, …

  • October 19, 2015
    v0.94.4 Hammer released

    This Hammer point release fixes several important bugs in Hammer, as well as interoperability issues that must be addressed before an upgrade to Infernalis. That is, all users of earlier versions of Hammer or any version of Firefly will first need to upgrade to Hammer v0.94.4 or later before upgrading to Infernalis (or future releases). All …

  • October 13, 2015
    v9.1.0 Infernalis release candidate released

    This is the first Infernalis release candidate. There have been some major changes since Hammer, and the upgrade process is non-trivial. Please read carefully. GETTING THE RELEASE CANDIDATE The v9.1.0 packages are pushed to the development release repositories: http://download.ceph.com/rpm-testing http://download.ceph.com/debian-testing For more info, see: http://docs.ceph.com/docs/master/install/get-packages/ Or install with ceph-deploy via: ceph-deploy install --testing HOST KNOWN …

  • September 17, 2015
    Important security notice regarding signing key and binary downloads of Ceph

    Last week, Red Hat investigated an intrusion on the sites of both the Ceph community project (ceph.com) and Inktank (download.inktank.com), which were hosted on a computer system outside of Red Hat infrastructure. Ceph.com provided downloads of Ceph community versions signed with a Ceph signing key (id 7EBFDD5D17ED316D). Download.inktank.com provided releases of the Red Hat Ceph product …

  • September 15, 2015
    OpenStack Nova: configure multiple Ceph backends on one hypervisor

    [Image: OpenStack Nova configure multiple Ceph backends on one hypervisor (http://sebastien-han.fr/images/openstack-nova-configure-multiple-ceph-backends.jpg)]

    Configure a Nova hypervisor with more than one backend to store the instance’s root e…

  • September 9, 2015
    Ceph: release RAM used by TCMalloc

    [Image: Ceph release RAM used by TCMalloc (http://sebastien-han.fr/images/ceph-release-memory-tcmalloc.jpg)]

    Quick tip to release the memory that tcmalloc has allocated but which is not being used by the Ceph daemon itself.

    $ ceph tell osd.* heap release
    osd.0: osd.0 releasing free RAM back to system.
    osd.1: osd.1 releasing free RAM back to system.
    osd.2: osd.2 releasing free RAM back to system.
    osd.3: osd.3 releasing free RAM back to system.
    osd.4: osd.4 releasing free RAM back to system.
    osd.5: osd.5 releasing free RAM back to system.
    osd.6: osd.6 releasing free RAM back to system.
    osd.7: osd.7 releasing free RAM back to system.
    

    Et voilà!
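
    If you want to see how much memory tcmalloc is currently holding before handing it back, the heap stats subcommand (a sibling of heap release in the same ceph tell interface) can be queried first; a minimal before-and-after check might look like this:

    # Inspect tcmalloc heap statistics for every OSD; run it again after
    # 'heap release' to compare how much memory was handed back
    $ ceph tell osd.* heap stats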

  • September 7, 2015
    The Ceph and TCMalloc performance story

    [Image: The Ceph and TCMalloc performance story (http://sebastien-han.fr/images/ceph-tcmalloc-performance.jpg)]

    This article simply relays some recent discoveries made around Ceph performance.
    The finding behind this story is one of the biggest im…

  • September 4, 2015
    Ceph at the OpenStack Summit Tokyo 2015

    [Image: OpenStack Summit Tokyo: time to vote (http://sebastien-han.fr/images/openstack-summit-tokyo.jpg)]

    With this article, I would like to take the opportunity to thank you all for voting for our presentation.
    It is always with a great pleasur…

  • September 2, 2015
    Ceph: validate that the RBD cache is active

    [Image: Ceph validate that the RBD cache is active (http://sebastien-han.fr/images/ceph-check-rbd-cache-isenabled.jpg)]

    Quick and simple test to validate if the RBD cache is enabled on your client.

    First things first: if you are running a Ceph version equal to or newer than 0.87, the cache is enabled by default.
    If not, you can simply enable it in the [client] section:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    

    Then you need to set two options in the [client] section of your ceph.conf:

    [client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/
    

    Both paths must be writable by the user running the RBD library, and the security context (SELinux or AppArmor) must be configured properly.
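
    Note that the admin socket name embeds the process id and client id, so it will be different every time the application restarts. Assuming the path configured above, you can always list the directory to find the current socket:

    # List the admin sockets currently exposed under the configured path
    $ ls /var/run/ceph/*.asok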

    Once this is done, run your application that is supposed to use librbd (a virtual machine or something else) and simply request the admin daemon from a socket:

    $ sudo ceph --admin-daemon /var/run/ceph/ceph-client.admin.66606.140190886662256.asok config show | grep rbd_cache
        "rbd_cache": "true",
        "rbd_cache_writethrough_until_flush": "true",
        "rbd_cache_size": "33554432",
        "rbd_cache_max_dirty": "25165824",
        "rbd_cache_target_dirty": "16777216",
        "rbd_cache_max_dirty_age": "1",
        "rbd_cache_max_dirty_object": "0",
        "rbd_cache_block_writes_upfront": "false",
    

    Verify the cache behaviour

    If you want to go further and measure the performance gain brought by the cache, turn it off in the [client] section of your ceph.conf with rbd cache = false.
    Then run a benchmark using the following command (assuming the rbd pool exists):

    $ rbd -p rbd bench-write fio --io-size 4096 --io-threads 256 --io-total 1024000000 --io-pattern seq
    

    Finally, running this test with and without the cache enabled should show a significant difference :).
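
    For reference, the no-cache run just means flipping the setting from the earlier [client] snippet in ceph.conf:

    [client]
    # Disable the RBD cache for the baseline benchmark pass
    rbd cache = false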

    Enjoy!

  • September 1, 2015
    OpenStack Summit Tokyo 2015: Presentation

    The schedule for the next OpenStack Summit in Tokyo this year was announced some days ago. One of my submissions was accepted. The presentation “99.999% available OpenStack Cloud – A builder’s guide” is scheduled for Thursday, October 29, 09:50 – 10:30.

    Other presentations from the Ceph community have also been accepted; check out the links or the schedule for dates and times of the talks.

  • August 31, 2015
    CephFS: determine a file location

    [Image: CephFS determine a file location (http://sebastien-han.fr/images/cephfs-file-location.jpg)]

    Quick tip to determine the location of a file stored on CephFS.

    To achieve that we simply need the inode number of this file.
    For this we wil…
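
    In short, a minimal sketch of the recipe (assuming the data pool is called cephfs_data and using a hypothetical file path on a mounted CephFS):

    # Get the decimal inode number of the file
    $ ls -i /mnt/cephfs/myfile

    # CephFS stores file data in RADOS objects named <inode in hex>.<block number>,
    # so convert the inode to hexadecimal
    $ printf "%x\n" <inode>

    # Ask the cluster which placement group and OSDs hold the file's first object
    $ ceph osd map cephfs_data <inode_hex>.00000000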
