The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • November 12, 2018
    Announcing The Ceph Foundation

    Today we are excited to announce the launch of The Ceph Foundation, a new organization to bring industry members together to support the Ceph open source community. The new foundation is organized as a directed fund under the Linux Foundation, which is also home to many other projects and cross-project foundations, including Linux and …Read more

  • November 2, 2018
    Ceph Community Newsletter, October 2018 edition

    Hey Cephers! We are catching up on our Community newsletters. This edition covers content from the end of July through October 2018. Enjoy! Announcements Ceph Day Berlin Our Ceph Day Berlin on November 12th is officially sold out! We have a great line up of talks and are looking forward to some productive discussions with …Read more

  • October 29, 2018
    Evaluating Ceph Deployments with Rook

    This summer I was lucky to get selected for an internship at CERN. As a CERN openlab Summer Student, I worked in the IT storage group for nine weeks, and my summer project was “Evaluating Ceph Deployments with Rook”. I previously had an amazing experience contributing to Ceph as an Outreachy intern, so I was …Read more

  • September 26, 2018
    New Dashboard landing page for Nautilus has been merged

    One of the future highly user-visible improvements in the Ceph Manager Dashboard is a new landing page that will use “native” JavaScript widgets to inform the user about the current state of the cluster at a glance. This feature has been in the works for quite some time now; after initial discussions on the related …Read more

  • September 11, 2018
    Ceph Day Warsaw (April 25, 2017)

    Hello Cephers! Information about Ceph Days coming to Poland created quite a stir in our development team. We have been working with Ceph from the inside out for a few years now, and the opportunity to share our view and approach to this innovative SDS was quite exciting! We were also simply curious as to who …Read more

  • August 2, 2018
    Rook: Automating Ceph for Kubernetes

    Rook is an orchestrator for storage services that run in a Kubernetes cluster. In the Rook v0.8 release, we are excited to say that the orchestration around Ceph has stabilized to the point of being declared Beta. If you haven’t yet started a Ceph cluster with Rook, now is the time to take it for a …Read more

  • July 27, 2018
    Meeting report: Ceph Manager Dashboard F2F Meeting in Nuremberg, Germany

    From Tuesday, 17th to Thursday, 19th of July, the developers working on the Ceph Manager Dashboard held their first face-to-face meeting in Nuremberg, Germany. The meeting was hosted by SUSE and was attended by 23 representatives from Red Hat and SUSE who work on the dashboard or related components like Prometheus/Grafana or the …Read more

  • July 17, 2018
    Ceph User Survey 2018 results

    To better understand how our current users utilize Ceph, we conducted a public community survey from May 1st to May 25th, 2018. The Ceph User Survey 2018 Slides contain the graphs from Survey Monkey. We are also making the (only slightly cleaned up) responses available under the Community Data License Agreement – Sharing, Version 1.0 here: …Read more

  • June 29, 2018
    New in Mimic: iostat plugin

    (This is a guest post by Mohamad Gebai from SUSE, who developed the iostat plugin.) The Mimic release of Ceph has brought with it a small yet useful feature for monitoring the activity on a Ceph cluster: the iostat command, which comes in the form of a Ceph manager plugin. The iostat module is enabled …Read more
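    As a sketch of how the plugin is used (standard Mimic `ceph` CLI; the period flag shown is an assumption about the exact option name), enabling and running it looks roughly like this:

    ```shell
    # Enable the iostat manager module that ships with Mimic
    ceph mgr module enable iostat

    # Stream cluster-wide throughput and IOPS, refreshing
    # every 5 seconds (-p sets the reporting period)
    ceph iostat -p 5
    ```

    Because it runs inside the manager, the module needs no extra agents on the OSD hosts; it reads the performance counters the mgr already collects.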

  • June 27, 2018
    New in Mimic: Simplified RBD Image Cloning

    Motivation In previous Ceph releases, to create a clone of an image one must first create a snapshot and then mark the snapshot as protected before attempting to clone:

    $ rbd snap create parent@snap
    $ rbd snap protect parent@snap
    $ rbd clone parent@snap clone

    This was a necessary evil to ensure RBD performed the proper book-keeping …Read more
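    A sketch of the simplified Mimic workflow: with the new clone format opted in, the protect step goes away (the `rbd_default_clone_format` option name is my assumption about how the feature is switched on, not quoted from the post):

    ```shell
    # Opt in to the new clone format ("clone v2") cluster-wide;
    # assumption: this is the relevant config option in Mimic
    ceph config set global rbd_default_clone_format 2

    # Snapshot and clone directly -- no "rbd snap protect" needed
    rbd snap create parent@snap
    rbd clone parent@snap clone
    ```

    The book-keeping that protection used to guarantee is handled internally, so a snapshot with live clones can no longer be removed out from under them.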

  • June 14, 2018
    New in Mimic: centralized configuration management

    One of the key new features in Ceph Mimic is the ability to manage the cluster configuration (what traditionally resides in ceph.conf) in a central fashion. Starting in Mimic, we also store configuration information in the monitors’ internal database, and seamlessly manage the distribution of that config info to all daemons and clients in the system. Historically, …Read more
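    The centralized workflow can be sketched with the Mimic `ceph config` subcommands (the option and daemon names below are illustrative, not taken from the post):

    ```shell
    # Import an existing ceph.conf into the monitors' config database
    ceph config assimilate-conf -i /etc/ceph/ceph.conf

    # Set an option centrally for all OSDs (illustrative option)
    ceph config set osd osd_max_backfills 2

    # Show the effective configuration for one daemon
    ceph config get osd.0

    # Dump everything stored in the central database
    ceph config dump
    ```

    Daemons pick up centrally stored options when they start (and, for options that support it, at runtime), so most clusters no longer need a hand-distributed ceph.conf.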

  • June 4, 2018
    Mimic contributor credits

    A new version of Ceph has been released, and we’ve had a steady inflow of new contributors and companies contributing to Ceph. The affiliation of authors to organizations can be updated by submitting a patch. There were around 284 authors affiliated with over 68 companies contributing during this release cycle. Over the Mimic release …Read more