The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • August 2, 2018
    Rook: Automating Ceph for Kubernetes

    Rook is an orchestrator for storage services that run in a Kubernetes cluster. In the Rook v0.8 release, we are excited to say that the orchestration around Ceph has stabilized to the point that it can be declared Beta. If you haven’t yet started a Ceph cluster with Rook, now is the time to take it for a …Read more
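    If you want to try it, here is a minimal sketch of bringing up a Rook-managed Ceph cluster with the example manifests shipped in the Rook repository (the paths and the rook-ceph namespace are recalled from the v0.8 quickstart and may differ in other releases):

        $ git clone https://github.com/rook/rook.git
        $ cd rook/cluster/examples/kubernetes/ceph
        $ kubectl create -f operator.yaml      # deploy the Rook operator
        $ kubectl create -f cluster.yaml       # declare a Ceph cluster for the operator to build
        $ kubectl -n rook-ceph get pods        # watch the mon, mgr and OSD pods come up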

  • July 27, 2018
    Meeting report: Ceph Manager Dashboard F2F Meeting in Nuremberg, Germany

    From Tuesday, 17th to Thursday, 19th of July, the developers working on the Ceph Manager Dashboard held their first face-to-face meeting in Nuremberg, Germany. The meeting was hosted by SUSE and was attended by 23 representatives from Red Hat and SUSE who work on the dashboard or related components like Prometheus/Grafana or the …Read more

  • July 17, 2018
    Ceph User Survey 2018 results

    To better understand how our current users utilize Ceph, we conducted a public community survey from May 1st to May 25th 2018. The Ceph User Survey 2018 Slides contain the graphs from Survey Monkey. We are also making the (only slightly cleaned up) responses available under the Community Data License Agreement – Sharing, Version 1.0 here: …Read more

  • June 29, 2018
    New in Mimic: iostat plugin

    (this is a guest post by Mohamad Gebai from SUSE, who developed the iostat plugin) The Mimic release of Ceph has brought with it a small yet useful feature for monitoring the activity on a Ceph cluster: the iostat command, which comes in the form of a Ceph manager plugin. The iostat module is enabled …Read more
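    A minimal sketch of enabling and running it, as described in the post (verify against your Mimic installation):

        $ ceph mgr module enable iostat    # turn on the iostat manager module
        $ ceph iostat                      # continuously print cluster-wide read/write throughput and IOPS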

  • June 27, 2018
    New in Mimic: Simplified RBD Image Cloning

    Motivation: In previous Ceph releases, to create a clone of an image one first had to create a snapshot and then mark the snapshot as protected before attempting to clone:

        $ rbd snap create parent@snap
        $ rbd snap protect parent@snap
        $ rbd clone parent@snap clone

    This was a necessary evil to ensure RBD performed the proper book-keeping …Read more
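    In Mimic, with the new clone v2 format (used once the cluster requires Mimic or newer clients, e.g. after 'ceph osd set-require-min-compat-client mimic'; details recalled from the Mimic documentation), the protect step can be skipped. A rough sketch of the simplified flow:

        $ rbd snap create parent@snap
        $ rbd clone parent@snap clone      # no 'rbd snap protect' required
        $ rbd snap rm parent@snap          # the snapshot is retained internally until the last clone is flattened or removed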

  • June 14, 2018
    New in Mimic: centralized configuration management

    One of the key new features in Ceph Mimic is the ability to manage the cluster configuration (what traditionally resides in ceph.conf) in a central fashion. Starting in Mimic, we also store configuration information in the monitors' internal database, and seamlessly manage the distribution of that config info to all daemons and clients in the system. Historically, …Read more
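    A hedged sketch of the 'ceph config' family of commands this enables (the option and daemon names below are only examples):

        $ ceph config dump                                     # show all options currently stored in the monitors
        $ ceph config set osd osd_max_backfills 2              # set an option for all OSDs
        $ ceph config get osd.0                                # show the options that apply to a single daemon
        $ ceph config assimilate-conf -i /etc/ceph/ceph.conf   # import an existing ceph.conf into the monitors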

  • June 4, 2018
    Mimic contributor credits

    A new version of Ceph has been released, and we’ve had a steady inflow of new contributors and companies contributing to Ceph. The affiliation of authors with organizations can be updated by submitting a patch to https://github.com/ceph/ceph/blob/master/.organizationmap There were around 284 authors affiliated with over 68 companies contributing during this release cycle. Over the Mimic release …Read more

  • June 1, 2018
    New in Mimic: Introducing a new Ceph Manager Dashboard

    After an intensive 9-month development cycle, the Ceph project is happy to announce its next stable release: Ceph 13.2.0 “Mimic”. Mimic is the first version of Ceph published under the revised release schedule, in which a new stable release comes out every nine months. Previously, Ceph releases were made available in a …Read more

  • April 23, 2018
    Cephalocon APAC 2018 Report

    On March 22-23, 2018, the first Cephalocon in the world was successfully held in Beijing, China. Over the two conference days, more than 1000 people, including developers, users, companies, community members and other Ceph enthusiasts, attended the 52 keynotes and talks on enterprise applications, development, and operation and maintenance practices. Cephalocon was possible because of the support …Read more

  • October 25, 2017
    New in Luminous: PG overdose protection

    Choosing the right number of PGs (“placement groups”) for your cluster is a bit of a black art, and a usability nightmare. Getting a reasonable value can have a big impact on a cluster’s performance and reliability, for better or worse. Unfortunately, over the past few years we’ve seen our share of …Read more
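    The protection added in Luminous is a per-OSD PG limit enforced by the monitors. A rough sketch of checking and tuning it, assuming the Luminous option name mon_max_pg_per_osd (default 200, as I recall; verify against your release):

        $ ceph osd df                                             # the PGS column shows how many PGs each OSD currently holds
        $ ceph tell mon.* injectargs '--mon_max_pg_per_osd=300'   # temporarily raise the limit if pool creation or pg_num increases are refused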

  • October 23, 2017
    New in Luminous: Zabbix

    The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new Python modules. One of these is a module exporting overall cluster status and performance to Zabbix. Enabling the Zabbix module: The Zabbix module is included in the ceph-mgr package, so if you’ve …Read more
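    A minimal sketch of wiring it up (the hostname and identifier below are placeholders; command names follow the module's config-set/send interface described in the post):

        $ ceph mgr module enable zabbix
        $ ceph zabbix config-set zabbix_host zabbix.example.com   # Zabbix server to send to (placeholder hostname)
        $ ceph zabbix config-set identifier ceph-cluster          # name this cluster reports as (hypothetical value)
        $ ceph zabbix config-show                                  # review the module's current settings
        $ ceph zabbix send                                         # push the current status immediately instead of waiting for the interval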

  • October 19, 2017
    New in Luminous: RADOS improvements

    RADOS is the reliable autonomic distributed object store that underpins Ceph, providing a reliable, highly available, and scalable storage service to other components.  As with every Ceph release, Luminous includes a range of improvements to the RADOS core code (mostly in the OSD and monitor) that benefit all object, block, and file users. Parallel monitor …Read more
