The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • June 4, 2018
    Mimic contributor credits

    A new version of Ceph has been released, and we’ve seen a steady inflow of new contributors and companies contributing to Ceph. The affiliation of authors to organizations can be updated by submitting a patch to https://github.com/ceph/ceph/blob/master/.organizationmap. Around 284 authors affiliated with over 68 companies contributed during this release cycle. Over the Mimic release …Read more

  • June 1, 2018
    New in Mimic: Introducing a new Ceph Manager Dashboard

    After an intensive nine-month development cycle, the Ceph project is happy to announce its next stable release, Ceph 13.2.0 “Mimic”. Mimic is the first version of Ceph published under the revised release schedule, in which a new stable release is published every nine months. Previously, Ceph releases were made available in a …Read more

  • April 23, 2018
    Cephalocon APAC 2018 Report

    On March 22-23, 2018, the first-ever Cephalocon was successfully held in Beijing, China. Over the two conference days, more than 1000 people, including developers, users, companies, community members and other Ceph enthusiasts, attended the 52 keynotes and talks on enterprise applications, development, and operation and maintenance practices. Cephalocon was possible because of the support …Read more

  • October 25, 2017
    New in Luminous: PG overdose protection

    Choosing the right number of PGs (“placement groups”) for your cluster is a bit of a black art, and a usability nightmare (a rule-of-thumb sketch follows this list). Getting a reasonable value can have a big impact on a cluster’s performance and reliability, for better or for worse. Unfortunately, over the past few years we’ve seen our share of …Read more

  • October 23, 2017
    New in Luminous: Zabbix

    The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new Python modules. One of these exports overall cluster status and performance to Zabbix (a configuration sketch follows this list). Enabling the Zabbix module: The Zabbix module is included in the ceph-mgr package, so if you’ve …Read more

  • October 19, 2017
    New in Luminous: RADOS improvements

    RADOS is the reliable autonomic distributed object store that underpins Ceph, providing a reliable, highly available, and scalable storage service to other components.  As with every Ceph release, Luminous includes a range of improvements to the RADOS core code (mostly in the OSD and monitor) that benefit all object, block, and file users. Parallel monitor …Read more

  • October 16, 2017
    New in Luminous: Erasure Coding for RBD and CephFS

    Luminous now fully supports overwrites for erasure coded (EC) RADOS pools, allowing RBD and CephFS (as well as RGW) to directly consume erasure coded pools (a pool-creation sketch appears after this list). This has the potential to dramatically reduce the overall cost per terabyte of Ceph systems, since the usual 3x storage overhead of replication can be reduced to more like 1.2x …Read more

  • October 10, 2017
    New in Luminous: CephFS metadata server memory limits

    The Ceph file system uses a cluster of metadata servers to provide an authoritative cache for the CephFS metadata stored in RADOS (a cache-limit sketch appears after this list). The most basic reason for this is to maintain a hot set of metadata in memory without talking to the metadata pool in RADOS. Another important reason is to allow clients to also …Read more

  • October 2, 2017
    New in Luminous: CephFS subtree pinning

    The Ceph file system (CephFS) allows portions of the file system tree to be carved up into subtrees that can be managed authoritatively by multiple MDS ranks (a pinning example follows the list). This empowers the cluster to scale performance with the size and usage of the file system by simply adding more MDS servers to the cluster. Where possible, …Read more

  • September 28, 2017
    New in Luminous: Dashboard

    The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new Python modules. One of these is a simple monitoring web page, called `dashboard` (a two-command sketch appears after the list). Enabling the dashboard module: The dashboard module is included in the ceph-mgr package, so if you’ve upgraded to …Read more

  • September 26, 2017
    New in Luminous: CRUSH device classes

    The flexibility of the CRUSH map in controlling data placement in Ceph is one of the system’s great strengths. It is also one of the most painful and awkward parts of the cluster to manage (a device-class example follows the list). Previously, any non-trivial data placement policy required manual editing of the CRUSH map, either to adjust the hierarchy or to …Read more

  • September 20, 2017
    New in Luminous: Improved Scalability

    CERN has been a long-time Ceph user and active community member, running one of the largest production OpenStack clouds backed by Ceph. They are also using Ceph for other storage use cases, backing a range of high energy physics experiments. Overall scalability is understandably an area of interest. Big Bang I and II: In 2015, CERN …Read more
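
A few command sketches for the Luminous items above follow; each is a minimal illustration under stated assumptions, not an authoritative recipe.

First, a rule-of-thumb sketch for “New in Luminous: PG overdose protection”. The arithmetic is the commonly cited sizing guideline rather than anything taken from the post itself, and the pool name, PG count, and OSD numbers are made-up examples.

    # Commonly cited guideline (an assumption, not a hard rule):
    #   total PG count ~= (OSD count * 100) / replica count, rounded to a power of two.
    #   E.g. 12 OSDs with 3x replication: 12 * 100 / 3 = 400 -> pick 512 (or 256 if little growth is expected).
    $ ceph osd df                                      # the PGS column shows how many PGs each OSD holds today
    $ ceph osd pool create mypool 512 512 replicated   # 'mypool' and 512 are illustrative values
    # Luminous adds the guard rail the post describes: pool creations or pg_num increases are
    # refused if they would push any OSD past the mon_max_pg_per_osd limit.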
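A minimal configuration sketch for “New in Luminous: Zabbix”, assuming a Luminous cluster and a Zabbix server reachable from the active ceph-mgr host; the hostname is a placeholder and the commands reflect the module’s Luminous-era interface.

    $ ceph mgr module enable zabbix                          # turn the ceph-mgr module on
    $ ceph zabbix config-set zabbix_host zabbix.example.com  # where to send data (placeholder hostname)
    $ ceph zabbix config-show                                # review the module's current settings
    $ ceph zabbix send                                       # push a report now instead of waiting for the interval
    # The module shells out to zabbix_sender, so that binary must be installed on the mgr host.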
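A pool-creation sketch for “New in Luminous: Erasure Coding for RBD and CephFS”. Profile, pool names, and PG counts are illustrative; with k=4, m=2 the raw overhead is (k+m)/k = 1.5x, while a profile such as k=10, m=2 lands at the 1.2x the post mentions.

    # EC overwrites require BlueStore OSDs.
    $ ceph osd erasure-code-profile set myprofile k=4 m=2
    $ ceph osd pool create ecpool 64 64 erasure myprofile
    $ ceph osd pool set ecpool allow_ec_overwrites true      # must be enabled before RBD/CephFS can use the pool
    # RBD keeps image metadata in a replicated pool; only the data objects land on the EC pool.
    $ ceph osd pool create rbdmeta 64 64 replicated
    $ rbd create rbdmeta/myimage --size 1G --data-pool ecpool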
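A cache-limit sketch for “New in Luminous: CephFS metadata server memory limits”, assuming the Luminous-era mds_cache_memory_limit option; the 8 GiB figure and the daemon name are arbitrary examples.

    # ceph.conf on the MDS hosts; the value is in bytes (8 GiB here)
    [mds]
        mds_cache_memory_limit = 8589934592

    # or adjust a running daemon at runtime ('a' is a placeholder MDS name)
    $ ceph tell mds.a injectargs '--mds_cache_memory_limit=8589934592'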
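A pinning example for “New in Luminous: CephFS subtree pinning”, run against a mounted CephFS; the mount point, directory names, and rank numbers are placeholders, and pinning only matters once more than one MDS rank is active.

    $ setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects   # pin this subtree to MDS rank 1
    $ setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects  # -1 removes the pin and returns the subtree to the balancer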
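A two-command sketch for “New in Luminous: Dashboard”; the port mentioned in the comment is the Luminous default as best I recall, so verify it on your cluster.

    $ ceph mgr module enable dashboard   # enable the ceph-mgr dashboard module
    $ ceph mgr services                  # prints the URL the active mgr serves, typically http://<active-mgr>:7000/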
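A device-class example for “New in Luminous: CRUSH device classes”, showing how a pool can be steered onto SSDs without hand-editing the CRUSH map; rule, pool, and OSD names are illustrative.

    $ ceph osd crush class ls                                       # classes (hdd/ssd/nvme) are usually detected automatically
    $ ceph osd crush set-device-class ssd osd.12                    # assign one by hand if needed (remove any existing class first)
    $ ceph osd crush rule create-replicated fast default host ssd   # replicated rule restricted to the 'ssd' class
    $ ceph osd pool set mypool crush_rule fast                      # point an existing pool at the new rule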
