The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world.

  • October 2, 2017
    New in Luminous: CephFS subtree pinning

    The Ceph file system (CephFS) allows for portions of the file system tree to be carved up into subtrees which can be managed authoritatively by multiple MDS ranks. This empowers the cluster to scale performance with the size and usage of the file system by simply adding more MDS servers into the cluster. Where possible, …Read more
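
    As a quick illustration (a sketch assuming a CephFS mount at /mnt/cephfs and at least two active MDS ranks), a directory can be pinned to a particular rank by setting an extended attribute on it:

        $ setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects    # pin this subtree to MDS rank 1
        $ setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects   # -1 removes the pin

    Descendant directories inherit the pin unless they set one of their own.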

  • September 28, 2017
    New in Luminous: Dashboard

    The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new python modules. One of these is a simple monitoring web page called `dashboard`. Enabling the dashboard module: the dashboard module is included in the ceph-mgr package, so if you’ve upgraded to …Read more
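
    For example, on a Luminous cluster with ceph-mgr running, the module can be enabled and its URL found with:

        $ ceph mgr module enable dashboard
        $ ceph mgr services    # lists the address the dashboard is serving on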

  • September 26, 2017
    New in Luminous: CRUSH device classes

    The flexibility of the CRUSH map in controlling data placement in Ceph is one of the system’s great strengths.  It is also one of the most painful and awkward parts of the cluster to manage.  Previously, any non-trivial data placement policy required manual editing of the CRUSH map, either to adjust the hierarchy or to …Read more
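
    As a short sketch (assuming a pool named rbd and OSDs whose devices are detected as hdd or ssd), a "keep this pool on SSDs" policy becomes a couple of commands instead of a manual CRUSH map edit:

        $ ceph osd crush rule create-replicated fast default host ssd
        $ ceph osd pool set rbd crush_rule fast

    If a device class was not detected automatically, it can be set by hand with ceph osd crush set-device-class.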

  • September 20, 2017
    New in Luminous: Improved Scalability

    CERN has been a long-time Ceph user and active community member, running one of the largest production OpenStack clouds backed by Ceph.  They are also using Ceph for other storage use-cases, backing a range of high energy physics experiments.  Overall scalability is understandably an area of interest. Big Bang I and II: in 2015, CERN …Read more

  • September 20, 2017
    New in Luminous: Multiple Active Metadata Servers in CephFS

    The Ceph file system (CephFS) is the file storage solution for Ceph. Since the Jewel release it has been deemed stable in configurations using a single active metadata server (with one or more standbys for redundancy). Now in Luminous, configurations with multiple active metadata servers are stable and ready for deployment! This allows the CephFS metadata …Read more
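
    For instance, on a file system named cephfs with a spare standby daemon available, a second active MDS can be added with (in Luminous the multi-MDS flag must be allowed first):

        $ ceph fs set cephfs allow_multimds true
        $ ceph fs set cephfs max_mds 2    # a standby is promoted to rank 1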

  • September 19, 2017
    Call for mentors!

    The Ceph Project is very pleased to be participating in the upcoming round of Outreachy and we are now looking for mentors to guide the project interns. If you’re not familiar with Outreachy, the program “provides three-month internships for people from groups traditionally underrepresented in tech”, according to the Outreachy website. “Interns work remotely with …Read more

  • September 15, 2017
    New in Luminous: pool tags

    Congratulations on upgrading your cluster to the Luminous release of Ceph. You have diligently followed all the steps outlined in the upgrade notes to the point where you are ready to mark the upgrade complete by executing the following:

        $ ceph osd require-osd-release luminous

    Once your cluster has been flagged as fully upgraded to the …Read more
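
    Pool application tags can then be set; as an illustrative example (assuming a pool named mypool used by RBD):

        $ ceph osd pool application enable mypool rbd
        $ ceph osd pool application get mypool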

  • September 7, 2017
    New in Luminous: RGW Metadata Search

    RGW metadata search is a new feature that was added in Ceph Luminous. It enables integration with Elasticsearch to provide a search API to query an object store based on object metadata. A new zone type: a zone in the RGW multisite system is a set of radosgw daemons serving the same data, backed by …Read more
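
    A minimal sketch of creating such a zone (hypothetical zone names and endpoints; a full multisite setup involves more steps):

        $ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-es \
              --endpoints=http://rgw-es:80 --tier-type=elasticsearch \
              --tier-config=endpoint=http://elastic:9200,num_shards=10
        $ radosgw-admin period update --commit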

  • September 5, 2017
    New in Luminous: Upgrade Complete?

    When upgrading any distributed system, it’s easy to miss a step and have an old daemon running long after you thought the previous version was history. A common mistake is installing new packages and forgetting to restart the processes using them. With distributed storage like Ceph, it’s also important to remember the client side: the VMs, containers, file …Read more
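
    Luminous adds commands that report exactly this, for example:

        $ ceph versions    # running versions, per daemon type
        $ ceph features    # features/releases of connected clients too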

  • September 1, 2017
    New in Luminous: RGW dynamic bucket sharding

    Luminous features a new RGW capability to automatically manage the sharding of RGW bucket index objects. This completely automates management of RGW’s internal index objects, something that until now Ceph administrators had to pay close attention to in order to prevent users with very large buckets from causing performance and reliability problems. Background: one of …Read more
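
    With dynamic resharding enabled (the rgw_dynamic_resharding option, on by default in Luminous), the reshard queue and a bucket’s status can be inspected along these lines:

        $ radosgw-admin reshard list
        $ radosgw-admin reshard status --bucket=<bucket>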

  • September 1, 2017
    New in Luminous: BlueStore

    BlueStore is a new storage backend for Ceph.  It boasts better performance (roughly 2x for writes), full data checksumming, and built-in compression.  It is the new default storage backend for Ceph OSDs in Luminous v12.2.z and will be used by default when provisioning new OSDs with ceph-disk, ceph-deploy, and/or ceph-ansible. How fast? Roughly speaking, BlueStore …Read more
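
    As a sketch (assuming a blank device /dev/sdb being prepared as a new OSD with ceph-disk):

        $ ceph-disk prepare --bluestore /dev/sdb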

  • September 1, 2017
    Contributor credits for Luminous v12.2.0

    Yet another release of Ceph is out. Here is a sorted list of authors and organizations who contributed to v12.2.0, by number of commits or reviews back to v11.2.0. Thanks everyone for contributing, and keep up the good work! The affiliation of authors to organizations can be updated by submitting a patch to https://github.com/ceph/ceph/blob/master/.organizationmap. All commits are reviewed but …Read more
