The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • June 4, 2019
    v13.2.6 Mimic released

This is the sixth bugfix release of the Mimic v13.2.x long-term stable release series. We recommend all Mimic users upgrade. Notable Changes: Ceph v13.2.6 now packages python bindings for python3.6 instead of python3.4, because EPEL7 recently switched from python3.4 to python3.6 as the native python3. See the announcement for more details on the background …Read more

  • May 18, 2019
    New in Nautilus: CephFS Improvements

Work continues to improve the CephFS file system in Nautilus. As with the rest of Ceph, we have been dedicating significant developer time to improving usability and stability. The following sections go through each of these efforts in detail. MDS stability: MDS stability has been a major focus for developers in the past two releases. …Read more

  • May 14, 2019
    New in Nautilus: New Dashboard Functionality

    The Ceph Dashboard shipped with Ceph Mimic was the first step in replacing the original read-only dashboard with a more flexible and extensible architecture and adding management functionality derived from the openATTIC project. One goal for the team working on the dashboard for Ceph Nautilus was to reach feature parity with openATTIC, and we’re quite …Read more

  • May 10, 2019
    New in Nautilus: RADOS Highlights

BlueStore: Nautilus comes with a bunch of new features and improvements for RADOS. To begin with, BlueStore is even more awesome now! If you were ever wondering how BlueStore uses space on your devices, wonder no more. With Nautilus, BlueStore space utilization information is much more granular and accurate, with separate accounting of space …Read more

  • May 9, 2019
Part 3: RHCS BlueStore Performance Scalability (3 vs. 5 nodes)

Introduction: Welcome to episode 3 of the performance blog series. In this blog, we will explain the performance increase we get when scaling out the Ceph OSD node count of the RHCS cluster. A traditional scale-up storage architecture is built around two controllers connected to disk shelves. When the controllers reach 100% utilization, they create a …Read more

  • May 6, 2019
Part 2: Ceph Block Storage Performance on an All-Flash Cluster with BlueStore Backend

Introduction: As a recap, in Blog Episode 1 we covered an introduction to RHCS and BlueStore, lab hardware details, benchmarking methodology, and a performance comparison between the default and tuned Ceph configurations. This is the second episode of the performance blog series on RHCS 3.2 BlueStore running on the all-flash cluster. There is no rule of thumb to categorize block …Read more

  • May 2, 2019
    Rook v1.0: Nautilus Support and much more!

We are excited that Rook has reached a huge milestone… v1.0 has been released! Congrats to the Rook community for all the hard work to reach this critical milestone. This is another great release with many improvements for Ceph that solidify its use in production with Kubernetes clusters. Of all the many features and bug …Read more

  • May 2, 2019
Part 1: BlueStore (Default vs. Tuned) Performance Comparison

Acknowledgments: We would like to thank BBVA, Cisco, and Intel for providing the cutting-edge hardware used to run a Red Hat Ceph Storage 3.2 all-flash performance POC. The tests and results provided in this blog series are a joint effort of the partnership formed by BBVA, Intel, Cisco, and Red Hat. All partners …Read more

  • April 30, 2019
    New in Nautilus: crash dump telemetry

When Ceph daemons encounter software bugs, unexpected state, failed assertions, or other exceptional cases, they dump a stack trace and recent internal log activity to their log file in /var/log/ceph. On modern systems, systemd will restart the daemon and life will go on, often without the cluster administrator even realizing that there was a problem. This …Read more
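Nautilus exposes the collected crash reports through the `ceph crash` CLI. A minimal sketch of inspecting them on a running cluster (the crash ID below is a made-up placeholder, not a real report):

```shell
# List crash reports the cluster has collected
ceph crash ls

# Show the full metadata and stack trace for one report
# (the ID here is an illustrative placeholder)
ceph crash info 2019-04-30_12:34:56.789012Z_1a2b3c4d
```

These commands require a live cluster with the Nautilus mgr running, so they are shown here only as a sketch.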

  • April 29, 2019
    v14.2.1 Nautilus released

This is the first bugfix release of the Ceph Nautilus release series. We recommend all Nautilus users upgrade to this release. When upgrading from older releases of Ceph, the general guidelines for upgrading to Nautilus must be followed; see Upgrading from Mimic or Luminous. Notable Changes: The default value for mon_crush_min_required_version has been changed from firefly to …Read more

  • April 18, 2019
    New in Nautilus: ceph-iscsi Improvements

The ceph-iscsi project provides a framework, REST API, and CLI tool for creating and managing iSCSI targets and gateways for Ceph via LIO. It is the successor to, and a consolidation of, two formerly separate projects, ceph-iscsi-cli and ceph-iscsi-config, which were initially started in 2016 by Paul Cuzner at Red Hat. While this is not a new feature of Ceph Nautilus per se, improving …Read more

  • April 12, 2019
    New in Nautilus: device management and failure prediction

Ceph storage clusters ultimately rely on physical hardware devices (HDDs or SSDs) that can fail. Starting in Nautilus, management and tracking of physical devices is handled by Ceph itself. Furthermore, we’ve added infrastructure to collect device health metrics (e.g., SMART) and to predict device failures before they happen, either via a built-in pre-trained prediction model, or via …Read more
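The device tracking described above is surfaced through the `ceph device` commands added in Nautilus. A minimal sketch (the device ID below is an illustrative placeholder):

```shell
# List physical devices known to the cluster and the daemons using them
ceph device ls

# Dump the recorded health (SMART) metrics for one device
# (the device ID here is a made-up example)
ceph device get-health-metrics Samsung_SSD_850_S2R5NX0J500123
```

As with the crash commands, these need a live Nautilus cluster, so treat them as a sketch rather than a transcript.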