The Ceph Blog


August 2, 2017

v12.1.2 Luminous RC released

This is the third release candidate for Luminous, the next long term stable release.

Ceph Luminous (v12.2.0) will be the foundation for the next long-term
stable release series. There have been major changes since Kraken
(v11.2.z) and Jewel (v10.2.z), and the upgrade process is non-trivial.
Please read these release notes carefully.

Major Changes from Kraken

  • General:

  • RADOS:

    • BlueStore:

      • The new BlueStore backend for ceph-osd is now stable and the new
        default for newly created OSDs. BlueStore manages data stored by each OSD
        by directly managing the physical HDDs or SSDs without the use of an
        intervening file system like XFS. This provides greater performance
        and features.
      • BlueStore supports full data and metadata checksums of all
        data stored by Ceph.
      • BlueStore supports inline compression using zlib, snappy, or LZ4. (Ceph
        also supports zstd for RGW compression but zstd is not recommended for
        BlueStore for performance reasons.) FIXME DOCS
    • Erasure coded pools now have full support for overwrites,
      allowing them to be used with RBD and CephFS.

    • The configuration option “osd pool erasure code stripe width” has
      been replaced by “osd pool erasure code stripe unit”, and given the
      ability to be overridden by the erasure code profile setting
      “stripe_unit”. For more details see “Erasure Code Profiles” in the
      documentation.

    • rbd and cephfs can use erasure coding with bluestore. This may be
      enabled by setting ‘allow_ec_overwrites’ to ‘true’ for a pool. Since
      this relies on bluestore’s checksumming to do deep scrubbing,
      enabling this on a pool stored on filestore is not allowed.
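      As a minimal sketch (the pool name ec_data and the PG count are
      hypothetical), overwrites can be enabled on an erasure coded pool
      backed by BlueStore OSDs like so:

      ```shell
      # Create an erasure coded pool (64 PGs) with the default EC profile.
      ceph osd pool create ec_data 64 64 erasure
      # Allow overwrites so RBD and CephFS can use the pool for data.
      ceph osd pool set ec_data allow_ec_overwrites true
      ```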

    • The ‘rados df’ JSON output now prints numeric values as numbers instead of
      strings.

    • The mon_osd_max_op_age option has been renamed to
      mon_osd_warn_op_age (default: 32 seconds), to indicate we
      generate a warning at this age. There is also a new
      mon_osd_err_op_age_ratio that is expressed as a multiple of
      mon_osd_warn_op_age (default: 128, for roughly 60 minutes) to
      control when an error is generated.

    • The default maximum size for a single RADOS object has been reduced from
      100GB to 128MB. The 100GB limit was never practical,
      while the 128MB limit is a bit high but not unreasonable. If you have an
      application written directly to librados that is using objects larger than
      128MB you may need to adjust osd_max_object_size.
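      For example, the limit could be raised with a ceph.conf fragment
      like the following (the 1GB value is purely illustrative):

      ```ini
      # ceph.conf fragment: raise the per-object size cap for a
      # librados application that genuinely needs larger objects.
      [osd]
      osd max object size = 1073741824
      ```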

    • The semantics of the ‘rados ls’ and librados object listing
      operations have always been a bit confusing in that “whiteout”
      objects (which logically don’t exist and will return ENOENT if you
      try to access them) are included in the results. Previously
      whiteouts only occurred in cache tier pools. In luminous, logically
      deleted but snapshotted objects now result in a whiteout object, and
      as a result they will appear in ‘rados ls’ results, even though
      trying to read such an object will result in ENOENT. The ‘rados
      listsnaps’ operation can be used in such a case to enumerate which
      snapshots are present.

      This may seem a bit strange, but is less strange than having a
      deleted-but-snapshotted object not appear at all and be completely
      hidden from librados’s ability to enumerate objects. Future
      versions of Ceph will likely include an alternative object
      enumeration interface that makes it more natural and efficient to
      enumerate all objects along with their snapshot and clone metadata.
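      For instance (pool and object names hypothetical), a whiteout can be
      distinguished from a readable object as follows:

      ```shell
      # Lists all objects, including whiteouts left by
      # deleted-but-snapshotted objects.
      rados -p mypool ls
      # Reading a whiteout returns ENOENT; enumerate its snapshots instead.
      rados -p mypool listsnaps myobject
      ```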

    • ceph-mgr:

      • There is a new daemon, ceph-mgr, which is a required part of any
        Ceph deployment. Although IO can continue when ceph-mgr is
        down, metrics will not refresh and some metrics-related calls
        (e.g., ceph df) may block. We recommend deploying several instances of
        ceph-mgr for reliability. See the notes on Upgrading below.

      • The ceph-mgr daemon includes a REST-based management API. The API is still experimental and somewhat limited but
        will form the basis for API-based management of Ceph going forward.

      • ceph-mgr also includes a Prometheus exporter plugin, which can provide Ceph perfcounters to Prometheus.
      • The status ceph-mgr module is enabled by default, and initially provides two
        commands: ceph tell mgr osd status and ceph tell mgr fs status. These
        are high level colorized views to complement the existing CLI.

    • The overall scalability of the cluster has improved. We have
      successfully tested clusters with up to 10,000 OSDs.

    • Each OSD can now have a device class associated with
      it (e.g., hdd or ssd), allowing CRUSH rules to trivially map
      data to a subset of devices in the system. Manually writing CRUSH
      rules or manual editing of the CRUSH is normally not required.
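      A sketch of how this fits together (OSD ids, rule, and pool names are
      hypothetical; classes are normally detected automatically):

      ```shell
      # Tag two OSDs as SSDs, create a replicated rule restricted to the
      # ssd class, and point a pool at it.
      ceph osd crush set-device-class ssd osd.0 osd.1
      ceph osd crush rule create-replicated fast default host ssd
      ceph osd pool set mypool crush_rule fast
      ```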

    • You can now optimize CRUSH weights to maintain a near-perfect
      distribution of data
      across OSDs. FIXME DOCS

    • There is also a new upmap exception
      mechanism that allows individual PGs to be moved around to achieve
      a perfect distribution (this requires luminous clients).

    • Each OSD now adjusts its default configuration based on whether the
      backing device is an HDD or SSD. Manual tuning is generally not required.

    • The prototype mClock QoS queueing algorithm is now available.

    • There is now a backoff mechanism that prevents OSDs from being
      overloaded by requests to objects or PGs that are not currently able to
      process IO.

    • There is a simplified OSD replacement process that is more robust.

    • You can query the supported features and (apparent) releases of
      all connected daemons and clients with ceph features.

    • You can configure the oldest Ceph client version you wish to allow to
      connect to the cluster via ceph osd set-require-min-compat-client and
      Ceph will prevent you from enabling features that will break compatibility
      with those clients.
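      For example, to require luminous-or-newer clients and then confirm
      what is actually connected:

      ```shell
      # Refuse features that pre-luminous clients cannot understand.
      ceph osd set-require-min-compat-client luminous
      # Summarize features/releases of connected daemons and clients.
      ceph features
      ```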

    • Several sleep settings, including osd_recovery_sleep,
      osd_snap_trim_sleep, and osd_scrub_sleep have been
      reimplemented to work efficiently. (These are used in some cases
      to work around issues throttling background work.)

    • The deprecated ‘crush_ruleset’ property has finally been removed; please use
      ‘crush_rule’ instead for the ‘osd pool get …’ and ‘osd pool set ..’ commands.

    • The ‘osd pool default crush replicated ruleset’ option has been
      removed and replaced by the ‘osd pool default crush rule’ option.
      By default it is -1, which means the mon will pick the first
      replicated rule in the CRUSH map for replicated pools. Erasure
      coded pools have rules that are automatically created for them if they are
      not specified at pool creation time.
  • RGW:

    • RGW metadata search backed by ElasticSearch now supports end
      user requests serviced via RGW itself, and also supports custom
      metadata fields. A query language and a set of RESTful APIs were
      created so that users can search objects by their
      metadata. New APIs that allow control of custom metadata fields
      were also added.
    • RGW now supports dynamic bucket index sharding. As the number
      of objects in a bucket grows, RGW will automatically reshard the
      bucket index in response. No user intervention or bucket size
      capacity planning is required.
    • RGW introduces server side encryption of uploaded objects with
      three options for the management of encryption keys: automatic
      encryption (only recommended for test setups), customer provided
      keys similar to Amazon SSE-C specification, and through the use of
      an external key management service (OpenStack Barbican) similar
      to Amazon SSE-KMS specification.
    • RGW now has preliminary AWS-like bucket policy API support. For
      now, policy is a means to express a range of new authorization
      concepts. In the future it will be the foundation for additional
      auth capabilities such as STS and group policy.
    • RGW has consolidated several metadata index pools via the use of rados
      namespaces.
  • RBD:

    • RBD now has full, stable support for erasure coded pools via the new
      --data-pool option to rbd create.
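      A brief sketch (image and pool names hypothetical; assumes an EC pool
      with allow_ec_overwrites already enabled):

      ```shell
      # Image metadata stays in the replicated rbd pool;
      # image data is stored in the erasure coded pool.
      rbd create --size 10240 --data-pool ec_data rbd/myimage
      ```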
    • RBD mirroring’s rbd-mirror daemon is now highly available. We
      recommend deploying several instances of rbd-mirror for reliability.
    • The default ‘rbd’ pool is no longer created automatically during
      cluster creation. Additionally, the name of the default pool used
      by the rbd CLI when no pool is specified can be overridden via a
      new rbd default pool = <pool name> configuration option.
    • Initial support for deferred image deletion via the new rbd trash
      CLI commands. Images, even ones actively in-use by
      clones, can be moved to the trash and deleted at a later time.
    • New pool-level rbd mirror pool promote and rbd mirror pool
      demote commands to batch promote/demote all mirrored images
      within a pool.
    • Mirroring now optionally supports a configurable replication delay
      via the rbd mirroring replay delay = <seconds> configuration option.
    • Improved discard handling when the object map feature is enabled.
    • rbd CLI import and copy commands now detect sparse and
      preserve sparse regions.
    • Images and snapshots now include a creation timestamp.
  • CephFS:

    • Running multiple active MDS daemons is now considered stable. The number
      of active MDS servers may be adjusted up or down on an active CephFS
      file system.
    • CephFS directory fragmentation is now stable and enabled by
      default on new filesystems. To enable it on existing filesystems
      use “ceph fs set <fs_name> allow_dirfrags”. Large or very busy
      directories are sharded and (potentially) distributed across
      multiple MDS daemons automatically.
    • Directory subtrees can be explicitly pinned to specific MDS daemons in
      cases where the automatic load balancing is not desired or effective.
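      As a sketch (filesystem name, rank, and directory are hypothetical),
      these features can be exercised like so:

      ```shell
      # Scale up to two active MDS daemons and enable fragmentation
      # on an existing filesystem.
      ceph fs set cephfs max_mds 2
      ceph fs set cephfs allow_dirfrags true
      # Pin a busy directory's subtree to MDS rank 1 (from a client mount).
      setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/busydir
      ```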
  • Miscellaneous:

    • Release packages are now being built for Debian Stretch. Note
      that QA is limited to CentOS and Ubuntu (xenial and trusty). The
      distributions we now build for include:

      • CentOS 7 (x86_64 and aarch64)
      • Debian 8 Jessie (x86_64)
      • Debian 9 Stretch (x86_64)
      • Ubuntu 16.04 Xenial (x86_64 and aarch64)
      • Ubuntu 14.04 Trusty (x86_64)
    • CLI changes:
      • The ceph -s or ceph status command has a fresh look.
      • ceph mgr metadata will dump metadata associated with each mgr daemon.
      • ceph versions or ceph {osd,mds,mon,mgr} versions
        summarize versions of running daemons.
      • ceph {osd,mds,mon,mgr} count-metadata <property> similarly
        tabulates any other daemon metadata visible via the ceph
        {osd,mds,mon,mgr} metadata commands.
      • ceph features summarizes features and releases of connected
        clients and daemons.
      • ceph osd require-osd-release <release> replaces the old
        require_RELEASE_osds flags.
      • ceph osd pg-upmap, ceph osd rm-pg-upmap, ceph osd
        pg-upmap-items, ceph osd rm-pg-upmap-items can explicitly
        manage upmap items (see Using the pg-upmap).
      • ceph osd getcrushmap returns a crush map version number on
        stderr, and ceph osd setcrushmap [version] will only inject
        an updated crush map if the version matches. This allows crush
        maps to be updated offline and then reinjected into the cluster
        without fear of clobbering racing changes (e.g., by newly added
        osds or changes by other administrators).
      • ceph osd create has been replaced by ceph osd new. This
        should be hidden from most users by user-facing tools like ceph-disk.
      • ceph osd destroy will mark an OSD destroyed and remove its
        cephx and lockbox keys. However, the OSD id and CRUSH map entry
        will remain in place, allowing the id to be reused by a
        replacement device with minimal data rebalancing.
      • ceph osd purge will remove all traces of an OSD from the
        cluster, including its cephx encryption keys, dm-crypt lockbox
        keys, OSD id, and crush map entry.
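        A hypothetical disk-replacement sketch (OSD id 7 is illustrative):

        ```shell
        # Keep id 7 and its CRUSH entry so a replacement disk can reuse them.
        ceph osd destroy 7 --yes-i-really-mean-it
        # ...or, to retire the OSD permanently, remove every trace instead.
        ceph osd purge 7 --yes-i-really-mean-it
        ```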
      • ceph osd ls-tree <name> will output a list of OSD ids under
        the given CRUSH name (like a host or rack name). This is useful
        for applying changes to entire subtrees. For example, ceph
        osd down `ceph osd ls-tree rack1`
      • ceph osd {add,rm}-{noout,noin,nodown,noup} allow the
        noout, noin, nodown, and noup flags to be applied to
        specific OSDs.
      • ceph log last [n] will output the last n lines of the cluster log.
      • ceph mgr dump will dump the MgrMap, including the currently active
        ceph-mgr daemon and any standbys.
      • ceph mgr module ls will list active ceph-mgr modules.
      • ceph mgr module {enable,disable} <name> will enable or
        disable the named mgr module. The module must be present in the
        configured mgr_module_path on the host(s) where ceph-mgr is running.
      • ceph osd crush swap-bucket <src> <dest> will swap the
        contents of two CRUSH buckets in the hierarchy while preserving
        the buckets’ ids. This allows an entire subtree of devices to
        be replaced (e.g., to replace an entire host of FileStore OSDs
        with newly-imaged BlueStore OSDs) without disrupting the
        distribution of data across neighboring devices.
      • ceph osd set-require-min-compat-client <release> configures
        the oldest client release the cluster is required to support.
        Other changes, like CRUSH tunables, will fail with an error if
        they would violate this setting. Changing this setting also
        fails if clients older than the specified release are currently
        connected to the cluster.
      • ceph config-key dump dumps config-key entries and their
        contents. (The existing ceph config-key list only dumps the key
        names, not the values.)
      • ceph config-key list is deprecated in favor of ceph config-key ls.
      • ceph auth list is deprecated in favor of ceph auth ls.
      • ceph osd crush rule list is deprecated in favor of ceph osd crush rule ls.
      • ceph osd set-{full,nearfull,backfillfull}-ratio sets the
        cluster-wide ratio for various full thresholds (when the cluster
        refuses IO, when the cluster warns about being close to full,
        and when an OSD will defer rebalancing a PG to itself, respectively).
      • ceph osd reweightn will specify the reweight values for
        multiple OSDs in a single command. This is equivalent to a series of
        ceph osd reweight commands.
      • ceph osd crush class {rm,ls,ls-osd} manage the new
        CRUSH device class feature. ceph osd crush set-device-class
        <class> <osd> [<osd>...]
        will set the class for particular devices.
        Note that if you specify a non-existent class, it will be created
        automatically. ceph osd crush rm-device-class <osd> [<osd>...]
        will instead remove the class for particular devices.
        And if a class contains no more devices, it will be automatically
        removed.
      • ceph osd crush rule create-replicated replaces the old
        ceph osd crush rule create-simple command to create a CRUSH
        rule for a replicated pool. Notably it takes a class argument
        for the device class the rule should target (e.g., ssd or hdd).
      • ceph mon feature ls will list monitor features recorded in the
        MonMap. ceph mon feature set will set an optional feature (none of
        these exist yet).
      • ceph tell <daemon> help will now return a usage summary.

Major Changes from Jewel

  • RADOS:
    • We now default to the AsyncMessenger (ms type = async) instead
      of the legacy SimpleMessenger.  The most noticeable difference is
      that we now use a fixed sized thread pool for network connections
      (instead of two threads per socket with SimpleMessenger).
    • Some OSD failures are now detected almost immediately, whereas
      previously the heartbeat timeout (which defaults to 20 seconds)
      had to expire.  This prevents IO from blocking for an extended
      period for failures where the host remains up but the ceph-osd
      process is no longer running.
    • The size of encoded OSDMaps has been reduced.
    • The OSDs now quiesce scrubbing when recovery or rebalancing is in progress.
  • RGW:
    • RGW now supports the S3 multipart object copy-part API.
    • It is now possible to reshard an existing bucket offline. Offline
      bucket resharding currently requires that all IO (especially
      writes) to the specific bucket is quiesced. (For automatic online
      resharding, see the new feature in Luminous above.)
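      A sketch of an offline reshard (bucket name and shard count are
      hypothetical; quiesce writes to the bucket first):

      ```shell
      # Rewrite the bucket index into 32 shards while IO is quiesced.
      radosgw-admin bucket reshard --bucket=mybucket --num-shards=32
      ```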
    • RGW now supports data compression for objects.
    • Civetweb version has been upgraded to 1.8.
    • The Swift static website API is now supported (S3 static website support
      was added previously).
    • S3 bucket lifecycle API has been added. Note that currently it only supports
      object expiration.
    • Support for custom search filters has been added to the LDAP auth
      implementation.
    • Support for NFS version 3 has been added to the RGW NFS gateway.
    • A Python binding has been created for librgw.
  • RBD:
    • The rbd-mirror daemon now supports replicating dynamic image
      feature updates and image metadata key/value pairs from the
      primary image to the non-primary image.
    • The number of image snapshots can be optionally restricted to a
      configurable maximum.
    • The rbd Python API now supports asynchronous IO operations.
  • CephFS:
    • libcephfs function definitions have been changed to enable proper
      uid/gid control. The library version has been increased to reflect the
      interface change.
    • Standby replay MDS daemons now consume less memory on workloads
      doing deletions.
    • Scrub now repairs backtrace, and populates damage ls with
      discovered errors.
    • A new pg_files subcommand to cephfs-data-scan can identify
      files affected by a damaged or lost RADOS PG.
    • The false-positive “failing to respond to cache pressure” warnings have
      been fixed.

Upgrade from Jewel or Kraken

  1. Ensure that the sortbitwise flag is enabled:

    # ceph osd set sortbitwise
  2. Make sure your cluster is stable and healthy (no down or
    recovering OSDs). (Optional, but recommended.)

  3. Do not create any new erasure-code pools while upgrading the monitors.

  4. You can monitor the progress of your upgrade at each stage with the
    ceph versions command, which will tell you what ceph version is
    running for each type of daemon.

  5. Set the noout flag for the duration of the upgrade. (Optional
    but recommended.):

    # ceph osd set noout
  6. Upgrade monitors by installing the new packages and restarting the
    monitor daemons. Note that, unlike prior releases, the ceph-mon
    daemons must be upgraded first:

    # systemctl restart ceph-mon.target

    Verify the monitor upgrade is complete once all monitors are up by
    looking for the luminous feature string in the mon map. For example,

    # ceph mon feature ls

    should include luminous under persistent features:

    on current monmap (epoch NNN)
       persistent: [kraken,luminous]
       required: [kraken,luminous]
  7. Add or restart ceph-mgr daemons. If you are upgrading from
    kraken, upgrade packages and restart ceph-mgr daemons with:

    # systemctl restart ceph-mgr.target

    If you are upgrading from kraken, you may already have ceph-mgr
    daemons deployed. If not, or if you are upgrading from jewel, you
    can deploy new daemons with tools like ceph-deploy or ceph-ansible.
    For example:

    # ceph-deploy mgr create HOST

    Verify the ceph-mgr daemons are running by checking ceph -s:

    # ceph -s
       mon: 3 daemons, quorum foo,bar,baz
       mgr: foo(active), standbys: bar, baz
  8. Upgrade all OSDs by installing the new packages and restarting the
    ceph-osd daemons on all hosts:

    # systemctl restart ceph-osd.target

    You can monitor the progress of the OSD upgrades with the new
    ceph versions or ceph osd versions command:

    # ceph osd versions
       {
          "ceph version 12.2.0 (...) luminous (stable)": 12,
          "ceph version 10.2.6 (...)": 3,
       }
  9. Upgrade all CephFS daemons by upgrading packages and restarting
    daemons on all hosts:

    # systemctl restart ceph-mds.target
  10. Upgrade all radosgw daemons by upgrading packages and restarting
    daemons on all hosts:

    # systemctl restart ceph-radosgw.target
  11. Complete the upgrade by disallowing pre-luminous OSDs:

    # ceph osd require-osd-release luminous

    If you set noout at the beginning, be sure to clear it with:

    # ceph osd unset noout
  12. Verify the cluster is healthy with ceph health.

Upgrading from pre-Jewel releases (like Hammer)

You must first upgrade to Jewel (10.2.z) before attempting an
upgrade to Luminous.

Upgrade compatibility notes, Kraken to Luminous

  • We no longer test the FileStore ceph-osd backend in combination with
    btrfs. We recommend against using btrfs. If you are using
    btrfs-based OSDs and want to upgrade to luminous you will need to
    add the following to your ceph.conf:

    enable experimental unrecoverable data corrupting features = btrfs

    The code is mature and unlikely to change, but we are only
    continuing to test the Jewel stable branch against btrfs. We
    recommend moving these OSDs to FileStore with XFS or BlueStore.

  • The ruleset-* properties for the erasure code profiles have been
    renamed to crush-* to (1) move away from the obsolete ‘ruleset’
    term and (2) be more clear about their purpose. There is also a new
    optional crush-device-class property to specify a CRUSH device
    class to use for the erasure coded pool. Existing erasure code
    profiles will be converted automatically when upgrade completes
    (when the ceph osd require-osd-release luminous command is run)
    but any provisioning tools that create erasure coded pools may need
    to be updated.

  • The structure of the XML output for osd crush tree has changed
    slightly to better match the osd tree output. The top level
    structure is now nodes instead of crush_map_roots.

  • When assigning a network to the public network and not to
    the cluster network the network specification of the public
    network will be used for the cluster network as well.
    In older versions this would lead to cluster services
    being bound to 0.0.0.0:<port>, thus making the
    cluster service even more publicly available than the
    public services. When only specifying a cluster network it
    will still result in the public services binding to 0.0.0.0.

  • In previous versions, if a client sent an op to the wrong OSD, the OSD
    would reply with ENXIO. The rationale here is that the client or OSD is
    clearly buggy and we want to surface the error as clearly as possible.
    We now only send the ENXIO reply if the osd_enxio_on_misdirected_op option
    is enabled (it’s off by default). This means that a VM using librbd that
    previously would have gotten an EIO and gone read-only will now see a
    blocked/hung IO instead.

  • The “journaler allow split entries” config setting has been removed.

  • librados:

    • Some variants of the omap_get_keys and omap_get_vals librados
      functions have been deprecated in favor of omap_get_vals2 and
      omap_get_keys2. The new methods include an output argument
      indicating whether there are additional keys left to fetch.
      Previously this had to be inferred from the requested key count vs
      the number of keys returned, but this breaks with new OSD-side
      limits on the number of keys or bytes that can be returned by a
      single omap request. These limits were introduced by kraken but
      are effectively disabled by default (by setting a very large limit
      of 1 GB) because users of the newly deprecated interface cannot
      tell whether they should fetch more keys or not. In the case of
      the standalone calls in the C++ interface
      (IoCtx::get_omap_{keys,vals}), librados has been updated to loop on
      the client side to provide a correct result via multiple calls to
      the OSD. In the case of the methods used for building
      multi-operation transactions, however, client-side looping is not
      practical, and the methods have been deprecated. Note that use of
      either the IoCtx methods on older librados versions or the
      deprecated methods on any version of librados will lead to
      incomplete results if/when the new OSD limits are enabled.

    • The original librados rados_objects_list_open (C) and objects_begin
      (C++) object listing API, deprecated in Hammer, has finally been
      removed. Users of this interface must update their software to use
      either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
      the new rados_object_list_begin (C) and object_list_begin (C++) API
      before updating the client-side librados library to Luminous.

      Object enumeration (via any API) with the latest librados version
      and pre-Hammer OSDs is no longer supported. Note that no in-tree
      Ceph services rely on object enumeration via the deprecated APIs, so
      only external librados users might be affected.

      The newest (and recommended) rados_object_list_begin (C) and
      object_list_begin (C++) API is only usable on clusters with the
      SORTBITWISE flag enabled (Jewel and later). (Note that this flag is
      required to be set before upgrading beyond Jewel.)

  • CephFS:

    • When configuring ceph-fuse mounts in /etc/fstab, a new syntax is
      available that uses “ceph.<arg>=<val>” in the options column, instead
      of putting configuration in the device column. The old style syntax
      still works. See the documentation page “Mount CephFS in your
      file systems table” for details.
    • CephFS clients without the ‘p’ flag in their authentication capability
      string will no longer be able to set quotas or any layout fields. This
      flag previously only restricted modification of the pool and namespace
      fields in layouts.
    • CephFS will generate a health warning if you have fewer standby daemons
      than it thinks you wanted. By default this will be 1 if you ever had
      a standby, and 0 if you did not. You can customize this using
      ceph fs set <fs> standby_count_wanted <number>. Setting it
      to zero will effectively disable the health check.
    • The “ceph mds tell …” command has been removed. It is superseded
      by “ceph tell mds.<id> …”.

Notable Changes since v12.1.0 (RC1)

  • choose_args encoding has been changed to make it architecture-independent.
    If you deployed Luminous dev releases or 12.1.0 rc release and made use of
    the CRUSH choose_args feature, you need to remove all choose_args mappings
    from your CRUSH map before starting the upgrade.
  • The ‘ceph health’ structured output (JSON or XML) no longer contains
    a ‘timechecks’ section describing the time sync status. This
    information is now available via the ‘ceph time-sync-status’ command.
  • Certain extra fields in the ‘ceph health’ structured output that
    used to appear if the mons were low on disk space (which duplicated
    the information in the normal health warning messages) are now gone.
  • The “ceph -w” output no longer contains audit log entries by default.
    Add a “--watch-channel=audit” or “--watch-channel=*” option to see them.
  • The ‘apply’ mode of cephfs-journal-tool has been removed.
  • Added new configuration “public bind addr” to support dynamic environments
    like Kubernetes. When set the Ceph MON daemon could bind locally to an IP
    address and advertise a different IP address “public addr” on the network.
  • New “ceph -w” behavior – the “ceph -w” output no longer contains I/O rates,
    available space, pg info, etc. because these are no longer logged to the
    central log (which is what “ceph -w” shows). The same information can be
    obtained by running “ceph pg stat”; alternatively, I/O rates per pool can
    be determined using “ceph osd pool stats”. Although these commands do not
    self-update like “ceph -w” did, they do have the ability to return formatted
    output by providing a “--format=<format>” option.
  • Pools are now expected to be associated with the application using them.
    Upon completing the upgrade to Luminous, the cluster will attempt to associate
    existing pools to known applications (i.e. CephFS, RBD, and RGW). In-use pools
    that are not associated to an application will generate a health warning. Any
    unassociated pools can be manually associated using the new
    “ceph osd pool application enable” command. For more details see
    “Associate Pool to Application” in the documentation.
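    For instance (pool name hypothetical), an unassociated pool used by RBD
    can be tagged, which clears the health warning:

    ```shell
    # Associate the pool with the rbd application.
    ceph osd pool application enable mypool rbd
    ```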
  • ceph-mgr now has a Zabbix plugin. Using zabbix_sender it sends trapper
    events to a Zabbix server containing high-level information of the Ceph
    cluster. This makes it easy to monitor a Ceph cluster’s status and send
    out notifications in case of a malfunction.
  • The ‘mon_warn_osd_usage_min_max_delta’ config option has been
    removed and the associated health warning has been disabled because
    it does not address clusters undergoing recovery or CRUSH rules that do
    not target all devices in the cluster.
  • Specifying user authorization capabilities for RBD clients has been
    simplified. The general syntax for using RBD capability profiles is
    “mon ‘profile rbd’ osd ‘profile rbd[-read-only][ pool={pool-name}[, …]]’”.
    For more details see “User Management” in the documentation.
  • ceph config-key put has been deprecated in favor of ceph config-key set.

Notable Changes since v12.1.1 (RC2)

  • RGW: bucket index resharding now uses the reshard namespace in log pool
    upgrade scenarios as well this is a changed behaviour from RC1 where a
    new pool for reshard was created
  • RGW multisite now supports enabling or disabling sync at the bucket level.
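    Assuming a multisite deployment, per-bucket sync can be toggled roughly
    like this (the bucket name is illustrative):

    ```
    # Stop replicating a single bucket:
    radosgw-admin bucket sync disable --bucket=scratch-data

    # Re-enable replication later:
    radosgw-admin bucket sync enable --bucket=scratch-data
    ```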

Other Notable Changes

  • bluestore: bluestore/BlueFS: pass string as const ref (pr#16600, dingdangzhang)
  • bluestore: common/options: make “blue{fs,store}_allocator” LEVEL_DEV (issue#20660, pr#16645, Kefu Chai)
  • bluestore: os/bluestore/BlueStore: Avoid double counting state_kv_queued_lat (pr#16374, Jianpeng Ma)
  • bluestore: os/bluestore/BlueStore: remove unused code (pr#16522, Jianpeng Ma)
  • bluestore: os/bluestore: move aio.h/cc from fs dir to bluestore dir (pr#16409, Pan Liu)
  • bluestore: os/bluestore/StupidAllocator: rounded down len to an align boundary (issue#20660, pr#16593, Zhu Shangzhong)
  • bluestore: os/bluestore: use reference to avoid string copy (pr#16364, Pan Liu)
  • build/ops: ceph-disk: don’t activate suppressed journal devices (issue#19489, pr#16123, David Disseldorp)
  • build/ops: fix syntax for /bin/sh (doesn’t have +=) (pr#16433, Dan Mick)
  • build/ops: include/assert: test c++ before using static_cast<> (pr#16424, Kefu Chai)
  • build/ops: add missing dependencies for FreeBSD (pr#16545, Alan Somers)
  • build/ops,rbd,rgw: CMakeLists: trim rbd/rgw forced dependencies (pr#16574, Patrick Donnelly)
  • build/ops: rpm: Drop legacy libxio support (pr#16449, Nathan Cutler)
  • build/ops: rpm: fix typo WTIH_BABELTRACE (pr#16366, Nathan Cutler)
  • build/ops: rpm: put mgr python build dependencies in make_check bcond (issue#20425, pr#15940, Nathan Cutler, Tim Serong)
  • build/ops,tests: qa: make run-standalone work on FreeBSD (pr#16595, Willem Jan Withagen)
  • cmake: disable -fvar-tracking-assignments for (pr#16695, Kefu Chai)
  • cmake: use CMAKE_INSTALL_INCLUDEDIR (pr#16483, David Disseldorp)
  • common: buffer: silence unused var warning on FreeBSD (pr#16452, Willem Jan Withagen)
  • common: common/common_init: disable default dout logging for UTILITY_NODOUT too (issue#20771, pr#16578, Sage Weil)
  • common: common/options: refactors to set the properties in a more structured way (pr#16482, Kefu Chai)
  • common: common/WorkQueue: use threadpoolname + threadaddr for heartbeat_han… (pr#16563, huangjun)
  • common,core: osd,mds,mgr: do not dereference null rotating_keys (issue#20667, pr#16455, Sage Weil)
  • common: fix Option set_long_description (pr#16668, Yan Jun)
  • common: follow up to new options infrastructure (pr#16527, John Spray)
  • common: add compat.h for ENODATA (pr#16697, Willem Jan Withagen)
  • common: libradosstriper: fix format injection vulnerability (issue#20240, pr#15674, Stan K)
  • common,mon: crush,mon: add weight-set introspection and manipulation commands (pr#16326, Sage Weil)
  • common: mon/MonClient: scale backoff interval down when we have a healthy mon session (issue#20371, pr#16576, Kefu Chai, Sage Weil)
  • common: prevent unset_dumpable from generating warnings (pr#16462, Willem Jan Withagen)
  • common,rbd: osdc/Objecter: unify disparate EAGAIN handling paths into one (pr#16627, Sage Weil)
  • common: remove config opt conversion utility (pr#16480, John Spray)
  • common: Revamp config option definitions (issue#20627, pr#16211, John Spray, Kefu Chai, Sage Weil)
  • common,rgw: cls/refcount: store and use list of retired tags (issue#20107, pr#15673, Yehuda Sadeh)
  • common: the latency dumped by “ceph osd perf” is not real (issue#20749, pr#16512, Pan Liu)
  • common: use std::move() for better performance (pr#16620, Xinying Song)
  • core: auth: Remove unused function in AuthSessionHandler (pr#16666, Luo Kexue)
  • core: ceph: allow ‘-‘ with -i and -o for stdin/stdout (pr#16359, Sage Weil)
  • core: ceph-disk: support osd new (pr#15432, Loic Dachary, Sage Weil)
  • core: common/options: remove mon_warn_osd_usage_min_max_delta from too (pr#16488, Sage Weil)
  • core: kv: resolve a crash issue in ~LevelDBStore() (pr#16553, wumingqiao)
  • core: kv/RocksDBStore: use vector instead of VLA for holding slices (pr#16615, Kefu Chai)
  • core: messages: default-initialize MOSDPGRecoveryDelete[Reply] members (pr#16584, Greg Farnum)
  • core: mgr/MgrClient: do not attempt to access a global variable for config (pr#16544, Jason Dillaman)
  • core,mgr,tests: qa: flush out monc’s dropped msgs on msgr failure injection (issue#20371, pr#16484, Joao Eduardo Luis)
  • core,mon: crush, mon: simplify device class manipulation commands (pr#16388, xie xingguo)
  • core: mon, osd: misc fixes (pr#16283, xie xingguo)
  • core,mon,rbd: mon,osd: new rbd-based cephx cap profiles (pr#15991, Jason Dillaman)
  • core: msg/async: fix the bug of inaccurate calculation of l_msgr_send_bytes (pr#16526, Jin Cai)
  • core: objclass: modify omap_get_{keys,vals} api (pr#16667, Yehuda Sadeh, Casey Bodley)
  • core: osd/PG: fix warning so we discard_event() on a no-op state change (pr#16655, Sage Weil)
  • core: osd/PG: ignore CancelRecovery in NotRecovering (issue#20804, pr#16638, Sage Weil)
  • core: osd/PGLog: fix inaccurate missing assert (issue#20753, pr#16539, Josh Durgin)
  • core: osd/PrimaryLogPG: fix recovering hang when have unfound objects (pr#16558, huangjun)
  • core: osd/PrimaryLogPG: skip deleted missing objects in pg[n]ls (issue#20739, pr#16490, Josh Durgin)
  • core,performance: kv/RocksDBStore: Table options for indexing and filtering (pr#16450, Mark Nelson)
  • core,performance: osd/PG: make prioritized recovery possible (pr#13723, Piotr Dałek)
  • core: PGLog: store extra duplicate ops beyond the normal log entries (pr#16172, Josh Durgin, J. Eric Ivancich)
  • core,rgw,tests: qa/suits/rados/basic/tasks/rgw_snaps: wait for pools to be created (pr#16509, Sage Weil)
  • core,tests: ceph_test_rados_api_watch_notify: flush after unwatch (issue#20105, pr#16402, Sage Weil)
  • core,tests: ceph_test_rados: max_stride_size must be more than min_stride_size (issue#20775, pr#16590, Lianne Wang)
  • core,tests: qa: move ceph-helpers-based make check tests to qa/standalone; run via teuthology (pr#16513, Sage Weil)
  • core,tests: qa/suites/rados: at-end: ignore PG_{AVAILABILITY,DEGRADED} (issue#20693, pr#16575, Sage Weil)
  • core,tests: qa/tasks/ceph_manager: wait for osd to start after objectstore-tool sequence (issue#20705, pr#16454, Sage Weil)
  • core,tests: qa/tasks/ceph: wait for mgr to activate and pg stats to flush in health() (issue#20744, pr#16514, Sage Weil)
  • core,tests: qa/tasks/dump_stuck: fix dump_stuck test bug (pr#16559, huangjun)
  • core,tests: qa/workunits/cephtool/ add sudo for daemon compact (pr#16500, Sage Weil)
  • core,tests: test: add separate ceph-helpers-based smoke test (pr#16572, Sage Weil)
  • core: throttle: Minimal destructor fix for Luminous (pr#16661, Adam C. Emerson)
  • core: start mgr after mon, before osds (pr#16613, Sage Weil)
  • crush: a couple of weight-set fixes (pr#16623, xie xingguo)
  • crush: enforce buckets-before-rules rule (pr#16453, Sage Weil)
  • crush: s/ruleset/id/ in decompiled output; prevent compilation when ruleset != id (pr#16400, Sage Weil)
  • doc: Add amitkumar50 affiliation to .organizationmap (pr#16475, Amit Kumar)
  • doc: add doc requirements on PR submitters (pr#16394, John Spray)
  • doc: added mgr caps to manual deployment documentation (pr#16660, Nick Erdmann)
  • doc: add instructions for replacing an OSD (pr#16314, Kefu Chai)
  • doc: add rbd new trash cli and cleanups in release-notes.rst (issue#20702, pr#16498, songweibin)
  • doc: Add Zabbix ceph-mgr plugin to PendingReleaseNotes (pr#16412, Wido den Hollander)
  • doc: AUTHORS: update CephFS PTL (pr#16399, Patrick Donnelly)
  • doc: ceph-disk: use ‘-‘ for feeding ceph cli with stdin (pr#16362, Kefu Chai)
  • doc: common/ document bluestore config options (pr#16489, Sage Weil)
  • doc: Describe mClock’s use within Ceph in great detail (pr#16707, J. Eric Ivancich)
  • doc: doc/install/manual-deployment: update osd creation steps (pr#16573, Sage Weil)
  • doc: doc/mon: fix ceph-authtool command in rebuild mon’s sample (pr#16503, huanwen ren)
  • doc: doc/qa: cover config help command (pr#16727, John Spray)
  • doc: doc/rados: add page for health checks and update monitoring.rst (pr#16566, John Spray)
  • doc: doc/rados/operations/health-checks: osd section (pr#16611, Sage Weil)
  • doc: doc/release-notes: fix upmap and osd replacement links; add fixme (pr#16730, Sage Weil)
  • doc: [docs/quick-start]: update quick start to add a note for mgr create command for luminous+ builds (pr#16350, Vasu Kulkarni)
  • doc: Documentation updates for July 2017 releases (pr#16401, Bryan Stillwell)
  • doc: document mClock related options (pr#16552, Kefu Chai)
  • doc: Fixed a typo in yum repo filename script (pr#16431, Jeff Green)
  • doc: fix typo in config.rst (pr#16721, Jos Collin)
  • doc: fix typos in config.rst (pr#16681, Song Shun)
  • doc: mailmap: add affiliation for Zhu Shangzhong (pr#16537, Zhu Shangzhong)
  • doc: .mailmap, .organizationmap: Update ztczll affiliation (pr#16038, zhanglei)
  • doc: PendingReleaseNotes: “ceph -w” behavior has changed drastically (pr#16425, Joao Eduardo Luis, Nathan Cutler)
  • doc: Remove contractions from the documentation (pr#16629, John Wilkins)
  • doc: remove docs on non-existent command (pr#16616, Luo Kexue, Kefu Chai)
  • doc: reword mds deactivate docs; add optional fs_name argument (issue#20607, pr#16471, Jan Fajerski)
  • doc: rgw clarify limitations when creating tenant names (pr#16418, Abhishek Lekshmanan)
  • doc: update ceph(8) man page with new sub-commands (pr#16437, Kefu Chai)
  • doc: Update .organizationmap (pr#16507, luokexue)
  • doc: update the pool names created by by default (pr#16652, Zhu Shangzhong)
  • doc: update the rados namespace docs (pr#15838, Abhishek Lekshmanan)
  • doc: upmap docs; various missing links for release notes (pr#16637, Sage Weil)
  • doc: various fixes (pr#16723, Kefu Chai)
  • librados: add missing implementations for C service daemon API methods (pr#16543, Jason Dillaman)
  • librbd: add compare and write API (pr#14868, Zhengyong Wang, Jason Dillaman)
  • librbd: add LIBRBD_SUPPORTS_WRITESAME support (pr#16583, Xiubo Li)
  • mgr: add per-DaemonState lock (pr#16432, Sage Weil)
  • mgr: fix lock cycle (pr#16508, Sage Weil)
  • mgr: mgr/dashboard: add OSD list view (pr#16373, John Spray)
  • mgr: mgr_module interface to report health alerts (pr#16487, Sage Weil)
  • mgr: mgr/PyState: shut up about get_config on nonexistent keys (pr#16641, Sage Weil)
  • mgr: mon/MgrMonitor: fix standby addition to mgrmap (issue#20647, pr#16397, Sage Weil)
  • mgr,mon: mon/AuthMonitor: generate bootstrap-mgr key on upgrade (issue#20666, pr#16395, Joao Eduardo Luis)
  • mgr,mon: mon/MgrMonitor: reset mgrdigest timer with new subscription (issue#20633, pr#16582, Sage Weil)
  • mgr: perf schema fns/change notification and Prometheus plugin (pr#16406, Dan Mick)
  • mgr: pybind/mgr/zabbix: fix health in non-compat mode (issue#20767, pr#16580, Sage Weil)
  • mgr,pybind,rbd: mgr/dashboard: show rbd image features (pr#16468, Yanhu Cao)
  • mgr,rbd: mgr/dashboard: RBD iSCSI daemon status page (pr#16547, Jason Dillaman)
  • mgr,rbd: mgr/dashboard: rbd mirroring status page (pr#16360, Jason Dillaman)
  • mgr: fix mgr vs restful command startup race (pr#16564, Sage Weil)
  • mon: add force-create-pg back (issue#20605, pr#16353, Kefu Chai)
  • mon: add mgr metadata commands, and overall ‘versions’ command for all daemon versions (pr#16460, Sage Weil)
  • mon: a few health fixes (pr#16415, xie xingguo)
  • mon: ‘config-key put’ -> ‘config-key set’ (pr#16569, Sage Weil)
  • mon: do not dereference empty mgr_commands (pr#16501, Sage Weil)
  • mon: Fix deep_age copy paste error (pr#16434, Brad Hubbard)
  • mon: Fix output text and doc (pr#16367, Yan Jun)
  • mon: ‘* list’ -> ‘* ls’ (pr#16423, Sage Weil)
  • mon: load mgr commands at runtime (pr#16028, John Spray, Sage Weil)
  • mon: mon/HealthMonitor: avoid sending unnecessary MMonHealthChecks to leader (pr#16478, xie xingguo)
  • mon: mon/HealthMonitor: trigger a proposal if stat updated (pr#16477, Kefu Chai)
  • mon: mon/LogMonitor: don’t read list’s end() for log last (pr#16376, Joao Eduardo Luis)
  • mon: mon/MDSMonitor: close object section of formatter (pr#16516, Chang Liu)
  • mon: mon/MgrMonitor: only induce mgr epoch shortly after mkfs (pr#16356, Sage Weil)
  • mon: mon/OSDMonitor: ensure UP is not set for newly-created OSDs (issue#20751, pr#16534, Sage Weil)
  • mon: mon/OSDMonitor: issue pool application related warning (pr#16520, xie xingguo)
  • mon: mon/OSDMonitor: remove zeroed new_state updates (issue#20751, pr#16518, Sage Weil)
  • mon: mon/PGMap: remove skewed utilization warning (issue#20730, pr#16461, Sage Weil)
  • mon: OSDMonitor: check mon_max_pool_pg_num when set pool pg_num (pr#16511, chenhg)
  • mon: prime pg_temp and a few health warning fixes (pr#16530, xie xingguo)
  • mon: show destroyed status in tree view; do not auto-out destroyed osds (pr#16446, xie xingguo)
  • mon: stop issuing not-[deep]-scrubbed warnings if disabled (pr#16465, xie xingguo)
  • mon: support pool application metadata key/values (pr#15763, Jason Dillaman)
  • msg: messages/: always set header.version in encode_payload() (issue#19939, pr#16421, Kefu Chai)
  • msg: mgr/status: row has incorrect number of values (issue#20750, pr#16529, liuchang0812)
  • msg: msg/async: use auto iterator having more simple code and good performance (pr#16524, dingdangzhang)
  • osd: add default_device_class to metadata (pr#16634, Neha Ojha)
  • osd: add dump filter for tracked ops (pr#16561, Yan Jun)
  • osd: Add recovery sleep configuration option for HDDs and SSDs (pr#16328, Neha Ojha)
  • osd: cmpext operator should ignore -ENOENT on read (pr#16622, Jason Dillaman)
  • osd: combine conditional statements (pr#16391, Yan Jun)
  • osd: do not send pg_created unless luminous (issue#20785, pr#16677, Kefu Chai)
  • osd: EC read handling: don’t grab an objectstore error to use as the read error (pr#16663, David Zafman)
  • osd: fix a couple bugs with persisting the missing set when it contains deletes (issue#20704, pr#16459, Josh Durgin)
  • osd: fix OpRequest and tracked op dump information (pr#16504, Yan Jun)
  • osd: fix pg ref leaks when osd shutdown (issue#20684, pr#16408, Yang Honggang)
  • osd: Log audit (pr#16281, Brad Hubbard)
  • osd: moved OpFinisher logic from OSDOp to OpContext (issue#20783, pr#16617, Jason Dillaman)
  • osd: populate last_epoch_split during build_initial_pg_history (issue#20754, pr#16519, Sage Weil)
  • osd: PrimaryLogPG, PGBackend: complete callback even if interval changes (issue#20747, pr#16536, Josh Durgin)
  • osd: process deletes during recovery instead of peering (issue#19971, pr#15952, Josh Durgin)
  • osd: rephrase “wrongly marked me down” clog message (pr#16365, John Spray)
  • osd: scrub_to specifies clone ver, but transaction include head write… (issue#20041, pr#16404, David Zafman)
  • osd: support cmpext operation on EC-backed pools (pr#15693, Zhengyong Wang, Jason Dillaman)
  • performance,rgw: rgw_file: permit dirent offset computation (pr#16275, Matt Benjamin)
  • pybind: pybind/mgr/restful: fix typo (pr#16560, Nick Erdmann)
  • rbd: cls/rbd: silence warning from -Wunused-variable (pr#16670, Yan Jun)
  • rbd: cls/rbd: trash_list should be iterable (issue#20643, pr#16372, Jason Dillaman)
  • rbd: fixed coverity ‘Argument cannot be negative’ warning (pr#16686, amitkuma)
  • rbd: make it more understandable when adding peer returns error (pr#16313, songweibin)
  • rbd-mirror: guard the deletion of non-primary images (pr#16398, Jason Dillaman)
  • rbd-mirror: initialize timer context pointer to null (pr#16603, Jason Dillaman)
  • rbd: modified some commands’ description into imperative sentence (pr#16694, songweibin)
  • rbd,tests: qa/tasks/rbd_fio: bump default fio version to 2.21 (pr#16656, Ilya Dryomov)
  • rbd,tests: qa: thrash tests for backoff and upmap (pr#16428, Ilya Dryomov)
  • rbd,tests: qa/workunits: adjust path to (pr#16599, Sage Weil)
  • rgw: acl grants num limit (pr#16291, Enming Zhang)
  • rgw: check placement existence when create bucket (pr#16385, Jiaying Ren)
  • rgw: check placement target existence during bucket creation (pr#16384, Jiaying Ren)
  • rgw: delete object in error path (issue#20620, pr#16324, Yehuda Sadeh)
  • rgw: Do not decrement stats cache when the cache values are zero (issue#20661, pr#16389, Pavan Rallabhandi)
  • rgw: Drop dump_usage_bucket_info() to silence warning from -Wunused-function (pr#16497, Wei Qiaomiao)
  • rgw: drop unused find_replacement() and some function docs (pr#16386, Jiaying Ren)
  • rgw: fix asctime when logging in rgw_lc (pr#16422, Abhishek Lekshmanan)
  • rgw: fix error message in removing bucket with –bypass-gc flag (issue#20688, pr#16419, Abhishek Varshney)
  • rgw: fix err when copy object in bucket with specified placement rule (issue#20378, pr#15837, fang yuxiang)
  • rgw: Fix for Policy Parse exception in case of multiple statements (pr#16689, Pritha Srivastava)
  • rgw: fix memory leaks during Swift Static Website’s error handling (issue#20757, pr#16531, Radoslaw Zarzynski)
  • rgw: fix parse/eval of policy conditions with IfExists (issue#20708, pr#16463, Casey Bodley)
  • rgw: fix radosgw will crash when service is restarted during lifecycl… (issue#20756, pr#16495, Wei Qiaomiao)
  • rgw: fix rgw hang when do RGWRealmReloader::reload after go SIGHUP (issue#20686, pr#16417, fang.yuxiang)
  • rgw: fix segfault in RevokeThread during its shutdown procedure (issue#19831, pr#15033, Radoslaw Zarzynski)
  • rgw: fix the UTF8 check on bucket entry name in rgw_log_op() (issue#20779, pr#16604, Radoslaw Zarzynski)
  • rgw: modify email to empty by admin RESTful api doesn’t work (pr#16309, fang.yuxiang)
  • rgw: never let http_redirect_code of RGWRedirectInfo to stay uninitialized (issue#20774, pr#16601, Radoslaw Zarzynski)
  • rgw: raise debug level of RGWPostObj_ObjStore_S3::get_policy (pr#16203, Shasha Lu)
  • rgw: req xml params size limitation error msg (pr#16310, Enming Zhang)
  • rgw: restore admin socket path in (pr#16540, Casey Bodley)
  • rgw: rgw_file: properly & |‘d flags (issue#20663, pr#16448, Matt Benjamin)
  • rgw: rgw multisite: feature of bucket sync enable/disable (pr#15801, Zhang Shaowen, Casey Bodley, Zengran Zhang)
  • rgw: should unlock when reshard_log->update() reture non-zero in RGWB… (pr#16502, Wei Qiaomiao)
  • rgw: test,rgw: fix rgw placement rule pool config option (pr#16380, Jiaying Ren)
  • rgw: usage (issue#16191, pr#14287, Ji Chen, Orit Wasserman)
  • rgw: use a namespace for rgw reshard pool for upgrades as well (issue#20289, pr#16368, Karol Mroz, Abhishek Lekshmanan)
  • rgw: Use comparison instead of assignment (pr#16653, amitkuma)
  • tests: add setup/teardown for asok dir (pr#16523, Kefu Chai)
  • tests: cephtool/ Only delete a test pool when no longer needed (pr#16443, Willem Jan Withagen)
  • tests: qa: Added luminous to the mix in (pr#16430, Yuri Weinstein)
  • tests: qa,doc: document and fix tests for pool application warnings (pr#16568, Sage Weil)
  • tests: qa/ fix the find option to be compatible with GNU find (pr#16646, Kefu Chai)
  • tests: qa/suites/rados/singleton/all/erasure-code-nonregression: fix typo (pr#16579, Sage Weil)
  • tests: qa/suites/upgrade/jewel-x: misc fixes for new health checks (pr#16429, Sage Weil)
  • tests: qa/tasks/ceph-deploy: Fix bluestore options for ceph-deploy (pr#16571, Vasu Kulkarni)
  • tests: qa/tasks/reg11184: use literal ‘foo’ instead pool_name (pr#16451, Kefu Chai)
  • tests: qa/workunits/cephtool/ “ceph osd stat” output changed, update accordingly (pr#16444, Willem Jan Withagen, Kefu Chai)
  • tests: qa/workunits/cephtool/ disable ‘fs status’ until bug is fixed (issue#20761, pr#16541, Sage Weil)
  • tests: qa/workunits/cephtool/ fix test to watch audit channel (pr#16470, Sage Weil)
  • tests: test: ceph osd stat out has changed, fix tests for that (pr#16403, Willem Jan Withagen)
  • tests: test: create asok files in a temp directory under $TMPDIR (issue#16895, pr#16445, Kefu Chai)
  • tests: test: Fixes for test_pidfile (issue#20770, pr#16587, David Zafman)
  • tests: test/osd: kill compile warning (pr#16669, Yan Jun)
  • tests: test/rados: fix wrong parameter order of RETURN1_IF_NOT_VAL (pr#16589, Yan Jun)
  • tests: test: reg11184 might not always find pg 2.0 prior to import (pr#16610, David Zafman)
  • tests: test: s/osd_objectstore_type/osd_objectstore (pr#16469, xie xingguo)
  • tests: test: test_pidfile running 2nd mon has unreliable log output (pr#16635, David Zafman)
  • tools: ceph-disk: change the lockbox partition number to 5 (issue#20556, pr#16247, Shangzhong Zhu)
  • tools: ceph-disk: Fix for missing ‘not’ in *_is_diskdevice checks (issue#20706, pr#16481, Nikita Gerasimov)
  • tools: ceph_disk/ FreeBSD root has wheel for group (pr#16609, Willem Jan Withagen)
  • tools: ceph-disk: s/ceph_osd_mkfs/command_check_call/ (issue#20685, pr#16427, Zhu Shangzhong)
  • tools: ceph-release-notes: escape _ for unintended links (issue#17499, pr#16528, Kefu Chai)
  • tools: ceph-release-notes: port it to py3 (pr#16261, Kefu Chai)
  • tools: ceph-release-notes: refactor and fix regressions (pr#16411, Nathan Cutler)
  • tools: os/bluestore/bluestore_tool: add sanity check to get rid of occasionally crash (pr#16013, xie xingguo)
  • tools: script: add docker core dump debugger (pr#16375, Patrick Donnelly)