This is the first stable release of Mimic, the next long term release series. Please read the upgrade notes from previous releases carefully before upgrading.
You can monitor the progress of your upgrade at each stage with the ceph versions command, which will tell you what ceph version(s) are running for each type of daemon.
Make sure your cluster is stable and healthy (no down or recovering OSDs). (Optional, but recommended.)
Set the noout flag for the duration of the upgrade. (Optional, but recommended.):
# ceph osd set noout
Upgrade monitors by installing the new packages and restarting the monitor daemons:
# systemctl restart ceph-mon.target
Verify the monitor upgrade is complete once all monitors are up by looking for the mimic feature string in the mon map. For example:
# ceph mon feature ls
should include mimic under persistent features:
on current monmap (epoch NNN)
persistent: [kraken,luminous,mimic]
required: [kraken,luminous,mimic]
Upgrade ceph-mgr daemons by installing the new packages and restarting with:
# systemctl restart ceph-mgr.target
Verify the ceph-mgr daemons are running by checking ceph -s:
# ceph -s
...
services:
mon: 3 daemons, quorum foo,bar,baz
mgr: foo(active), standbys: bar, baz
...
Upgrade all OSDs by installing the new packages and restarting the
ceph-osd daemons on all hosts:
# systemctl restart ceph-osd.target
You can monitor the progress of the OSD upgrades with the new ceph versions or ceph osd versions commands:
# ceph osd versions
{
"ceph version 12.2.5 (...) luminous (stable)": 12,
"ceph version 13.2.0 (...) mimic (stable)": 22,
}
Upgrade all CephFS MDS daemons. For each CephFS file system:
Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.):
# ceph status
# ceph fs set <fs_name> max_mds 1
Wait for the cluster to deactivate any non-zero ranks by
periodically checking the status:
# ceph status
Take all standby MDS daemons offline on the appropriate hosts with:
# systemctl stop ceph-mds@<daemon_name>
Confirm that only one MDS is online and is rank 0 for your FS:
# ceph status
Upgrade the last remaining MDS daemon by installing the new
packages and restarting the daemon:
# systemctl restart ceph-mds.target
Restart all standby MDS daemons that were taken offline:
# systemctl start ceph-mds.target
Restore the original value of max_mds for the volume:
# ceph fs set <fs_name> max_mds <original_max_mds>
Upgrade all radosgw daemons by upgrading packages and restarting
daemons on all hosts:
# systemctl restart radosgw.target
Complete the upgrade by disallowing pre-mimic OSDs and enabling
all new Mimic-only functionality:
# ceph osd require-osd-release mimic
If you set noout at the beginning, be sure to clear it with:
# ceph osd unset noout
Verify the cluster is healthy with ceph health.
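For example (output shown here is illustrative; a healthy cluster reports HEALTH_OK):
# ceph health
HEALTH_OK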
You must first upgrade to Luminous (12.2.z) before attempting an
upgrade to Mimic.
These changes occurred between the Luminous and Mimic releases.
core:
The pg force-recovery command will not work for erasure-coded PGs when a Luminous monitor is still running; upgrade the monitors before the OSDs, as described above, to avoid this issue.
The crush-location-hook script has been removed. Its output is equivalent to the built-in default behavior.
The -f option of the rados tool now means --format instead of --force, for consistency with the ceph tool (see the example after this list).
The config diff output via the admin socket has changed. This applies to the radosgw-admin tool as well.
The osd force-create-pg command now requires a force option to proceed.
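For example, with the Mimic rados tool -f now selects the output format rather than forcing an operation (the format name shown is just one of the accepted values):
# rados df -f json-pretty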
CephFS:
Upgrading an MDS cluster to 12.2.3+ will result in all active MDS
exiting due to feature incompatibilities once an upgraded MDS comes online
(even as standby). Operators may ignore the error messages and continue
upgrading/restarting or follow this upgrade sequence:
After upgrading the monitors to Mimic, reduce the number of ranks to 1
(ceph fs set <fs_name> max_mds 1), wait for all other MDS to deactivate,
leaving the one active MDS, stop all standbys, upgrade the single active
MDS, then upgrade/start standbys. Finally, restore the previous max_mds.
!! NOTE: see release notes on snapshots in CephFS if you have ever enabled
snapshots on your file system.
See also: https://tracker.ceph.com/issues/23172
Several ceph mds ... commands have been obsoleted and replaced by equivalent ceph fs ... commands (an example follows the list):
mds dump -> fs dump
mds getmap -> fs dump
mds stop -> mds deactivate
mds set_max_mds -> fs set max_mds
mds set -> fs set
mds cluster_down -> fs set cluster_down true
mds cluster_up -> fs set cluster_down false
mds add_data_pool -> fs add_data_pool
mds remove_data_pool -> fs rm_data_pool
mds rm_data_pool -> fs rm_data_pool
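For example, where you might previously have run ceph mds dump or ceph mds add_data_pool, the equivalent calls are now (the file system and pool names are illustrative, and the pool must already exist):
# ceph fs dump
# ceph fs add_data_pool cephfs cephfs_data_ssd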
New CephFS file system attributes session_timeout and session_autoclose are configurable via ceph fs set. The MDS config options mds_session_timeout, mds_session_autoclose, and mds_max_file_size are now obsolete.
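For example (the file system name and values below are illustrative):
# ceph fs set cephfs session_timeout 60
# ceph fs set cephfs session_autoclose 300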
As the multiple MDS feature is now standard, it is now enabled by default. ceph fs set allow_multimds is now deprecated and will be removed in a future release.
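Scaling the number of active MDS daemons up or down is therefore just a matter of adjusting max_mds (file system name and count are illustrative):
# ceph fs set cephfs max_mds 2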
As the directory fragmentation feature is now standard, it is now enabled by default. ceph fs set allow_dirfrags is now deprecated and will be removed in a future release.
MDS daemons now activate and deactivate based on the value of max_mds. Accordingly, ceph mds deactivate has been deprecated as it is now redundant.
Taking a CephFS cluster down is now done by setting the down flag which
deactivates all MDS. For example: ceph fs set cephfs down true.
Preventing standbys from joining as new actives (formerly the now
deprecated cluster_down flag) on a file system is now accomplished by
setting the joinable flag. This is useful mostly for testing so that a
file system may be quickly brought down and deleted.
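A brief sketch of the two flags (the file system name is illustrative): down deactivates all ranks, while joinable controls whether standbys may take over ranks:
# ceph fs set cephfs down true
# ceph fs set cephfs joinable false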
Each mds rank now maintains a table that tracks open files and their
ancestor directories. Recovering MDS can quickly get open files’ paths,
significantly reducing the time of loading inodes for open files. MDS
creates the table automatically if it does not exist.
CephFS snapshots are now stable and enabled by default on new filesystems. To enable snapshots on existing filesystems, use the command:
# ceph fs set <fs_name> allow_new_snaps true
The on-disk format of snapshot metadata has changed. The old format metadata cannot be handled properly in a multiple-active-MDS configuration. To guarantee that all snapshot metadata on existing filesystems gets updated, follow the MDS cluster upgrade sequence strictly.
See http://docs.ceph.com/docs/mimic/cephfs/upgrading/
For filesystems that have ever enabled snapshots, the multiple-active MDS feature is disabled by the mimic monitor daemon. This will cause the “restore previous max_mds” step in the above URL to fail. To re-enable the feature, either delete all old snapshots or scrub the whole filesystem:
# ceph daemon <mds of rank 0> scrub_path / force recursive repair
# ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair
Support has been added in Mimic for quotas in the Linux kernel client as of v4.17.
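Quotas themselves are still configured through extended attributes on directories, for example (the mount path and byte limit are illustrative):
# setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/some_dir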
Many fixes have been made to the MDS metadata balancer which distributes
load across MDS. It is expected that the automatic balancing should work
well for most use-cases. In Luminous, subtree pinning was advised as a
manual workaround for poor balancer behavior. This may no longer be
necessary so it is recommended to try experimentally disabling pinning as a
form of load balancing to see if the built-in balancer adequately works for
you. Please report any poor behavior post-upgrade.
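If you previously pinned subtrees as a workaround, a pin can be removed by resetting the directory's pin attribute (the path is illustrative; -1 means no explicit pin):
# setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/pinned_dir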
NFS-Ganesha is an NFS userspace server that can export shares from multiple
file systems, including CephFS. Support for this CephFS client has improved
significantly in Mimic. In particular, delegations are now supported through
the libcephfs library so that Ganesha may issue delegations to its NFS clients
allowing for safe write buffering and coherent read caching. Documentation
is also now available: http://docs.ceph.com/docs/mimic/cephfs/nfs/
MDS uptime is now available in the output of the MDS admin socket status
command.
MDS performance counters for client requests now include average latency as well as the count.
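Both can be viewed through the MDS admin socket, for example (run on the host where the daemon is running; the daemon name is illustrative):
# ceph daemon mds.a status
# ceph daemon mds.a perf dump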
RBD
The lock list JSON and XML output has changed.
The showmapped JSON and XML output has changed.
RGW
MGR
The (read-only) Ceph manager dashboard introduced in Ceph Luminous has been
replaced with a new implementation, providing a drop-in replacement offering
a number of additional management features. To access the new dashboard, you
first need to define a username and password and create an SSL certificate.
See the dashboard documentation for a feature
overview and installation instructions.
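A minimal setup sketch, assuming the dashboard module commands as shipped in Mimic (the username and password are placeholders):
# ceph mgr module enable dashboard
# ceph dashboard create-self-signed-cert
# ceph dashboard set-login-credentials <username> <password>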
The ceph-rest-api command-line tool (obsoleted by the MGR restful module and deprecated since v12.2.5) has been dropped.
There is an MGR module called restful which provides similar functionality
via a “pass through” method. See http://docs.ceph.com/docs/master/mgr/restful
for details.
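A brief sketch of enabling the module and creating an API key (the key name is illustrative; see the linked documentation for the full setup):
# ceph mgr module enable restful
# ceph restful create-self-signed-cert
# ceph restful create-key admin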
New command to track throughput and IOPS statistics, also available in ceph -s and previously in ceph -w. To use this command, enable the iostat Manager module and invoke it using ceph iostat. See the iostat documentation for details.
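For example, to enable the module and start streaming statistics:
# ceph mgr module enable iostat
# ceph iostat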
build/packaging
The rcceph script (systemd/ceph in the source code tree, shipped as /usr/sbin/rcceph in the ceph-base package for CentOS and SUSE) has been dropped in favor of the systemd targets (ceph-osd.target, ceph-mon.target, etc.).