The Ceph Blog

Featured Post

Last week, Red Hat investigated an intrusion on the sites of both the Ceph community project and Inktank, which were hosted on a computer system outside of Red Hat infrastructure. The community site provided Ceph community version downloads signed with a Ceph signing key (id 7EBFDD5D17ED316D); the Inktank site provided releases of the Red Hat Ceph product for Ubuntu and CentOS operating systems signed with an Inktank signing key (id 5438C7019DCEEEAD). While the investigation into the intrusion is ongoing, our initial focus was on the integrity of the software and distribution channel for both sites.

To date, our investigation has not discovered any compromised code or binaries available for download on these sites. However, we cannot fully rule out the possibility that some compromised code or binaries were available for download at some point in the past. Further, we can no longer trust the integrity of the Ceph signing key, and have therefore created a new signing key (id E84AC2C0460F3994) for verifying downloads. This new key is committed to the ceph.git repository and is also available for download. All future release git tags will be signed with this new key.

This intrusion did not affect other Ceph sites, including the site that hosted some Ceph downloads and the one that mirrors various source repositories, and it is not known to have affected any other Ceph community infrastructure. There is no evidence that build systems or the Ceph GitHub source repository were compromised.

New hosts have been created and the sites have been rebuilt. All available content has been verified, and all URLs for package locations now redirect there. Some content is still missing and will appear later today: source tarballs will be regenerated from git, and older release packages are being re-signed with the new release key.

The affected host has been retired, and affected Red Hat customers have been notified directly with further information.

As a precautionary measure, users of Ceph packages should download the newly signed versions. Please see the instructions below.

The Ceph community would like to thank Kai Fabian for initially alerting us to this issue.

The following steps should be performed on all nodes with Ceph software installed.

Replace APT keys (Debian, Ubuntu)

sudo apt-key del 17ED316D
curl '<new-release-key-url>' | sudo apt-key add -
sudo apt-get update

Replace RPM keys (Fedora, CentOS, SUSE, etc.)

sudo rpm -e --allmatches gpg-pubkey-17ed316d-4fb96ee8
sudo rpm --import '<new-release-key-url>'

Reinstall packages (Fedora, CentOS, SUSE, etc.)

sudo yum clean metadata
sudo yum reinstall -y $(repoquery --disablerepo='*' --enablerepo=ceph --queryformat='%{NAME}' list '*')
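After reinstalling, you may want to confirm that packages are now signed with the new release key (id E84AC2C0460F3994, short id 460f3994). A small sketch, assuming the Signature line format printed by `rpm -qi`; the sample line and helper names below are illustrative, not from the original post:

```python
import re

# Short id of the new Ceph release key from this post: E84AC2C0460F3994.
NEW_SHORT_ID = "460f3994"
SIG_RE = re.compile(r"key id ([0-9a-f]{16})", re.IGNORECASE)

def signed_with_new_key(signature_line, short_id=NEW_SHORT_ID):
    """Return True if an `rpm -qi <pkg>` Signature line ends in the new key id."""
    m = SIG_RE.search(signature_line)
    return bool(m) and m.group(1).lower().endswith(short_id)

# Illustrative Signature line (the exact format varies by rpm version):
sample = "Signature   : RSA/SHA256, Thu 17 Sep 2015, Key ID e84ac2c0460f3994"
print(signed_with_new_key(sample))
```

In practice you would feed it the Signature line from `rpm -qi ceph` on each node.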
Earlier Posts

v0.94.3 Hammer released

This Hammer point release fixes a critical (though rare) data corruption bug that could be triggered when logs are rotated via SIGHUP. It also fixes a range of other important bugs in the OSD, monitor, RGW, RBD, and CephFS.

All v0.94.x Hammer users are strongly encouraged to upgrade.


  • The pg ls-by-{pool,primary,osd} commands and pg ls now take the argument recovering instead of recovery in order to include the recovering pgs in the listed pgs.


  • librbd: aio calls may block (issue#11770, pr#4875, Jason Dillaman)
  • osd: make all the osd/filestore thread pool suicide timeouts separately configurable (issue#11701, pr#5159, Samuel Just)
  • mon: ceph fails to compile with boost 1.58 (issue#11982, pr#5122, Kefu Chai)
  • tests: TEST_crush_reject_empty must not run a mon (issue#12285, issue#11975, pr#5208, Kefu Chai)
  • osd: FAILED assert(!old_value.deleted()) in upgrade:giant-x-hammer-distro-basic-multi run (issue#11983, pr#5121, Samuel Just)
  • build/ops: linking ceph to tcmalloc causes segfault on SUSE SLE11-SP3 (issue#12368, pr#5265, Thorsten Behrens)
  • common: utf8 and old gcc breakage on RHEL6.5 (issue#7387, pr#4687, Kefu Chai)
  • crush: take crashes due to invalid arg (issue#11740, pr#4891, Sage Weil)
  • rgw: need conversion tool to handle fixes following #11974 (issue#12502, pr#5384, Yehuda Sadeh)
  • rgw: Swift API: support for 202 Accepted response code on container creation (issue#12299, pr#5214, Radoslaw Zarzynski)
  • common: Log::reopen_log_file: take m_flush_mutex (issue#12520, pr#5405, Samuel Just)
  • rgw: Properly respond to the Connection header with Civetweb (issue#12398, pr#5284, Wido den Hollander)
  • rgw: multipart list part response returns incorrect field (issue#12399, pr#5285, Henry Chang)
  • build/ops: 95-ceph-osd.rules, mount.ceph, and mount.fuse.ceph not installed properly on SUSE (issue#12397, pr#5283, Nathan Cutler)
  • rgw: radosgw-admin dumps user info twice (issue#12400, pr#5286, guce)
  • doc: fix doc build (issue#12180, pr#5095, Kefu Chai)
  • tests: backport 11493 fixes, and test, preventing ec cache pools (issue#12314, pr#4961, Samuel Just)
  • rgw: does not send Date HTTP header when civetweb frontend is used (issue#11872, pr#5228, Radoslaw Zarzynski)
  • mon: pg ls is broken (issue#11910, pr#5160, Kefu Chai)
  • librbd: A client opening an image mid-resize can result in the object map being invalidated (issue#12237, pr#5279, Jason Dillaman)
  • doc: missing man pages for ceph-create-keys, ceph-disk-* (issue#11862, pr#4846, Nathan Cutler)
  • tools: ceph-post-file fails on rhel7 (issue#11876, pr#5038, Sage Weil)
  • build/ops: rcceph script is buggy (issue#12090, pr#5028, Owen Synge)
  • rgw: Bucket header is enclosed by quotes (issue#11874, pr#4862, Wido den Hollander)
  • build/ops: packaging: add SuSEfirewall2 service files (issue#12092, pr#5030, Tim Serong)
  • rgw: Keystone PKI token expiration is not enforced (issue#11722, pr#4884, Anton Aksola)
  • build/ops: debian/control: ceph-common (>> 0.94.2) must be >= 0.94.2-2 (issue#12529, issue#11998, pr#5417, Loic Dachary)
  • mon: Clock skew causes missing summary and confuses Calamari (issue#11879, pr#4868, Thorsten Behrens)
  • rgw: rados objects wrongly deleted (issue#12099, pr#5117, wuxingyi)
  • tests: kernel_untar_build fails on EL7 (issue#12098, pr#5119, Greg Farnum)
  • fs: Fh ref count will leak if readahead does not need to do read from osd (issue#12319, pr#5427, Zhi Zhang)
  • mon: OSDMonitor: allow addition of cache pool with non-empty snaps with co… (issue#12595, pr#5252, Samuel Just)
  • mon: MDSMonitor: handle MDSBeacon messages properly (issue#11979, pr#5123, Kefu Chai)
  • tools: ceph-disk: get_partition_type fails on /dev/cciss… (issue#11760, pr#4892, islepnev)
  • build/ops: max files open limit for OSD daemon is too low (issue#12087, pr#5026, Owen Synge)
  • mon: add an "osd crush tree" command (issue#11833, pr#5248, Kefu Chai)
  • mon: mon crashes when "ceph osd tree 85 --format json" (issue#11975, pr#4936, Kefu Chai)
  • build/ops: ceph / ceph-dbg steal ceph-objectstore-tool from ceph-test / ceph-test-dbg (issue#11806, pr#5069, Loic Dachary)
  • rgw: DragonDisk fails to create directories via S3: MissingContentLength (issue#12042, pr#5118, Yehuda Sadeh)
  • build/ops: /usr/bin/ceph from ceph-common is broken without installing ceph (issue#11998, pr#5206, Ken Dreyer)
  • build/ops: systemd: Increase max files open limit for OSD daemon (issue#11964, pr#5040, Owen Synge)
  • build/ops: rgw/logrotate.conf calls service with wrong init script name (issue#12044, pr#5055, wuxingyi)
  • common: OPT_INT option interprets 3221225472 as -1073741824, and crashes in Throttle::Throttle() (issue#11738, pr#4889, Kefu Chai)
  • doc: doc/release-notes: v0.94.2 (issue#11492, pr#4934, Sage Weil)
  • common: admin_socket: close socket descriptor in destructor (issue#11706, pr#4657, Jon Bernard)
  • rgw: Object copy bug (issue#11755, pr#4885, Javier M. Mellid)
  • rgw: empty json response when getting user quota (issue#12245, pr#5237, wuxingyi)
  • fs: cephfs Dumper tries to load whole journal into memory at once (issue#11999, pr#5120, John Spray)
  • rgw: Fix tool for #11442 does not correctly fix objects created via multipart uploads (issue#12242, pr#5229, Yehuda Sadeh)
  • rgw: Civetweb RGW appears to report full size of object as downloaded when only partially downloaded (issue#12243, pr#5231, Yehuda Sadeh)
  • osd: stuck incomplete (issue#12362, pr#5269, Samuel Just)
  • osd: start_flush: filter out removed snaps before determining snapc's (issue#11911, pr#4899, Samuel Just)
  • librbd: 1967: FAILED assert(watchers.size() == 1) (issue#12239, pr#5243, Jason Dillaman)
  • librbd: new QA client upgrade tests (issue#12109, pr#5046, Jason Dillaman)
  • librbd: [ FAILED ] TestLibRBD.ExclusiveLockTransition (issue#12238, pr#5241, Jason Dillaman)
  • rgw: Swift API: XML document generated in response for GET on account does not contain account name (issue#12323, pr#5227, Radoslaw Zarzynski)
  • rgw: keystone does not support chunked input (issue#12322, pr#5226, Hervé Rousseau)
  • mds: MDS is crashed (mds/ 1391: FAILED assert(!is_complete())) (issue#11737, pr#4886, Yan, Zheng)
  • cli: ceph: cli interactive mode does not understand quotes (issue#11736, pr#4776, Kefu Chai)
  • librbd: add valgrind memory checks for unit tests (issue#12384, pr#5280, Zhiqiang Wang)
  • build/ops: admin/build-doc: script fails silently under certain circumstances (issue#11902, pr#4877, John Spray)
  • osd: Fixes for rados ops with snaps (issue#11908, pr#4902, Samuel Just)
  • build/ops: ceph-common subpackage def needs tweaking for SUSE/openSUSE (issue#12308, pr#4883, Nathan Cutler)
  • fs: client: reference counting ‘struct Fh’ (issue#12088, pr#5222, Yan, Zheng)
  • build/ops: ceph.spec: update OpenSUSE BuildRequires (issue#11611, pr#4667, Loic Dachary)
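One item above, the OPT_INT bug (issue#11738), is a classic 32-bit wraparound: 3221225472 does not fit in a signed 32-bit integer, so it was read back as -1073741824. A quick illustration of the arithmetic (my sketch, not Ceph code):

```python
def as_int32(n):
    """Interpret an unsigned value as a two's-complement signed 32-bit int."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

print(as_int32(3221225472))  # the value from the bug report -> -1073741824
```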

For more detailed information, see the complete changelog.


v9.0.3 released

This is the second-to-last batch of development work for the Infernalis cycle. The most intrusive change is an internal (non-user-visible) change to the OSD's ObjectStore interface. There are also many fixes and improvements elsewhere across RGW and RBD, and another big pile of CephFS scrub/repair improvements.


  • The return code for librbd's rbd_aio_read and Image::aio_read API methods no longer returns the number of bytes read upon success. Instead, it returns 0 upon success and a negative value upon failure.
  • 'ceph scrub', 'ceph compact' and 'ceph sync force' are now DEPRECATED. Users should instead use 'ceph mon scrub', 'ceph mon compact' and 'ceph mon sync force'.
  • 'ceph mon_metadata' should now be used as 'ceph mon metadata'. There is no need to deprecate this command (same major release since it was first introduced).
  • The --dump-json option of osdmaptool is replaced by --dump json.
  • The commands "pg ls-by-{pool,primary,osd}" and "pg ls" now take "recovering" instead of "recovery", to include the recovering pgs in the listed pgs.
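Given the rbd_aio_read change above, callers must stop treating the completion value as a byte count. A hypothetical wrapper (names and structure are mine, not part of librbd) showing how calling code might handle both conventions:

```python
import os

def interpret_aio_read_result(ret, old_convention=False):
    """Interpret an rbd_aio_read / Image::aio_read completion value.

    Pre-9.0.3 ("old") convention: a value >= 0 is the number of bytes read.
    New convention: 0 on success, a negative errno value on failure.
    """
    if ret < 0:
        # Failure looks the same under both conventions.
        raise OSError(-ret, os.strerror(-ret))
    if old_convention:
        return ret   # bytes read
    return None      # success; track the byte count separately if needed
```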


  • autotools: fix out of tree build (Krzysztof Kosinski)
  • autotools: improve make check output (Loic Dachary)
  • buffer: add invalidate_crc() (Piotr Dalek)
  • buffer: fix zero bug (#12252 Haomai Wang)
  • build: fix junit detection on Fedora 22 (Ira Cooper)
  • ceph-disk: install pip > 6.1 (#11952 Loic Dachary)
  • cephfs-data-scan: many additions, improvements (John Spray)
  • ceph: improve error output for ‘tell’ (#11101 Kefu Chai)
  • ceph-objectstore-tool: misc improvements (David Zafman)
  • ceph-objectstore-tool: refactoring and cleanup (John Spray)
  • ceph_test_rados: test pipelined reads (Zhiqiang Wang)
  • common: fix bit_vector extent calc (#12611 Jason Dillaman)
  • common: make work queue addition/removal thread safe (#12662 Jason Dillaman)
  • common: optracker improvements (Zhiqiang Wang, Jianpeng Ma)
  • crush: add --check to validate dangling names, max osd id (Kefu Chai)
  • crush: cleanup, sync with kernel (Ilya Dryomov)
  • crush: fix subtree base weight on adjust_subtree_weight (#11855 Sage Weil)
  • crypto: fix NSS leak (Jason Dillaman)
  • crypto: fix unbalanced init/shutdown (#12598 Zheng Yan)
  • doc: misc updates (Kefu Chai, Owen Synge, Gael Fenet-Garde, Loic Dachary, Yannick Atchy-Dalama, Jiaying Ren, Kevin Caradant, Robert Maxime, Nicolas Yong, Germain Chipaux, Arthur Gorjux, Gabriel Sentucq, Clement Lebrun, Jean-Remi Deveaux, Clair Massot, Robin Tang, Thomas Laumondais, Jordan Dorne, Yuan Zhou, Valentin Thomas, Pierre Chaumont, Benjamin Troquereau, Benjamin Sesia, Vikhyat Umrao)
  • erasure-code: cleanup (Kefu Chai)
  • erasure-code: improve tests (Loic Dachary)
  • erasure-code: shec: fix recovery bugs (Takanori Nakao, Shotaro Kawaguchi)
  • libcephfs: add pread, pwrite (Jevon Qiao)
  • libcephfs,ceph-fuse: cache cleanup (Zheng Yan)
  • librados: add src_fadvise_flags for copy-from (Jianpeng Ma)
  • librados: respect default_crush_ruleset on pool_create (#11640 Yuan Zhou)
  • librbd: fadvise for copy, export, import (Jianpeng Ma)
  • librbd: handle NOCACHE fadvise flag (Jianpeng Ma)
  • librbd: optionally disable allocation hint (Haomai Wang)
  • librbd: prevent race between resize requests (#12664 Jason Dillaman)
  • log: fix data corruption race resulting from log rotation (#12465 Samuel Just)
  • mds: expose frags via asok (John Spray)
  • mds: fix setting entire file layout in one setxattr (John Spray)
  • mds: fix shutdown (John Spray)
  • mds: handle misc corruption issues (John Spray)
  • mds: misc fixes (Jianpeng Ma, Dan van der Ster, Zhang Zhi)
  • mds: misc snap fixes (Zheng Yan)
  • mds: store layout on header object (#4161 John Spray)
  • misc performance and cleanup (Nathan Cutler, Xinxin Shu)
  • mon: add NOFORWARD, OBSOLETE, DEPRECATE flags for mon commands (Joao Eduardo Luis)
  • mon: add PG count to ‘ceph osd df’ output (Michal Jarzabek)
  • mon: clean up, reorg some mon commands (Joao Eduardo Luis)
  • mon: disallow >2 tiers (#11840 Kefu Chai)
  • mon: fix log dump crash when debugging (Mykola Golub)
  • mon: fix metadata update race (Mykola Golub)
  • mon: fix refresh (#11470 Joao Eduardo Luis)
  • mon: make blocked op messages more readable (Jianpeng Ma)
  • mon: only send mon metadata to supporting peers (Sage Weil)
  • mon: periodic background scrub (Joao Eduardo Luis)
  • mon: prevent pgp_num > pg_num (#12025 Xinxin Shu)
  • mon: reject large max_mds values (#12222 John Spray)
  • msgr: add ceph_perf_msgr tool (Haomai Wang)
  • msgr: async: fix seq handling (Haomai Wang)
  • msgr: xio: fastpath improvements (Raju Kurunkad)
  • msgr: xio: sync with Accelio v1.4 (Vu Pham)
  • osd: clean up temp object if promotion fails (Jianpeng Ma)
  • osd: constrain collections to meta and PGs (normal and temp) (Sage Weil)
  • osd: filestore: clone using splice (Jianpeng Ma)
  • osd: filestore: fix recursive lock (Xinxin Shu)
  • osd: fix dup promotion lost op bug (Zhiqiang Wang)
  • osd: fix temp-clearing (David Zafman)
  • osd: include a temp namespace within each collection/pgid (Sage Weil)
  • osd: low and high speed flush modes (Mingxin Liu)
  • osd: peer_features includes self (David Zafman)
  • osd: recovery, peering fixes (#11687 Samuel Just)
  • osd: require firefly features (David Zafman)
  • osd: set initial crush weight with more precision (Sage Weil)
  • osd: use a temp object for recovery (Sage Weil)
  • osd: use blkid to collect partition information (Joseph Handzik)
  • rados: add --striper option to use libradosstriper (#10759 Sebastien Ponce)
  • radosgw-admin: fix subuser modify output (#12286 Guce)
  • rados: handle --snapid arg properly (Abhishek Lekshmanan)
  • rados: improve bench buffer handling, performance (Piotr Dalek)
  • rados: new pool import implementation (John Spray)
  • rbd: fix link issues (Jason Dillaman)
  • rbd: improve CLI arg parsing, usage (Ilya Dryomov)
  • rbd: recognize queue_depth kernel option (Ilya Dryomov)
  • rbd: support G and T units for CLI (Abhishek Lekshmanan)
  • rbd: use image-spec and snap-spec in help (Vikhyat Umrao, Ilya Dryomov)
  • rest-bench: misc fixes (Shawn Chen)
  • rest-bench: support https (#3968 Yuan Zhou)
  • rgw: add max multipart upload parts (#12146 Abhishek Dixit)
  • rgw: add Transaction-Id to response (Abhishek Dixit)
  • rgw: document layout of pools and objects (Pete Zaitcev)
  • rgw: do not preserve ACLs when copying object (#12370 Yehuda Sadeh)
  • rgw: fix Connection: header handling (#12298 Wido den Hollander)
  • rgw: fix data corruptions race condition (#11749 Wuxingyi)
  • rgw: fix JSON response when getting user quota (#12117 Wuxingyi)
  • rgw: force content_type for swift bucket stats requests (#12095 Orit Wasserman)
  • rgw: improved support for swift account metadata (Radoslaw Zarzynski)
  • rgw: make max put size configurable (#6999 Yuan Zhou)
  • rgw: orphan detection tool (Yehuda Sadeh)
  • rgw: swift: do not override sent content type (#12363 Orit Wasserman)
  • rgw: swift: set Content-Length for account GET (#12158 Radoslaw Zarzynski)
  • rpm: always rebuild and install man pages for rpm (Owen Synge)
  • rpm: misc fixes (Boris Ranto, Owen Synge, Ken Dreyer, Ira Cooper)
  • systemd: logrotate fixes (Tim Serong, Lars Marowsky-Bree, Nathan Cutler)
  • sysvinit compat: misc fixes (Owen Synge)
  • test: misc fs test improvements (John Spray, Loic Dachary)
  • test: python tests, linter cleanup (Alfredo Deza)


v0.80.10 Firefly released

This is a bugfix release for Firefly.

We recommend that all Firefly users upgrade.

For more detailed information, see the complete changelog.


  • build/ops: package mkcephfs on EL6 (issue#11955, pr#4924, Ken Dreyer)
  • build/ops: debian: ceph-test and rest-bench debug packages should require their respective binary packages (issue#11673, pr#4766, Ken Dreyer)
  • build/ops: run RGW as root (issue#11453, pr#4638, Ken Dreyer)
  • common: messages/MWatchNotify: include an error code in the message (issue#9193, pr#3944, Sage Weil)
  • common: Rados.shutdown() dies with Illegal instruction (core dumped) (issue#10153, pr#3963, Federico Simoncelli)
  • common: SimpleMessenger: allow RESETSESSION whenever we forget an endpoint (issue#10080, pr#3915, Greg Farnum)
  • common: WorkQueue: make wait timeout on empty queue configurable (issue#10817, pr#3941, Samuel Just)
  • crush: set_choose_tries = 100 for erasure code rulesets (issue#10353, pr#3824, Loic Dachary)
  • doc: backport ceph-disk man page to Firefly (issue#10724, pr#3936, Nilamdyuti Goswami)
  • doc: Fix ceph command manpage to match ceph -h (issue#10676, pr#3996, David Zafman)
  • fs: mount.ceph: avoid spurious error message (issue#10351, pr#3927, Yan, Zheng)
  • librados: Fix memory leak in python rados bindings (issue#10723, pr#3935, Josh Durgin)
  • librados: fix resources leakage in RadosClient::connect() (issue#10425, pr#3828, Radoslaw Zarzynski)
  • librados: Translate operation flags from C APIs (issue#10497, pr#3930, Matt Richards)
  • librbd: acquire cache_lock before refreshing parent (issue#5488, pr#4206, Jason Dillaman)
  • librbd: snap_remove should ignore -ENOENT errors (issue#11113, pr#4245, Jason Dillaman)
  • mds: fix assertion caused by system clock backwards (issue#11053, pr#3970, Yan, Zheng)
  • mon: ignore osd failures from before up_from (issue#10762, pr#3937, Sage Weil)
  • mon: MonCap: take EntityName instead when expanding profiles (issue#10844, pr#3942, Joao Eduardo Luis)
  • mon: Monitor: fix timecheck rounds period (issue#10546, pr#3932, Joao Eduardo Luis)
  • mon: OSDMonitor: do not trust small values in osd epoch cache (issue#10787, pr#3823, Sage Weil)
  • mon: OSDMonitor: fallback to json-pretty in case of invalid formatter (issue#9538, pr#4475, Loic Dachary)
  • mon: PGMonitor: several stats output error fixes (issue#10257, pr#3826, Joao Eduardo Luis)
  • objecter: fix map skipping (issue#9986, pr#3952, Ding Dinghua)
  • osd: cache tiering: fix the atime logic of the eviction (issue#9915, pr#3949, Zhiqiang Wang)
  • osd: cancel_pull: requeue waiters (issue#11244, pr#4415, Samuel Just)
  • osd: check that source OSD is valid for MOSDRepScrub (issue#9555, pr#3947, Sage Weil)
  • osd: DBObjectMap: lock header_lock on sync() (issue#9891, pr#3948, Samuel Just)
  • osd: do not ignore deleted pgs on startup (issue#10617, pr#3933, Sage Weil)
  • osd: ENOENT on clone (issue#11199, pr#4385, Samuel Just)
  • osd: erasure-code-profile set races with erasure-code-profile rm (issue#11144, pr#4383, Loic Dachary)
  • osd: FAILED assert(soid < scrubber.start || soid >= scrubber.end) (issue#11156, pr#4185, Samuel Just)
  • osd: FileJournal: fix journalq population in do_read_entry() (issue#6003, pr#3960, Samuel Just)
  • osd: fix negative degraded objects during backfilling (issue#7737, pr#4021, Guang Yang)
  • osd: get the current atime of the object in cache pool for eviction (issue#9985, pr#3950, Sage Weil)
  • osd: load_pgs: we need to handle the case where an upgrade from earlier versions which ignored non-existent pgs resurrects a pg with a prehistoric osdmap (issue#11429, pr#4556, Samuel Just)
  • osd: ObjectStore: Don’t use largest_data_off to calc data_align. (issue#10014, pr#3954, Jianpeng Ma)
  • osd: osd_types: op_queue_age_hist and fs_perf_stat should be in osd_stat_t::o… (issue#10259, pr#3827, Samuel Just)
  • osd: PG::actingset should be used when checking the number of acting OSDs for… (issue#11454, pr#4453, Guang Yang)
  • osd: PG::all_unfound_are_queried_or_lost for non-existent osds (issue#10976, pr#4416, Mykola Golub)
  • osd: PG: always clear_primary_state (issue#10059, pr#3955, Samuel Just)
  • osd: PGLog.h: 279: FAILED assert(log.log.size() == log_keys_debug.size()) (issue#10718, pr#4382, Samuel Just)
  • osd: PGLog: include rollback_info_trimmed_to in (read|write)_log (issue#10157, pr#3964, Samuel Just)
  • osd: pg stuck stale after create with activation delay (issue#11197, pr#4384, Samuel Just)
  • osd: ReplicatedPG: fail a non-blocking flush if the object is being scrubbed (issue#8011, pr#3943, Samuel Just)
  • osd: ReplicatedPG::on_change: clean up callbacks_for_degraded_object (issue#8753, pr#3940, Samuel Just)
  • osd: ReplicatedPG::scan_range: an object can disappear between the list and t… (issue#10150, pr#3962, Samuel Just)
  • osd: requeue blocked op before flush it was blocked on (issue#10512, pr#3931, Sage Weil)
  • rgw: check for timestamp for s3 keystone auth (issue#10062, pr#3958, Abhishek Lekshmanan)
  • rgw: civetweb should use unique request id (issue#11720, pr#4780, Orit Wasserman)
  • rgw: don’t allow negative / invalid content length (issue#11890, pr#4829, Yehuda Sadeh)
  • rgw: fail s3 POST auth if keystone not configured (issue#10698, pr#3966, Yehuda Sadeh)
  • rgw: flush xml header on get acl request (issue#10106, pr#3961, Yehuda Sadeh)
  • rgw: generate new tag for object when setting object attrs (issue#11256, pr#4571, Yehuda Sadeh)
  • rgw: generate the "Date" HTTP header for civetweb. (issue#11871, issue#11891, pr#4851, Radoslaw Zarzynski)
  • rgw: keystone token cache does not work correctly (issue#11125, pr#4414, Yehuda Sadeh)
  • rgw: merge manifests correctly when there’s prefix override (issue#11622, pr#4697, Yehuda Sadeh)
  • rgw: send appropriate op to cancel bucket index pending operation (issue#10770, pr#3938, Yehuda Sadeh)
  • rgw: shouldn’t need to disable rgw_socket_path if frontend is configured (issue#11160, pr#4275, Yehuda Sadeh)
  • rgw: Swift API. Dump container’s custom metadata. (issue#10665, pr#3934, Dmytro Iurchenko)
  • rgw: Swift API. Support for X-Remove-Container-Meta-{key} header. (issue#10475, pr#3929, Dmytro Iurchenko)
  • rgw: use correct objv_tracker for bucket instance (issue#11416, pr#4379, Yehuda Sadeh)
  • tests: force checkout of submodules (issue#11157, pr#4079, Loic Dachary)
  • tools: Backport ceph-objectstore-tool changes to firefly (issue#12327, pr#3866, David Zafman)
  • tools: ceph-objectstore-tool: Output only unsupported features when incompatible (issue#11176, pr#4126, David Zafman)
  • tools: ceph-objectstore-tool: Use exit status 11 for incompatible import attemp… (issue#11139, pr#4129, David Zafman)
  • tools: Fix so that -L is allowed (issue#11303, pr#4247, Alfredo Deza)


v9.0.2 released

This development release features more of the OSD work queue unification, randomized osd scrub times, a huge pile of librbd fixes, more MDS repair and snapshot fixes, and a significant amount of work on the tests and build infrastructure.


  • buffer: some cleanup (Michal Jarzabek)
  • build: cmake: fix nss linking (Danny Al-Gaaf)
  • build: cmake: misc fixes (Orit Wasserman, Casey Bodley)
  • build: install-deps: misc fixes (Loic Dachary)
  • build: (Sage Weil)
  • ceph-detect-init: added Linux Mint (Michal Jarzabek)
  • ceph-detect-init: robust init system detection (Owen Synge)
  • ceph-disk: ensure ‘zap’ only operates on a full disk (#11272 Loic Dachary)
  • ceph-disk: misc fixes to respect init system (Loic Dachary, Owen Synge)
  • ceph-disk: support NVMe device partitions (#11612 Ilja Slepnev)
  • ceph: fix ‘df’ units (Zhe Zhang)
  • ceph: fix parsing in interactive cli mode (#11279 Kefu Chai)
  • ceph-objectstore-tool: many many changes (David Zafman)
  • ceph-post-file: misc fixes (Joey McDonald, Sage Weil)
  • client: avoid sending unnecessary FLUSHSNAP messages (Yan, Zheng)
  • client: exclude setfilelock when calculating oldest tid (Yan, Zheng)
  • client: fix error handling in check_pool_perm (John Spray)
  • client: fsync waits only for inode’s caps to flush (Yan, Zheng)
  • client: invalidate kernel dcache when cache size exceeds limits (Yan, Zheng)
  • client: make fsync wait for unsafe dir operations (Yan, Zheng)
  • client: pin lookup dentry to avoid inode being freed (Yan, Zheng)
  • common: detect overflow of int config values (#11484 Kefu Chai)
  • common: fix json parsing of utf8 (#7387 Tim Serong)
  • common: fix leak of pthread_mutexattr (#11762 Ketor Meng)
  • crush: respect default replicated ruleset config on map creation (Ilya Dryomov)
  • deb, rpm: move ceph-objectstore-tool to ceph (Ken Dreyer)
  • doc: man page updates (Kefu Chai)
  • doc: misc updates (#11396 Nilamdyuti, Francois Lafont, Ken Dreyer, Kefu Chai)
  • init-radosgw: merge with sysv version; fix enumeration (Sage Weil)
  • librados: add config observer (Alistair Strachan)
  • librbd: add const for single-client-only features (Josh Durgin)
  • librbd: add deep-flatten operation (Jason Dillaman)
  • librbd: avoid blocking aio API methods (#11056 Jason Dillaman)
  • librbd: fix fast diff bugs (#11553 Jason Dillaman)
  • librbd: fix image format detection (Zhiqiang Wang)
  • librbd: fix lock ordering issue (#11577 Jason Dillaman)
  • librbd: flatten/copyup fixes (Jason Dillaman)
  • librbd: lockdep, helgrind validation (Jason Dillaman, Josh Durgin)
  • librbd: only update image flags while hold exclusive lock (#11791 Jason Dillaman)
  • librbd: return result code from close (#12069 Jason Dillaman)
  • librbd: tolerate old osds when getting image metadata (#11549 Jason Dillaman)
  • mds: do not add snapped items to bloom filter (Yan, Zheng)
  • mds: fix handling for missing mydir dirfrag (#11641 John Spray)
  • mds: fix rejoin (Yan, Zheng)
  • mds: fix stray reintegration (Yan, Zheng)
  • mds: fix suicide beacon (John Spray)
  • mds: misc repair improvements (John Spray)
  • mds: misc snapshot fixes (Yan, Zheng)
  • mds: respawn instead of suicide on blacklist (John Spray)
  • misc coverity fixes (Danny Al-Gaaf)
  • mon: add ‘mon_metadata <id>’ command (Kefu Chai)
  • mon: add ‘node ls …’ command (Kefu Chai)
  • mon: disallow ec pools as tiers (#11650 Samuel Just)
  • mon: fix mds beacon replies (#11590 Kefu Chai)
  • mon: fix ‘pg ls’ sort order, state names (#11569 Kefu Chai)
  • mon: normalize erasure-code profile for storage and comparison (Loic Dachary)
  • mon: optionally specify osd id on ‘osd create’ (Mykola Golub)
  • mon: ‘osd tree’ fixes (Kefu Chai)
  • mon: prevent pool with snapshot state from being used as a tier (#11493 Sage Weil)
  • mon: refine check_remove_tier checks (#11504 John Spray)
  • mon: remove spurious who arg from ‘mds rm …’ (John Spray)
  • msgr: async: misc fixes (Haomai Wang)
  • msgr: xio: fix ip and nonce (Raju Kurunkad)
  • msgr: xio: improve lane assignment (Vu Pham)
  • msgr: xio: misc fixes (Vu Pham, Casey Bodley)
  • osd: avoid transaction append in some cases (Sage Weil)
  • osdc/Objecter: allow per-pool calls to op_cancel_writes (John Spray)
  • osd: eliminate txn append, ECSubWrite copy (Samuel Just)
  • osd: filejournal: cleanup (David Zafman)
  • osd: fix check_for_full (Henry Chang)
  • osd: fix dirty accounting in make_writeable (Zhiqiang Wang)
  • osd: fix osdmap dump of blacklist items (John Spray)
  • osd: fix snap flushing from cache tier (again) (#11787 Samuel Just)
  • osd: fix snap handling on promotion (#11296 Sam Just)
  • osd: handle log split with overlapping entries (#11358 Samuel Just)
  • osd: keyvaluestore: misc fixes (Varada Kari)
  • osd: make suicide timeouts individually configurable (Samuel Just)
  • osd: move scrub in OpWQ (Samuel Just)
  • osd: pool size change triggers new interval (#11771 Samuel Just)
  • osd: randomize scrub times (#10973 Kefu Chai)
  • osd: refactor scrub and digest recording (Sage Weil)
  • osd: refuse first write to EC object at non-zero offset (Jianpeng Ma)
  • osd: stripe over small xattrs to fit in XFS’s 255 byte inline limit (Sage Weil, Ning Yao)
  • osd: sync object_map on syncfs (Samuel Just)
  • osd: take excl lock if op is rw (Samuel Just)
  • osd: WBThrottle cleanups (Jianpeng Ma)
  • pycephfs: many fixes for bindings (Haomai Wang)
  • rados: bench: add --no-verify option to improve performance (Piotr Dalek)
  • rados: misc bench fixes (Dmitry Yatsushkevich)
  • rbd: add disk usage tool (#7746 Jason Dillaman)
  • rgw: always check if token is expired (#11367 Anton Aksola, Riku Lehto)
  • rgw: conversion tool to repair broken multipart objects (#12079 Yehuda Sadeh)
  • rgw: do not enclose bucket header in quotes (#11860 Wido den Hollander)
  • rgw: error out if frontend did not send all data (#11851 Yehuda Sadeh)
  • rgw: fix assignment of copy obj attributes (#11563 Yehuda Sadeh)
  • rgw: fix reset_loc (#11974 Yehuda Sadeh)
  • rgw: improve content-length env var handling (#11419 Robin H. Johnson)
  • rgw: only scan for objects not in a namespace (#11984 Yehuda Sadeh)
  • rgw: remove trailing :port from HTTP_HOST header (Sage Weil)
  • rgw: shard work over multiple librados instances (Pavan Rallabhandi)
  • rgw: swift: enforce Content-Type in response (#12157 Radoslaw Zarzynski)
  • rgw: use attrs from source bucket on copy (#11639 Javier M. Mellid)
  • rocksdb: pass options as single string (Xiaoxi Chen)
  • rpm: many spec file fixes (Owen Synge, Ken Dreyer)
  • tests: fixes for rbd xstests (Douglas Fuller)
  • tests: fix tiering health checks (Loic Dachary)
  • tests for low-level performance (Haomai Wang)
  • tests: many ec non-regression improvements (Loic Dachary)
  • tests: many many ec test improvements (Loic Dachary)
  • upstart: throttle restarts (#11798 Sage Weil, Greg Farnum)


Ceph is becoming more and more popular in China. Intel and Red Hat jointly held the Beijing Ceph Day at the Intel RYC office on June 6th, 2015. It attracted ~200 developers and end users from 120+ companies. Ten technical sessions were delivered during the event to share Ceph's transformative power; the day also focused on current problems of Ceph and on how to grow the Ceph ecosystem in China.

Keynote Speech

Ziya Ma, General Manager of Intel's Big Data Technology team (BDT), introduced Intel's investments in Ceph. She started from the data big bang to point out that data needs are growing at a rate unsustainable with today's infrastructure and labor costs, and that we therefore need a fundamental transformation in storage infrastructure to resolve the new challenges. As the most popular OpenStack block backend, Ceph has attracted more and more interest; for example, Fujitsu delivered the Ceph-based storage product CD10K. Intel BDT's investments in Ceph include: Ceph performance analysis and tuning on different platforms; key features such as cache tiering, erasure coding, and NewStore development and optimization; toolkit development (COSBench, VSM, and CeTune); and promoting Ceph-based scale-out storage solutions with local customers in China. She announced the founding of the China Ceph user group, a Chinese mailing list, and the next Ceph Day, to be held in Shanghai in October.

Ceph community director Patrick McGarry from Red Hat introduced community updates and the recent development status. He emphasized that the Ceph community's focus hasn't changed since Red Hat's acquisition of Inktank, and that Ceph will provide better support for RHEL/Fedora/CentOS. He encouraged developers to attend the first Ceph hackathon, to be held in Hillsboro in August, which will focus on performance, RBD, and RGW. On the development side, he introduced the CephFS improvements in the Hammer release (366 commits to the MDS module and 20K lines of code changes) and said we can expect CephFS to be production-ready in the next release.

Ceph Development

NewStore: Xiaoxi Chen from Intel introduced the design and implementation of NewStore, a new storage backend for Ceph targeted at the next release. By decoupling the mapping from object name to actual storage path, NewStore is able to manage data flexibly. Compared to FileStore, NewStore can avoid the journal write for create, append, and overwrite operations without losing atomicity or consistency. This not only helps improve performance but also cuts down TCO for customers. The initial performance data shared in the talk looks quite promising; attendees were very interested in NewStore and are looking forward to trying it when it is ready.
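The decoupling idea can be sketched in a few lines (a conceptual illustration only, not NewStore’s actual code; the `TinyStore` class and its layout are invented for this example): because an object name reaches its data through a small index, an overwrite can land in a fresh file and become visible by updating only the mapping, instead of pushing the full data through a journal.

```python
import os
import tempfile

# Conceptual sketch (not NewStore's real code) of decoupling object
# name from on-disk path: data is written to a fresh file, and only
# the small name->path mapping is switched, so the full payload need
# not be written twice through a journal to stay atomic.

class TinyStore:
    def __init__(self, root):
        self.root = root
        self.index = {}            # object name -> actual file path

    def write(self, name, data):
        fd, path = tempfile.mkstemp(dir=self.root)   # new file, new path
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        self.index[name] = path    # only the mapping changes atomically

    def read(self, name):
        with open(self.index[name], "rb") as f:
            return f.read()

store = TinyStore(tempfile.mkdtemp())
store.write("obj", b"v1")
store.write("obj", b"v2")          # overwrite lands in a different file
print(store.read("obj"))           # -> b'v2'
```

In a real backend the mapping update itself would be committed transactionally (e.g., through a key-value store), which is the small, cheap piece that preserves atomicity.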

Cache Tiering Optimization: Active community code contributor Dr. Li Wang from Ubuntu Kylin introduced their Ceph optimization work on the Tianhe-2 supercomputer platform, including CephFS inline data, RBD image offline recovery, and cache tiering optimization. Cache tiering, an important feature since Emperor, is designed to improve a Ceph cluster’s performance by leveraging a small set of fast devices as a cache. However, the current eviction algorithm is based only on the latest access time, which is not very efficient in some scenarios. Dr. Wang proposed a temperature-based cache management algorithm that evicts objects based on both access time and frequency. The Beijing Ceph Day user survey showed cache tiering to be one of the two features attendees are most interested in trying (the other being erasure coding), and cache tiering still needs more optimization to be production ready.
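The temperature idea can be sketched as follows (a hypothetical illustration, not Dr. Wang’s implementation; the decay formula, `HALF_LIFE`, and the function names are invented for this example): an object’s temperature grows with access frequency and decays with time since the last access, and the coldest object is evicted first.

```python
# Hypothetical temperature-based eviction sketch: the score is access
# frequency decayed by recency. All names and the exact formula here
# are illustrative, not Ceph's actual cache-tier code.

HALF_LIFE = 600.0  # seconds for a temperature to halve (assumed value)

def temperature(access_count, last_access, now):
    # More accesses -> hotter; longer since last access -> colder.
    return access_count * 0.5 ** ((now - last_access) / HALF_LIFE)

def pick_eviction_victim(objects, now):
    # objects: {name: (access_count, last_access_timestamp)}
    return min(objects, key=lambda name: temperature(*objects[name], now))

objs = {
    "hot":  (100, 1000.0),  # frequent and recent
    "cold": (100, 0.0),     # frequent but long ago
    "rare": (1, 1000.0),    # recent but accessed only once
}
print(pick_eviction_victim(objs, now=1000.0))  # -> "rare"
```

Note that a pure recency (LRU-style) policy would evict "cold" here; weighing frequency as well keeps the frequently used object in the cache, which is the inefficiency the talk called out.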

Ceph-dokan Windows client: Ceph currently has no drivers that can be used directly on Windows. Zhisheng Meng from UCloud introduced Ceph-Dokan, which implements a Win32 filesystem API compatible Windows client with the help of Cygwin and MinGW. The next steps are to support CephX, provide librados and librbd DLLs, and get the work merged into Ceph master.


Ceph and Containers: Container technology is widely adopted in cloud computing environments. Independent open source contributor Haomai Wang introduced Ceph and container integration work. He compared the pros and cons of the VM+RBD and container+RBD usage models; the latter has better performance in general but needs more improvement on security. In Kubernetes, multiple containers compose a pod and use files as storage, so it appears more suitable to use a filesystem instead of RBD as the container backend. He also covered the latest CephFS improvements, CephFS deployment, and development progress on Nova and Kubernetes integration.

Ceph toolkit: As the only female speaker, Chendi Xue from Intel presented a new Ceph profiling and tuning tool called CeTune. It is designed to help system engineers deploy and benchmark a Ceph cluster quickly and easily. CeTune benchmarks the Ceph RBD, object, and CephFS interfaces with fio, COSBench, and other pluggable workloads. It monitors not only system metrics like CPU utilization, memory usage, and I/O statistics but also Ceph performance metrics like perf counters and LTTng trace data. CeTune analyzes these data offline to reveal system and software stack bottlenecks, and provides web-based visualization of all the processed data to make analysis and tuning easier.

Ceph and Big Data: With the rise of IaaS, cloud storage is becoming more and more popular. However, this introduces a new problem for big data analytics frameworks like MapReduce, which usually store data in a specific distributed filesystem; it would require a lot of data movement from IaaS storage to HDFS. Yuan Zhou from Intel introduced how to run Hadoop over Ceph RGW. He presented the detailed design of Hadoop over Ceph object storage, following the approach OpenStack Sahara takes with Swift, using a new RGWFS driver and an RGW proxy component. Some early benchmarking data across various solutions and deployments was shared, including VM vs. container vs. bare metal and HDFS vs. Swift.

User Experience Sharing

Ceph and OpenStack integration experience sharing: Dexin Wu and Yuting Wu from AWcloud shared their experience with Ceph and OpenStack integration. One key takeaway is that although the Hammer release brought significant performance improvements, Ceph is still not able to fully utilize the capabilities of SSD devices. Beyond that, we still need features like cluster-level QoS and multi-geo disaster recovery. They shared a performance tuning example showing how the throughput of a 100-OSD cluster was improved from 2,000 to 9,000 IOPS through tuning Ceph parameters and redeployment.

One Ceph, two ways of thinking: Xiaoyi Zhang from Perfect World (a top internet gaming vendor in China) shared their feedback on Ceph as an end user and offered some optimization proposals. From Perfect World’s point of view, Ceph has many advantages: high availability, high reliability, high durability, and almost unlimited capacity expansion. He shared how they solved several problems: improving recovery performance by tuning read_ahead_kb on the hard drives, reconfiguring ceph.conf and leveraging bcache to improve cluster stability and performance, and deploying multiple directories on a single PCIe SSD as dedicated OSD storage spaces to improve all-SSD performance.

Ceph based products

Hao Zhou from SanDisk introduced a Ceph-based all-flash product, InfiniFlash, and related optimizations. InfiniFlash provides up to 512TB of space in a 3U chassis, with up to 780K IOPS and 7GB/s of bandwidth. He covered optimization efforts such as thread pool sharding and lock sequence and granularity optimization.

Panel Discussion

As the last session of Beijing Ceph Day, the panel discussion covered two topics: what are the current problems with Ceph, and how can we accelerate the development of Ceph in China? Most concerns were about performance, management, documentation, and localization. People offered many suggestions on how to grow the Ceph ecosystem in China, e.g., that the community needs more contributions and sharing from users, developers, and partners. Developers can benefit from the real usage scenarios and issues met by end users to make Ceph more stable and mature, while end users can become more familiar with Ceph through the engagement.

Technical Slides

All the slides can be downloaded from .

Onsite pictures


Beijing Ceph Day Registration


Beijing Ceph Day Agenda

Keynote Speech





Media Coverage

The Beijing Ceph Day was a great success; here are some media coverage reports:

Beijing Ceph Day User Survey Results

We ran a Ceph survey during Beijing Ceph Day. Our purpose was to get a general understanding of Ceph deployment status in China and to collect feedback and suggestions for our next development and optimization work. We designed a 16-question questionnaire, including three open questions, and received 110 valid responses during the event. We would like to share the survey results with you.


  1. Attendee role: Most attendees are private cloud providers, followed by public cloud service providers.
  2. Cloud OS: OpenStack is still the dominant cloud OS (59%).
  3. Other storage deployed: 26% use commercial storage; HDFS is also very popular.
  4. Ceph deployment phase: Most deployments are still very early: 46% are still under QA and testing, while 30% are already in production.
  5. Ceph cluster scale: Most clusters are 10-50 nodes.
  6. Ceph interface used: RBD is the most used (50%), followed by object storage (23%) and CephFS (16%); 6% use the native RADOS API.
  7. Ceph version: The most popular Ceph version is Hammer (31%).
  8. Replication model: 3x replication is the most commonly used (49%).
  9. Features attendees are most interested in trying next: Cache tiering (26%) and erasure coding (19%) are very attractive to customers, followed by full-SSD optimization.
  10. Performance metric most cared about: Stability is still the No. 1 concern (30%).
  11. Deployment tools: Most people use ceph-deploy (50%).
  12. Monitoring and management: 35% use Calamari for monitoring and management, while 33% use nothing.
  13. Top three issues for Ceph: (1) performance, (2) complexity, and (3) too many immature features.
  14. Suggestions for Ceph’s development and optimization (open question): (1) documentation, (2) stability.
  15. Major reasons for choosing Ceph: (1) unified storage, (2) acceptable performance, (3) active community.
  16. QoS requirements: Diverse requirements.



v0.94.2 Hammer released

This Hammer point release fixes a few critical bugs in RGW that can prevent objects starting with underscore from behaving properly and that prevent garbage collection of deleted objects when using the Civetweb standalone mode.

All v0.94.x Hammer users are strongly encouraged to upgrade, and to make note of the repair procedure below if RGW is in use.


Bug #11442 introduced a change that made RGW objects that start with an underscore incompatible with previous versions. The fix for that bug reverts to the previous behavior. In order to access objects that start with an underscore and were created in prior Hammer releases, the following must be run after the upgrade (for each affected bucket):

$ radosgw-admin bucket check --check-head-obj-locator \
                             --bucket=<bucket> [--fix]


  • build: compilation error: No high-precision counter available (armhf, powerpc..) (#11432, James Page)
  • ceph-dencoder links to libtcmalloc, and shouldn’t (#10691, Boris Ranto)
  • ceph-disk: disk zap sgdisk invocation (#11143, Owen Synge)
  • ceph-disk: use a new disk as journal disk, ceph-disk prepare fail (#10983, Loic Dachary)
  • ceph-objectstore-tool should be in the ceph server package (#11376, Ken Dreyer)
  • librados: can get stuck in redirect loop if osdmap epoch == last_force_op_resend (#11026, Jianpeng Ma)
  • librbd: A retransmit of proxied flatten request can result in -EINVAL (Jason Dillaman)
  • librbd: ImageWatcher should cancel in-flight ops on watch error (#11363, Jason Dillaman)
  • librbd: Objectcacher setting max object counts too low (#7385, Jason Dillaman)
  • librbd: Periodic failure of TestLibRBD.DiffIterateStress (#11369, Jason Dillaman)
  • librbd: Queued AIO reference counters not properly updated (#11478, Jason Dillaman)
  • librbd: deadlock in image refresh (#5488, Jason Dillaman)
  • librbd: notification race condition on snap_create (#11342, Jason Dillaman)
  • mds: Hammer uclient checking (#11510, John Spray)
  • mds: remove caps from revoking list when caps are voluntarily released (#11482, Yan, Zheng)
  • messenger: double clear of pipe in reaper (#11381, Haomai Wang)
  • mon: Total size of OSDs is a magnitude less than it is supposed to be. (#11534, Zhe Zhang)
  • osd: don’t check order in finish_proxy_read (#11211, Zhiqiang Wang)
  • osd: handle old semi-deleted pgs after upgrade (#11429, Samuel Just)
  • osd: object creation by write cannot use an offset on an erasure coded pool (#11507, Jianpeng Ma)
  • rgw: Improve rgw HEAD request by avoiding read the body of the first chunk (#11001, Guang Yang)
  • rgw: civetweb is hitting a limit (number of threads 1024) (#10243, Yehuda Sadeh)
  • rgw: civetweb should use unique request id (#10295, Orit Wasserman)
  • rgw: critical fixes for hammer (#11447, #11442, Yehuda Sadeh)
  • rgw: fix swift COPY headers (#10662, #10663, #11087, #10645, Radoslaw Zarzynski)
  • rgw: improve performance for large object (multiple chunks) GET (#11322, Guang Yang)
  • rgw: init-radosgw: run RGW as root (#11453, Ken Dreyer)
  • rgw: keystone token cache does not work correctly (#11125, Yehuda Sadeh)
  • rgw: make quota/gc thread configurable for starting (#11047, Guang Yang)
  • rgw: make swift responses of RGW return last-modified, content-length, x-trans-id headers (#10650, Radoslaw Zarzynski)
  • rgw: merge manifests correctly when there’s prefix override (#11622, Yehuda Sadeh)
  • rgw: quota not respected in POST object (#11323, Sergey Arkhipov)
  • rgw: restore buffer of multipart upload after EEXIST (#11604, Yehuda Sadeh)
  • rgw: shouldn’t need to disable rgw_socket_path if frontend is configured (#11160, Yehuda Sadeh)
  • rgw: swift: Response header of GET request for container does not contain X-Container-Object-Count, X-Container-Bytes-Used and x-trans-id headers (#10666, Dmytro Iurchenko)
  • rgw: swift: Response header of POST request for object does not contain content-length and x-trans-id headers (#10661, Radoslaw Zarzynski)
  • rgw: swift: response for GET/HEAD on container does not contain the X-Timestamp header (#10938, Radoslaw Zarzynski)
  • rgw: swift: response for PUT on /container does not contain the mandatory Content-Length header when FCGI is used (#11036, #10971, Radoslaw Zarzynski)
  • rgw: swift: wrong handling of empty metadata on Swift container (#11088, Radoslaw Zarzynski)
  • tests: races with (#11217, Xinze Chi)
  • tests: ceph-helpers kill_daemons fails when kill fails (#11398, Loic Dachary)

For more detailed information, see the complete changelog.


v9.0.1 released

This development release is delayed a bit due to tooling changes in the build environment. As a result, the next one (v9.0.2) will have a bit more work than usual.

Highlights here include lots of RGW Swift fixes, RBD feature work surrounding the new object map feature, more CephFS snapshot fixes, and a few important CRUSH fixes.


  • auth: cache/reuse crypto lib key objects, optimize msg signature check (Sage Weil)
  • build: allow tcmalloc-minimal (Thorsten Behrens)
  • build: do not build ceph-dencoder with tcmalloc (#10691 Boris Ranto)
  • build: fix pg ref disabling (William A. Kennington III)
  • build: improvements (Loic Dachary)
  • build: misc fixes (Boris Ranto, Ken Dreyer, Owen Synge)
  • ceph-authtool: fix return code on error (Gerhard Muntingh)
  • ceph-disk: fix zap sgdisk invocation (Owen Synge, Thorsten Behrens)
  • ceph-disk: pass --cluster arg on prepare subcommand (Kefu Chai)
  • ceph-fuse, libcephfs: drop inode when rmdir finishes (#11339 Yan, Zheng)
  • ceph-fuse,libcephfs: fix uninline (#11356 Yan, Zheng)
  • ceph-monstore-tool: fix store-copy (Huangjun)
  • common: add perf counter descriptions (Alyona Kiseleva)
  • common: fix throttle max change (Henry Chang)
  • crush: fix crash from invalid ‘take’ argument (#11602 Shiva Rkreddy, Sage Weil)
  • crush: fix divide-by-2 in straw2 (#11357 Yann Dupont, Sage Weil)
  • deb: fix rest-bench-dbg and ceph-test-dbg dependencies (Ken Dreyer)
  • doc: document region hostnames (Robin H. Johnson)
  • doc: update release schedule docs (Loic Dachary)
  • init-radosgw: run radosgw as root (#11453 Ken Dreyer)
  • librados: fadvise flags per op (Jianpeng Ma)
  • librbd: allow additional metadata to be stored with the image (Haomai Wang)
  • librbd: better handling for dup flatten requests (#11370 Jason Dillaman)
  • librbd: cancel in-flight ops on watch error (#11363 Jason Dillaman)
  • librbd: default new images to format 2 (#11348 Jason Dillaman)
  • librbd: fast diff implementation that leverages object map (Jason Dillaman)
  • librbd: fix snapshot creation when other snap is active (#11475 Jason Dillaman)
  • librbd: new diff_iterate2 API (Jason Dillaman)
  • librbd: object map rebuild support (Jason Dillaman)
  • logrotate.d: prefer service over invoke-rc.d (#11330 Win Hierman, Sage Weil)
  • mds: avoid getting stuck in XLOCKDONE (#11254 Yan, Zheng)
  • mds: fix integer truncation on large client ids (Henry Chang)
  • mds: many snapshot and stray fixes (Yan, Zheng)
  • mds: persist completed_requests reliably (#11048 John Spray)
  • mds: separate safe_pos in Journaler (#10368 John Spray)
  • mds: snapshot rename support (#3645 Yan, Zheng)
  • mds: warn when clients fail to advance oldest_client_tid (#10657 Yan, Zheng)
  • misc cleanups and fixes (Danny Al-Gaaf)
  • mon: fix average utilization calc for ‘osd df’ (Mykola Golub)
  • mon: fix variance calc in ‘osd df’ (Sage Weil)
  • mon: improve callout to crushtool (Mykola Golub)
  • mon: prevent bucket deletion when referenced by a crush rule (#11602 Sage Weil)
  • mon: prime pg_temp when CRUSH map changes (Sage Weil)
  • monclient: flush_log (John Spray)
  • msgr: async: many many fixes (Haomai Wang)
  • msgr: simple: fix clear_pipe (#11381 Haomai Wang)
  • osd: add latency perf counters for tier operations (Xinze Chi)
  • osd: avoid multiple hit set insertions (Zhiqiang Wang)
  • osd: break PG removal into multiple iterations (#10198 Guang Yang)
  • osd: check scrub state when handling map (Jianpeng Ma)
  • osd: fix endless repair when object is unrecoverable (Jianpeng Ma, Kefu Chai)
  • osd: fix pg resurrection (#11429 Samuel Just)
  • osd: ignore non-existent osds in unfound calc (#10976 Mykola Golub)
  • osd: increase default max open files (Owen Synge)
  • osd: prepopulate needs_recovery_map when only one peer has missing (#9558 Guang Yang)
  • osd: relax reply order on proxy read (#11211 Zhiqiang Wang)
  • osd: skip promotion for flush/evict op (Zhiqiang Wang)
  • osd: write journal header on clean shutdown (Xinze Chi)
  • qa: script (Loic Dachary)
  • rados bench: misc fixes (Dmitry Yatsushkevich)
  • rados: fix error message on failed pool removal (Wido den Hollander)
  • radosgw-admin: add ‘bucket check’ function to repair bucket index (Yehuda Sadeh)
  • rbd: allow unmapping by spec (Ilya Dryomov)
  • rbd: deprecate --new-format option (Jason Dillaman)
  • rgw: do not set content-type if length is 0 (#11091 Orit Wasserman)
  • rgw: don’t use end_marker for namespaced object listing (#11437 Yehuda Sadeh)
  • rgw: fail if parts not specified on multipart upload (#11435 Yehuda Sadeh)
  • rgw: fix GET on swift account when limit == 0 (#10683 Radoslaw Zarzynski)
  • rgw: fix broken stats in container listing (#11285 Radoslaw Zarzynski)
  • rgw: fix bug in domain/subdomain splitting (Robin H. Johnson)
  • rgw: fix civetweb max threads (#10243 Yehuda Sadeh)
  • rgw: fix copy metadata, support X-Copied-From for swift (#10663 Radoslaw Zarzynski)
  • rgw: fix locator for objects starting with _ (#11442 Yehuda Sadeh)
  • rgw: fix mulitipart upload in retry path (#11604 Yehuda Sadeh)
  • rgw: fix quota enforcement on POST (#11323 Sergey Arkhipov)
  • rgw: fix return code on missing upload (#11436 Yehuda Sadeh)
  • rgw: force content type header on responses with no body (#11438 Orit Wasserman)
  • rgw: generate new object tag when setting attrs (#11256 Yehuda Sadeh)
  • rgw: issue aio for first chunk before flush cached data (#11322 Guang Yang)
  • rgw: make read user buckets backward compat (#10683 Radoslaw Zarzynski)
  • rgw: merge manifests properly with prefix override (#11622 Yehuda Sadeh)
  • rgw: return 412 on bad limit when listing buckets (#11613 Yehuda Sadeh)
  • rgw: send ETag, Last-Modified for swift (#11087 Radoslaw Zarzynski)
  • rgw: set content length on container GET, PUT, DELETE, HEAD (#10971, #11036 Radoslaw Zarzynski)
  • rgw: support end marker on swift container GET (#10682 Radoslaw Zarzynski)
  • rgw: swift: fix account listing (#11501 Radoslaw Zarzynski)
  • rgw: swift: set content-length on keystone tokens (#11473 Herv Rousseau)
  • rgw: use correct oid for gc chains (#11447 Yehuda Sadeh)
  • rgw: use unique request id for civetweb (#10295 Orit Wasserman)
  • rocksdb, leveldb: fix compact_on_mount (Xiaoxi Chen)
  • rocksdb: add perf counters for get/put latency (Xinxin Shu)
  • rpm: add suse firewall files (Tim Serong)
  • rpm: misc systemd and suse fixes (Owen Synge, Nathan Cutler)


Ceph Developer Summit: Jewel

Hey Cephers, welcome to another Ceph Developer Summit cycle! As Infernalis filters down through the fancy new testing hardware and QA processes, it’s time to start thinking about what ‘Jewel’ will hold in store for us (beyond Sage’s hope for a robust and ready CephFS!!!).

Blueprint submissions are now open for any and all work that you would like to contribute or request of community developers. Please submit as soon as possible to ensure that it gets a CDS slot. We know this is still a little early, but the community has asked for a bit more lead time between the finished schedule and the actual event, so we’re trying to push the submissions cycle forward a bit.

This cycle we are in the middle of our wiki transition, so the process will be a bit different; I ask for your patience with us on this. This cycle will be the first to utilize the Redmine wiki (on, but migration is ongoing, so it will be a little rough.

The link below will take you to the edit page for the Jewel blueprints. From that page you just need to add in your title in the format of [[My Awesome Blueprint]] and save the page. You can then just click that link and enter your information. There is a sample blueprint page there to get you started, but please don’t hesitate to ask ‘scuttlemonkey’ on IRC or ‘pmcgarry at redhat dot com’ via email if you have any issues. We really appreciate your patience on this.

The rough schedule (updated) of CDS and Jewel in general should look something like this:

Date Milestone
26 MAY Blueprint submissions begin
12 JUN Blueprint submissions end
17 JUN Summit agenda announced
01 JUL Ceph Developer Summit: Day 1
02 JUL Ceph Developer Summit: Day 2 (if needed)
NOV 2015 Jewel Released

As always, this event will be an online event (utilizing the BlueJeans system) so that everyone can attend from their own timezone. If you are interested in submitting a blueprint or collaborating on an existing blueprint, please click the big red button below!


Submit Blueprint

scuttlemonkey out

v9.0.0 released

This is the first development release for the Infernalis cycle, and the first Ceph release to sport a version number from the new numbering scheme. The “9” indicates this is the 9th release cycle (I, for Infernalis, is the 9th letter). The first “0” indicates this is a development release (“1” will mean release candidate and “2” will mean stable release), and the final “0” indicates this is the first such development release.
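As a quick worked example of the scheme (an illustrative sketch, not an official Ceph tool; `decode_version` and `TYPE_NAMES` are names invented here):

```python
# Decode a Ceph version string under the new post-Hammer numbering
# scheme: MAJOR.TYPE.PATCH, where MAJOR counts release cycles
# (9 -> 'I' -> Infernalis) and TYPE is 0 (dev), 1 (rc), or 2 (stable).

TYPE_NAMES = {0: "development release", 1: "release candidate", 2: "stable release"}

def decode_version(version):
    major, release_type, patch = (int(x) for x in version.lstrip("v").split("."))
    cycle_letter = chr(ord("A") + major - 1)  # the 9th letter is 'I'
    return cycle_letter, TYPE_NAMES[release_type], patch

print(decode_version("v9.0.0"))  # -> ('I', 'development release', 0)
```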

A few highlights include:

  • a new ‘ceph daemonperf’ command to watch perfcounter stats in realtime
  • reduced MDS memory usage
  • many MDS snapshot fixes
  • librbd can now store options in the image itself
  • many fixes for RGW Swift API support
  • OSD performance improvements
  • many doc updates and misc bug fixes


read more…

© 2015, Red Hat, Inc. All rights reserved.