v0.94.4 Hammer released

This Hammer point release fixes several important bugs, as well as interoperability issues that must be addressed before an upgrade to Infernalis. That is, all users of earlier versions of Hammer, or of any version of Firefly, will first need to upgrade to Hammer v0.94.4 or later before upgrading to Infernalis (or future releases).

All v0.94.x Hammer users are strongly encouraged to upgrade.
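
If you are not sure which version a cluster is currently running, the installed package version and the versions reported by the running daemons can be checked before and after the upgrade. A minimal sketch (the osd.* wildcard queries every running OSD):

ceph --version               # version of the locally installed packages
ceph tell osd.* version      # version reported by each running OSD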


  • build/ops: 50-rbd.rules conditional is wrong (issue#12166, pr#5207, Nathan Cutler)
  • build/ops: ceph-common needs python-argparse on older distros, but doesn’t require it (issue#12034, pr#5216, Nathan Cutler)
  • build/ops: radosgw requires apache for SUSE only – makes no sense (issue#12358, pr#5411, Nathan Cutler)
  • build/ops: rpm: cephfs_java not fully conditionalized (issue#11991, pr#5202, Nathan Cutler)
  • build/ops: rpm: not possible to turn off Java (issue#11992, pr#5203, Owen Synge)
  • build/ops: running fdupes unnecessarily (issue#12301, pr#5223, Nathan Cutler)
  • build/ops: snappy-devel for all supported distros (issue#12361, pr#5264, Nathan Cutler)
  • build/ops: SUSE/openSUSE builds need libbz2-devel (issue#11629, pr#5204, Nathan Cutler)
  • build/ops: useless %py_requires breaks SLE11-SP3 build (issue#12351, pr#5412, Nathan Cutler)
  • build/ops: error in ext_mime_map_init() when /etc/mime.types is missing (issue#11864, pr#5385, Ken Dreyer)
  • build/ops: upstart: limit respawn to 3 in 30 mins (instead of 5 in 30s) (issue#11798, pr#5930, Sage Weil)
  • build/ops: With root as default user, unable to have multiple RGW instances running (issue#10927, pr#6161, Sage Weil)
  • build/ops: With root as default user, unable to have multiple RGW instances running (issue#11140, pr#6161, Sage Weil)
  • build/ops: With root as default user, unable to have multiple RGW instances running (issue#11686, pr#6161, Sage Weil)
  • build/ops: With root as default user, unable to have multiple RGW instances running (issue#12407, pr#6161, Sage Weil)
  • cli: ceph: cli throws exception on unrecognized errno (issue#11354, pr#5368, Kefu Chai)
  • cli: ceph tell: broken error message / misleading hinting (issue#11101, pr#5371, Kefu Chai)
  • common: arm: all programs that link to librados2 hang forever on startup (issue#12505, pr#5366, Boris Ranto)
  • common: buffer: critical bufferlist::zero bug (issue#12252, pr#5365, Haomai Wang)
  • common: ceph-object-corpus: add 0.94.2-207-g88e7ee7 hammer objects (issue#13070, pr#5551, Sage Weil)
  • common: do not insert empty ptr when rebuilding an empty bufferlist (issue#12775, pr#5764, Xinze Chi)
  • common: [ FAILED ] TestLibRBD.BlockingAIO (issue#12479, pr#5768, Jason Dillaman)
  • common: LibCephFS.GetPoolId failure (issue#12598, pr#5887, Yan, Zheng)
  • common: Memory leak in pthread_mutexattr_init without pthread_mutexattr_destroy (issue#11762, pr#5378, Ketor Meng)
  • common: object_map_update fails with -EINVAL return code (issue#12611, pr#5559, Jason Dillaman)
  • common: Pipe: Drop connect_seq increase line (issue#13093, pr#5908, Haomai Wang)
  • common: recursive lock of md_config_t (0) (issue#12614, pr#5759, Josh Durgin)
  • crush: ceph osd crush reweight-subtree does not reweight parent node (issue#11855, pr#5374, Sage Weil)
  • doc: update docs to point to (issue#13162, pr#6156, Alfredo Deza)
  • fs: ceph-fuse 0.94.2-1trusty segfaults / aborts (issue#12297, pr#5381, Greg Farnum)
  • fs: segfault launching ceph-fuse with bad --name (issue#12417, pr#5382, John Spray)
  • librados: Change radosgw pools default crush ruleset (issue#11640, pr#5754, Yuan Zhou)
  • librbd: correct issues discovered via lockdep / helgrind (issue#12345, pr#5296, Jason Dillaman)
  • librbd: Crash during TestInternal.MultipleResize (issue#12664, pr#5769, Jason Dillaman)
  • librbd: deadlock during cooperative exclusive lock transition (issue#11537, pr#5319, Jason Dillaman)
  • librbd: Possible crash while concurrently writing and shrinking an image (issue#11743, pr#5318, Jason Dillaman)
  • mon: add a cache layer over MonitorDBStore (issue#12638, pr#5697, Kefu Chai)
  • mon: fix crush testing for new pools (issue#13400, pr#6192, Sage Weil)
  • mon: get pools health info have error (issue#12402, pr#5369, renhwztetecs)
  • mon: implicit erasure code crush ruleset is not validated (issue#11814, pr#5276, Loic Dachary)
  • mon: PaxosService: call post_refresh() instead of post_paxos_update() (issue#11470, pr#5359, Joao Eduardo Luis)
  • mon: pgmonitor: wrong 'at/near target max' reporting (issue#12401, pr#5370, huangjun)
  • mon: register_new_pgs() should check ruleno instead of its index (issue#12210, pr#5377, Xinze Chi)
  • mon: Show osd as NONE in ceph osd map <pool> <object> output (issue#11820, pr#5376, Shylesh Kumar)
  • mon: the output is wrong when running ceph osd reweight (issue#12251, pr#5372, Joao Eduardo Luis)
  • osd: allow peek_map_epoch to return an error (issue#13060, pr#5892, Sage Weil)
  • osd: cache agent is idle although one object is left in the cache (issue#12673, pr#5765, Loic Dachary)
  • osd: copy-from doesn’t preserve truncate_{seq,size} (issue#12551, pr#5885, Samuel Just)
  • osd: crash creating/deleting pools (issue#12429, pr#5527, John Spray)
  • osd: fix repair when recorded digest is wrong (issue#12577, pr#5468, Sage Weil)
  • osd: include/ceph_features: define HAMMER_0_94_4 feature (issue#13026, pr#5687, Sage Weil)
  • osd: is_new_interval() fixes (issue#10399, pr#5691, Jason Dillaman)
  • osd: is_new_interval() fixes (issue#11771, pr#5691, Jason Dillaman)
  • osd: long standing slow requests: connection->session->waiting_for_map->connection ref cycle (issue#12338, pr#5761, Samuel Just)
  • osd: Mutex Assert from PipeConnection::try_get_pipe (issue#12437, pr#5758, David Zafman)
  • osd: pg_interval_t::check_new_interval – for ec pool, should not rely on min_size to determine if the PG was active at the interval (issue#12162, pr#5373, Guang G Yang)
  • osd: 732: FAILED assert(log.log.size() == log_keys_debug.size()) (issue#12652, pr#5763, Sage Weil)
  • osd: PGLog::proc_replica_log: correctly handle case where entries between olog.head and log.tail were split out (issue#11358, pr#5380, Samuel Just)
  • osd: read on chunk-aligned xattr not handled (issue#12309, pr#5367, Sage Weil)
  • osd: suicide timeout during peering – search for missing objects (issue#12523, pr#5762, Guang G Yang)
  • osd: WBThrottle::clear_object: signal on cond when we reduce throttle values (issue#12223, pr#5757, Samuel Just)
  • rbd: crash during shutdown after writeback blocked by IO errors (issue#12597, pr#5767, Jianpeng Ma)
  • rgw: add delimiter to prefix only when path is specified (issue#12960, pr#5860, Sylvain Baubeau)
  • rgw: create a tool for orphaned objects cleanup (issue#9604, pr#5717, Yehuda Sadeh)
  • rgw: don’t preserve acls when copying object (issue#11563, pr#6039, Yehuda Sadeh)
  • rgw: don’t preserve acls when copying object (issue#12370, pr#6039, Yehuda Sadeh)
  • rgw: don’t preserve acls when copying object (issue#13015, pr#6039, Yehuda Sadeh)
  • rgw: Ensure that swift keys don’t include backslashes (issue#7647, pr#5716, Yehuda Sadeh)
  • rgw: GWWatcher::handle_error -> common/ 95: FAILED assert(r == 0) (issue#12208, pr#6164, Yehuda Sadeh)
  • rgw: HTTP return code is not being logged by CivetWeb (issue#12432, pr#5498, Yehuda Sadeh)
  • rgw: init_rados failed leads to repeated delete (issue#12978, pr#6165, Xiaowei Chen)
  • rgw: init some manifest fields when handling explicit objs (issue#11455, pr#5732, Yehuda Sadeh)
  • rgw: Keystone Fernet tokens break auth (issue#12761, pr#6162, Abhishek Lekshmanan)
  • rgw: region data still exist in region-map after region-map update (issue#12964, pr#6163, dwj192)
  • rgw: remove trailing :port from host for purposes of subdomain matching (issue#12353, pr#6042, Yehuda Sadeh)
  • rgw: rest-bench common/ 54: FAILED assert(_threads.empty()) (issue#3896, pr#5383, huangjun)
  • rgw: returns requested bucket name raw in Bucket response header (issue#12537, pr#5715, Yehuda Sadeh)
  • rgw: segmentation fault when rgw_gc_max_objs > HASH_PRIME (issue#12630, pr#5719, Ruifeng Yang)
  • rgw: segments are read during HEAD on Swift DLO (issue#12780, pr#6160, Yehuda Sadeh)
  • rgw: setting max number of buckets for user via ceph.conf option (issue#12714, pr#6166, Vikhyat Umrao)
  • rgw: Swift API: X-Trans-Id header is wrongly formatted (issue#12108, pr#5721, Radoslaw Zarzynski)
  • rgw: testGetContentType and testHead failed (issue#11091, pr#5718, Radoslaw Zarzynski)
  • rgw: testGetContentType and testHead failed (issue#11438, pr#5718, Radoslaw Zarzynski)
  • rgw: testGetContentType and testHead failed (issue#12157, pr#5718, Radoslaw Zarzynski)
  • rgw: testGetContentType and testHead failed (issue#12158, pr#5718, Radoslaw Zarzynski)
  • rgw: testGetContentType and testHead failed (issue#12363, pr#5718, Radoslaw Zarzynski)
  • rgw: the arguments ‘domain’ should not be assigned when return false (issue#12629, pr#5720, Ruifeng Yang)
  • tests: qa/workunits/cephtool/ don’t assume crash_replay_interval=45 (issue#13406, pr#6172, Sage Weil)
  • tests: TEST_crush_rule_create_erasure consistently fails on i386 builder (issue#12419, pr#6201, Loic Dachary)
  • tools: ceph-disk zap should ensure block device (issue#11272, pr#5755, Loic Dachary)

For more detailed information, see the complete changelog.

v9.1.0 Infernalis release candidate released

This is the first Infernalis release candidate. There have been some major changes since Hammer, and the upgrade process is non-trivial. Please read carefully.


The v9.1.0 packages are pushed to the development release repositories:

For more info, see:

Or install with ceph-deploy via:

ceph-deploy install --testing HOST


  • librbd and librados ABI compatibility is broken. Be careful installing this RC on client machines (e.g., those running qemu). It will be fixed in the final v9.2.0 release.


  • General:
    • Ceph daemons are now managed via systemd (with the exception of Ubuntu Trusty, which still uses upstart).
    • Ceph daemons run as the ‘ceph’ user instead of root.
    • On Red Hat distros, there is also an SELinux policy.
  • RADOS:
    • The RADOS cache tier can now proxy write operations to the base tier, allowing writes to be handled without forcing migration of an object into the cache.
    • The SHEC erasure coding support is no longer flagged as experimental. SHEC trades some additional storage space for faster repair.
    • There is now a unified queue (and thus prioritization) of client IO, recovery, scrubbing, and snapshot trimming.
    • There have been many improvements to low-level repair tooling (ceph-objectstore-tool).
    • The internal ObjectStore API has been significantly cleaned up in order to facilitate new storage backends like NewStore.
  • RGW:
    • The Swift API now supports object expiration.
    • There are many Swift API compatibility improvements.
  • RBD:
    • The rbd du command shows actual usage (quickly, when object-map is enabled).
    • The object-map feature has seen many stability improvements.
    • Object-map and exclusive-lock features can be enabled or disabled dynamically.
    • You can now store user metadata and set persistent librbd options associated with individual images.
    • The new deep-flatten feature allows flattening of a clone and all of its snapshots. (Previously snapshots could not be flattened.)
    • The export-diff command is now faster (it uses aio). There is also a new fast-diff feature.
    • The --size argument can be specified with a suffix for units (e.g., --size 64G).
    • There is a new rbd status command that, for now, shows who has the image open/mapped (see the example commands after this list).
  • CephFS:
    • You can now rename snapshots.
    • There have been ongoing improvements around administration, diagnostics, and the check and repair tools.
    • The caching and revocation of client cache state due to unused inodes has been dramatically improved.
    • The ceph-fuse client behaves better on 32-bit hosts.
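
As an illustration of the new RBD command-line features listed above, a brief hedged sketch; the pool and image names are placeholders, and object-map normally requires the exclusive-lock feature to be enabled first:

rbd du mypool/myimage                         # actual usage, fast when object-map is enabled
rbd status mypool/myimage                     # who has the image open/mapped
rbd resize --size 64G mypool/myimage          # size suffixes such as G are accepted
rbd feature enable mypool/myimage object-map  # features can now be toggled dynamically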


We have decided to drop support for many older distributions so that we can move to a newer compiler toolchain (e.g., C++11). Although it is still possible to build Ceph on older distributions by installing backported development tools, we are not building and publishing release packages for these older distributions.

In particular, we now build packages for:

  • CentOS 7 or later; we have dropped support for CentOS 6 (and other RHEL 6 derivatives, like Scientific Linux 6).
  • Debian Jessie 8.x or later; Debian Wheezy 7.x’s g++ has incomplete support for C++11 (and no systemd).
  • Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer supported.
  • Fedora 22 or later.


Upgrading directly from Firefly v0.80.z is not possible. All clusters must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only then is it possible to upgrade to Infernalis 9.2.z.

Note that v0.94.4 isn’t released yet, but you can upgrade to a test build from gitbuilder with:

ceph-deploy install --dev hammer HOST

The v0.94.4 Hammer point release will be out before v9.2.0 Infernalis is.


  • For all distributions that support systemd (CentOS 7, Fedora, Debian Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd files instead of the legacy sysvinit scripts. For example:

    systemctl start ceph.target       # start all daemons
    systemctl status ceph-osd@12      # check status of osd.12

    The main notable distro that is not yet using systemd is Ubuntu trusty 14.04. (The next Ubuntu LTS, 16.04, will use systemd instead of upstart.)

  • Ceph daemons now run as user and group ceph by default. The ceph user has a static UID assigned by Fedora and Debian (also used by derivative distributions like RHEL/CentOS and Ubuntu). On SUSE the ceph user will currently get a dynamically assigned UID when the user is created.

    If your systems already have a ceph user, upgrading the package will cause problems. We suggest you first remove or rename the existing ‘ceph’ user before upgrading.

    When upgrading, administrators have two options:

    1. Add the following line to ceph.conf on all hosts:

      setuser match path = /var/lib/ceph/$type/$cluster-$id

      This will make the Ceph daemons run as root (i.e., not drop privileges and switch to user ceph) if the daemon’s data directory is still owned by root. Newly deployed daemons will be created with data owned by user ceph and will run with reduced privileges, but upgraded daemons will continue to run as root.

    2. Fix the data ownership during the upgrade. This is the preferred option, but is more work. The process for each host would be to:

      1. Upgrade the ceph package. This creates the ceph user and group. For example:

        ceph-deploy install --stable infernalis HOST
      2. Stop the daemon(s):

        service ceph stop           # fedora, centos, rhel, debian
        stop ceph-all               # ubuntu
      3. Fix the ownership:

        chown -R ceph:ceph /var/lib/ceph
      4. Restart the daemon(s):

        start ceph-all                # ubuntu
        systemctl start ceph.target   # debian, centos, fedora, rhel
  • The on-disk format for the experimental KeyValueStore OSD backend has changed. You will need to remove any OSDs using that backend before you upgrade any test clusters that use it.


  • When a pool quota is reached, librados operations now block indefinitely, the same way they do when the cluster fills up. (Previously they would return -ENOSPC). By default, a full cluster or pool will now block. If your librados application can handle ENOSPC or EDQUOT errors gracefully, you can get error returns instead by using the new librados OPERATION_FULL_TRY flag.
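
For reference, the pool quotas that trigger this behavior are set with the existing CLI; a short sketch (the pool name is a placeholder, and a value of 0 removes a quota):

ceph osd pool set-quota mypool max_objects 10000        # limit the number of objects
ceph osd pool set-quota mypool max_bytes 10737418240    # limit the total size (about 10 GB)
ceph osd pool set-quota mypool max_bytes 0              # remove the byte quota again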


NOTE: These notes are somewhat abbreviated while we find a less time-consuming process for generating them.

  • build: C++11 now supported
  • build: many cmake improvements
  • build: OSX build fixes (Yan, Zheng)
  • build: remove rest-bench
  • ceph-disk: many fixes (Loic Dachary)
  • ceph-disk: support for multipath devices (Loic Dachary)
  • ceph-fuse: mostly behave on 32-bit hosts (Yan, Zheng)
  • ceph-objectstore-tool: many improvements (David Zafman)
  • common: bufferlist performance tuning (Piotr Dalek, Sage Weil)
  • common: make mutex more efficient
  • common: some async compression infrastructure (Haomai Wang)
  • librados: add FULL_TRY and FULL_FORCE flags for dealing with full clusters or pools (Sage Weil)
  • librados: fix notify completion race (#13114 Sage Weil)
  • librados, libcephfs: randomize client nonces (Josh Durgin)
  • librados: pybind: fix binary omap values (Robin H. Johnson)
  • librbd: fix reads larger than the cache size (Lu Shi)
  • librbd: metadata filter fixes (Haomai Wang)
  • librbd: use write_full when possible (Zhiqiang Wang)
  • mds: avoid emitting cap warnings before evicting session (John Spray)
  • mds: fix expected holes in journal objects (#13167 Yan, Zheng)
  • mds: fix SnapServer crash on deleted pool (John Spray)
  • mds: many fixes (Yan, Zheng, John Spray, Greg Farnum)
  • mon: add cache over MonitorDBStore (Kefu Chai)
  • mon: ‘ceph osd metadata’ can dump all osds (Haomai Wang)
  • mon: detect kv backend failures (Sage Weil)
  • mon: fix CRUSH map test for new pools (Sage Weil)
  • mon: fix min_last_epoch_clean tracking (Kefu Chai)
  • mon: misc scaling fixes (Sage Weil)
  • mon: streamline session handling, fix memory leaks (Sage Weil)
  • mon: upgrades must pass through hammer (Sage Weil)
  • msg/async: many fixes (Haomai Wang)
  • osd: cache proxy-write support (Zhiqiang Wang, Samuel Just)
  • osd: configure promotion based on write recency (Zhiqiang Wang)
  • osd: don’t send dup MMonGetOSDMap requests (Sage Weil, Kefu Chai)
  • osd: erasure-code: fix SHEC floating point bug (#12936 Loic Dachary)
  • osd: erasure-code: update to ISA-L 2.14 (Yuan Zhou)
  • osd: fix hitset object naming to use GMT (Kefu Chai)
  • osd: fix misc memory leaks (Sage Weil)
  • osd: fix peek_queue locking in FileStore (Xinze Chi)
  • osd: fix promotion vs full cache tier (Samuel Just)
  • osd: fix replay requeue when pg is still activating (#13116 Samuel Just)
  • osd: fix scrub stat bugs (Sage Weil, Samuel Just)
  • osd: force promotion for ops EC can’t handle (Zhiqiang Wang)
  • osd: improve behavior on machines with large memory pages (Steve Capper)
  • osd: merge multiple setattr calls into a setattrs call (Xinxin Shu)
  • osd: newstore prototype (Sage Weil)
  • osd: ObjectStore internal API refactor (Sage Weil)
  • osd: SHEC no longer experimental
  • osd: throttle evict ops (Yunchuan Wen)
  • osd: upgrades must pass through hammer (Sage Weil)
  • osd: use SEEK_HOLE / SEEK_DATA for sparse copy (Xinxin Shu)
  • rbd: rbd-replay-prep and rbd-replay improvements (Jason Dillaman)
  • rgw: expose the number of unhealthy workers through admin socket (Guang Yang)
  • rgw: fix casing of Content-Type header (Robin H. Johnson)
  • rgw: fix decoding of X-Object-Manifest from GET on Swift DLO (Radoslaw Rzarzynski)
  • rgw: fix sysvinit script
  • rgw: fix sysvinit script w/ multiple instances (Sage Weil, Pavan Rallabhandi)
  • rgw: improve handling of already removed buckets in expirer (Radoslaw Rzarzynski)
  • rgw: log to /var/log/ceph instead of /var/log/radosgw
  • rgw: rework X-Trans-Id header to conform with the Swift API (Radoslaw Rzarzynski)
  • rgw: s3 encoding-type for get bucket (Jeff Weber)
  • rgw: set max buckets per user in ceph.conf (Vikhyat Umrao)
  • rgw: support for Swift expiration API (Radoslaw Rzarzynski, Yehuda Sadeh)
  • rgw: user rm is idempotent (Orit Wasserman)
  • selinux policy (Boris Ranto, Milan Broz)
  • systemd: many fixes (Sage Weil, Owen Synge, Boris Ranto, Dan van der Ster)
  • systemd: run daemons as user ceph

v0.87 Giant released

This release will form the basis for the stable release Giant, v0.87.x. Highlights for Giant include:

  • RADOS Performance: a range of improvements have been made in the OSD and client-side librados code that improve the throughput on flash backends and improve parallelism and scaling on fast machines.
  • CephFS: we have fixed a raft of bugs in CephFS and built some basic journal recovery and diagnostic tools. Stability and performance of single-MDS systems is vastly improved in Giant. Although we do not yet recommend CephFS for production deployments, we do encourage testing for non-critical workloads so that we can better gauge the feature, usability, performance, and stability gaps.
  • Local Recovery Codes: the OSDs now support an erasure-coding scheme that stores some additional data blocks to reduce the IO required to recover from single OSD failures.
  • Degraded vs misplaced: the Ceph health reports from ‘ceph -s’ and related commands now make a distinction between data that is degraded (there are fewer than the desired number of copies) and data that is misplaced (stored in the wrong location in the cluster). The distinction is important because the latter does not compromise data safety.
  • Tiering improvements: we have made several improvements to the cache tiering implementation that improve performance. Most notably, objects are not promoted into the cache tier by a single read; they must be found to be sufficiently hot before that happens.
  • Monitor performance: the monitors now perform writes to the local data store asynchronously, improving overall responsiveness.
  • Recovery tools: the ceph_objectstore_tool is greatly expanded to allow manipulation of an individual OSD's data store for debugging and repair purposes. This is most heavily used by our QA infrastructure to exercise recovery code.


Teuthology docker targets hack (1/3)

teuthology runs jobs testing the Ceph integration on targets that can either be virtual machines or bare metal. The container hack adds support for docker containers as a replacement.

Running task exec...
Executing custom commands...
Running commands on role mon.a host container002
running sudo 'TESTDIR=/home/ubuntu/cephtest' bash '-c' '/bin/true'
running docker exec container002 bash /tmp/tmp/tmptJ7hxa
Duration was 0.088931 seconds

A worker with this hack listens on the container tube:

$ mkdir /tmp/a /tmp/logs
$ ./virtualenv/bin/teuthology-worker \
  -l /tmp/logs --archive-dir /tmp/a \
  --tube container

A noop job

machine_type: container
os_type: ubuntu
os_version: "14.04"
roles:
- - mon.a
  - osd.0
- - osd.1
  - client.0
tasks:
- exec:
    mon.a:
      - /bin/true

is scheduled via the container tube

./virtualenv/bin/teuthology-schedule --name foo \
  --worker container --owner \
Job scheduled with name foo and ID 29
2014-10-29 14:28:28,415.415 No results_server \
  set in /home/loic/.teuthology.yaml; cannot report results

The implementation relies on the docker exec command introduced in Docker 1.3, which is used as a replacement for ssh connections.
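
Reduced to its essence, the pattern in the logs below is: start a long-lived container to stand in for a target, then run each remote command with docker exec instead of ssh. A condensed sketch using the container name, image, and script path that appear in the logs (backgrounding with & is for illustration; the hack itself keeps the container alive from a sleeper thread):

docker run --rm=true --volume /tmp:/tmp/tmp --name container002 \
    ceph-ubuntu-14.04 bash -c 'echo running ; sleep 1000000' &
docker exec container002 bash /tmp/tmp/tmptJ7hxa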

2014-10-29 13:48:34,996.996 INFO:teuthology.lockstatus:lockstatus::get_status uri = http://localhost:8080/nodes/container002/
2014-10-29 13:48:35,009.009 INFO:teuthology.containers:sleeper_running  140380434393616
2014-10-29 13:48:35,032.032 INFO:teuthology.containers:running 'docker' 'run' '--rm=true' '--volume' '/tmp:/tmp/tmp' '--name' 'container002' 'ceph-ubuntu-14.04' 'bash' '-c' 'echo running ; sleep 1000000'
2014-10-29 13:48:36,132.132 INFO:teuthology.containers:run_sleeper: running

2014-10-29 13:48:36,133.133 INFO:teuthology.containers:sleeper_running  140380434393616
2014-10-29 13:48:36,133.133 INFO:teuthology.containers:start: container container002 started
2014-10-29 13:48:36,133.133 INFO:teuthology.lockstatus:lockstatus::get_status uri = http://localhost:8080/nodes/container001/
2014-10-29 13:48:36,149.149 INFO:teuthology.containers:sleeper_running  140380258955216
2014-10-29 13:48:36,169.169 INFO:teuthology.containers:running 'docker' 'run' '--rm=true' '--volume' '/tmp:/tmp/tmp' '--name' 'container001' 'ceph-ubuntu-14.04' 'bash' '-c' 'echo running ; sleep 1000000'
2014-10-29 13:48:37,244.244 INFO:teuthology.containers:run_sleeper: running

2014-10-29 13:48:37,244.244 INFO:teuthology.containers:sleeper_running  140380258955216
2014-10-29 13:48:37,245.245 INFO:teuthology.containers:start: container container001 started
2014-10-29 13:48:37,245.245 INFO:teuthology.task.internal:roles:  - ['mon.a', 'osd.0']
2014-10-29 13:48:37,245.245 INFO:teuthology.task.internal:roles:  - ['osd.1', 'client.0']
2014-10-29 13:48:37,245.245 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2014-10-29 13:48:37,245.245 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2014-10-29 13:48:37,247.247 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2014-10-29 13:48:40,033.033 INFO:teuthology.containers:running rmdir '--' '/home/ubuntu/cephtest'
2014-10-29 13:48:40,034.034 INFO:teuthology.containers:running docker exec container002 bash /tmp/tmp/tmpxaD5xX
2014-10-29 13:48:40,145.145 INFO:teuthology.containers:completed ['docker', 'exec', u'container001', 'bash', '/tmp/tmp/tmpqhYczm'] on container001:
2014-10-29 13:48:40,145.145 INFO:teuthology.containers:completed ['docker', 'exec', u'container002', 'bash', '/tmp/tmp/tmpxaD5xX'] on container002:
2014-10-29 13:48:40,147.147 data:
{duration: 0.0005440711975097656, failure_reason: need more than 0 values to unpack,
  owner:, status: fail, success: false}

2014-10-29 13:48:40,148.148 No results_server set in /home/loic/.teuthology.yaml; cannot report results
2014-10-29 13:48:40,149.149

The containers were added to the paddles database, using the new is_container field.

for id in 1 2 3 ; do
 sqlite3 dev.db "insert into nodes (id,name,machine_type,is_container,is_vm,locked,up) values ($id, 'container00$id', 'container', 1, 0, 0, 1);"
done

They were not pre-provisioned because they are created on demand. Since docker provides a repository of images, downburst is not used.

Remove Pool Without Name

For example:

# rados lspools
                            <---- ?????

# ceph osd dump | grep "pool 4 "
pool 4 '' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1668 stripe_width 0

# rados rmpool "" "" --yes-i-really-really-mean-it
successfully deleted pool

Ceph Developer Summit 2014 – Hammer

The Ceph Developer Summit (CDS) for the next major Ceph release, called Hammer, started a few hours ago today (2014/10/28). It’s again a virtual summit via video conference calls.

I’ve submitted three blueprints:

We already discussed the Ceph security and enterprise topics. You can find the results/logs in the pad. The sessions are recorded and will be available afterwards.

If you are interested in Ceph development: now it’s time to join the video conference call. You can find all links, the timetable and blueprints to discuss here. There will be a second track with a lot of interesting discussions tomorrow.

If you are interested in working on, for example, the Ceph security topic: check the pad and feel free to contact me.

The next OpenStack summit is just around the corner and as usual Josh Durgin and I will lead the Ceph and OpenStack design session.
This session is scheduled for November 3 from 11:40 to 13:10, find the description link here.
The etherpad is already available here so don’t hesitate to add your name to the list along with your main subject of interest.
See you in Paris!

