Ceph Community Newsletter, December 2018 edition

thingee

Hey Cephers, happy new year! We are catching up again on our community newsletters, so this edition covers both November and December.

Announcements

The Ceph Foundation

On November 12 at Ceph Day Berlin we announced the Ceph Foundation, a new organization to bring industry members together to support the Ceph open source community. The new foundation is organized as a directed fund under the Linux Foundation, which is also home to many other projects and cross-project foundations, including Linux and the Cloud Native Computing Foundation (CNCF) that hosts Kubernetes and Rook. Read more

Cephalocon Barcelona 2019

Cephalocon Barcelona 2019 aims to bring together more than 800 technologists and adopters from across the globe to showcase Ceph’s history and its future, demonstrate real-world applications, and highlight vendor solutions. Join us in Barcelona, Spain on 19-20 May 2019 for our second international conference event. The CFP is now open and sponsorship opportunities are available!

Project updates

RADOS

  • Ability to adjust legacy custom CRUSH maps to use the new 'device classes' without triggering any data migration

  • New, streamlined ceph::mutex to replace old Mutex

  • librados3 cleans up some of the cruft in the librados2 interface, and also separates out the C and C++ libraries so that they can be revised and versioned independently
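The device-class adjustment mentioned above can be sketched with `crushtool`'s reclassify feature. This is a hedged sketch, not from the newsletter itself: the root and bucket names (`default`, `%-ssd`) and class names are illustrative, and the commands assume a running cluster.

```shell
# Export the current CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# Rewrite a legacy hand-made hierarchy (e.g. parallel '*-ssd' buckets)
# into device classes -- bucket/class names here are illustrative
crushtool -i crushmap.bin --reclassify \
    --reclassify-root default hdd \
    --reclassify-bucket %-ssd ssd default \
    -o crushmap.new

# Compare old and new maps to confirm no data would move,
# then inject the adjusted map
crushtool -i crushmap.bin --compare crushmap.new
ceph osd setcrushmap -i crushmap.new
```

The `--compare` step is the safety check that makes the "without triggering any data migration" claim verifiable before the new map goes live.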

RGW

  • New in-place appendable object S3 extension

  • Dynamic resharding bugfixes (important)

  • Permit S3 server object encryption when SSL is provided by a proxy

RBD

  • Live image migration: an in-use image can be migrated to a new pool or to a new image with different layout settings with minimal downtime.

  • Simplified mirroring setup: The monitor addresses and CephX keys for remote clusters can now be stored in the local Ceph cluster.

  • Initial support for namespace isolation: a single RBD pool can be used to store RBD images for multiple tenants.

  • Simplified configuration overrides: global-, pool-, and image-level configuration overrides are now supported.

  • Image timestamps: last-modified and last-accessed timestamps are now supported.

  • RBD performance metrics gathering is work in progress.
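The live-migration and configuration-override items above roughly translate into CLI workflows like the following sketch. The pool names, image names, and the `rbd_cache` option are illustrative assumptions, and the commands assume a running cluster.

```shell
# Live migration: link the source image to a destination image/pool,
# deep-copy the data in the background, then commit
# (or `rbd migration abort` to roll back)
rbd migration prepare oldpool/img newpool/img
rbd migration execute newpool/img
rbd migration commit newpool/img

# Configuration overrides at global, pool, and image level
# (rbd_cache is just an example option)
rbd config global set global rbd_cache true
rbd config pool set newpool rbd_cache false
rbd config image set newpool/img rbd_cache false
```

During migration, clients reopen the image at its destination while the copy proceeds, which is what keeps downtime minimal.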

CephFS

New 'volumes' mgr module to streamline creation of CephFS volumes (file systems) and subvolumes (shared subdirectories with quota and independent access). Creating a new volume also triggers creation of MDS daemons via the new orchestrator interface (if it is enabled), and conversely, deleting a volume tears down its ceph-mds daemons.
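The volume/subvolume workflow described above can be sketched with the new mgr commands. The volume and subvolume names and the quota size below are illustrative assumptions.

```shell
# Enable the module, then create a volume (a CephFS file system);
# MDS daemons are spawned via the orchestrator if one is configured
ceph mgr module enable volumes
ceph fs volume create myfs

# Carve out a subvolume (shared subdirectory) with a 10 GiB quota
ceph fs subvolume create myfs mysub --size 10737418240

# Removing the volume also tears down its ceph-mds daemons
ceph fs volume rm myfs --yes-i-really-mean-it
```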

Dashboard

A lot has happened since the last Ceph Newsletter was published! Tina Kallio has been selected as our Outreachy intern and started this week; she is currently getting her development environment up and finalizing her next code contribution (mgr/dashboard: Filter out tasks depending on permissions — https://github.com/ceph/ceph/pull/25426).

Fujitsu has appointed three engineers to work on the dashboard and orchestrator. They're based in Poland and are working on setting up their development environments this week. Igor Podoski (aka "aiicore" on IRC) represents the team. Welcome!

Lenz Grimmer's talk for DevConf.CZ (https://devconf.info/cz/2019) in Brno (CZ) has been accepted; he will be talking about managing and monitoring Ceph via the Ceph Manager Dashboard.

Some representatives of the Dashboard and Orchestrator teams met after Ceph Day in Berlin to further discuss the integration and development of these features: https://pad.ceph.com/p/ceph-dashboard&orchestrator-f2f-2018-11

The team working on the Ceph Manager Dashboard added the following new features and noteworthy changes:

Orchestrator

  • Merged initial Ansible orchestrator module.

  • Merged initial DeepSea orchestrator module.

  • Rook orchestrator adds the ability to remove RGW and MDS services.

  • Adding and removing OSDs in the Ansible orchestrator is work in progress.

  • Configuring RBD mirroring via the dashboard is work in progress.

Rook

  • v0.9 release

  • rbd-mirror support

  • ceph-volume support

Releases

Ceph Planet

Project meetings

Ceph Developers Monthly

Ceph Performance Weekly

Ceph Testing Weekly

Ceph Code Walkthrough

Recent events

Ceph Day Berlin

KubeCon Seattle 2018

Ceph was well represented at the Rook booth, providing a versatile open-source persistent storage solution for Kubernetes. We gave demos of Rook deploying a containerized Ceph Luminous environment, as well as performing a rolling upgrade to Mimic. See our blog post

Upcoming conferences

November

December

January

February