The Ceph Blog

Featured Post

When Ceph was originally designed a decade ago, the concept was that “intelligent” disk drives with some modest processing capability could store objects instead of blocks and take an active role in replicating, migrating, or repairing data within the system.  In contrast to conventional disk drives, a smart object-based drive could coordinate with other drives in the system in a peer-to-peer fashion to build a more scalable storage system.

Today an Ethernet-attached hard disk drive from WDLabs is making this architecture a reality. WDLabs has taken over 500 drives from the early production line and assembled them into a 4 PB (3.6 PiB) Ceph cluster running Jewel and the prototype BlueStore storage backend.  WDLabs has been working to validate the need for an open source compute environment within the storage device and is now beginning to understand the use cases as thought leaders such as Red Hat work with the early units.  This test seeks to demonstrate that the second-generation converged microserver has become a viable solution for distributed storage use cases like Ceph. Building an open platform that can run open source software is a key underpinning of the concept.
read more…

Earlier Posts

The Ceph project would like to congratulate the following students on their acceptance into the 2016 Google Summer of Code program, working with the Ceph project:

Student         Project
Shehbaz Jaffer  BlueStore
Victor Araujo   End-to-end Performance Visualization
Aburudha Bose   Improve Overall Python Infrastructure
Zhao Junwang    Over-the-wire Encryption Support
Oleh Prypin     Python 3 Support for Ceph

These five students represent the best of the almost 70 project submissions that we fielded from students around the world. For those not familiar with the Google Summer of Code program, this means that Google will generously fund these students during their summer work.

Thanks to everyone who applied this year; the selection process was made very challenging by the number of highly qualified applicants. We look forward to mentoring students to a successful summer of coding and Open Source, both this year and in the years to come.

v10.2.0 Jewel released

This major release of Ceph will be the foundation for the next long-term stable release. There have been many major changes since the Infernalis (9.2.x) and Hammer (0.94.x) releases, and the upgrade process is non-trivial. Please read these release notes carefully.

MAJOR CHANGES FROM INFERNALIS

read more…

v10.0.4 released

This is the fifth and last development release before Jewel. The next release will be a release candidate with the final set of features. Big items include RGW static website support, the librbd journal framework, fixed mon sync of config-key data, C++11 updates, and bluestore/kstore.

Note that, due to general developer busyness, we aren’t building official release packages for this dev release. You can fetch autobuilt gitbuilder packages from the usual location (http://gitbuilder.ceph.com).

NOTABLE CHANGES

read more…

v9.2.1 Infernalis released

This Infernalis point release fixes several packaging and init script issues, enables the librbd objectmap feature by default, fixes a few librbd bugs, and includes a range of miscellaneous bug fixes across the system.
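
As a quick illustration (not part of the release notes), here is a minimal sketch that checks whether the objectmap feature is active on an existing image, using the python-rbd bindings; the config path, pool name "rbd", and image name "demo" are assumptions for the example.

    # Minimal sketch: check whether the objectmap feature is enabled on an
    # image via python-rbd. Config path, pool, and image name are assumed.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')
        try:
            with rbd.Image(ioctx, 'demo') as image:
                # features() returns a bitmask of enabled image features.
                enabled = bool(image.features() & rbd.RBD_FEATURE_OBJECT_MAP)
                print('object map enabled:', enabled)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()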

We recommend that all infernalis v9.2.0 users upgrade.

For more detailed information, see the complete changelog.

NOTABLE CHANGES

read more…

v0.94.6 Hammer released

This Hammer point release fixes a range of bugs, most notably a fix for unbounded growth of the monitor’s leveldb store, and a workaround in the OSD to keep most xattrs small enough to be stored inline in XFS inodes.

We recommend that all hammer v0.94.x users upgrade.

For more detailed information, see the complete changelog.

NOTABLE CHANGES

read more…

v10.0.3 released

This is the fourth development release for Jewel. Several big pieces have been added this release, including BlueStore (a new backend for OSD to replace FileStore), many ceph-disk fixes, a new CRUSH tunable that improves mapping stability, a new librados object enumeration API, and a whole slew of OSD and RADOS optimizations.
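
To make the enumeration surface concrete, here is a minimal sketch that lists every object in a pool via the python-rados bindings; note that this goes through the longstanding Python wrapper rather than the new C API directly, and the config path and pool name "data" are assumptions for the example.

    # Minimal sketch: enumerate the objects in a pool with python-rados.
    # Config path and pool name are illustrative assumptions.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')
        try:
            # Each entry exposes .key (object name) and .nspace (namespace).
            for obj in ioctx.list_objects():
                print(obj.nspace, obj.key)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()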

Note that, due to general developer busyness, we aren’t building official release packages for this dev release. You can fetch autobuilt gitbuilder packages from the usual location (http://gitbuilder.ceph.com).

NOTABLE CHANGES

read more…

Community Update: Welcome to 2016!

It has been quite a while since a coordinated Ceph update has made it to the Ceph blog, so I figured it was time to gather all of the various threads and make sure they were in a single place for consumption.

Quite a lot is happening in the Ceph world and, depending on what part of the project you are involved with, there is more than likely a place for you to deepen your engagement with the community. So, let’s do the highlight reel:

read more…

v10.0.2 released

This development release includes a raft of changes and improvements for Jewel. Key additions include CephFS scrub/repair improvements, an AIX and Solaris port of librados, many librbd journaling additions and fixes, extended per-pool options, an NBD driver for RBD (rbd-nbd) that allows librbd to present a kernel-level block device on Linux, multitenancy support for RGW, RGW bucket lifecycle support, RGW support for Swift static large objects (SLO), and RGW support for Swift bulk delete.
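
As a hedged illustration of the rbd-nbd workflow, the sketch below creates an image with the python-rbd bindings and then notes the mapping step; the pool name "rbd", image name "demo", and 1 GiB size are assumptions for the example.

    # Minimal sketch: create an RBD image with python-rbd, then map it with
    # the new rbd-nbd tool. Pool, image name, and size are assumed.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')
        try:
            rbd.RBD().create(ioctx, 'demo', 1 * 1024**3)  # size in bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

    # The image can then be exposed as a kernel block device on Linux:
    #   rbd-nbd map rbd/demo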

There are also lots of smaller optimizations and performance fixes landing all over the tree, particularly in the OSD and common code.

NOTABLE CHANGES

read more…

v10.0.0 released

This is the first development release for the Jewel cycle.  We are off to a good start, with lots of performance improvements flowing into the tree.  We are targeting sometime in Q1 2016 for the final Jewel.

NOTABLE CHANGES

read more…
