I have recently been working on adding metadata search to rgw. It’s not in yet, nor is it completely ready. I do think that it’s at a point where it would be great to get some feedback. This feature is built on top of another feature that I talked about a few months ago on CDM, which is the “sync modules” (formerly known as “sync plugins”) feature. The current code can be found in the following PR:
Gluster and Ceph are delighted to be hosting a Software Defined Storage devroom at FOSDEM 2017.
This year, we’re looking for conversations about open source software defined storage, use cases in the real world, and where the future lies. We welcome submissions covering any Free/Libre/Open Source Software for software defined storage.
Please include the following information when submitting a proposal:
The deadline for submissions is November 16th 2016. FOSDEM will be held on the weekend of February 4-5, 2017 and the Software Defined Storage DevRoom will take place on Sunday, February 5, 2017. Please use the following website to submit your proposals:
In addition to (or in place of) the submissions site, you can also send your information to email@example.com for consideration. Thanks!
This development checkpoint release includes a lot of changes and
improvements to Kraken. It is the first release to introduce ceph-mgr,
a new daemon that provides additional monitoring and interfaces to
external monitoring/management systems. There are also many improvements
to BlueStore, and RGW introduces sync modules, copy part for multipart
uploads, and metadata search via Elasticsearch as a tech preview. We
had to skip releasing 11.0.1 due to an issue with git tags and package
versions that arose as we transitioned from autotools to the new CMake
build system.
“Ceph is awesome and so is its community.” Last year, Packt Publishing and Karan Singh from the community came up with the very first book on Ceph, titled “Learning Ceph“.
The overwhelming response to the first book, together with the maturity and popularity of Ceph, became the basis for the next title on Ceph, “Ceph Cookbook“. Author and publisher have together spent several months producing 326 pages of quality content on Ceph, including 100 ready-to-use recipes. And here is the deal: you will get a 50% discount on both of these books by using the discount code ceph-50 when purchasing the eBook online at packtpub.com. This offer is valid until 31 December 2016.
This Hammer point release fixes several minor bugs. It also includes a backport of an improved ‘ceph osd reweight-by-utilization’ command for handling OSDs with higher-than-average utilizations.
We recommend that all hammer v0.94.x users upgrade.
For more detailed information, see the complete changelog.
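To illustrate the idea behind ‘ceph osd reweight-by-utilization’, here is a rough sketch in Python. This is not the actual Ceph implementation; the function name, data layout, and rounding are invented for illustration. The underlying concept is simply that OSDs whose utilization exceeds the cluster average by more than a threshold percentage get their reweight value lowered proportionally, so CRUSH directs less data to them.

```python
# Illustrative sketch (NOT the real Ceph code) of the logic behind
# `ceph osd reweight-by-utilization <threshold>`: lower the reweight
# of any OSD whose utilization exceeds average * threshold / 100.

def reweight_by_utilization(osds, threshold_pct=120, min_weight=0.0):
    """osds: dict mapping osd id -> (utilization fraction, current reweight).
    Returns a dict of osd id -> proposed new reweight, only for OSDs
    that are over the cutoff."""
    avg = sum(util for util, _ in osds.values()) / len(osds)
    cutoff = avg * threshold_pct / 100.0
    changes = {}
    for osd_id, (util, weight) in osds.items():
        if util > cutoff:
            # Scale the weight down so utilization trends toward the cutoff.
            new_weight = max(min_weight, weight * cutoff / util)
            changes[osd_id] = round(new_weight, 4)
    return changes

# Hypothetical three-OSD cluster: osd.2 is well above average utilization.
osds = {
    0: (0.50, 1.0),
    1: (0.55, 1.0),
    2: (0.90, 1.0),
}
print(reweight_by_utilization(osds, threshold_pct=120))
```

In practice you would run `ceph osd reweight-by-utilization` (optionally with a threshold argument) on the cluster itself; the sketch above only shows the kind of adjustment it proposes.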
When Ceph was originally designed a decade ago, the concept was that “intelligent” disk drives with some modest processing capability could store objects instead of blocks and take an active role in replicating, migrating, or repairing data within the system. In contrast to conventional disk drives, a smart object-based drive could coordinate with other drives in the system in a peer-to-peer fashion to build a more scalable storage system.
Today an Ethernet-attached hard disk drive from WDLabs is making this architecture a reality. WDLabs has taken over 500 drives from the early production line and assembled them into a 4 PB (3.6 PiB) Ceph cluster running Jewel and the prototype BlueStore storage backend. WDLabs has been working on validating the need to apply an open source compute environment within the storage device and is now beginning to understand the use cases as thought leaders such as Red Hat work with the early units. This test seeks to demonstrate that the second generation converged microserver has become a viable solution for distributed storage use cases like Ceph. Building an open platform that can run open source software is a key underpinning of the concept.