The Ceph Blog

The Ceph blog provides high-level spotlights on releases, events, and community stories from all over the world

  • October 19, 2016
    FOSDEM 2017: Software Defined Storage Devroom

    Gluster and Ceph are delighted to be hosting a Software Defined Storage devroom at FOSDEM 2017. Important dates: Nov 16, deadline for submissions; Dec 1, speakers notified of acceptance; Dec 5, schedule published. This year, we’re looking for conversations about open source software defined storage, use cases in the real world, and where the future …Read more

  • October 18, 2016
    Kraken 11.0.2 released

    This development checkpoint release includes a lot of changes and improvements to Kraken. It is the first release to introduce ceph-mgr, a new daemon that provides additional monitoring and interfaces to external monitoring/management systems. There are also many improvements to BlueStore, and RGW introduces sync modules, copy-part for multipart uploads, and metadata search via Elasticsearch …Read more
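
    The copy-part feature corresponds to the S3 UploadPartCopy call, which lets an existing object be stitched into a multipart upload server-side instead of re-uploading its bytes. A minimal sketch with boto3 against an RGW endpoint (the endpoint, credentials, bucket, and key names here are all hypothetical):

        import boto3

        # Hypothetical RGW endpoint and credentials; substitute your own.
        s3 = boto3.client(
            "s3",
            endpoint_url="http://rgw.example.com:7480",
            aws_access_key_id="ACCESS",
            aws_secret_access_key="SECRET",
        )

        # Start a multipart upload for the destination object.
        mpu = s3.create_multipart_upload(Bucket="dest-bucket", Key="big-object")

        # Copy an existing object in as part 1; RGW does the copy server-side.
        part = s3.upload_part_copy(
            Bucket="dest-bucket",
            Key="big-object",
            PartNumber=1,
            UploadId=mpu["UploadId"],
            CopySource={"Bucket": "src-bucket", "Key": "existing-object"},
        )

        # Complete the upload, referencing the copied part's ETag.
        s3.complete_multipart_upload(
            Bucket="dest-bucket",
            Key="big-object",
            UploadId=mpu["UploadId"],
            MultipartUpload={
                "Parts": [{"ETag": part["CopyPartResult"]["ETag"], "PartNumber": 1}]
            },
        )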

  • September 28, 2016
    v10.2.3 Jewel Released

    This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. For more detailed information, see the complete changelog.

  • September 7, 2016
    A Look at the Ceph Cookbook

    “Ceph is awesome and so is its community,” and last year Packt Publishing and Karan Singh from the community came out with the very first book on Ceph, titled “Learning Ceph“. The overwhelming response to the first book, together with the growing maturity and popularity of Ceph, became the basis for the next title on Ceph, “Ceph Cookbook“. Author and …Read more

  • August 26, 2016
    v0.94.8 Hammer released

    This Hammer point release fixes several bugs. We recommend that all Hammer v0.94.x users upgrade. For more detailed information, see the complete changelog.

  • July 6, 2016
    De-mystifying gluster shards

    Recently I’ve been working on converging glusterfs with oVirt – hyperconverged, open source style. oVirt has supported glusterfs storage domains for a while, but in the past a virtual disk was stored as a single file on a gluster volume. This suits some workloads, but file distribution and functions like self-heal and rebalance have …Read more
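
    Sharding is enabled per volume: once on, a disk image is stored as many fixed-size shards rather than one huge file, so self-heal and rebalance operate on individual shards. A hedged sketch of turning it on from Python (the volume name and shard size are illustrative; check your gluster version’s documentation for the exact option names):

        import subprocess

        VOLUME = "vmstore"  # hypothetical volume name

        def gluster_set(volume: str, option: str, value: str) -> None:
            """Run 'gluster volume set' and fail loudly on error."""
            subprocess.run(
                ["gluster", "volume", "set", volume, option, value],
                check=True,
            )

        # Enable the shard translator: new files are split into fixed-size pieces.
        gluster_set(VOLUME, "features.shard", "on")

        # Shard size sets the heal/rebalance granularity; 64MB is a common
        # starting point for VM images (an assumption -- tune for your workload).
        gluster_set(VOLUME, "features.shard-block-size", "64MB")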

  • June 15, 2016
    v10.2.2 Jewel released

    This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. For more detailed information, see the complete changelog.

  • May 29, 2016
    Making gluster play nicely with others

    These days hyperconverged strategies are everywhere. But when you think about it, sharing the finite resources within a physical host requires an effective means of prioritisation and enforcement. Luckily, the Linux kernel already provides an infrastructure for this in the shape of cgroups, and the interface to these controls is now simplified with systemd integration. …Read more
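
    With systemd integration, the cgroup knobs are exposed as unit properties, so a daemon’s CPU and memory share can be capped at runtime without editing cgroup files by hand. A minimal sketch, assuming a glusterd.service unit and illustrative limits (property names such as MemoryLimit= are the cgroup-v1-era ones; newer systemd uses MemoryMax=):

        import subprocess

        def cap_unit(unit: str, *properties: str) -> None:
            """Apply cgroup limits to a systemd unit via 'systemctl set-property'.

            --runtime keeps the change non-persistent; drop it to persist
            across reboots.
            """
            subprocess.run(
                ["systemctl", "set-property", "--runtime", unit, *properties],
                check=True,
            )

        # Illustrative limits: cap glusterd at 40% of one CPU and 4G of RAM.
        cap_unit("glusterd.service", "CPUQuota=40%", "MemoryLimit=4G")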

  • May 16, 2016
    v10.2.1 Jewel released

    This is the first bugfix release for Jewel. It contains fixes for several annoying packaging and init-system issues and a range of important bugfixes across RBD, RGW, and CephFS. We recommend that all v10.2.x users upgrade. For more detailed information, see the complete changelog.

  • May 13, 2016
    v0.94.7 Hammer released

    This Hammer point release fixes several minor bugs. It also includes a backport of an improved ‘ceph osd reweight-by-utilization’ command for handling OSDs with higher-than-average utilizations. We recommend that all Hammer v0.94.x users upgrade. For more detailed information, see the complete changelog.
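
    The command takes an overload threshold expressed as a percentage of mean utilization, and the improved version also gained a dry-run variant (test-reweight-by-utilization) so the proposed weight changes can be inspected before any data moves. A hedged sketch driving it from Python (the 110% threshold is illustrative):

        import subprocess

        # Reweight OSDs above 110% of mean utilization (illustrative threshold).
        THRESHOLD = "110"

        # Dry run: print the weight changes the command *would* make.
        subprocess.run(
            ["ceph", "osd", "test-reweight-by-utilization", THRESHOLD],
            check=True,
        )

        # Apply for real once the proposed changes look sane. Uncomment to run:
        # subprocess.run(
        #     ["ceph", "osd", "reweight-by-utilization", THRESHOLD],
        #     check=True,
        # )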

  • April 26, 2016
    504 OSD Ceph cluster on converged microserver ethernet drives

    When Ceph was originally designed a decade ago, the concept was that “intelligent” disk drives with some modest processing capability could store objects instead of blocks and take an active role in replicating, migrating, or repairing data within the system.  In contrast to conventional disk drives, a smart object-based drive could coordinate with other drives …Read more
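
    That object model is what the librados API still exposes: clients name objects in a pool, and the cluster decides placement and replication. A minimal sketch with the python-rados bindings (the pool and object names are hypothetical):

        import rados

        # Connect using the local cluster config and default keyring.
        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()
        try:
            # Objects live in pools; 'rbd' here is just an example pool name.
            ioctx = cluster.open_ioctx("rbd")
            try:
                # Write and read back a whole object by name; CRUSH decides
                # which OSDs (or "smart drives") actually hold the replicas.
                ioctx.write_full("hello-object", b"stored as an object, not a block")
                print(ioctx.read("hello-object"))
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()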

  • April 22, 2016
    2016 Google Summer of Code Underway

    The Ceph project would like to congratulate the following students on their acceptance to the 2016 Google Summer of Code program, and the Ceph project: Shehbaz Jaffer (BlueStore), Victor Araujo (End-to-end Performance Visualization), Anirudha Bose (Improve Overall Python Infrastructure), Zhao Junwang (Over-the-wire Encryption Support), and Oleh Prypin (Python 3 Support for Ceph). These five …Read more
