Planet Ceph

Aggregated news from external sources

  • July 31, 2014
    Brace yourself, DevStack Ceph is here!

    For more than a year, Ceph has grown increasingly popular and has seen several deployments inside and outside OpenStack. For those of you who do not know, Ceph is a unified, distributed and massively scalable open source storage technology that provides several ways to access and consume your data, such as object, block and filesystem. The community and Ceph itself have greatly… Read more →

  • July 30, 2014
    v0.83 released

    Another Ceph development release! This has been a longer cycle, so there has been quite a bit of bug fixing and stabilization in this round. There is also a bunch of packaging fixes for RPM distros (RHEL/CentOS, Fedora, and SUSE) and for systemd. We’ve also added a new librados-striper library from Sebastien Ponce that provides …Read more

  • July 29, 2014
    v0.80.5 Firefly released

    This release fixes a few important bugs in the radosgw and fixes several packaging and environment issues, including OSD log rotation, systemd environments, and daemon restarts on upgrade. We recommend that all v0.80.x Firefly users upgrade, particularly if they are using upstart, systemd, or radosgw. NOTABLE CHANGES ceph-dencoder: do not needlessly link to librgw, librados, …Read more

  • July 29, 2014
    Lots Going on with Ceph

    While we knew that after the acquisition of Inktank life would accelerate again, it seems like the Ceph community is quickly approaching ludicrous speed, and it shows no sign of slowing down. We have had some amazing participation in the various endeavors, but it would be completely understandable if you had missed something amidst the …Read more

  • July 21, 2014
    Ceph Turns 10 Twitter Photo Contest

    OSCON has arrived (although if you came in for the Ceph tutorial session that’s old news to you)! As a part of our participation in OSCON, and as a way to celebrate the fact that Ceph turned 10 years old this year, we have decided to have our party be a distributed one. We would …Read more

  • July 17, 2014
    Celebrate 10 Years of Ceph at OSCON!

    Ceph is coming back to OSCON next week (July 20-24 in Portland, OR). The difference, however, is that this year we need two digits to tell people how old we are. Stop by for some mild festivities at the Ceph booth (P2) as we share cupcakes, and t-shirts that salute the hard work of all …Read more

  • July 16, 2014
    Inktank Ceph Enterprise 1.2 arrives with erasure coding and cache-tiering

    Today, we’re proud to announce the availability of Red Hat’s Inktank Ceph Enterprise 1.2, a solution based on the Firefly release from the Ceph community. This latest version brings some major new features to enterprises – ones we believe will further cement Ceph’s position as a leading storage solution for OpenStack infrastructure while promoting […]

  • July 15, 2014
    v0.80.4 Firefly released

    This Firefly point release fixes a potential data corruption problem when ceph-osd daemons run on top of XFS and service Firefly librbd clients. A recently added allocation hint that RBD utilizes triggers an XFS bug on some kernels (Linux 3.2, and likely others) that leads to data corruption and deep-scrub errors (and inconsistent PGs). This …Read more

  • July 11, 2014
    v0.80.3 Firefly released

    V0.80.3 FIREFLY This is the third Firefly point release. It includes a single fix for a radosgw regression that was discovered in v0.80.2 right after it was released. We recommend that all v0.80.x Firefly users upgrade. NOTABLE CHANGES radosgw: fix regression in manifest decoding (#8804, Sage Weil) For more detailed information, see the complete changelog. V0.80.2 FIREFLY This …Read more

  • July 7, 2014
    Start with the RBD support for TGT

    A couple of months ago, Dan Mick posted a nice article that introduced the RBD support for iSCSI / TGT.
    In this article, I will have a look at it.
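    As a rough orientation before diving in: tgt ships an rbd backing-store type for exactly this. A minimal sketch of exporting an RBD image over iSCSI with tgtadm might look like the following; the target IQN, pool, and image name are illustrative placeholders, not values from the article.

    ```shell
    # Assumes tgtd was built with rbd support and is running, and that
    # an RBD image "myimage" exists in the default "rbd" pool.
    # All names below are placeholders.

    # Create a new iSCSI target with an example IQN
    tgtadm --lld iscsi --mode target --op new --tid 1 \
           --targetname iqn.2014-07.com.example:rbd

    # Attach the RBD image as LUN 1 using the rbd backing-store type
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
           --backing-store myimage --bstype rbd

    # Allow all initiators to connect (tighten this in real deployments)
    tgtadm --lld iscsi --mode target --op bind --tid 1 \
           --initiator-address ALL

    # Inspect the resulting target configuration
    tgtadm --lld iscsi --mode target --op show
    ```

    An initiator can then discover and log in to the target as it would for any other iSCSI LUN.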

  • July 4, 2014
    Ceph disaster recovery scenario

    A datacenter containing three hosts of a non-profit Ceph and OpenStack cluster suddenly lost connectivity, and it could not be restored within 24h. The corresponding OSDs were marked out manually. The Ceph pool dedicated to this datacenter became unavailable … Continue reading

  • July 4, 2014
    Remove Big RBD Image

    Creating a disk with an insane size can be fun, but such an image can be a little hard to remove.

    Here’s a little trick (use with caution!) to remove an image that is too big for rbd rm (if the image is not initialized, or only partially so).

    Image format 1:

    $ rbd info rbdbigsize
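    For a format 1 image, the data objects share the block_name_prefix reported by rbd info, so this kind of manual cleanup usually amounts to deleting those objects directly with rados before removing the image header. A hedged sketch, assuming the image lives in the default rbd pool (pool and image names are placeholders):

    ```shell
    # DANGEROUS: deletes raw RADOS objects; use with caution and only on
    # an image you are certain is safe to destroy.
    POOL=rbd
    IMAGE=rbdbigsize

    # Read the image's data-object prefix (e.g. "rb.0.1016.6b8b4567")
    # from the block_name_prefix line of "rbd info"
    PREFIX=$(rbd info "$IMAGE" | awk '/block_name_prefix/ {print $2}')

    # Delete every data object carrying that prefix
    rados -p "$POOL" ls | grep "^$PREFIX" | \
        xargs -r -n 1 rados -p "$POOL" rm

    # With the bulk of the data gone, removing the image itself is fast
    rbd rm "$IMAGE"
    ```

    Since an uninitialized image has few or no data objects, the listing step finishes quickly and the final rbd rm no longer has to iterate over millions of nonexistent objects.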