Planet Ceph

Aggregated news from external sources

  • October 8, 2016
    How DreamHost Builds Its Cloud: Architecting the Network

    This is the fourth in a series of posts about how DreamHost builds its Cloud products, written by Luke Odom straight from data center operations. So far in this series, we have looked at which processors to use in our machines and which drives we should use. The next thing we are going to …Read more

  • October 5, 2016
    CephFS and LXC: Container High Availability and Scalability, Redefined

    An overview of applying CephFS to LXC containers. Source: Hastexo (CephFS and LXC: Container High Availability and Scalability, Redefined)

  • September 29, 2016
    It’s Hammer Time

    I am happy to announce our latest Hammer release of Red Hat Ceph Storage, minor release 3 — also known as 1.3.3. This release rebases to the latest upstream 0.94.9, and we are quite proud to say we accomplished this in just 30 days, combining quality and speedy delivery in one swift, tentacular package. Our …Read more

  • September 27, 2016
    Conference Report: Ceph Days 2016 Munich, Germany

    Last Friday (23rd of September), I traveled to Munich to attend the Ceph Day and talk about openATTIC. This Ceph Day was sponsored by Red Hat and SUSE, and it was nice to see many representatives of both companies attending and speaking about Ceph-related topics. Even though the event was organized on short …Read more

  • August 31, 2016
    AWS4 chunked upload goes upstream in Ceph RGW S3

    With AWS Signature Version 4 (AWS4) you have the option of uploading the payload in fixed or variable-size chunks. This chunked upload option, also known as Transfer payload in multiple chunks or the STREAMING-AWS4-HMAC-SHA256-PAYLOAD feature in the Amazon S3 ecosystem, avoids reading the payload twice (or buffering it in memory) to compute the signature in the …Read more

  • August 24, 2016
    Jewels of Distributed Storage

    OpenStack Days NYC, Operators Midcycle and Red Hat Ceph Storage 2.0

    Today, while I was enjoying the keynotes of old friends at OpenStack Days New York City, the Ceph team at Red Hat was hard at work releasing RHCS 2.0 — the most significant update to Red Hat Ceph Storage since we acquired Inktank in …Read more

  • August 9, 2016
    Chown Ceph OSD data directory using GNU Parallel

    Starting with the Ceph Jewel release (10.2.x), all daemons (MON and OSD) run under the dedicated, unprivileged user ceph. Prior to Jewel, daemons ran as root, which is a potential security issue. This means the data has to change ownership before a daemon running the Jewel code can start. Chown data: As the Release Notes state …Read more
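    The approach the post describes can be sketched as follows. The directory layout and the step of stopping the daemons first are assumptions based on a typical Jewel upgrade, not taken verbatim from the truncated post:

    ```shell
    # Sketch: chown each OSD data directory concurrently with GNU Parallel.
    # Assumes the standard /var/lib/ceph/osd/<cluster>-<id> layout and that
    # the OSD daemons were stopped first (e.g. systemctl stop ceph-osd.target).
    find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -type d \
      | parallel chown -R ceph:ceph {}
    ```

    Running one chown per OSD directory lets the ownership change proceed across all OSD disks in parallel rather than serially, which matters on hosts with many large OSDs.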

  • July 5, 2016
    OpenStack and Ceph: like Peanut Butter & Jelly

    The Three Musketeers (as our marketing colleagues have started to call us now) were at the Red Hat Summit last week to walk the assembled crowd of CIOs through all the reasons why Ceph is the most successful storage technology in the OpenStack market segment. Ceph is the most widely deployed storage technology used with …Read more

  • June 28, 2016
    Top Ten Mellanox Blogs for Your Cool Summer Reading

    Summer is finally here and with it, many avid readers like to compile a summer reading list for all those blistering summer days by the pool, basking at the beach, or even while waiting for the coals to heat up for that annual family barbecue. Instead of tackling the Lord of the Rings again …Read more

  • June 21, 2016
    Ansible AWS S3 core module now supports Ceph RGW S3

    The Ansible AWS S3 core module now supports Ceph RGW S3. The patch landed upstream today and will be included in Ansible 2.2. This post will introduce the new RGW S3 support in Ansible, together with the required bits to run Ansible playbooks handling S3 use cases in Ceph Jewel.
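    As a rough sketch of what that looks like in practice: the endpoint and credentials below are placeholders, and the parameter names (rgw, s3_url) are assumptions based on the Ansible 2.2 s3 module, not quoted from the post:

    ```shell
    # Ad-hoc example: create a bucket on a Ceph RGW endpoint via the s3 module.
    # rgw.example.com:7480 and the credentials are placeholders; 7480 is the
    # default civetweb port for radosgw.
    ansible localhost -m s3 -a "mode=create bucket=my-bucket \
      rgw=true s3_url=http://rgw.example.com:7480 \
      aws_access_key=ACCESS_KEY aws_secret_key=SECRET_KEY"
    ```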

  • May 26, 2016
    Check OSD Version

    Occasionally it may be useful to check the version of the OSDs across the entire cluster:

        ceph tell osd.* version

  • May 24, 2016
    Find the OSD Location

    Of course, the simplest way is to use the command ceph osd tree. Note that if an OSD is down, you can see its “last address” in ceph health detail:

        $ ceph health detail
        …
        osd.37 is down since epoch 16952, last address

    Also, you can use: …Read more
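    For an OSD that is identified by id, one common way to locate it (not necessarily the command the truncated post goes on to describe) is ceph osd find, which reports the OSD's IP address and CRUSH location as JSON:

    ```shell
    # Locate OSD 37 (example id from the teaser above): prints its address
    # and CRUSH location as JSON. Requires a running cluster and admin keyring.
    ceph osd find 37
    ```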