Planet Ceph

Aggregated news from external sources

  • October 20, 2016
    How DreamHost Builds Its Cloud: To Converge or Not to Converge?

This is No. 5 in a series of posts about how DreamHost builds its Cloud products, written by Luke Odom straight from data center operations. With the first DreamCompute cluster, we used specialized hardware for both the Ceph storage nodes and the hypervisors. The Ceph storage nodes had little RAM, low frequency and low …Read more

  • October 12, 2016
    Multipart Upload (Copy part) goes upstream in Ceph

The last Upload Part (Copy) patches went upstream in Ceph a few days ago. This new feature is available in the master branch now and will ship with the first development checkpoint for Kraken. In S3, this feature is used to copy/move data using an existing object as the data source in the storage backend instead …Read more
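A minimal sketch of what Upload Part (Copy) looks like from a client's point of view, using boto3 against an RGW S3 endpoint. The bucket names, keys, endpoint, and byte range below are hypothetical, not taken from the post:

```python
# Sketch of S3 Upload Part (Copy) via boto3 against a Ceph RGW endpoint.
# Bucket, key, and endpoint names are made up for illustration.

def copy_part_params(dest_bucket, dest_key, upload_id, part_number,
                     src_bucket, src_key, byte_range=None):
    """Build the keyword arguments for an S3 UploadPartCopy call."""
    params = {
        "Bucket": dest_bucket,
        "Key": dest_key,
        "UploadId": upload_id,
        "PartNumber": part_number,
        "CopySource": {"Bucket": src_bucket, "Key": src_key},
    }
    if byte_range is not None:        # e.g. "bytes=0-5242879"
        params["CopySourceRange"] = byte_range
    return params

# With a configured client, the part is copied server-side, without the
# client ever downloading the source object:
#   s3 = boto3.client("s3", endpoint_url="http://rgw.example.com")
#   upload = s3.create_multipart_upload(Bucket="dst", Key="big-object")
#   s3.upload_part_copy(**copy_part_params(
#       "dst", "big-object", upload["UploadId"], 1,
#       "src", "existing-object", "bytes=0-5242879"))
```

The point of the feature is that the copy happens inside the storage backend, so moving large objects does not require round-tripping the data through the client.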

  • October 11, 2016
    Ceph For Databases? Yes You Can, and Should

    Ceph is traditionally known for both object and block storage, but not for database storage. While its scale-out design supports both high capacity and high throughput, the stereotype is that Ceph doesn’t support the low latency and high IOPS typically required by database workloads. However, recent testing by Red Hat, Supermicro, and Percona—one of the …Read more

  • October 8, 2016
    How DreamHost Builds Its Cloud: Architecting the Network

This is the fourth in a series of posts about how DreamHost builds its Cloud products, written by Luke Odom straight from data center operations. So far in this series, we have looked at what processors to use in our machines and what drives we should use. The next thing we are going to …Read more

  • October 5, 2016
    CephFS and LXC: Container High Availability and Scalability, Redefined

An overview of applying CephFS to LXC containers. Source: Hastexo (CephFS and LXC: Container High Availability and Scalability, Redefined)

  • September 29, 2016
    It’s Hammer Time

    I am happy to announce our latest Hammer release of Red Hat Ceph Storage, minor release 3 — also known as 1.3.3. This release rebases to the latest upstream 0.94.9, and we are quite proud to say we accomplished this in just 30 days, combining quality and speedy delivery in one swift, tentacular package. Our …Read more

  • September 27, 2016
    Conference Report: Ceph Days 2016 Munich, Germany

Last Friday (the 23rd of September), I traveled to Munich to attend the Ceph Day and talk about openATTIC. This Ceph Day was sponsored by Red Hat and SUSE, and it was nice to see many representatives of both companies attending and speaking about Ceph-related topics. Even though the event was organized on short …Read more

  • August 31, 2016
    AWS4 chunked upload goes upstream in Ceph RGW S3

With AWS Signature Version 4 (AWS4) you have the option of uploading the payload in fixed or variable-size chunks. This chunked upload option, also known as transferring the payload in multiple chunks, or the STREAMING-AWS4-HMAC-SHA256-PAYLOAD feature in the Amazon S3 ecosystem, avoids reading the payload twice (or buffering it in memory) to compute the signature in the …Read more
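The per-chunk signing scheme behind STREAMING-AWS4-HMAC-SHA256-PAYLOAD can be sketched in a few lines: each chunk's signature covers the previous chunk's signature, chaining them together so the payload never needs to be read twice. This is a sketch of the signature math as described in the AWS SigV4 documentation, not RGW's implementation; the secret key, date, and region below are placeholders:

```python
import hashlib
import hmac

# Sketch of the chunk-signature chain used by AWS4 chunked uploads
# (STREAMING-AWS4-HMAC-SHA256-PAYLOAD). Credentials and scope are made up.

def signing_key(secret, date, region, service="s3"):
    """Derive the SigV4 signing key: HMAC chain over date/region/service."""
    k = hmac.new(("AWS4" + secret).encode(), date.encode(), hashlib.sha256).digest()
    for part in (region, service, "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    return k

def chunk_signature(key, timestamp, scope, prev_signature, chunk):
    """Sign one chunk; prev_signature chains the chunks together."""
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256-PAYLOAD",
        timestamp,
        scope,
        prev_signature,                     # seed or previous chunk signature
        hashlib.sha256(b"").hexdigest(),    # hash of an empty string
        hashlib.sha256(chunk).hexdigest(),  # hash of this chunk's data
    ])
    return hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

key = signing_key("placeholder-secret", "20160831", "us-east-1")
scope = "20160831/us-east-1/s3/aws4_request"
seed = "0" * 64  # in a real request this is the seed signature from the headers
sig1 = chunk_signature(key, "20160831T000000Z", scope, seed, b"first chunk data")
sig2 = chunk_signature(key, "20160831T000000Z", scope, sig1, b"")  # final 0-byte chunk
```

On the wire, each chunk is framed as `hex(size);chunk-signature=<sig>\r\n<data>\r\n`, with a zero-length final chunk closing the stream.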

  • August 24, 2016
    Jewels of Distributed Storage

    OpenStack Days NYC, Operators Midcycle and Red Hat Ceph Storage 2.0 Today, while I was enjoying the keynotes of old friends at OpenStack Days New York City, the Ceph team at Red Hat was hard at work releasing RHCS 2.0 — the most significant update to Red Hat Ceph Storage since we acquired Inktank in …Read more

  • August 9, 2016
    Chown Ceph OSD data directory using GNU Parallel

Starting with the Ceph Jewel release (10.2.X), all daemons (MON and OSD) run under the unprivileged user ceph. Prior to Jewel, daemons ran as root, which is a potential security issue. This means data has to change ownership before a daemon running the Jewel code can start. Chown data: As the Release Notes state …Read more
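The post uses GNU Parallel to run one chown per OSD data directory concurrently; the same idea can be sketched in Python with a thread pool instead. The base path matches the default OSD data location, but the ceph uid/gid lookup and directory layout are assumptions, and a real run would need root and stopped OSDs:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Sketch: chown each OSD data directory in parallel, a Python stand-in
# for the GNU Parallel approach from the post.

def chown_tree(root, uid, gid):
    """Recursively change ownership of one OSD data directory."""
    os.chown(root, uid, gid)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)

def parallel_chown(dirs, uid, gid, workers=8):
    """Chown several OSD data directories concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda d: chown_tree(d, uid, gid), dirs))

# Typical use (as root, with the OSDs stopped):
#   import pwd, grp
#   ceph_uid = pwd.getpwnam("ceph").pw_uid
#   ceph_gid = grp.getgrnam("ceph").gr_gid
#   base = "/var/lib/ceph/osd"
#   osd_dirs = [os.path.join(base, d) for d in os.listdir(base)]
#   parallel_chown(osd_dirs, ceph_uid, ceph_gid)
```

Running one chown per OSD directory in parallel matters here because a recursive chown over millions of objects per OSD is I/O-bound and slow when done serially.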

  • July 5, 2016
    OpenStack and Ceph: like Peanut Butter & Jelly

The Three Musketeers (as our marketing colleagues have started to call us) were at the Red Hat Summit last week to walk the assembled crowd of CIOs through all the reasons why Ceph is the most successful storage technology in the OpenStack market segment. Ceph is the most widely deployed storage technology used with …Read more

  • June 28, 2016
Top Ten Mellanox Blogs for Your Cool Summer Reading

    Summer is finally here, and with it many avid readers like to compile a summer reading list for all those blistering summer days by the pool, basking at the beach, or even while waiting for the coals to heat up for that annual family barbecue. Instead of tackling the Lord of the Rings again …Read more