Planet Ceph

Aggregated news from external sources

  • April 25, 2013
    Ceph and Cinder multi-backend

    Grizzly brought multi-backend functionality to Cinder, along with tons of new drivers. The main purpose of this article is to demonstrate how we can take advantage of Ceph's tiering capability.

    I. Ceph

    To configure Ceph to use different storage…
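
    A minimal sketch of the idea, assuming two pools named ssd and sata mapped to different CRUSH rulesets (pool names, ruleset numbers, and PG counts are illustrative, not from the post):

        # Ceph side: one pool per storage tier, each tied to its own CRUSH ruleset
        ceph osd pool create ssd 128
        ceph osd pool create sata 128
        ceph osd pool set ssd crush_ruleset 1
        ceph osd pool set sata crush_ruleset 2

    On the Cinder side, each pool can then be exposed as a separate backend in cinder.conf, roughly like this under Grizzly:

        [DEFAULT]
        enabled_backends=rbd-ssd,rbd-sata

        [rbd-ssd]
        volume_driver=cinder.volume.drivers.rbd.RBDDriver
        rbd_pool=ssd
        volume_backend_name=RBD_SSD

        [rbd-sata]
        volume_driver=cinder.volume.drivers.rbd.RBDDriver
        rbd_pool=sata
        volume_backend_name=RBD_SATA

    Each backend is then matched to a volume type (cinder type-create ssd; cinder type-key ssd set volume_backend_name=RBD_SSD), so users pick a tier when creating a volume.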

  • April 22, 2013
    Play with Ceph – Vagrant Box

    Materials to start playing with Ceph. This Vagrant box contains an all-in-one Ceph installation.

    I. Setup

    First, download and install Vagrant.

    Download the Ceph box: here. This box contains one virtual machine:

    The Ceph VM contains 2 OSDs (1 disk e…
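
    Once Vagrant is installed, bringing the box up might look roughly like this (the box name and file path are illustrative assumptions):

        vagrant box add ceph ceph.box   # register the downloaded box under the name "ceph"
        vagrant init ceph               # generate a Vagrantfile for it
        vagrant up                      # boot the all-in-one Ceph VM
        vagrant ssh                     # log in, then check the cluster with: ceph -s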

  • April 15, 2013
    Ceph Mania

    Ceph is super hot. When people tell me that storage can’t be sexy, I can’t help but feel like Ceph can be! I was out in our L.A. office last week, and the first thing I saw when I showed up was this: A Ceph fanboy working with a multi-petabyte deployment of Ceph decided to […]

  • April 15, 2013
    One More chef-client Run

    Carrying on from my last post, the failed chef-client run came down to the init script in ceph 0.56 not yet knowing how to iterate /var/lib/ceph/{mon,osd,mds} and automatically start the appropriate daemons. This functionality seems to have been introduced in 0.58 or so by commit c8f528a. So I gave it another shot with a build …
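
    In spirit, the missing logic amounts to something like this simplified shell sketch (not the actual code from commit c8f528a):

        # Walk the provisioned daemon directories and start the matching daemon for each.
        for type in mon osd mds; do
            for dir in /var/lib/ceph/$type/*; do
                [ -d "$dir" ] || continue
                id="${dir##*-}"            # e.g. /var/lib/ceph/osd/ceph-0 -> 0
                ceph-$type -i "$id"        # start ceph-mon / ceph-osd / ceph-mds
            done
        done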

  • April 11, 2013
    The Ceph Chef Experiment

    Sometimes it’s most interesting to just dive in and see what breaks. There’s a Chef cookbook for Ceph on github which seems rather more recently developed than the one in SUSE-Cloud/barclamp-ceph, and seeing as its use is documented in the Ceph manual, I reckon that’s the one I want to be using. Of course, the README says “Tested …
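
    For reference, pulling the cookbook into a Chef repo and uploading it might look roughly like this (the repository URL and paths are assumptions, not from the post):

        # Fetch the cookbook and push it to the Chef server (URL/paths illustrative)
        cd ~/chef-repo/cookbooks
        git clone https://github.com/ceph/ceph-cookbooks.git ceph
        knife cookbook upload ceph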

  • March 28, 2013
    Ceph is in EPEL, and why Red Hat users should care

    EPEL is Extra Packages for Enterprise Linux, a project that ports software that is part of the Fedora community distribution to the slower-moving RHEL (Red Hat Enterprise Linux) distribution (and its derivatives) used by many enterprises. One problem for RHEL users is that although the distribution tends to be rock solid, that stability comes at […]
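
    In practice, getting Ceph from EPEL on a RHEL 6 machine comes down to enabling the repository and installing the package (the release RPM URL is illustrative and changes over time):

        # Enable EPEL, then install Ceph from it
        rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
        yum install ceph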

  • March 28, 2013
    Supporting Ceph with New hastexo Partnership

    Today, I am excited to announce our partnership with hastexo, which provides Ceph-related professional services worldwide. We have been working closely with hastexo for some time, including collaboration on new Ceph courseware for the Inktank training courses that are now available. hastexo is a professional services company located in Austria and serving customers on five continents. The […]

  • March 25, 2013
    Puppet modules for Ceph finally landed!

    Quite recently, François Charlier and I worked together on the Puppet modules for Ceph on behalf of our employer, eNovance. François had actually started work on them last summer and completed the Monitor manifests back then, so this time we worked on the OSD manifest. The modules are in pretty good shape, so we thought it was important to communicate this to the community…
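
    A hypothetical usage sketch of such modules; the actual class and parameter names come from the eNovance modules and may well differ:

        # Declare a monitor and an OSD on a node (class/parameter names illustrative).
        class { 'ceph::mon':
          id => 'a',
        }
        class { 'ceph::osd':
          device => '/dev/sdb',
        }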

  • March 21, 2013
    Inktank’s Roadmap for Ceph: Massive Scale with Minimal Effort

    It’s been just over a year since Inktank was formed with a goal of making Ceph the ‘Linux of Storage’. But while Inktank is a young company, Ceph itself has a longer history, going back to Sage Weil’s PhD in 2005. The intervening years saw much of the engineering work focused on building the foundations […]

  • February 14, 2013
    Deploying Ceph with ComodIT

    At this year’s Cloud Expo Europe I had a nice chat with the guys from ComodIT who are making some interesting deployment and orchestration tools. They were kind enough to include their work in a blog post earlier this week and give me permission to replicate it here for your consumption. As always, if any […]

  • February 14, 2013
    v0.56.3 released

    We’ve fixed an important bug that a few users were hitting with unresponsive OSDs and internal heartbeat timeouts. This, along with a range of less critical fixes, was sufficient to justify another point release. Any production users should upgrade. Notable changes include:

      osd: flush peering work queue prior to start
      osd: persist osdmap epoch for […]
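
    Upgrading a node from the ceph.com Debian/Ubuntu repository is typically just a package refresh followed by a daemon restart (commands are illustrative; restart monitors first in a rolling upgrade):

        apt-get update
        apt-get install ceph     # picks up 0.56.3 from the configured repo
        service ceph restart     # restart the local ceph daemons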

  • February 13, 2013
    CEPH – THE CRUSH DIFFERENCE!

    So many people have been asking us for more details on CRUSH, I thought it would be worthwhile to share more about it on our blog. If you have not heard of CRUSH, it stands for “Controlled Replication Under Scalable Hashing”. CRUSH is the pseudo-random data placement algorithm that efficiently distributes object replicas across a […]
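
    CRUSH placement is driven by rules in the CRUSH map. A minimal replicated rule that spreads each object’s replicas across distinct hosts might look like this (the rule name and numbers are illustrative):

        rule replicated_hosts {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take default                     # start from the root of the hierarchy
            step chooseleaf firstn 0 type host    # one replica per host
            step emit
        }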
