Planet Ceph

Aggregated news from external sources

  • December 24, 2012
    Ceph Over Fibre for VMware

    We always love it when Ceph users choose to share what they have been doing with the community. Recently, a couple of regulars on the #ceph IRC channel were good enough to give us a very detailed look at how they were using Ceph to power their VMware infrastructure. So, without further ado, read on […]

  • December 20, 2012
    Argonaut vs Bobtail Performance Preview

    Contents: Introduction, System Setup, Test Setup, 4KB Write Results, 4KB Read Results, 128KB Write Results, 128KB Read Results, 4MB Write Results, 4MB Read Results, Conclusion. INTRODUCTION: Hello again! War. War never changes. Some of you may have been following my bitter rivalry with Mark Shuttleworth. Now, I am perfectly aware that I share nearly as […]

  • December 18, 2012
    What’s New in the Land of OSD?

    It’s been a few months since the last named release, Argonaut, and we’ve been busy! Well, in retrospect, most of the time was spent on finding a cephalopod name that starts with “b”, but once we got that done, we still had a few weeks left to devote to technical improvements. In particular, the OSD […]

  • December 13, 2012
    v0.55.1 released

    There were some packaging and init script issues with v0.55, so a small point release is out. It fixes a few odds and ends: init-ceph: typo (new ‘fs type’ stuff was broken); debian: fixed conflicting upstart and sysvinit scripts; auth: fixed default auth settings; osd: dropped some broken asserts; librbd: fix locking bug, race with […]

  • December 13, 2012
    Deploying Ceph with a Crowbar

    We have seen users deploying Ceph in a number of different ways, which is just plain awesome! I have spoken with people deploying with mkcephfs, ceph-deploy, Juju, Chef, and even the beginnings of some Puppet work. However, thanks to collaboration between Inktank and Dell, there is a really solid deployment pathway using Dell’s Crowbar tool […]

  • December 6, 2012
    Monitoring a Ceph Cluster

    OK, so you have gone through the five-minute quickstart guide, learned a bit about Ceph, and stood up a pre-production server to test real data and operations… now what? Over the past couple of weeks we have gotten quite a few questions about monitoring and troubleshooting a Ceph cluster once you have one. Thankfully, our […]

  • December 4, 2012
    v0.55 released

    We had originally planned to make v0.55 a long-term stable release, but a lot of last-minute changes and fixes went into this cycle, so we are going to wait another cycle and make v0.56 bobtail. A lot of work went into v0.55, however. If you aren’t running argonaut (v0.48.*), please give v0.55 a try […]

  • November 19, 2012
    Getting Involved with Ceph

    The Ceph community is made up of many individuals with a wide variety of backgrounds, from FOSS hacker to corporate architect. We feel very fortunate to have such a great, and active, community. Even more so lately, as we have been fielding a number of questions on how best to become a more active participant […]

  • November 14, 2012
    v0.54 released

    The v0.54 development release is ready! This will be the last development release before v0.55 “bobtail,” our next long-term stable release, is ready. Notable changes this time around include: osd: use entire device if journal is a block device; osd: new caps structure (see below); osd: backfill target reservations (improve performance during recovery); ceph-fuse: many […]

  • November 9, 2012
    Ceph Performance Part 2: Write Throughput Without SSD Journals

    INTRODUCTION: Hello again! If you are new around these parts, you may want to start out by reading the first article in this series, available here. For the rest of you, I am sure you are no doubt aware by now of the epic battle that Mark Shuttleworth and I are waging over who can […]

  • November 6, 2012
    Our Very First Ceph Day

    Last Friday we had our very first day-long workshop dedicated to Ceph…in beautiful Amsterdam! The Ceph project has had a nice, long string of “firsts” lately and it was exciting to witness this one in person. The event was organized by Inktank and 42on, a new Ceph company and this month’s Featured Contributor! The team […]

  • October 26, 2012
    RBD support in CloudStack 4.0

    For the past few months I have been working towards a way to use Ceph for virtual machine images in Apache CloudStack. This integration is important to end users because it allows them to use Ceph’s distributed block device (RBD) to speed up provisioning of virtual machines. We (my company) have been long-time contributors to […]
