The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • April 12, 2019
    New in Nautilus: device management and failure prediction

    Ceph storage clusters ultimately rely on physical hardware devices (HDDs or SSDs) that can fail. Starting in Nautilus, management and tracking of physical devices is now handled by Ceph. Furthermore, we’ve added infrastructure to collect device health metrics (e.g., SMART) and to predict device failures before they happen, either via a built-in pre-trained prediction model, or via …Read more
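
    For a flavor of the new commands (a minimal sketch, assuming a Nautilus cluster; the device id placeholder comes from your own ls output):

        # List the physical devices the cluster knows about
        ceph device ls

        # Dump the health (SMART) metrics collected for one device
        ceph device get-health-metrics <devid>

        # Turn on the built-in local failure-prediction model
        ceph config set global device_failure_prediction_mode local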

  • April 4, 2019
    New in Nautilus: PG merging and autotuning

    Since the beginning, choosing and tuning the PG count in Ceph has been one of the more frustrating parts of managing a Ceph cluster.  Guidance for choosing an appropriate PG count is confusing, inconsistent between sources, and frequently surrounded by caveats and exceptions.  And most importantly, if a bad value is chosen, it can’t always be …Read more
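
    As a sketch of the new autotuning in practice (assuming a Nautilus cluster; the pool name is a placeholder):

        # Let Ceph adjust pg_num for an existing pool automatically
        ceph osd pool set <pool> pg_autoscale_mode on

        # See what the autoscaler recommends for each pool
        ceph osd pool autoscale-status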

  • October 29, 2013
    Dynamic Object Interfaces with Lua

    In this post I’m going to demonstrate how to dynamically extend the interface of objects in RADOS using the Lua scripting language, and then build an example service for image thumbnail generation and storage that performs remote image processing inside a target object storage device (OSD). We’re gonna have a lot of fun. Before we […]

  • February 12, 2013
    Ceph Comes to Synnefo and Ganeti

    During my most recent schlep through Europe I met some really great people, and heard some awesome Ceph use cases. One particularly interesting case was the work with Ganeti and RADOS that the guys at Synnefo shared with me at FOSDEM. They were nice enough to write up some of the […]

  • February 4, 2013
    Ceph Bobtail JBOD Performance Tuning

    Contents: Introduction, System Setup, Test Setup, 4KB Results, 128KB Results, 4MB Results, Results Summary, Conclusion. INTRODUCTION: One of the things that makes Ceph particularly powerful is the number of tunable options it provides. You can control how much data and how many operations are buffered at nearly every stage of the pipeline. You can introduce […]

  • January 22, 2013
    Ceph Bobtail Performance – IO Scheduler Comparison

    Contents: Introduction, System Setup, Test Setup, 4KB Results, 128KB Results, 4MB Results, Results Summary, Conclusion. INTRODUCTION: One of the strangest things about the holidays is how productive I am. Maybe it’s the fact that Minnesota is bitterly cold this time of year, and the only entertaining things to do outside often involve subzero winds rushing […]

  • November 9, 2012
    Ceph Performance Part 2: Write Throughput Without SSD Journals

    INTRODUCTION: Hello again! If you are new around these parts you may want to start out by reading the first article in this series, available here. For the rest of you, I am sure you are aware by now of the epic battle that Mark Shuttleworth and I are waging over who can […]

  • October 17, 2012
    Ceph is the new black. It goes with everything!

    In my (rather brief) time digging into Ceph and working with the community, most discussions generally boil down to two questions: “How does Ceph work?” and “What can I do with Ceph?” The first question has garnered a fair amount of attention in our outreach efforts. Ross Turk’s post “More Than an Object Store” […]

  • November 7, 2011
    Atomicity of RESTful radosgw operations

    A while back we worked on making radosgw reads and writes atomic. The first issue was making sure that two or more concurrent writers that write to the same object don’t end up with an inconsistent object. That is the “atomic PUT” issue. We also wanted to be able to make sure that when one […]

  • December 22, 2010
    RBD upstream updates

    QEMU-RBD: The QEMU-RBD block device has been merged upstream into the QEMU project. QEMU-RBD was created originally by Christian Brunner, and is binary compatible with the Linux native RBD driver. It allows the creation of QEMU block devices that are striped over objects in RADOS, the Ceph distributed object store. As with the corresponding […]
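
    A rough usage sketch (the pool and image names are placeholders, and the exact QEMU invocation varies by version):

        # Create a QEMU disk image striped over RADOS objects
        qemu-img create -f rbd rbd:data/squeeze 10G

        # Attach it to a guest as a block device
        qemu -m 1024 -drive format=rbd,file=rbd:data/squeeze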

  • November 13, 2010
    S3-compatible object storage with radosgw

    The radosgw has been around for a while, but it hasn’t been well publicized or documented, so I thought I’d mention it here.  The idea is this: Ceph’s architecture is based on a robust, scalable distributed object store called RADOS. Amazon’s S3 has shown that a simple object-based storage interface is a convenient way to […]
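
    A minimal sketch of pointing a stock S3 client at radosgw (assuming a gateway is already running; the user, host, and bucket names are placeholders):

        # Create a gateway user; note the generated access and secret keys
        radosgw-admin user create --uid=demo --display-name="Demo User"

        # Then any S3 tool works against the gateway endpoint, e.g. s3cmd
        s3cmd --host=rgw.example.com --host-bucket=rgw.example.com mb s3://mybucket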

  • August 30, 2010
    rbd (rados block device) status

    The rados block device (rbd) is looking pretty good at this point.  The basic feature set: a network block device backed by objects in the Ceph distributed object store (rados); thin provisioning; image resizing; image export/import/copy/rename; read-only snapshots; revert to snapshot; Linux and qemu/kvm clients. Main items on the to-do list: TRIM, image layering/copy-on-write, locking […]
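
    The feature set above in CLI form (a minimal sketch; the image name is a placeholder, and sizes are in megabytes):

        # Thinly provisioned 1 GB image
        rbd create myimage --size 1024

        # Grow it, snapshot it, export it
        rbd resize myimage --size 2048
        rbd snap create myimage@snap1
        rbd export myimage /tmp/myimage.img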
