Planet Ceph

Aggregated news from external sources

  • November 16, 2013
    Display the default Ceph configuration

    The ceph-conf command-line tool queries the /etc/ceph/ceph.conf file: # ceph-conf --lookup fsid 571bb920-6d85-44d7-9eca-1bc114d1cd75 The --show-config option can be used to display the configuration of a running daemon: ceph -n osd.123 --show-config When no name is specified, it will show the … Continue reading
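    The lookup behaviour described above can be illustrated with a minimal Python sketch, assuming an INI-style ceph.conf; the file contents below are a made-up example (only the fsid value comes from the excerpt), not a real cluster configuration:

```python
# Toy illustration of what "ceph-conf --lookup <key>" does: read an
# INI-style ceph.conf and return the value for a key in [global].
import configparser

# Hypothetical sample file contents, not from a real cluster.
SAMPLE_CEPH_CONF = """
[global]
fsid = 571bb920-6d85-44d7-9eca-1bc114d1cd75
mon host = 10.0.0.1
"""

def lookup(conf_text: str, key: str, section: str = "global") -> str:
    """Return the value of `key` in `section` of a ceph.conf-style file."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    return parser.get(section, key)

print(lookup(SAMPLE_CEPH_CONF, "fsid"))
# → 571bb920-6d85-44d7-9eca-1bc114d1cd75
```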

  • November 13, 2013
    Migrating from ganeti to OpenStack via Ceph

    On ganeti, shut down the instance and activate its disks: z2-8:~# gnt-instance shutdown nerrant Waiting for job 1089813 for nerrant… z2-8:~# gnt-instance activate-disks nerrant z2-8.host.gnt:disk/0:/dev/drbd10 On an OpenStack Havana installation using a Ceph cinder backend, create a volume with the same … Continue reading

  • November 12, 2013
    Collocating Ceph volumes and instances in a multi-datacenter setup

    OpenStack Havana is installed on machines rented from OVH and Hetzner. An aggregate is created for machines hosted at OVH and another for machines hosted at Hetzner. A Ceph cluster is created with a pool using disks from OVH and … Continue reading

  • October 10, 2013
    Ceph Days London: Ceph Performance slides

    Ceph Days is the only official event entirely devoted to Ceph. It’s usually one day long, organised by our partner Inktank, where several speakers give talks about Ceph. In a nutshell, it’s a good event to share thoughts, ideas and experiences, and to debate about Ceph in general with the community. The first event was held last year in the beautiful center of… Read more →

  • June 11, 2013
    New Ceph Backend to Lower Disk Requirements

    I get a fair number of questions on the current Ceph blueprints, especially those coming from the community. Loic Dachary, one of the owners of the Erasure Encoding blueprint, has done a great job taking a look at some of the issues at hand. When evaluating Ceph to run a new storage service, the replication factor […]

  • May 7, 2013
    Ceph Cuttlefish Release has Arrived!

    Today marks another milestone for Ceph with the release of Cuttlefish (v0.61), the third stable release of Ceph. Inktank’s development efforts for the Cuttlefish release have been focused around Red Hat support and making it easier to install and configure Ceph while improving the operational ease of integrating with 3rd party tools, such as provisioning […]

  • April 15, 2013
    Ceph Mania

    Ceph is super hot. When people tell me that storage can’t be sexy, I can’t help but feel like Ceph can be! I was out in our L.A. office last week, and the first thing I saw when I showed up was this: A Ceph fanboy working with a multi-petabyte deployment of Ceph decided to […]

  • March 28, 2013
    Ceph is in EPEL, and why Red Hat users should care

    EPEL is Extra Packages for Enterprise Linux, a project that ports software that is part of the Fedora community distribution to the slower-moving RHEL (Red Hat Enterprise Linux) distribution (and its derivatives) used by many enterprises. One problem for RHEL users is that although the distribution tends to be rock solid, that stability comes at […]

  • March 28, 2013
    Supporting Ceph with New hastexo Partnership

    Today, I am excited to announce our partnership with hastexo, which provides Ceph-related professional services worldwide. We have been working closely with hastexo for some time, including collaboration on the development of courseware for the new Inktank training courses that are now available. hastexo is a professional services company located in Austria and serving customers on five continents. The […]

  • March 25, 2013
    Puppet modules for Ceph finally landed!

    Quite recently, François Charlier and I worked together on the Puppet modules for Ceph on behalf of our employer, eNovance. In fact, François started working on them last summer; back then he completed the Monitor manifests. So basically, we worked on the OSD manifest. The modules are in pretty good shape, thus we thought it was important to communicate to the community… Read more →

  • March 21, 2013
    Inktank’s Roadmap for Ceph: Massive Scale with Minimal Effort

    It’s been just over a year since Inktank was formed with a goal of making Ceph the ‘Linux of Storage’. But while Inktank is a young company, Ceph itself has a longer history, going back to Sage Weil’s PhD in 2005. The intervening years saw much of the engineering work focused on building the foundations […]

  • February 13, 2013
    CEPH – THE CRUSH DIFFERENCE!

    So many people have been asking us for more details on CRUSH, I thought it would be worthwhile to share more about it on our blog. If you have not heard of CRUSH, it stands for “Controlled Replication Under Scalable Hashing”. CRUSH is the pseudo-random data placement algorithm that efficiently distributes object replicas across a […]
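    The core idea behind CRUSH — deterministic, table-free placement computed independently by every client — can be sketched with a toy rendezvous-hashing example. This is an illustration of the concept only, not the actual CRUSH algorithm (real CRUSH adds weighted buckets, hierarchy and failure domains); all names below are hypothetical:

```python
# Toy rendezvous-hashing sketch of CRUSH-style placement: every client
# computes the same replica set from the object name and device list
# alone, with no central lookup table. NOT the real CRUSH algorithm.
import hashlib

def weight(obj: str, osd: str) -> int:
    """Deterministic pseudo-random score for an (object, device) pair."""
    h = hashlib.sha256(f"{obj}/{osd}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def place(obj: str, osds: list[str], replicas: int) -> list[str]:
    """Pick `replicas` distinct devices: rank all by score, take the top."""
    ranked = sorted(osds, key=lambda o: weight(obj, o), reverse=True)
    return ranked[:replicas]

osds = [f"osd.{i}" for i in range(6)]
print(place("my-object", osds, 3))
```

    A useful property of this scheme is stability: removing one device only moves the objects that were placed on it, since the relative ranking of the remaining devices is unchanged.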
