Planet Ceph

Aggregated news from external sources

  • July 30, 2013
    Using Ceph-deploy

    Install the ceph cluster

    On each node:

    Create a user "ceph" and configure sudo for passwordless access:

    $ useradd -d /home/ceph -m ceph
    $ passwd ceph
    $ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    $ chmod 0440 /e…
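    The excerpt above is cut off; beyond the per-node user and sudoers setup, a typical ceph-deploy bootstrap run from the admin node looks roughly like the sketch below. The hostnames node1/node2 and the device /dev/sdb are placeholders, and the exact subcommand set may differ slightly between ceph-deploy versions:

    ```shell
    # Push the admin node's SSH key to the "ceph" user created above
    ssh-copy-id ceph@node1

    # Generate an initial ceph.conf and monitor keyring in the working directory
    ceph-deploy new node1

    # Install the Ceph packages on every node
    ceph-deploy install node1 node2

    # Deploy the monitor, then collect the bootstrap keys it generates
    ceph-deploy mon create node1
    ceph-deploy gatherkeys node1

    # Prepare a disk for an OSD and bring it online
    ceph-deploy osd prepare node2:/dev/sdb
    ceph-deploy osd activate node2:/dev/sdb1
    ```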

  • July 26, 2013
    Ceph: update Cephx Keys

    It’s not really clear from the command line how to update an existing cephx key.

    Generate a dummy key for the exercise

    $ ceph auth get-or-create client.dummy mon 'allow r' osd 'allow rwx pool=dummy'

    [client.dummy]
    key = AQAPiu1RCMb4CxAAmP7rrufwZP…
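    To change the capabilities of a key that already exists (the subject of this post), the `ceph auth caps` subcommand rewrites the key's caps in place. The pool name below is illustrative only, continuing the client.dummy example:

    ```shell
    # Overwrite the key's caps (note: caps are replaced wholesale, not merged)
    ceph auth caps client.dummy mon 'allow r' osd 'allow rwx pool=dummy2'

    # Verify the updated entry
    ceph auth get client.dummy
    ```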

  • July 25, 2013
    v0.61.7 Cuttlefish update released

    This release fixes another regression preventing monitors from starting after undergoing certain upgrade sequences, as well as some corner cases with Paxos and unusual device names in ceph-disk/ceph-deploy. Notable changes: mon: fix regression in latest full osdmap retrieval; mon: fix a long-standing bug with a paxos corner case; ceph-disk: improved support for unusual device names […]

  • July 25, 2013
    v0.67-rc2 Dumpling release candidate

    Hi everyone, We have a release candidate for v0.67 Dumpling! There are a handful of remaining known issues (which I suppose means it is technically *not* an actual candidate for the final release), but for the most part we are happy with the stability so far, and encourage anyone with test clusters to give it […]

  • July 25, 2013
    Countdown to Ceph Day: NYC

    The time is fast approaching when legacy storage, like all mothers, must say farewell to her children. To help facilitate that process Inktank is kicking off another Ceph day next week (Aug 1st) in New York, NY. This gathering is for everyone from the inquisitive and uninitiated to the experienced and opinionated. If you have […]

  • July 24, 2013
    v0.61.6 Cuttlefish update released

    There was a problem with the monitor daemons in v0.61.5 that would prevent them from restarting after some period of time. This release fixes the bug and works around the issue to allow affected monitors to restart. All v0.61.5 users are strongly recommended to upgrade. Notable changes: mon: record latest full osdmap; mon: work around […]

  • July 18, 2013
    Ceph Developer Summit: Emperor

    It’s that time again! Time for the (virtual) Ceph Developer Summit. We are currently accepting community blueprints for ‘Emperor,’ the next stable release of Ceph, which is due out in November. This summit will be slightly different from the Dumpling summit in that it will be spread over two days to give some of our […]

  • July 18, 2013
    v0.61.5 Cuttlefish update released

    We’ve prepared another update for the Cuttlefish v0.61.x series. This release primarily contains monitor stability improvements, although there are also some important fixes for ceph-osd for large clusters and a few important CephFS fixes. We recommend that all v0.61.x users upgrade. mon: misc sync improvements (faster, more reliable, better tuning); mon: enable leveldb cache by […]

  • July 16, 2013
    Ceph early adopter: Université de Nantes

    In case you missed Loic’s account of a recent visit to the Université de Nantes, we are replicating his blog here. It’s always great to see the community adopting Ceph and doing great things with it, even if they are doing it without Inktank support. Read on for a great look at a Ceph early […]

  • July 12, 2013
    Ceph Cuttlefish VS Bobtail Part 5: Results Summary & Conclusion

    Contents RESULTS SUMMARY 4K RELATIVE PERFORMANCE 128K RELATIVE PERFORMANCE 4M RELATIVE PERFORMANCE CONCLUSION RESULTS SUMMARY For those of you that may have just wandered in from some obscure corner of the internet and haven’t seen the earlier parts of this series, you may want to go back and start at the beginning. If you’ve made […]

  • July 12, 2013
    Ceph Cuttlefish VS Bobtail Part 4: 4M RBD Performance

    Contents Introduction Sequential Writes Random Writes Sequential Reads Random Reads Conclusion INTRODUCTION This is the part I’ve been waiting for. We’ll be testing just how fast we can make Ceph perform with 4M IOs on Kernel and QEMU/KVM RBD volumes. Again, we’ll be looking at how performance scales as the number of concurrent IOs increases […]

  • July 11, 2013
    Inktank Presenting on Ceph at FinTech Demo Day!

    It’s been quite a year for Inktank and the Ceph community. We are super excited to announce another major milestone for Inktank – our participation in the third annual FinTech Innovation Lab in New York City. The goal of the Lab – established in 2010 by Accenture and the Partnership Fund for New York City […]
