Planet Ceph

Aggregated news from external sources

  • September 19, 2013
    How I barely got my first Ceph monitor running in Docker

    Docker is definitely the new trend, so I quickly wanted to try putting a Ceph monitor inside a Docker container. Story of a tough journey…

    First, let's start with the Dockerfile; this makes the setup easy and repeatable by anybod…
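
    The excerpt stops before the file itself; as a rough idea only, a monitor image could be built along these lines (a minimal sketch, not the author's actual Dockerfile: the base image, package names, the mon id mon0, and the bind mounts are all assumptions):

    $ cat > Dockerfile <<'EOF'
    # hypothetical sketch, not the post's actual Dockerfile
    FROM ubuntu:12.04
    # install the Ceph packages (APT repository setup omitted for brevity)
    RUN apt-get update && apt-get install -y ceph
    # ceph.conf, keyrings and the mon data dir are bind-mounted at run time
    VOLUME ["/etc/ceph", "/var/lib/ceph"]
    # run the monitor in the foreground so Docker can supervise it
    CMD ["ceph-mon", "-i", "mon0", "-d"]
    EOF
    $ docker build -t ceph-mon .
    $ docker run -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -p 6789:6789 ceph-mon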

  • September 12, 2013
    Using Ceph Rbd With Libvirt on Debian Wheezy

    How to add support for RBD devices on Debian Wheezy.

    libvirt

    Since Wheezy, libvirt supports RBD devices.

    qemu-kvm

    If you do not add RBD support to qemu-kvm, you may get an error like this:

    error: Failed to start domain ubuntu
    error: internal error …
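
    The excerpt is truncated, but the usual shape of the fix is a qemu-kvm build linked against librbd plus an RBD disk stanza in the domain XML. A hedged sketch (the pool/image rbd/myrbd, the monitor address, and the domain name ubuntu are placeholders; the cephx <auth> element is omitted and would be needed if authentication is enabled):

    # the 'Supported formats:' line should list rbd if the build has support
    $ qemu-img --help | grep rbd

    $ cat > rbd-disk.xml <<'EOF'
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/myrbd'>
        <host name='10.2.4.10' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    EOF
    $ virsh attach-device ubuntu rbd-disk.xml --persistent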

  • September 10, 2013
    Object Difference Between RBD Format1 and Format2

    Let's take a look at how RBD objects are stored in RADOS, and at the difference between format 1 and format 2.

    Format 1

    $ rbd create myrbd --size=10
    $ rados ls -p rbd
    myrbd.rbd
    rbd_directory

    $ rbd map myrbd
    $ dd if=/dev/zero of=/dev/rbd/rbd/myrbd
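
    The excerpt stops here; for contrast, a hedged sketch of what format 2 looks like (<id> is a placeholder for the image's internal prefix, which varies per image):

    $ rbd create myrbd2 --size=10 --format 2   # newer releases spell this --image-format 2
    $ rados ls -p rbd | grep -E 'rbd_id|rbd_header'
    rbd_id.myrbd2
    rbd_header.<id>
    # once written, format 2 data objects are named rbd_data.<id>.*,
    # versus the rb.0.* prefix used by format 1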

  • September 9, 2013
    A gentle introduction to the erasure coding

    Erasure coding is currently a very hot topic for distributed storage systems.
    It has been part of the Ceph roadmap for almost a year, and the Swift folks recently brought the discussion to the table.
    Both of them have planned to implement the erasure cod…
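
    As a toy illustration of the idea (not Ceph's actual erasure code plugin): with k=2 data chunks and m=1 parity chunk computed by XOR, any single lost chunk can be rebuilt from the other two:

    $ A=0xC3; B=0x5A                 # two "data chunks" (k=2)
    $ P=$(( A ^ B ))                 # one XOR "parity chunk" (m=1)
    $ printf 'parity    = 0x%X\n' $P
    parity    = 0x99
    $ printf 'rebuilt A = 0x%X\n' $(( P ^ B ))   # lose A, rebuild from B and parity
    rebuilt A = 0xC3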

  • September 6, 2013
    OpenStack Heat and Ceilometer got their dashboard panel

    The Havana milestone release of the Horizon dashboard brought an absolutely wonderful panel for Heat, the orchestration service, and Ceilometer, the metering service.
    A quick preview before Havana's official release.

    I. Heat

    Grab a sim…

  • September 6, 2013
    OpenStack Havana flush token manually

    It has always been a huge pain to manage tokens in MySQL, especially PKI tokens, since they are larger than UUID tokens.
    Almost a year ago I wrote an article on purging tokens via a script.
    So finally, we have an easy option to purge all expired tokens.
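
    The easy option being referred to is presumably Havana's token_flush subcommand; if so, it can be run by hand or from cron (the cron schedule and user below are assumptions):

    $ keystone-manage token_flush

    # e.g. flush expired tokens hourly on the keystone node
    $ echo '0 * * * * keystone /usr/bin/keystone-manage token_flush' >> /etc/crontab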

  • September 3, 2013
    Who You Callin’ Wimpy? Storage on ARM-based Servers Is Brawny!

    A Guest Blogpost from John Mao, Product Marketing, Calxeda. Wimpy a good thing? For CPU cores? Maybe so…. Now, I’m not exactly a fan of the term wimpy—the voice in this 80s trash bag commercial tends to haunt me—but in his research note last Wednesday called “Are Wimpy Cores Good for Brawny Storage?”, Paul Teich of Moor Insights & […]

  • September 2, 2013
    First glimpse at CoreOS

    CoreOS is an emerging project that aims to address one of the most pressing questions in the server world.
    We at eNovance therefore released eDeploy, a tool that performs bare-metal deployment and manages upgrades with ease.
    Deploying and up…

  • August 29, 2013
    Mon Failed to Start

    Some common problems when adding a monitor to an existing cluster. For example, if the config is not found:

     $ service ceph start mon.ceph-03
     /etc/init.d/ceph: mon.ceph-03 not found (/etc/ceph/ceph.conf defines osd.2 , /var/lib/ceph defines osd.2)
    

    If you do not want to specify a mon.ceph-03 section in ceph.conf, you need to have a sysvinit file in /var/lib/ceph/mon/ceph-ceph-03/. Here the file is missing:

    $ ls -l /var/lib/ceph/mon/ceph-ceph-03/
    total 8
    -rw-r--r-- 1 root root   77 Aug 29 16:56 keyring
    drwxr-xr-x 2 root root 4096 Aug 29 17:03 store.db
    

    Just create the file, then try starting again:

    $ touch /var/lib/ceph/mon/ceph-ceph-03/sysvinit
    $ service ceph start mon.ceph-03
    === mon.ceph-03 === 
    Starting Ceph mon.ceph-03 on ceph-03...
    failed: 'ulimit -n 32768;  /usr/bin/ceph-mon -i ceph-03 --pid-file /var/run/ceph/mon.ceph-03.pid -c /etc/ceph/ceph.conf '
    Starting ceph-create-keys on ceph-03...
    

    Next error on starting the monitor; if you have a look at the log, you can see:

    $ tail -f ceph-mon.ceph-03.log
    mon.ceph-03 does not exist in monmap, will attempt to join an existing cluster
    no public_addr or public_network specified, and mon.ceph-03 not present in monmap or ceph.conf
    

    You should verify that you do not have a ceph-create-keys process that hangs; if so, you can kill it:

    $ ps aux | grep create-keys
    root      1317  0.1  1.4  36616  7168 pts/0    S    17:13   0:00 /usr/bin/python /usr/sbin/ceph-create-keys -i ceph-03
    
    $ kill 1317
    

    Verify that this mon is defined in the current monmap:

    $ ceph mon dump
    dumped monmap epoch 6
    epoch 6
    fsid e0506c4d-e86a-40a8-8306-4856f9ccb989
    last_changed 2013-08-29 16:58:06.145127
    created 0.000000
    0: 10.2.4.10:6789/0 mon.ceph-01
    1: 10.2.4.11:6789/0 mon.ceph-02
    2: 10.2.4.12:6789/0 mon.ceph-03
    

    You need to retrieve the current monmap and inject it on this node:

    $ ceph mon getmap -o /tmp/monmap
    2013-08-29 17:36:36.204257 7f641a54d700  0 -- :/1005682 >> 10.2.4.12:6789/0 pipe(0x2283400 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x2283660).fault
    got latest monmap
    
    $ ceph-mon -i ceph-03 --inject-monmap /tmp/monmap
    

    Try again:

    $ service ceph start mon.ceph-03
    === mon.ceph-03 === 
    Starting Ceph mon.ceph-03 on ceph-03...
    Starting ceph-create-keys on ceph-03...
    

    It seems to be working fine. You can verify the state of the monitor quorum:

    $ ceph mon stat
    e6: 3 mons at {ceph-01=10.2.4.10:6789/0,ceph-02=10.2.4.11:6789/0,ceph-03=10.2.4.12:6789/0}, election epoch 1466, quorum 0,1,2 ceph-01,ceph-02,ceph-03
    

    For more information, have a look at the documentation:
    http://ceph.com/docs/master/rados/operations/add-or-rm-mons/

  • August 28, 2013
    RBD Image Real Size

    To get the real size used by an RBD image:

    rbd diff $POOL/$IMAGE | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'

    For example:

    $ rbd info myrbd
    rbd image 'myrbd':
    size 2048 MB in 512 objects
    order 22 (4096 KB objects)
    block_na…

  • August 27, 2013
    Deep Scrub Distribution

    To verify the integrity of data, Ceph uses a mechanism called deep scrubbing, which reads all your data once per week for each placement group.
    This can cause an overload when all OSDs run deep scrubbing at the same time.

    You can easily see i…
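
    The excerpt cuts off here, but a typical way to inspect the distribution is to group placement groups by their last deep-scrub timestamp (a sketch; the field index of deep_scrub_stamp in ceph pg dump varies between Ceph versions, so $20 is an assumption to adjust):

    $ ceph pg dump | grep active | awk '{print $20}' | sort | uniq -c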

  • August 22, 2013
    Configure Ceph RBD caching on OpenStack Nova

    By default, OpenStack doesn’t use any caching. However, you might want to enable RBD caching.

    As you may recall, the current implementation of RBD caching is an in-memory caching solution.
    Although, at the last Ceph Developer Summit (l…
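
    The excerpt is truncated; for a Havana-era setup, the usual knobs are rbd cache in the compute node's ceph.conf plus a writeback cache mode in nova.conf. A hedged sketch (verify the option names against your exact versions):

    # /etc/ceph/ceph.conf on the compute node
    [client]
    rbd cache = true

    # /etc/nova/nova.conf: attach network disks with cache=writeback
    disk_cachemodes = "network=writeback"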
