Planet Ceph

Aggregated news from external sources

  • November 17, 2014
    Teuthology docker targets hack (2/4)

    The teuthology container hack is improved to snapshot the container after Ceph and its dependencies have been installed. This helps to quickly test ceph-qa-suite tasks. A job doing nothing but install the Firefly version of Ceph takes 14 seconds after the … Continue reading

  • November 14, 2014
    OpenNebula 4.8 With Ceph Support on Debian Wheezy

    A quick howto to install OpenNebula 4.8 with support for Ceph on Debian Wheezy.


    OpenNebula Installation

    OpenNebula Frontend

    Install OpenNebula repo

    wget -q -O- http://downloads.opennebula.org/repo/Ubuntu/repo.key | apt-key add -
    echo "deb http://downloads.opennebula.org/repo/4.8/Debian/7/ stable opennebula" > /etc/apt/sources.list.d/opennebula.list
    

    Download packages

    apt-get update
    apt-get install opennebula opennebula-sunstone nfs-kernel-server
    

    Configure and start the service

    sed -i -e 's/:host: 127.0.0.1/:host: 0.0.0.0/g' /etc/one/sunstone-server.conf
    /etc/init.d/opennebula-sunstone restart
    

    Export NFS

    echo "/var/lib/one/ *(rw,sync,no_subtree_check,root_squash)" >> /etc/exports
    service nfs-kernel-server restart
    

    Configure SSH Public Key

    su - oneadmin
    $ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
    
    
    $ cat << EOT > ~/.ssh/config
    Host *
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
    EOT
    $ chmod 600 ~/.ssh/config
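
    This step assumes the oneadmin user already has an SSH keypair. If not, a passphrase-less key can be generated first (a minimal sketch; OpenNebula needs non-interactive SSH between the frontend and the nodes):

```shell
# Generate a passphrase-less RSA keypair for oneadmin if none exists yet,
# then authorize it for key-based logins between nodes
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```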
    

    OpenNebula Compute Nodes

    Install OpenNebula repo

    wget -q -O- http://downloads.opennebula.org/repo/Ubuntu/repo.key | apt-key add -
    echo "deb http://downloads.opennebula.org/repo/4.8/Debian/7/ stable opennebula" > /etc/apt/sources.list.d/opennebula.list
    

    Download packages

    apt-get update
    apt-get install opennebula-node nfs-common bridge-utils
    

    Network config

    vim /etc/network/interfaces
        ...
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off
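
    For reference, a complete bridge stanza in /etc/network/interfaces on Wheezy might look like the following (the interface name and addressing are assumptions, not from the post):

```
auto br0
iface br0 inet static
    address 10.2.0.131
    netmask 255.255.255.0
    gateway 10.2.0.1
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off
```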
    

    Fstab

    vim /etc/fstab
    10.2.0.130:/var/lib/one/  /var/lib/one/  nfs   soft,intr,rsize=8192,wsize=8192,noauto
    mount /var/lib/one/
    

    Qemu config

    cat << EOT > /etc/libvirt/qemu.conf
    user  = "oneadmin"
    group = "oneadmin"
    dynamic_ownership = 0
    EOT
    

    Ceph configuration

    This assumes that you already have a Ceph cluster running.

    Now create a pool “one” for OpenNebula and create an auth key.

    ceph osd pool create one 128 128
    ceph osd pool set one crush_ruleset 2
    ceph auth get-or-create client.one mon 'allow r' osd 'allow rwx pool=one'
    [client.one]
        key = AQCfTjVUeOPqIhAAiCAiBIgYd85fuMFT0dXVpA==
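
    The 128 above is the pool's placement group count; a common rule of thumb is (number of OSDs × 100) / replica count, rounded up to the next power of two. A small helper (hypothetical, not from the post) to compute it:

```shell
# Rule-of-thumb placement group count: (OSDs * 100) / replicas,
# rounded up to the next power of two
pg_num() {
    local osds=$1 replicas=$2 target pg=1
    target=$(( (osds * 100 + replicas - 1) / replicas ))  # ceiling division
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"
}
pg_num 3 3    # -> 128
pg_num 10 3   # -> 512
```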
    

    Add the Ceph Firefly repo for RBD support in libvirt + qemu

    First, you need to compile RBD support for libvirt and qemu. You can have a look at this post: http://cephnotes.ksperis.com/blog/2013/09/12/using-ceph-rbd-with-libvirt-on-debian-wheezy (if you do not want to compile qemu, you can directly download those packages here: http://ksperis.com/files/qemu-kvm_1.1.2+dfsg-6_amd64.deb, http://ksperis.com/files/qemu-kvm_1.1.2+dfsg-6_amd64.deb)

    On each node, add the Ceph Firefly repo and reinstall the libvirt and qemu packages:

    apt-get install lsb-release
    wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
    echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    apt-get update
    apt-get install librados2 librbd1
    
    dpkg -i qemu-kvm_1.1.2+dfsg-6+deb7u4_amd64.deb qemu-utils_1.1.2+dfsg-6a+deb7u4_amd64.deb
    

    Create the secret for libvirt:

    echo "
    <secret ephemeral='no' private='no'>
       <usage type='ceph'>
         <name>client.one secret</name>
       </usage>
    </secret>" > secret.xml
    
    # Define the secret (note the UUID it returns; it goes into CEPH_SECRET below),
    # then bind the client.one key to it (replace <uuid> with the returned UUID)
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.one)"
    
    service libvirt-bin reload
    

    Add the Ceph datastore to OpenNebula

    $ su - oneadmin
    $ vim ceph.one
    NAME         = cephds
    DS_MAD       = ceph
    TM_MAD       = ceph
    DISK_TYPE    = RBD
    POOL_NAME    = one
    BRIDGE_LIST  ="onenode1 onenode2 onenode3"
    CEPH_HOST    ="192.168.0.1:6789 192.168.0.2:6789 192.168.0.3:6789"
    CEPH_SECRET  ="26a8b4d7-eb24-bf85-396d-fbf0x252e402"
    CEPH_USER    ="one"
    
    $ onedatastore create ceph.one
    ID: 101
    

    Example config:

    $ onedatastore show cephds
    DATASTORE 101 INFORMATION                                                       
    ID             : 101                 
    NAME           : cephds              
    USER           : oneadmin            
    GROUP          : oneadmin            
    CLUSTER        : -                   
    TYPE           : IMAGE               
    DS_MAD         : ceph                
    TM_MAD         : ceph                
    BASE PATH      : /var/lib/one//datastores/101
    DISK_TYPE      : RBD                 
    
    DATASTORE CAPACITY                                                              
    TOTAL:         : 15.9T               
    FREE:          : 13.4T               
    USED:          : 2.5T                
    LIMIT:         : -                   
    
    PERMISSIONS                                                                     
    OWNER          : um-                 
    GROUP          : u--                 
    OTHER          : ---                 
    
    DATASTORE TEMPLATE                                                              
    BASE_PATH="/var/lib/one//datastores/"
    BRIDGE_LIST="onenode1 onenode2 onenode3"
    CEPH_HOST="192.168.0.1:6789 192.168.0.2:6789 192.168.0.3:6789"
    CEPH_SECRET="26a8b4d7-eb24-bf85-396d-fbf0x252e402"
    CEPH_USER="one"
    CLONE_TARGET="SELF"
    DISK_TYPE="RBD"
    DS_MAD="ceph"
    LN_TARGET="NONE"
    POOL_NAME="one"
    TM_MAD="ceph"
    TYPE="IMAGE_DS"
    
    IMAGES         
    9              
    10             
    11             
    12             
    16             
    21             
    24             
    27             
    28             
    29             
    31             
    32             
    

    Try importing an image into the new datastore:

    $ oneimage create \
        --name "CentOS-6.5_x86_64" \
        --path "http://appliances.c12g.com/CentOS-6.5/centos6.5.qcow2.gz" \
        --driver qcow2 \
        --datastore cephds
    

    Datastore view in Sunstone:

    More details here:

    http://ceph.com/community/ceph-support-in-opennebula-4-0/

    http://archives.opennebula.org/documentation:archives:rel4.0:ceph_ds

  • November 12, 2014
    v0.88 released

    This is the first development release after Giant. The two main features merged this round are the new AsyncMessenger (an alternative implementation of the network layer) from Haomai Wang at UnitedStack, and support for POSIX file locks in ceph-fuse and libcephfs from Yan, Zheng. There is also a big pile of smaller items that re …Read more

  • November 11, 2014
    OpenStack Glance: import images and convert them directly in Ceph

    For Ceph to work in optimal circumstances, you must use RAW images.
    However, uploading RAW images in Glance is painful because it takes a while.
    Let’s see how we can make our lives easier.

    First let’s upload our image, for the purpose o…
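
    The excerpt is truncated, but the usual pattern it alludes to is converting the image to RAW locally before uploading. A sketch (file and image names are assumptions; the service commands are left commented since they need qemu-img and a running Glance):

```shell
# Derive the RAW file name from the qcow2 source (names are made up here)
src="centos6.5.qcow2"
raw="${src%.qcow2}.raw"
echo "$raw"

# Actual conversion and upload:
# qemu-img convert -f qcow2 -O raw "$src" "$raw"
# glance image-create --name "CentOS-6.5" --disk-format raw \
#     --container-format bare --file "$raw"
```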

  • November 10, 2014
    Running make check on Ceph pull requests

    Each Ceph contribution is expected to successfully run make check and pass all the unit tests it contains. The developer runs make check locally before submitting his changes but the result may be influenced by the development environment. A draft … Continue reading

  • November 6, 2014
    make -j150 ceph

    A power8 machine was recently donated to the GCC compile farm and /proc/cpuinfo shows 160 processors. Compiling Ceph from sources with make -j150 makes for a nice htop display. The result of the compilation passes most of the unit tests, … Continue reading

  • November 3, 2014
    OpenStack Glance: disable cache management while using Ceph RBD

    The OpenStack documentation often recommends enabling the Glance cache when using the default file store; with the Ceph RBD backend, things are slightly different.

    Depending on how you are consuming your Cloud platform, using the keystone+cachem…

  • October 29, 2014
    v0.87 Giant released

    This release will form the basis for the stable release Giant, v0.87.x. Highlights for Giant include: RADOS Performance: a range of improvements have been made in the OSD and client-side librados code that improve the throughput on flash backends and improve parallelism and scaling on fast machines. CephFS: we have fixed a raft of bugs …Read more

  • October 29, 2014
    Teuthology docker targets hack (1/3)

    teuthology runs jobs testing the Ceph integration on targets that can either be virtual machines or bare metal. The container hack adds support for docker containers as a replacement. … Running task exec… Executing custom commands… Running commands on role … Continue reading

  • October 29, 2014
    Remove Pool Without Name

    For example:

    # rados lspools
    data
    metadata
    rbd
    <—- ?????
    .eu.rgw.root
    .eu-west-1.domain.rgw
    .eu-west-1.rgw.root
    .eu-west-1.rgw.control
    .eu-west-1.rgw.gc
    .eu-west-1.rgw.buckets.index
    .eu-west-1.rgw.buckets
    .eu-west-1.l…

  • October 28, 2014
    Ceph Developer Summit 2014 – Hammer

    The Ceph Developer Summit (CDS) for the next major Ceph release, called Hammer, started today, some hours ago (2014/10/28). It’s again a virtual summit via video conference calls. I’ve submitted three blueprints: Ceph Security hardening [pad], How t…

  • October 27, 2014
    Ceph: monitor store taking up a lot of space

    During some strange circumstances, the levelDB monitor store can start taking up a substantial amount of space.
    Let’s quickly see how we can work around that.

    The result is basically overgrown SST files from levelDB.
    SST stands for Sorted String Tab…
