The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • January 21, 2014
    Ceph and Mirantis OpenStack

    Last week Dmitry Borodaenko presented his talk on Ceph and OpenStack at the inaugural Silicon Valley Ceph User Group meeting. The meeting was well attended and also featured talks from Mellanox’s Eli Karpilovski and Inktank’s Kyle Bader. However, if you were unable to attend, the following transcript from Dmitry’s talk is a good recap just …Read more

  • December 23, 2013
    Ceph and OpenStack in a Nutshell

    Ceph and OpenStack in a Nutshell, from Karan Singh

  • December 5, 2013
    Ceph + OpenStack :: Part-5

    OpenStack Instance boot from Ceph Volume. For a list of images to choose from to create a bootable volume: [root@rdo /(keystone_admin)]# nova image-list …

  • December 5, 2013
    Ceph + OpenStack :: Part-4

    Testing OpenStack Glance + RBD. To allow Glance to keep images on a Ceph RBD volume, edit /etc/glance/glance-api.conf: default_store = rbd # ============ RBD Store Options ============ # Ceph configuration file path # If using cephx …

  • December 5, 2013
    Ceph + OpenStack :: Part-3

    Testing OpenStack Cinder + RBD. Creating a Cinder volume provided by the Ceph backend: [root@rdo /]# cinder create --display-name cinder-ceph-vol1 --display-description "first cinder volume on ceph backend" 10 …

  • December 5, 2013
    Ceph + OpenStack :: Part-2

    Configuring OpenStack

    Two parts of OpenStack integrate with Ceph’s block devices:

    • Images: OpenStack Glance manages images for VMs.
    • Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services.
      • Create pools for volumes and images:
    ceph osd pool create volumes 128
    ceph osd pool create images 128
    • Configure OpenStack Ceph Client – The nodes running glance-api and cinder-volume act as Ceph clients. Each requires the ceph.conf file:
    [root@ceph-mon1 ceph]# scp ceph.conf openstack:/etc/ceph
    • Install the Ceph client packages on the OpenStack node
      • First, install the Python bindings for librbd:
    yum install python-ceph
      • Then install Ceph:
    [root@ceph-mon1 ceph]# ceph-deploy install openstack
    • Set up Ceph client authentication for both pools, along with keyrings
      • Create a new user for Nova/Cinder and Glance.
    ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
    ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' 
      • Add these keyrings to the glance-api and cinder-volume nodes.
    ceph auth get-or-create client.images | ssh openstack tee /etc/ceph/ceph.client.images.keyring
    ssh openstack chown glance:glance /etc/ceph/ceph.client.images.keyring
    ceph auth get-or-create client.volumes | ssh openstack tee /etc/ceph/ceph.client.volumes.keyring
    ssh openstack chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
      • Hosts running nova-compute do not need the keyring. Instead, they store the secret key in libvirt. To create the libvirt secret you will need the key from client.volumes:
    ceph auth get-key client.volumes | ssh openstack tee client.volumes.key
      • On the compute nodes, add the secret key to libvirt. First, create a secret.xml file:
    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
    <usage type='ceph'>
    <name>client.volumes secret</name>
    </usage>
    </secret>
    EOF
      • Define the secret from the secret.xml file, and make a note of the UUID in the output:
    # virsh secret-define --file secret.xml 
      • Set the libvirt secret value using the key from above:
    # virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key) && rm client.volumes.key secret.xml
    • Configure OpenStack Glance to use Ceph
      • Glance can use multiple back ends to store images. To use Ceph block devices by default, edit /etc/glance/glance-api.conf and add:
    default_store=rbd
    rbd_store_user=images
    rbd_store_pool=images
      • If you want to enable copy-on-write cloning of images into volumes, also add:
    show_image_direct_url=True
    • Configure OpenStack Cinder to use Ceph
      • OpenStack requires a driver to interact with Ceph block devices, and you must specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=volumes
    glance_api_version=2
    • If you’re using cephx authentication, also configure the user and the UUID of the secret you added to libvirt earlier:
    rbd_user=volumes
    rbd_secret_uuid={uuid of secret}
    • Restart the OpenStack services:
    service glance-api restart
    service nova-compute restart
    service cinder-volume restart
    • Once OpenStack is up and running, you should be able to create a volume with OpenStack on a Ceph block device; a quick check is sketched just after this list.
    • NOTE: Make sure the /etc/ceph/ceph.conf file has sufficient permissions to be read by the cinder and glance users.
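
    As a quick check (a sketch, not part of the original walkthrough), you can confirm that the libvirt secret is registered, create a small test volume, and verify that it shows up as an RBD image in the volumes pool. The volume name below is only an example, and the rbd command assumes a node with a suitable keyring (for example ceph-mon1 with the admin key):
    # on the compute node: the secret defined earlier should be listed
    virsh secret-list
    # create a 1 GB test volume through Cinder (the name is arbitrary)
    cinder create --display-name ceph-test-vol 1
    # on a Ceph node: the new volume should appear as an image in the volumes pool
    rbd -p volumes ls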

    Please follow Ceph + OpenStack :: Part-3 for the next step in the installation.


  • December 5, 2013
    Ceph + OpenStack :: Part-1

    Ceph & OpenStack Integration. We can use Ceph Block Devices with OpenStack through libvirt, which configures the QEMU interface to librbd. To use Ceph Block Devices with OpenStack, we must install QEMU, libvirt, and …

  • December 5, 2013
    Ceph Installation :: Part-3

    Creating a Block Device from Ceph. From the monitor node, use ceph-deploy to install Ceph on your ceph-client1 node: [root@ceph-mon1 ~]# ceph-deploy install ceph-client1 [ceph_deploy.cli][INFO ] Invoked (1.3): /usr/bin/ceph-deploy install ceph-client1 …

  • December 5, 2013
    Ceph Installation :: Part-2

    CEPH Storage Cluster. Installing Ceph Deploy (ceph-mon1). Update your repository and install ceph-deploy on the ceph-mon1 node: [ceph@ceph-mon1 ~]$ sudo yum update && sudo yum install ceph-deploy Loaded plugins: downloadonly, fastestmirror, security …

  • December 5, 2013
    Ceph Installation :: Part-1
    Ceph Installation Step by Step

    This quick-start setup deploys Ceph with 3 monitors and 2 OSD nodes, with 4 OSDs on each node. We are using commodity hardware running CentOS 6.4.

    Ceph-mon1 : First monitor + ceph-deploy machine (will be used to deploy Ceph to the other nodes)

    Ceph-mon2 : Second monitor (for monitor quorum)

    Ceph-mon3 : Third monitor (for monitor quorum)

    Ceph-node1 : OSD node 1 with 1 x 10 GB disk for the OS and 4 x 440 GB disks for 4 OSDs

    Ceph-node2 : OSD node 2 with 1 x 10 GB disk for the OS and 4 x 440 GB disks for 4 OSDs

    ceph-deploy version 1.3.2, Ceph version 0.67.4 (Dumpling)
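
    For reference, a minimal /etc/hosts sketch matching this layout could look like the example below. The IP addresses are placeholders (only ceph-node2's 192.168.1.38 actually appears in the ssh-copy-id output later in this post); substitute the addresses of your own nodes.
    # example /etc/hosts entries, present on every node
    192.168.1.31   ceph-mon1
    192.168.1.32   ceph-mon2
    192.168.1.33   ceph-mon3
    192.168.1.37   ceph-node1
    192.168.1.38   ceph-node2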

    Preflight Checklist 

    All the Ceph Nodes may require some basic configuration work prior to deploying a Ceph Storage Cluster.

    CEPH node setup

    • Create a user on each Ceph Node.
    sudo useradd -d /home/ceph -m ceph
    sudo passwd ceph
    • Grant the user passwordless sudo (root) privileges on each Ceph Node. Write the rule to a drop-in file rather than overwriting /etc/sudoers itself.
    echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    sudo chmod 0440 /etc/sudoers.d/ceph
    • Configure your ceph-deploy node (ceph-mon1) with password-less SSH access to each Ceph Node. Leave the passphrase empty, and repeat this step for the ceph and root users.
    [ceph@ceph-admin ~]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
    Created directory '/home/ceph/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/ceph/.ssh/id_rsa.
    Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
    The key fingerprint is:
    48:86:ff:4e:ab:c3:f6:cb:7f:ba:46:33:10:e6:22:52 ceph@ceph-admin.csc.fi
    The key's randomart image is:
    +--[ RSA 2048]----+
    | |
    | E. o |
    | .. oo . |
    | . .+..o |
    | . .o.S. |
    | . + |
    | . o. o |
    | ++ .. . |
    | ..+*+++ |
    +-----------------+

    • Copy the key to each Ceph Node. (Repeat this step for the ceph and root users.)
    [ceph@ceph-mon1 ~]$ ssh-copy-id ceph@ceph-node2
    The authenticity of host 'ceph-node2 (192.168.1.38)' can't be established.
    RSA key fingerprint is ac:31:6f:e7:bb:ed:f1:18:9e:6e:42:cc:48:74:8e:7b.
    Are you sure you want to continue connecting (yes/no)? y
    Please type 'yes' or 'no': yes
    Warning: Permanently added 'ceph-node2,192.168.1.38' (RSA) to the list of known hosts.
    ceph@ceph-node2's password:
    Now try logging into the machine, with "ssh 'ceph@ceph-node2'", and check in: .ssh/authorized_keys
    to make sure we haven't added extra keys that you weren't expecting.
    [ceph@ceph-mon1 ~]$
    • Ensure connectivity by pinging the hostnames. For convenience we have used the local hosts file; update the hosts file of every node with the details of the other nodes, as in the example entries shown earlier. PS: Use of DNS is recommended.
    • Packages are cryptographically signed with the release.asc key. Add the release key to your system’s list of trusted keys to avoid a security warning:
    sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
    • Ceph may require additional third-party libraries. To add the EPEL repository, execute the following:
    su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'
    sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
    • Install the release packages. Dumpling is the most recent stable release of Ceph (at the time of writing this wiki).
    su -c 'rpm -Uvh http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm'
    • Add Ceph to YUM by creating a repository file for Ceph at /etc/yum.repos.d/ceph.repo:
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://ceph.com/rpm-dumpling/el6/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://ceph.com/rpm-dumpling/el6/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

    [ceph-source]
    name=Ceph source packages
    baseurl=http://ceph.com/rpm-dumpling/el6/SRPMS
    enabled=0
    gpgcheck=1
    type=rpm-md
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    • For best results, create directories on your nodes for maintaining the configuration generated by Ceph. These should be created automatically by Ceph, but in my case this gave me problems, so I am creating them manually.
    mkdir -p /etc/ceph /var/lib/ceph/{tmp,mon,mds,bootstrap-osd} /var/log/ceph
    • By default, daemons bind to ports within the 6800:7100 range. You may configure this range at your discretion. Before configuring your iptables rules, check the default iptables configuration. Since we are performing a test deployment we can disable iptables on the Ceph nodes; for a production deployment this needs to be addressed, for example with rules like the sketch after this list.
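
    For a production deployment, rather than disabling iptables, one option is to open the monitor port and the default daemon port range. The rules below are only a sketch and assume the cluster traffic arrives on eth0; adjust the interface and range to your environment.
    # allow the Ceph monitor port and the default OSD/MDS port range (assumes eth0)
    sudo iptables -A INPUT -i eth0 -p tcp --dport 6789 -j ACCEPT
    sudo iptables -A INPUT -i eth0 -p tcp -m multiport --dports 6800:7100 -j ACCEPT
    sudo service iptables save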

    Please follow Ceph Installation :: Part-2 for the next step in the installation.



  • December 5, 2013
    Ceph Storage :: Introduction


    What is CEPH

    Ceph is an open-source, massively scalable, software-defined storage system which provides object, block and file system storage from a single clustered platform. Ceph’s main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available. The data is replicated, making it fault tolerant. Ceph runs on commodity hardware. The system is designed to be self-healing, self-managing, and self-awesome 🙂


    CEPH Internals

    • OSD: An Object Storage Daemon (OSD) stores data; handles data replication, recovery, backfilling, and rebalancing; and provides some monitoring information to Ceph Monitors by checking other Ceph OSD Daemons for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to achieve an active + clean state when the cluster makes two copies of your data.
    • Monitor: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph maintains a history (called an “epoch”) of each state change in the Monitors, Ceph OSD Daemons, and PGs.
    • MDS: A Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem. Ceph Metadata Servers make it feasible for POSIX file system users to execute basic commands like ls, find, etc. without placing an enormous burden on the Ceph Storage Cluster.
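
    Once a cluster is up, each of these components can be inspected with a few standard status commands (a generic sketch, not specific to this series; run from any node that has the admin keyring):
    ceph -s          # overall cluster health, monitors, OSDs and PG summary
    ceph mon stat    # monitor quorum membership
    ceph osd tree    # OSDs and their position in the CRUSH hierarchy
    ceph pg stat     # placement group states
    ceph mds stat    # MDS state (only relevant if the Ceph Filesystem is used)
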
    Note :: Please use http://ceph.com/docs/master/ and other official Inktank and Ceph community resources as the primary source of information on Ceph. This entire blog is an attempt to help beginners set up a Ceph cluster and to share my troubleshooting with you.
