Planet Ceph

Aggregated news from external sources

  • December 21, 2013
    Experimenting with the Ceph REST API

    Like I mentioned in my previous post, Ceph has a REST API now.
    That opens a lot of possibilities.

    The Ceph REST API is a WSGI application and it listens on port 5000 by default.

    This means you can query it directly, but you probably want to put a
    webserver/proxy such as Apache or nginx in front of it.
    For high availability, you could run ceph-rest-api on several servers
    and have redundant load balancers pointing to the API endpoints.

    ceph-rest-api doesn’t handle authentication very well right now. You
    start it with a cephx authentication key and that’s it. You need to
    handle the permissions/authentication at the application level.
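
    As an illustration, here is a minimal sketch of such a proxy with nginx,
    written as a shell heredoc. The hostname, the config path and the htpasswd
    file are assumptions for this sandbox, and basic auth at the proxy is just
    one simple way to put some access control in front of the API:

    cat > /etc/nginx/conf.d/ceph-api.conf <<'EOF'
    # Hypothetical reverse proxy in front of ceph-rest-api on localhost:5000
    server {
        listen 80;
        server_name ceph-api.example.org;

        # Simple HTTP basic auth (the htpasswd file is assumed to exist)
        auth_basic           "Ceph REST API";
        auth_basic_user_file /etc/nginx/ceph-api.htpasswd;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }
    EOF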

    For the sake of simplicity and testing, I’m going to test in a sandbox
    without a proxy and run ceph-rest-api directly on a monitor with the
    client.admin cephx key.

    Starting ceph-rest-api

    ceph-rest-api is part of the ceph-common package so I already have it on
    my monitor.

    usage: ceph-rest-api [-h] [-c CONF] [--cluster CLUSTER] [-n NAME] [-i ID]
    

    Ceph REST API webapp

    optional arguments:
      -h, --help            show this help message and exit
      -c CONF, --conf CONF  Ceph configuration file
      --cluster CLUSTER     Ceph cluster name
      -n NAME, --name NAME  Ceph client name
      -i ID, --id ID        Ceph client id

    With my configuration file /etc/ceph/ceph.conf and my cephx key at /etc/ceph/keyring:

    root@mon01:~# ceph-rest-api -n client.admin
    * Running on http://0.0.0.0:5000/

    Using the API

    Well, that was easy. Let’s poke it and see what happens:

    root@mon02:~# curl mon01.ceph.example.org:5000
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    <title>Redirecting...</title>
    <h1>Redirecting...</h1>
    <p>You should be redirected automatically to target URL: <a href="/api/v0.1">/api/v0.1</a>.  If not click the link.
    

    Well, that works. Can we get the status of the cluster?

    root@mon02:~# curl mon01.ceph.example.org:5000/api/v0.1/health
    HEALTH_OK

    Let’s do the same call with JSON and look at all the data we get!

    root@mon02:~# curl -i -H "Accept: application/json" mon01.ceph.example.org:5000/api/v0.1/health
    HTTP/1.0 200 OK
    Content-Type: application/json
    Content-Length: 1379
    Server: Werkzeug/0.8.1 Python/2.7.3
    Date: Fri, 27 Dec 2013 04:10:29 GMT
    {
      "status": "OK",
      "output": {
        "detail": [
    
        ],
        "timechecks": {
          "round_status": "finished",
          "epoch": 8,
          "round": 3418,
          "mons": [
            {
              "latency": "0.000000",
              "skew": "0.000000",
              "health": "HEALTH_OK",
              "name": "03"
            },
            {
              "latency": "0.001830",
              "skew": "-0.001245",
              "health": "HEALTH_OK",
              "name": "01"
            },
            {
              "latency": "0.001454",
              "skew": "-0.001546",
              "health": "HEALTH_OK",
              "name": "02"
            }
          ]
        },
        "health": {
          "health_services": [
            {
              "mons": [
                {
                  "last_updated": "2013-12-27 04:10:28.096444",
                  "name": "03",
                  "avail_percent": 87,
                  "kb_total": 20641404,
                  "kb_avail": 18132220,
                  "health": "HEALTH_OK",
                  "kb_used": 1460900,
                  "store_stats": {
                    "bytes_total": 14919567,
                    "bytes_log": 983040,
                    "last_updated": "0.000000",
                    "bytes_misc": 65609,
                    "bytes_sst": 13870918
                  }
                },
                {
                  "last_updated": "2013-12-27 04:10:25.155508",
                  "name": "01",
                  "avail_percent": 87,
                  "kb_total": 20641404,
                  "kb_avail": 18030408,
                  "health": "HEALTH_OK",
                  "kb_used": 1562712,
                  "store_stats": {
                    "bytes_total": 15968034,
                    "bytes_log": 2031616,
                    "last_updated": "0.000000",
                    "bytes_misc": 65609,
                    "bytes_sst": 13870809
                  }
                },
                {
                  "last_updated": "2013-12-27 04:10:24.362689",
                  "name": "02",
                  "avail_percent": 87,
                  "kb_total": 20641404,
                  "kb_avail": 18143028,
                  "health": "HEALTH_OK",
                  "kb_used": 1450092,
                  "store_stats": {
                    "bytes_total": 15968294,
                    "bytes_log": 2031616,
                    "last_updated": "0.000000",
                    "bytes_misc": 65609,
                    "bytes_sst": 13871069
                  }
                }
              ]
            }
          ]
        },
        "overall_status": "HEALTH_OK",
        "summary": [
    
        ]
      }
    }
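
    Since the response is structured JSON, pulling a single field out of it is
    easy. A small sketch, assuming the same sandbox monitor and a stock Python
    on the client:

    # Extract just the overall_status field from the JSON health output above;
    # given the output shown, this prints HEALTH_OK.
    curl -s -H "Accept: application/json" mon01.ceph.example.org:5000/api/v0.1/health \
        | python -c 'import json,sys; print(json.load(sys.stdin)["output"]["overall_status"])'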
    

    Wrap-up

    The ceph-rest-api is powerful.
    You could use it to monitor your cluster with something like Nagios, or
    even build a full-blown interface to manage your cluster, like what
    Inktank provides with the Calamari GUI in their enterprise offering.
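
    As a rough illustration of the monitoring idea, here is a minimal
    Nagios-style check sketch built on the plain-text health call shown
    earlier (the monitor hostname is the one from this sandbox; adjust it to
    your own setup):

    #!/bin/sh
    # Sketch of a Nagios-style check: map the cluster health string returned
    # by the API to the usual Nagios exit codes (0=OK, 1=WARNING, 2=CRITICAL).
    HEALTH=$(curl -s mon01.ceph.example.org:5000/api/v0.1/health)
    case "$HEALTH" in
        HEALTH_OK*)   echo "OK - $HEALTH";       exit 0 ;;
        HEALTH_WARN*) echo "WARNING - $HEALTH";  exit 1 ;;
        *)            echo "CRITICAL - $HEALTH"; exit 2 ;;
    esac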

    Personally? I’m going to toy with the idea of making a wrapper library
    around the API calls, and hopefully improve the documentation, not only for
    myself but for the benefit of other Ceph users.

  • December 21, 2013
    Benchmarking Ceph erasure code plugins

    The erasure code implementation in Ceph relies on the jerasure library. It is packaged into a plugin that is dynamically loaded by erasure coded pools. The ceph_erasure_code_benchmark is implemented to help benchmark the competing erasure code plugin implementations and to … Continue reading

  • December 12, 2013
    Profiling CPU usage of a ceph command (callgrind)

    After compiling Ceph from sources with: ./configure --with-debug CFLAGS='-g' CXXFLAGS='-g' The crushtool test mode is used to profile the crush implementation with: valgrind --tool=callgrind \ --callgrind-out-file=crush.callgrind \ src/crushtool \ -i src/test/cli/crushtool/one-hundered-devices.crushmap \ --test --show-bad-mappings The resulting crush.callgrind file can then … Continue reading

  • December 10, 2013
    Profiling CPU usage of a ceph command (gperftools)

    After compiling Ceph from sources with: ./configure --with-debug CFLAGS='-g' CXXFLAGS='-g' The crushtool test mode is used to profile the crush implementation with: LD_PRELOAD=/usr/lib/libprofiler.so.0 \ CPUPROFILE=crush.prof src/crushtool \ -i src/test/cli/crushtool/one-hundered-devices.crushmap \ --test --show-bad-mappings as instructed in the cpu profiler documentation. The … Continue reading

  • December 9, 2013
    Testing a Ceph crush map

    After modifying a crush map it should be tested to check that all rules can provide the specified number of replicas. If a pool is created to use the metadata rule with seven replicas, could it fail to find enough … Continue reading

  • December 7, 2013
    Ceph has a REST API!

    Ceph is a distributed object store and file system designed to
    provide excellent performance, reliability and scalability.
    It’s a technology I’ve been following and working with for the past
    couple of months, especially around deploying it with Puppet, and I really
    have a feeling it is going to revolutionize the world of storage.

    I just realized that Ceph has had a REST API since the Dumpling (0.67)
    release.
    This API essentially wraps around the command line tools, allowing you
    to monitor and manage your cluster.
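
    For instance, the health check you would normally run on the command line
    maps directly to an HTTP call (a quick sketch; the monitor hostname below
    is hypothetical):

    # The usual CLI call on a node with an admin key:
    ceph health
    # ...and the equivalent HTTP call against ceph-rest-api:
    curl mon01.example.org:5000/api/v0.1/health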

    Inktank, the company behind Ceph (a bit like Canonical is behind
    Ubuntu), recently released an enterprise offering that includes a web
    interface to manage your cluster, and it is based on that API.
    Calamari, their interface, is unfortunately closed source.

    Open source initiatives are already being worked on (1, 2);
    I can’t wait to see what kind of nice things we can craft!

  • December 5, 2013
    Ceph + OpenStack :: Part-5

    OpenStack Instance boot from Ceph Volume. For a list of images to choose from to create a bootable volume: [root@rdo /(keystone_admin)]# nova image-list +--------------------------------------+-----------------------------+--------+--------+ | ID …

  • December 5, 2013
    Ceph + OpenStack :: Part-4

    Testing OpenStack Glance + RBD. To allow Glance to keep images on a Ceph RBD volume, edit /etc/glance/glance-api.conf: default_store = rbd # ============ RBD Store Options ============================= # Ceph configuration file path # If using cephx …

  • December 5, 2013
    Ceph + OpenStack :: Part-3

    Testing OpenStack Cinder + RBD. Creating a cinder volume provided by the Ceph backend: [root@rdo /]# [root@rdo /]# cinder create --display-name cinder-ceph-vol1 --display-description "first cinder volume on ceph backend" 10 +---------------------+-----…

  • December 5, 2013
    Ceph + OpenStack :: Part-2

    Configuring OpenStack

    Two parts of OpenStack integrate with Ceph’s block devices:

    • Images: OpenStack Glance manages images for VMs.
    • Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services.
    • Create pools for volumes and images:
    ceph osd pool create volumes 128
    ceph osd pool create images 128
    • Configure OpenStack Ceph Client – The nodes running glance-api and cinder-volume act as Ceph clients. Each requires the ceph.conf file:
    [root@ceph-mon1 ceph]# scp ceph.conf openstack:/etc/ceph
    • Install the Ceph client packages on the OpenStack node
      • First install Python bindings for librbd
    yum install python-ceph
      • Install ceph
    [root@ceph-mon1 ceph]# ceph-deploy install openstack
    • Set up Ceph client authentication for both pools, along with keyrings
      • Create a new user for Nova/Cinder and Glance.
    ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
    ceph auth get-or-create client.images mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' 
      • Add these keyrings to glance-api and cinder-volume nodes.
    ceph auth get-or-create client.images | ssh openstack tee /etc/ceph/ceph.client.images.keyring
    ssh openstack chown glance:glance /etc/ceph/ceph.client.images.keyring
    ceph auth get-or-create client.volumes | ssh openstack tee /etc/ceph/ceph.client.volumes.keyring
    ssh openstack chown cinder:cinder /etc/ceph/ceph.client.volumes.keyring
      • Hosts running nova-compute do not need the keyring. Instead, they store the secret key in libvirt. To create the libvirt secret, you will need the key from client.volumes:
    ceph auth get-key client.volumes | ssh openstack tee client.volumes.key
      • On the compute nodes, add the secret key to libvirt. First, create a secret.xml file:
    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
    <usage type='ceph'>
    <name>client.volumes secret</name>
    </usage>
    </secret>
    EOF
      • Define the secret from the secret.xml file you created, and make a note of the UUID in the output:
    # virsh secret-define --file secret.xml 
      • Set the libvirt secret value using the key saved above:
    # virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key) && rm client.volumes.key secret.xml
    • Configure OpenStack Glance to use Ceph
      • Glance can use multiple back ends to store images. To use Ceph block devices by default, edit /etc/glance/glance-api.conf and add:
    default_store=rbd
    rbd_store_user=images
    rbd_store_pool=images
      • If you want to enable copy-on-write cloning of images into volumes, also add:
    show_image_direct_url=True
    • Configure OpenStack Cinder to use Ceph
      • OpenStack requires a driver to interact with Ceph block devices. You must specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=volumes
    glance_api_version=2
    • If you’re using cephx authentication, also configure the user and the UUID of the secret you added to libvirt earlier:
    rbd_user=volumes
    rbd_secret_uuid={uuid of secret}
    • Restart the OpenStack services:
    service glance-api restart
    service nova-compute restart
    service cinder-volume restart
    • Once OpenStack is up and running, you should be able to create a volume with OpenStack on a Ceph block device (see the quick check below).
    • NOTE: Make sure the /etc/ceph/ceph.conf file has sufficient permissions to be read by the cinder and glance users.
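
    As a quick check, here is a hedged sketch (the volume name is made up, and it assumes the OpenStack client tools plus access to the Ceph cluster are available): create a small volume and confirm an RBD image appears in the volumes pool.

    # Create a 1 GB Cinder volume backed by Ceph (the volume name is arbitrary)
    cinder create --display-name ceph-test-vol 1
    # On a node with access to the cluster, list RBD images in the "volumes" pool;
    # an image for the new volume should appear
    rbd -p volumes ls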

    Please follow Ceph + OpenStack :: Part-3 for the next step of the installation.

