Archives: December 2013

Ceph in 2013, a year in review

Wow, what a ride this year has been! Ceph has come a long way, and it doesn’t show any signs of slowing down. As the new year looms I thought it might be good to reflect a bit on 2013 and some of the notable achievements from the Ceph community.

I would categorize the accomplishments this year in three main categories: Community, Commercial, and Development. This year saw a great blend of all categories; that includes three major releases, a user committee, passing the $13 million mark in funding, numerous integrations, and tons more. Read on for details!


New Ceph Wiki is Live

For those who have used the wiki in recent history you may have noticed that it had been sitting in a read-only state for a little bit around the holidays here. Today the wiki is back in action and better than ever! While we are still using MindTouch, we have moved to the SaaS version that allows us to offload the physical infrastructure and gain a few nice features as well.

Logging In

While the new version is quite nice to look at, there are a few things that I would like to point out. First, when you log in you may notice that it redirects you to wikilogin.ceph.com; this is normal. We are running our own custom OAuth plugin that allows you to continue using your Google credentials as before. The first time you log in it will ask you to choose a new user name. You can plug in your preferred user name or a new one; it doesn’t matter. The previous content and edits have been archived and are not assigned to any existing users. You should only have to do this once. If you have problems, please contact community@inktank.com or ping scuttlemonkey on IRC and I’ll make sure to get you squared away.

Content and Functionality

With respect to the content and functionality there are a few things worth pointing out. If you take a look at some of the guide content there are a few different types (tabs) that you will see: “Guide Content,” “How-To,” and “Reference.” These are pre-defined page templates that will help to classify and aggregate content in the appropriate places for easy consumption. Every user should be able to create pages and utilize the template that they feel best suits the content. If you have questions, let me know.

Some of the new content features we have been discussing are slowly being added and will continue to be tweaked. The basics for the Chum Bucket have started, but the sorting and tagging have not yet been added. Look for these in a future update.

Ultimately there is a lot of content that could still be added and this is where we need help from the community! If you are interested in helping out feel free to dive right in or ask the community team where you can be of the greatest help.

Getting Acquainted with MindTouch

While many projects choose MediaWiki, we have decided to go with MindTouch for a while to see if things like their advanced knowledge base, polished UI, and automated content management tools might be a bit nicer in the long run. We realize that this may be a bit of a learning curve for some people and as such are providing a few resources if you wish to explore this new tool:

Documentation – MindTouch documentation and support resources can be found at https://help.mindtouch.us.

Training Videos – MindTouch training plans and Self-Training videos can be found at https://help.mindtouch.us/Support/Training.

Getting Started with MindTouch – Use these FAQs to get started with MindTouch. They cover a wide variety of topics and MindTouch is always improving their material.
https://help.mindtouch.us/01MindTouch_TCS/User_Guide/001_Getting_Started

As always, if you have questions, concerns, or anything for the good of the cause, feel free to contact the community team or scuttlemonkey on IRC.

scuttlemonkey out

CephFS 

The Ceph Filesystem (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. It is the only Ceph component that is not yet ready for production; I would call it ready for pre-production.


Internals

(Architecture diagram omitted; see http://ceph.com/docs/master/cephfs/ for the original image.)

Requirements for CephFS

  • You need a running Ceph cluster with at least one MDS node; an MDS is required for CephFS to work.
  • If you don't have an MDS, configure one:
    • # ceph-deploy mds create <MDS-NODE-ADDRESS>
Note: If you are running short of hardware or want to save hardware, you can run the MDS service on an existing monitor node; the MDS does not need many resources.
  • A Ceph client to mount CephFS

Configuring CephFS
  • Install Ceph on the client node
[root@storage0101-ib ceph]# ceph-deploy install na_fedora19
[ceph_deploy.cli][INFO ] Invoked (1.3.2): /usr/bin/ceph-deploy install na_fedora19
[ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts na_csc_fedora19
[ceph_deploy.install][DEBUG ] Detecting platform for host na_fedora19 ...
[na_csc_fedora19][DEBUG ] connected to host: na_csc_fedora19
[na_csc_fedora19][DEBUG ] detect platform information from remote host
[na_csc_fedora19][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Fedora 19 Schrödinger’s Cat
[na_csc_fedora19][INFO ] installing ceph on na_fedora19
[na_csc_fedora19][INFO ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[na_csc_fedora19][INFO ] Running command: rpm -Uvh --replacepkgs --force --quiet http://ceph.com/rpm-emperor/fc19/noarch/ceph-release-1-0.fc19.noarch.rpm
[na_csc_fedora19][DEBUG ] ########################################
[na_csc_fedora19][DEBUG ] Updating / installing...
[na_csc_fedora19][DEBUG ] ########################################
[na_csc_fedora19][INFO ] Running command: yum -y -q install ceph

[na_csc_fedora19][ERROR ] Warning: RPMDB altered outside of yum.
[na_csc_fedora19][DEBUG ] No Presto metadata available for Ceph
[na_csc_fedora19][INFO ] Running command: ceph --version
[na_csc_fedora19][DEBUG ] ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
[root@storage0101-ib ceph]#
  • Create a new pool for CephFS
# rados mkpool cephfs
  • Create a new keyring (client.cephfs) for CephFS
# ceph auth get-or-create client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs' -o /etc/ceph/client.cephfs.keyring
  • Extract the secret key from the keyring
# ceph-authtool -p -n client.cephfs /etc/ceph/client.cephfs.keyring > /etc/ceph/client.cephfs
  • Copy the secret file to the client node under /etc/ceph. This allows the filesystem to mount when cephx authentication is enabled
# scp client.cephfs na_fedora19:/etc/ceph
client.cephfs 100% 41 0.0KB/s 00:00
  • List all the keys on the Ceph cluster
# ceph auth list                                               


Option 1: Mount CephFS with the Kernel Driver


  • On the client machine, add a mount point in /etc/fstab. Provide the IP address of your Ceph monitor node and the path of the secret key file that we created above
192.168.200.101:6789:/ /cephfs ceph name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime 0 2    
  • Mount the CephFS mount point. You might see a "mount: error writing /etc/mtab: Invalid argument" error, but you can ignore it and check df -h
[root@na_fedora19 ceph]# mount /cephfs
mount: error writing /etc/mtab: Invalid argument

[root@na_fedora19 ceph]#
[root@na_fedora19 ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 7.8G 2.1G 5.4G 28% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 288K 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 2.6M 3.9G 1% /tmp
192.168.200.101:6789:/ 419T 8.5T 411T 3% /cephfs
[root@na_fedora19 ceph]#
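
If you just want to test the mount once without touching /etc/fstab, you can also do it in one shot with mount -t ceph. A minimal sketch, assuming the same monitor address, mount point, and secret file used above:

# mkdir -p /cephfs    # make sure the mount point exists
# mount -t ceph 192.168.200.101:6789:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime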

Option 2: Mount CephFS as FUSE
  • Copy the Ceph configuration file (ceph.conf) from the monitor node to the client node and make sure it has permissions of 644
# scp ceph.conf na_fedora19:/etc/ceph
# chmod 644 ceph.conf
  • Copy the secret file from the monitor node to the client node under /etc/ceph. This allows the filesystem to mount when cephx authentication is enabled (we already did this earlier)
# scp client.cephfs na_fedora19:/etc/ceph
client.cephfs 100% 41 0.0KB/s 00:00
  • Make sure you have the "ceph-fuse" package installed on the client machine
# rpm -qa | grep -i ceph-fuse
ceph-fuse-0.72.2-0.fc19.x86_64
  • To mount the Ceph Filesystem as FUSE, use the ceph-fuse command
[root@na_fedora19 ceph]# ceph-fuse -m 192.168.100.101:6789  /cephfs
ceph-fuse[3256]: starting ceph client
ceph-fuse[3256]: starting fuse
[root@na_csc_fedora19 ceph]#

[root@na_fedora19 ceph]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 7.8G 2.1G 5.4G 28% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 292K 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 2.6M 3.9G 1% /tmp
ceph-fuse 419T 8.5T 411T 3% /cephfs
[root@na_fedora19 ceph]#
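
By default ceph-fuse authenticates as client.admin. If you would rather mount with the restricted client.cephfs identity created earlier, a minimal sketch would look like the following, assuming you also copy the full keyring file to the client (the steps above only copied the extracted secret):

# scp /etc/ceph/client.cephfs.keyring na_fedora19:/etc/ceph    # run on the monitor/admin node
# ceph-fuse -m 192.168.100.101:6789 --id cephfs -k /etc/ceph/client.cephfs.keyring /cephfs    # run on the client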



v0.72.2 Emperor released

This is the second bugfix release for the v0.72.x Emperor series. We have fixed a hang in radosgw and fixed (again) a problem with monitor CLI compatibility with mixed-version monitors. (In the future this will no longer be a problem.)

Upgrading:

  • The JSON schema for the ‘osd pool set …’ command changed slightly. Please avoid issuing this particular command via the CLI while there is a mix of v0.72.1 and v0.72.2 monitor daemons running.

Changes:

  • mon: ‘osd pool set …’ syntax change
  • osd: added test for missing on-disk HEAD object
  • osd: fix osd bench block size argument
  • rgw: fix hang on large object GET
  • rgw: fix rare use-after-free
  • rgw: various DR bug fixes
  • rgw: do not return error on empty owner when setting ACL
  • sysvinit, upstart: prevent starting daemons using both init systems

For more detailed information, see the complete changelog.

You can get v0.72.2 from the usual locations:

v0.67.5 Dumpling released

This release includes a few critical bug fixes for the radosgw, including a fix for hanging operations on large objects. There are also several bug fixes for radosgw multi-site replication, and a few backported features. Also, notably, the ‘osd perf’ command (which dumps recent performance information about active OSDs) has been backported.

We recommend that all 0.67.x Dumpling users upgrade.

Notable changes:

  • ceph-fuse: fix crash in caching code
  • mds: fix looping in populate_mydir()
  • mds: fix standby-replay race
  • mon: accept ‘osd pool set …’ as string
  • mon: backport: ‘osd perf’ command to dump recent OSD performance stats
  • osd: add feature compat check for upcoming object sharding
  • rbd.py: increase parent name size limit
  • rgw: backport: allow wildcard in supported keystone roles
  • rgw: backport: improve swift COPY behavior
  • rgw: backport: log and open admin socket by default
  • rgw: backport: validate S3 tokens against keystone
  • rgw: fix bucket removal
  • rgw: fix client error code for chunked PUT failure
  • rgw: fix hang on large object GET
  • rgw: fix rare use-after-free
  • rgw: various DR bug fixes
  • sysvinit, upstart: prevent starting daemons using both init systems

Please see the complete changelog for more details.

You can get v0.67.5 from the usual locations:

Experimenting with the Ceph REST API

Like I mentioned in my previous post, Ceph has a REST API now. That opens a lot of possibilities.

The Ceph REST API is a WSGI application and it listens on port 5000 by default.

This means you can query it directly, but you probably want to put a webserver/proxy such as Apache or nginx in front of it.
For high availability, you could run ceph-rest-api on several servers and have redundant load balancers pointing to the API endpoints.

ceph-rest-api doesn’t handle authentication very well right now. You start it with a cephx authentication key and that’s it. You need to handle the permissions/authentication at the application level.

For the sake of simplicity and testing, I’m going to test in a sandbox without a proxy and run ceph-rest-api directly on a monitor with the client.admin cephx key.
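
(If you do later want something less privileged than client.admin, one possible approach is a dedicated cephx identity. A sketch, where client.restapi is just an illustrative name and the read-only caps are my own choice, not an official convention:)

root@mon01:~# ceph auth get-or-create client.restapi mon 'allow r' osd 'allow r' -o /etc/ceph/ceph.client.restapi.keyring   # keyring path chosen so it lands in a default search location
root@mon01:~# ceph-rest-api -n client.restapi   # with read-only caps, write commands issued via the API should be refused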

Starting ceph-rest-api

ceph-rest-api is part of the ceph-common package so I already have it on my monitor.

usage: ceph-rest-api [-h] [-c CONF] [--cluster CLUSTER] [-n NAME] [-i ID]

Ceph REST API webapp

optional arguments:
  -h, --help            show this help message and exit
  -c CONF, --conf CONF  Ceph configuration file
  --cluster CLUSTER     Ceph cluster name
  -n NAME, --name NAME  Ceph client name
  -i ID, --id ID        Ceph client id

With my configuration file /etc/ceph/ceph.conf and my cephx key at /etc/ceph/keyring:

root@mon01:~# ceph-rest-api -n client.admin
* Running on http://0.0.0.0:5000/

Using the API

Well, that was easy. Let’s poke it and see what happens:


root@mon02:~# curl mon01.ceph.example.org:5000

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: <a href="/api/v0.1">/api/v0.1</a>.  If not click the link.

Well, that works. Can we get the status of the cluster?


root@mon02:~# curl mon01.ceph.example.org:5000/api/v0.1/health

HEALTH_OK

Let’s do the same call with JSON and look at all the data we get!


root@mon02:~# curl -i -H "Accept: application/json" mon01.ceph.example.org:5000/api/v0.1/health

HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 1379
Server: Werkzeug/0.8.1 Python/2.7.3
Date: Fri, 27 Dec 2013 04:10:29 GMT
{
  "status": "OK",
  "output": {
    "detail": [
      
    ],
    "timechecks": {
      "round_status": "finished",
      "epoch": 8,
      "round": 3418,
      "mons": [
        {
          "latency": "0.000000",
          "skew": "0.000000",
          "health": "HEALTH_OK",
          "name": "03"
        },
        {
          "latency": "0.001830",
          "skew": "-0.001245",
          "health": "HEALTH_OK",
          "name": "01"
        },
        {
          "latency": "0.001454",
          "skew": "-0.001546",
          "health": "HEALTH_OK",
          "name": "02"
        }
      ]
    },
    "health": {
      "health_services": [
        {
          "mons": [
            {
              "last_updated": "2013-12-27 04:10:28.096444",
              "name": "03",
              "avail_percent": 87,
              "kb_total": 20641404,
              "kb_avail": 18132220,
              "health": "HEALTH_OK",
              "kb_used": 1460900,
              "store_stats": {
                "bytes_total": 14919567,
                "bytes_log": 983040,
                "last_updated": "0.000000",
                "bytes_misc": 65609,
                "bytes_sst": 13870918
              }
            },
            {
              "last_updated": "2013-12-27 04:10:25.155508",
              "name": "01",
              "avail_percent": 87,
              "kb_total": 20641404,
              "kb_avail": 18030408,
              "health": "HEALTH_OK",
              "kb_used": 1562712,
              "store_stats": {
                "bytes_total": 15968034,
                "bytes_log": 2031616,
                "last_updated": "0.000000",
                "bytes_misc": 65609,
                "bytes_sst": 13870809
              }
            },
            {
              "last_updated": "2013-12-27 04:10:24.362689",
              "name": "02",
              "avail_percent": 87,
              "kb_total": 20641404,
              "kb_avail": 18143028,
              "health": "HEALTH_OK",
              "kb_used": 1450092,
              "store_stats": {
                "bytes_total": 15968294,
                "bytes_log": 2031616,
                "last_updated": "0.000000",
                "bytes_misc": 65609,
                "bytes_sst": 13871069
              }
            }
          ]
        }
      ]
    },
    "overall_status": "HEALTH_OK",
    "summary": [
      
    ]
  }
}

Wrap-up

The ceph-rest-api is powerful.
You could use it to monitor your cluster with something like Nagios, or even create a full-blown interface to manage your cluster, like what Inktank provides with the Calamari GUI in their enterprise offering.
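
For example, a trivial Nagios-style check needs little more than curl against the health URL used throughout this post. A quick sketch (the URL and the warning/critical mapping are my own choices for this sandbox):

#!/bin/sh
# Minimal Nagios-style check built on the plain-text health endpoint shown above.
# The URL is an assumption for this sandbox; point it at your own monitor or proxy.
API_URL="http://mon01.ceph.example.org:5000/api/v0.1/health"
STATUS=$(curl -s --max-time 10 "$API_URL")
case "$STATUS" in
  HEALTH_OK*)   echo "OK - $STATUS";       exit 0 ;;
  HEALTH_WARN*) echo "WARNING - $STATUS";  exit 1 ;;
  *)            echo "CRITICAL - $STATUS"; exit 2 ;;
esac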

The API currently lacks proper documentation. You kind of have to guess the API calls with what the current documentation provides.

Edit: I take that back. The documentation is in fact built into the application. More on that in a future post.

Personally? I’m going to toy with the idea of making a wrapper library around the API calls and will surely improve the documentation, not only for myself but for the benefit of other Ceph users.
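
(As a tiny first step in that direction, even a shell helper makes the calls pleasant to type. A sketch, where the function name and base URL are simply my own picks:)

# hypothetical convenience wrapper around the API
cephapi() {
  # base URL points at the sandbox monitor used in this post; change it for your cluster
  local base="http://mon01.ceph.example.org:5000/api/v0.1"
  curl -s -H "Accept: application/json" "$base/$1"
}

cephapi health    # same call as the curl example above, JSON output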

© 2013, Inktank Storage, Inc. All rights reserved.