Planet Ceph

Aggregated news from external sources

  • March 11, 2015
    New release of python-cephclient: 0.1.0.5

    I’ve just drafted a new release of python-cephclient
    on PyPi: v0.1.0.5.

    After learning about the ceph-rest-api I just had
    to do something fun with it.

    In fact, it’s going to become very handy for me as I might start to develop
    with it for things like nagios monitoring scripts.

    The changelog:

    dmsimard:

    • Add missing dependency on the requests library
    • Some PEP8 and code standardization cleanup
    • Add root “PUT” methods
    • Add mon “PUT” methods
    • Add mds “PUT” methods
    • Add auth “PUT” methods

    Donald Talton:

    • Add osd “PUT” methods

    Please try it out and let me know if you have any feedback !

    Pull requests are welcome 🙂
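    Since the changelog is mostly about new GET/PUT wrappers, here is a rough sketch of what wrapping the ceph-rest-api endpoints can look like. The class and method names below are illustrative only, not necessarily python-cephclient's actual API:

```python
class CephRestClient:
    """Hypothetical thin wrapper over a running ceph-rest-api instance."""

    def __init__(self, endpoint="http://localhost:5000/api/v0.1"):
        self.endpoint = endpoint.rstrip("/")

    def _url(self, path):
        # e.g. "osd/down" -> "http://localhost:5000/api/v0.1/osd/down"
        return "{}/{}".format(self.endpoint, path.lstrip("/"))

    def get(self, path, **params):
        import requests  # the dependency the changelog mentions
        return requests.get(self._url(path), params=params,
                            headers={"Accept": "application/json"})

    def put(self, path, **params):
        import requests
        return requests.put(self._url(path), params=params)

    def osd_down(self, ids):
        # mirrors the kind of osd "PUT" method added in this release
        return self.put("osd/down", ids=ids)
```

    Each command family (root, mon, mds, auth, osd) then reduces to a handful of such one-line aliases over `get()` and `put()`.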

  • January 18, 2014
    python-cephclient now on PyPi

    Back on January 1st, I wrote about my initiative regarding a client
    for the Ceph REST API made in python.

    I’m glad to announce that the client is now available on PyPi and I have
    just drafted the second release: v0.1.0.2.

    Check it out and let me know what you think !

  • January 1, 2014
    A python client for ceph-rest-api

    After learning there was an API for Ceph, it was clear to me that I
    was going to write a client to wrap around it and use it for various purposes.
    It is still a work in progress and I feel it is not complete and clean
    enough to publish on pypi yet…

  • January 1, 2014
    Documentation for ceph-rest-api

    I learned that there was a Ceph REST API and I experimented with it
    a bit.

    I said the documentation was lacking, and I take that back: I simply
    hadn’t caught on that the API documentation is built into the
    application. I opened a pull request to make the documentation a bit
    more explicit about that: https://github.com/ceph/ceph/pull/1026

    Here’s what the API documentation currently looks like:

    Possible commands | Method | Description
    auth/add?entity=entity(<string>)&caps={caps(<string>) [<string>…]} | PUT | add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
    auth/caps?entity=entity(<string>)&caps=caps(<string>) [<string>…] | PUT | update caps for <name> from caps specified in the command
    auth/del?entity=entity(<string>) | PUT | delete all caps for <name>
    auth/export?entity={entity(<string>)} | GET | write keyring for requested entity, or master keyring if none given
    auth/get?entity=entity(<string>) | GET | write keyring file with requested key
    auth/get-key?entity=entity(<string>) | GET | display requested key
    auth/get-or-create?entity=entity(<string>)&caps={caps(<string>) [<string>…]} | PUT | add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
    auth/get-or-create-key?entity=entity(<string>)&caps={caps(<string>) [<string>…]} | PUT | get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
    auth/import | PUT | auth import: read keyring file from -i <file>
    auth/list | GET | list authentication state
    auth/print-key?entity=entity(<string>) | GET | display requested key
    auth/print_key?entity=entity(<string>) | GET | display requested key
    tell/<osdid-or-pgid>/bench?count={count(<int>)}&size={size(<int>)} | PUT | OSD benchmark: write <count> <size>-byte objects (default 1G size 4MB). Results in log.
    compact | PUT | cause compaction of monitor’s leveldb storage
    config-key/del?key=key(<string>) | PUT | delete <key>
    config-key/exists?key=key(<string>) | GET | check for <key>’s existence
    config-key/get?key=key(<string>) | GET | get <key>
    config-key/list | GET | list keys
    config-key/put?key=key(<string>)&val={val(<string>)} | PUT | put <key>, value <val>
    tell/<osdid-or-pgid>/cpu_profiler?arg=arg(status|flush) | PUT | run cpu profiling on daemon
    tell/<osdid-or-pgid>/debug/kick_recovery_wq?delay=delay(<int[0-]>) | PUT | set osd_recovery_delay_start to <val>
    tell/<osdid-or-pgid>/debug_dump_missing?filename=filename(<outfilename>) | GET | dump missing objects to a named file
    df?detail={detail} | GET | show cluster free space stats
    tell/<osdid-or-pgid>/dump_pg_recovery_stats | GET | dump pg recovery statistics
    tell/<osdid-or-pgid>/flush_pg_stats | PUT | flush pg stats
    fsid | GET | show cluster FSID/UUID
    health?detail={detail} | GET | show cluster health
    tell/<osdid-or-pgid>/heap?heapcmd=heapcmd(dump|start_profiler|stop_profiler|release|stats) | PUT | show heap usage info (available only if compiled with tcmalloc)
    heap?heapcmd=heapcmd(dump|start_profiler|stop_profiler|release|stats) | PUT | show heap usage info (available only if compiled with tcmalloc)
    tell/<osdid-or-pgid>/injectargs?injected_args=injected_args(<string>) [<string>…] | PUT | inject configuration arguments into running OSD
    injectargs?injected_args=injected_args(<string>) [<string>…] | PUT | inject config arguments into monitor
    tell/<osdid-or-pgid>/list_missing?offset={offset(<string>)} | GET | list missing objects on this pg, perhaps starting at an offset given in JSON
    tell/<osdid-or-pgid>/list_missing?offset={offset(<string>)} | PUT | list missing objects on this pg, perhaps starting at an offset given in JSON
    log?logtext=logtext(<string>) [<string>…] | PUT | log supplied text to the monitor log
    tell/<osdid-or-pgid>/mark_unfound_lost?mulcmd=revert | PUT | mark all unfound objects in this pg as lost, either removing or reverting to a prior version if one is available
    tell/<osdid-or-pgid>/mark_unfound_lost/revert?mulcmd=revert | PUT | mark all unfound objects in this pg as lost, either removing or reverting to a prior version if one is available
    mds/add_data_pool?pool=pool(<string>) | PUT | add data pool <pool>
    mds/cluster_down | PUT | take MDS cluster down
    mds/cluster_up | PUT | bring MDS cluster up
    mds/compat/rm_compat?feature=feature(<int[0-]>) | PUT | remove compatible feature
    mds/compat/rm_incompat?feature=feature(<int[0-]>) | PUT | remove incompatible feature
    mds/compat/show | GET | show mds compatibility settings
    mds/deactivate?who=who(<string>) | PUT | stop mds
    mds/dump?epoch={epoch(<int[0-]>)} | GET | dump info, optionally from epoch
    mds/fail?who=who(<string>) | PUT | force mds to status failed
    mds/getmap?epoch={epoch(<int[0-]>)} | GET | get MDS map, optionally from epoch
    mds/newfs?metadata=metadata(<int[0-]>)&data=data(<int[0-]>)&sure={--yes-i-really-mean-it} | PUT | make new filesystem using pools <metadata> and <data>
    mds/remove_data_pool?pool=pool(<string>) | PUT | remove data pool <pool>
    mds/rm?gid=gid(<int[0-]>)&who=who(<name (type.id)>) | PUT | remove nonactive mds
    mds/rmfailed?who=who(<int[0-]>) | PUT | remove failed mds
    mds/set?key=allow_new_snaps&sure={sure(<string>)} | Unknown | set <key>
    mds/set_max_mds?maxmds=maxmds(<int[0-]>) | PUT | set max MDS index
    mds/set_state?gid=gid(<int[0-]>)&state=state(<int[0-20]>) | PUT | set mds state of <gid> to <numeric-state>
    mds/setmap?epoch=epoch(<int[0-]>) | PUT | set mds map; must supply correct epoch number
    mds/stat | GET | show MDS status
    mds/stop?who=who(<string>) | PUT | stop mds
    mds/tell?who=who(<string>)&args=args(<string>) [<string>…] | PUT | send command to particular mds
    mds/unset?key=allow_new_snaps&sure={sure(<string>)} | Unknown | unset <key>
    mon/add?name=name(<string>)&addr=addr(<IPaddr[:port]>) | PUT | add new monitor named <name> at <addr>
    mon/dump?epoch={epoch(<int[0-]>)} | GET | dump formatted monmap (optionally from epoch)
    mon/getmap?epoch={epoch(<int[0-]>)} | GET | get monmap
    mon/remove?name=name(<string>) | PUT | remove monitor named <name>
    mon/stat | GET | summarize monitor status
    mon_status | GET | report status of monitors
    osd/blacklist?blacklistop=blacklistop(add|rm)&addr=addr(<EntityAddr>)&expire={expire(<float[0.0-]>)} | PUT | add (optionally until <expire> seconds from now) or remove <addr> from blacklist
    osd/blacklist/ls | GET | show blacklisted clients
    osd/create?uuid={uuid(<uuid>)} | PUT | create new osd (with optional UUID)
    osd/crush/add?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-]>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>…] | PUT | add or update crushmap position and weight for <name> with <weight> and location <args>
    osd/crush/add-bucket?name=name(<string(goodchars [A-Za-z0-9-_.])>)&type=type(<string>) | PUT | add no-parent (probably root) crush bucket <name> of type <type>
    osd/crush/create-or-move?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-]>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>…] | PUT | create entry or move existing entry for <name> <weight> at/to location <args>
    osd/crush/dump | GET | dump crush map
    osd/crush/link?name=name(<string>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>…] | PUT | link existing entry for <name> under location <args>
    osd/crush/move?name=name(<string(goodchars [A-Za-z0-9-_.])>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>…] | PUT | move existing entry for <name> to location <args>
    osd/crush/remove?name=name(<string(goodchars [A-Za-z0-9-_.])>)&ancestor={ancestor(<string(goodchars [A-Za-z0-9-_.])>)} | PUT | remove <name> from crush map (everywhere, or just at <ancestor>)
    osd/crush/reweight?name=name(<string(goodchars [A-Za-z0-9-_.])>)&weight=weight(<float[0.0-]>) | PUT | change <name>’s weight to <weight> in crush map
    osd/crush/rm?name=name(<string(goodchars [A-Za-z0-9-_.])>)&ancestor={ancestor(<string(goodchars [A-Za-z0-9-_.])>)} | PUT | remove <name> from crush map (everywhere, or just at <ancestor>)
    osd/crush/rule/create-simple?name=name(<string(goodchars [A-Za-z0-9-_.])>)&root=root(<string(goodchars [A-Za-z0-9-_.])>)&type=type(<string(goodchars [A-Za-z0-9-_.])>) | PUT | create crush rule <name> in <root> of type <type>
    osd/crush/rule/dump | GET | dump crush rules
    osd/crush/rule/list | GET | list crush rules
    osd/crush/rule/ls | GET | list crush rules
    osd/crush/rule/rm?name=name(<string(goodchars [A-Za-z0-9-_.])>) | PUT | remove crush rule <name>
    osd/crush/set | PUT | set crush map from input file
    osd/crush/set?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-]>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>…] | PUT | update crushmap position and weight for <name> to <weight> with location <args>
    osd/crush/tunables?profile=profile(legacy|argonaut|bobtail|optimal|default) | PUT | set crush tunables values to <profile>
    osd/crush/unlink?name=name(<string(goodchars [A-Za-z0-9-_.])>)&ancestor={ancestor(<string(goodchars [A-Za-z0-9-_.])>)} | PUT | unlink <name> from crush map (everywhere, or just at <ancestor>)
    osd/deep-scrub?who=who(<string>) | PUT | initiate deep scrub on osd <who>
    osd/down?ids=ids(<string>) [<string>…] | PUT | set osd(s) <id> [<id>…] down
    osd/dump?epoch={epoch(<int[0-]>)} | GET | print summary of OSD map
    osd/find?id=id(<int[0-]>) | GET | find osd <id> in the CRUSH map and show its location
    osd/getcrushmap?epoch={epoch(<int[0-]>)} | GET | get CRUSH map
    osd/getmap?epoch={epoch(<int[0-]>)} | GET | get OSD map
    osd/getmaxosd | GET | show largest OSD id
    osd/in?ids=ids(<string>) [<string>…] | PUT | set osd(s) <id> [<id>…] in
    osd/lost?id=id(<int[0-]>)&sure={--yes-i-really-mean-it} | PUT | mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
    osd/ls?epoch={epoch(<int[0-]>)} | GET | show all OSD ids
    osd/lspools?auid={auid(<int>)} | GET | list pools
    osd/map?pool=pool(<poolname>)&object=object(<objectname>) | GET | find pg for <object> in <pool>
    osd/out?ids=ids(<string>) [<string>…] | PUT | set osd(s) <id> [<id>…] out
    osd/pause | PUT | pause osd
    osd/perf | GET | print dump of OSD perf summary stats
    osd/pool/create?pool=pool(<poolname>)&pg_num=pg_num(<int[0-]>)&pgp_num={pgp_num(<int[0-]>)}&properties={properties(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>…]} | PUT | create pool
    osd/pool/delete?pool=pool(<poolname>)&pool2={pool2(<poolname>)}&sure={--yes-i-really-really-mean-it} | PUT | delete pool
    osd/pool/get?pool=pool(<poolname>)&var=var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset) | GET | get pool parameter <var>
    osd/pool/mksnap?pool=pool(<poolname>)&snap=snap(<string>) | PUT | make snapshot <snap> in <pool>
    osd/pool/rename?srcpool=srcpool(<poolname>)&destpool=destpool(<poolname>) | PUT | rename <srcpool> to <destpool>
    osd/pool/rmsnap?pool=pool(<poolname>)&snap=snap(<string>) | PUT | remove snapshot <snap> from <pool>
    osd/pool/set?pool=pool(<poolname>)&var=var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool)&val=val(<int>) | PUT | set pool parameter <var> to <val>
    osd/pool/set-quota?pool=pool(<poolname>)&field=field(max_objects|max_bytes)&val=val(<string>) | PUT | set object or byte limit on pool
    osd/pool/stats?name={name(<string>)} | GET | obtain stats from all pools, or from specified pool
    osd/repair?who=who(<string>) | PUT | initiate repair on osd <who>
    osd/reweight?id=id(<int[0-]>)&weight=weight(<float[0.0-1.0]>) | PUT | reweight osd to 0.0 < <weight> < 1.0
    osd/reweight-by-utilization?oload={oload(<int[100-]>)} | PUT | reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
    osd/rm?ids=ids(<string>) [<string>…] | PUT | remove osd(s) <id> [<id>…] in
    osd/scrub?who=who(<string>) | PUT | initiate scrub on osd <who>
    osd/set?key=key(pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub) | PUT | set <key>
    osd/setcrushmap | PUT | set crush map from input file
    osd/setmaxosd?newmax=newmax(<int[0-]>) | PUT | set new maximum osd value
    osd/stat | GET | print summary of OSD map
    osd/thrash?num_epochs=num_epochs(<int[0-]>) | PUT | thrash OSDs for <num_epochs>
    osd/tier/add?pool=pool(<poolname>)&tierpool=tierpool(<poolname>) | PUT | add the tier <tierpool> to base pool <pool>
    osd/tier/cache-mode?pool=pool(<poolname>)&mode=mode(none|writeback|invalidate+forward|readonly) | PUT | specify the caching mode for cache tier <pool>
    osd/tier/remove?pool=pool(<poolname>)&tierpool=tierpool(<poolname>) | PUT | remove the tier <tierpool> from base pool <pool>
    osd/tier/remove-overlay?pool=pool(<poolname>) | PUT | remove the overlay pool for base pool <pool>
    osd/tier/set-overlay?pool=pool(<poolname>)&overlaypool=overlaypool(<poolname>) | PUT | set the overlay pool for base pool <pool> to be <overlaypool>
    osd/tree?epoch={epoch(<int[0-]>)} | GET | print OSD tree
    osd/unpause | PUT | unpause osd
    osd/unset?key=key(pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|nodeep-scrub) | PUT | unset <key>
    pg/debug?debugop=debugop(unfound_objects_exist|degraded_pgs_exist) | GET | show debug info about pgs
    pg/deep-scrub?pgid=pgid(<pgid>) | PUT | start deep-scrub on <pgid>
    pg/dump?dumpcontents={dumpcontents(all|summary|sum|delta|pools|osds|pgs|pgs_brief) [all|summary|sum|delta|pools|osds|pgs|pgs_brief…]} | GET | show human-readable versions of pg map (only ‘all’ valid with plain)
    pg/dump_json?dumpcontents={dumpcontents(all|summary|sum|pools|osds|pgs) [all|summary|sum|pools|osds|pgs…]} | GET | show human-readable version of pg map in json only
    pg/dump_pools_json | GET | show pg pools info in json only
    pg/dump_stuck?stuckops={stuckops(inactive|unclean|stale) [inactive|unclean|stale…]}&threshold={threshold(<int>)} | GET | show information about stuck pgs
    pg/force_create_pg?pgid=pgid(<pgid>) | PUT | force creation of pg <pgid>
    pg/getmap | GET | get binary pg map to -o/stdout
    pg/map?pgid=pgid(<pgid>) | GET | show mapping of pg to osds
    pg/repair?pgid=pgid(<pgid>) | PUT | start repair on <pgid>
    pg/scrub?pgid=pgid(<pgid>) | PUT | start scrub on <pgid>
    pg/send_pg_creates | PUT | trigger pg creates to be issued
    pg/set_full_ratio?ratio=ratio(<float[0.0-1.0]>) | PUT | set ratio at which pgs are considered full
    pg/set_nearfull_ratio?ratio=ratio(<float[0.0-1.0]>) | PUT | set ratio at which pgs are considered nearly full
    pg/stat | GET | show placement group status
    tell/<osdid-or-pgid>/query | GET | show details of a specific pg
    quorum?quorumcmd=quorumcmd(enter|exit) | PUT | enter or exit quorum
    quorum_status | GET | report status of monitor quorum
    report?tags={tags(<string>) [<string>…]} | GET | report full status of cluster, optional title tag strings
    tell/<osdid-or-pgid>/reset_pg_recovery_stats | PUT | reset pg recovery statistics
    scrub | PUT | scrub the monitor stores
    status | GET | show cluster status
    sync/force?validate1={--yes-i-really-mean-it}&validate2={--i-know-what-i-am-doing} | PUT | force sync of and clear monitor store
    tell?target=target(<name (type.id)>)&args=args(<string>) [<string>…] | PUT | send a command to a specific daemon
    tell/<osdid-or-pgid>/version | GET | report version of OSD
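    The signatures above follow a consistent convention: parameters travel in the query string, required ones shown bare and optional ones in {braces}. A small sketch of rendering a documented command into a concrete request URL (the endpoint hostname is an assumption, substitute your own monitor):

```python
from urllib.parse import urlencode

# Assumed endpoint; point this at a host actually running ceph-rest-api.
BASE = "http://mon01.ceph.example.org:5000/api/v0.1"

def command_url(path, **params):
    """Render a documented command signature into a request URL."""
    qs = urlencode(params)
    return "%s/%s%s" % (BASE, path, "?" + qs if qs else "")

# osd/reweight?id=id(<int[0-]>)&weight=weight(<float[0.0-1.0]>)
print(command_url("osd/reweight", id=3, weight=0.8))
# mon/stat takes no parameters at all
print(command_url("mon/stat"))
```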

    Enjoy !

  • December 21, 2013
    Experimenting with the Ceph REST API

    Let’s take a closer look at the Ceph REST API, how it works and what it can do.

    Like I mentioned in my previous post, Ceph has a REST API now.
    That opens a lot of possibilities.

    The Ceph REST API is a WSGI application and it listens on port 5000 by default.

    This means you can query it directly, but you probably want to put a
    webserver/proxy such as Apache or nginx in front of it.
    For high availability, you could run ceph-rest-api on several servers
    and have redundant load balancers pointing to the API endpoints.
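    For example, a load-balancing reverse proxy in front of several ceph-rest-api instances could look roughly like this (hostnames are made up, and you would still want TLS and access control on top):

```nginx
# Illustrative nginx reverse proxy for ceph-rest-api
upstream ceph_rest_api {
    server mon01.ceph.example.org:5000;
    server mon02.ceph.example.org:5000;
    server mon03.ceph.example.org:5000;
}

server {
    listen 80;
    server_name ceph-api.example.org;

    location / {
        proxy_pass http://ceph_rest_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```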

    ceph-rest-api doesn’t handle authentication very well right now. You
    start it with a cephx authentication key and that’s it. You need to
    handle the permissions/authentication at the application level.

    For the sake of simplicity and testing, I’m going to test in a sandbox
    without a proxy and run ceph-rest-api directly on a monitor with the
    client.admin cephx key.

    Starting ceph-rest-api

    ceph-rest-api is part of the ceph-common package so I already have it on
    my monitor.

    usage: ceph-rest-api [-h] [-c CONF] [--cluster CLUSTER] [-n NAME] [-i ID]

    Ceph REST API webapp

    optional arguments:
      -h, --help            show this help message and exit
      -c CONF, --conf CONF  Ceph configuration file
      --cluster CLUSTER     Ceph cluster name
      -n NAME, --name NAME  Ceph client name
      -i ID, --id ID        Ceph client id

    With my configuration file /etc/ceph/ceph.conf and my cephx key at /etc/ceph/keyring:

    root@mon01:~# ceph-rest-api -n client.admin
    * Running on http://0.0.0.0:5000/

    Using the API

    Well, that was easy. Let’s poke it and see what happens:

    root@mon02:~# curl mon01.ceph.example.org:5000
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    <title>Redirecting...</title>
    <h1>Redirecting...</h1>
    <p>You should be redirected automatically to target URL: <a href="/api/v0.1">/api/v0.1</a>.  If not click the link.</p>
    

    Well, that works, can we get the status of the cluster ?

    root@mon02:~# curl mon01.ceph.example.org:5000/api/v0.1/health
    HEALTH_OK

    Let’s do the same call with JSON and look at all the data we get !

    root@mon02:~# curl -i -H "Accept: application/json" mon01.ceph.example.org:5000/api/v0.1/health
    HTTP/1.0 200 OK
    Content-Type: application/json
    Content-Length: 1379
    Server: Werkzeug/0.8.1 Python/2.7.3
    Date: Fri, 27 Dec 2013 04:10:29 GMT
    {
      "status": "OK",
      "output": {
        "detail": [
    
        ],
        "timechecks": {
          "round_status": "finished",
          "epoch": 8,
          "round": 3418,
          "mons": [
            {
              "latency": "0.000000",
              "skew": "0.000000",
              "health": "HEALTH_OK",
              "name": "03"
            },
            {
              "latency": "0.001830",
              "skew": "-0.001245",
              "health": "HEALTH_OK",
              "name": "01"
            },
            {
              "latency": "0.001454",
              "skew": "-0.001546",
              "health": "HEALTH_OK",
              "name": "02"
            }
          ]
        },
        "health": {
          "health_services": [
            {
              "mons": [
                {
                  "last_updated": "2013-12-27 04:10:28.096444",
                  "name": "03",
                  "avail_percent": 87,
                  "kb_total": 20641404,
                  "kb_avail": 18132220,
                  "health": "HEALTH_OK",
                  "kb_used": 1460900,
                  "store_stats": {
                    "bytes_total": 14919567,
                    "bytes_log": 983040,
                    "last_updated": "0.000000",
                    "bytes_misc": 65609,
                    "bytes_sst": 13870918
                  }
                },
                {
                  "last_updated": "2013-12-27 04:10:25.155508",
                  "name": "01",
                  "avail_percent": 87,
                  "kb_total": 20641404,
                  "kb_avail": 18030408,
                  "health": "HEALTH_OK",
                  "kb_used": 1562712,
                  "store_stats": {
                    "bytes_total": 15968034,
                    "bytes_log": 2031616,
                    "last_updated": "0.000000",
                    "bytes_misc": 65609,
                    "bytes_sst": 13870809
                  }
                },
                {
                  "last_updated": "2013-12-27 04:10:24.362689",
                  "name": "02",
                  "avail_percent": 87,
                  "kb_total": 20641404,
                  "kb_avail": 18143028,
                  "health": "HEALTH_OK",
                  "kb_used": 1450092,
                  "store_stats": {
                    "bytes_total": 15968294,
                    "bytes_log": 2031616,
                    "last_updated": "0.000000",
                    "bytes_misc": 65609,
                    "bytes_sst": 13871069
                  }
                }
              ]
            }
          ]
        },
        "overall_status": "HEALTH_OK",
        "summary": [
    
        ]
      }
    }
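    Once you have the JSON payload, picking out the interesting fields is easy. A quick sketch against a trimmed-down copy of the response above:

```python
import json

def overall_status(payload):
    """Cluster-wide status string from a /health JSON response."""
    return payload["output"]["overall_status"]

def mon_disk_usage(payload):
    """Map each monitor name to its avail_percent from the health report."""
    return {m["name"]: m["avail_percent"]
            for svc in payload["output"]["health"]["health_services"]
            for m in svc["mons"]}

# Trimmed-down version of the response shown above
sample = json.loads("""{
  "status": "OK",
  "output": {
    "overall_status": "HEALTH_OK",
    "health": {"health_services": [{"mons": [
      {"name": "01", "avail_percent": 87},
      {"name": "02", "avail_percent": 87},
      {"name": "03", "avail_percent": 87}
    ]}]}
  }
}""")

print(overall_status(sample))   # HEALTH_OK
print(mon_disk_usage(sample))   # {'01': 87, '02': 87, '03': 87}
```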
    

    Wrap-up

    The ceph-rest-api is powerful.
    You could use it to monitor your cluster with something like nagios or
    even create a full blown interface to manage your cluster like what
    Inktank provides with the Calamari GUI in their enterprise offering.

    Personally ? I’m going to toy with the idea of making a wrapper library
    around the API calls and surely improve the documentation, not only for
    myself but for the benefit of other ceph users.
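    As a taste of the nagios idea, a minimal check might just map the plain-text /health output to nagios exit codes. This is a sketch, not a finished plugin (the endpoint hostname is from my sandbox; a real check would add timeouts and perfdata):

```python
from urllib.request import urlopen

# Nagios plugin convention: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
EXIT_CODES = {"HEALTH_OK": 0, "HEALTH_WARN": 1, "HEALTH_ERR": 2}

def nagios_exit_code(health):
    """Translate a Ceph health string into a nagios exit code."""
    return EXIT_CODES.get(health, 3)

def check_ceph(endpoint="http://mon01.ceph.example.org:5000/api/v0.1"):
    """Query /health and return the exit code a nagios wrapper would use."""
    health = urlopen(endpoint + "/health").read().decode().strip()
    print("CEPH %s" % health)
    return nagios_exit_code(health)
```

    A wrapper script would simply call `sys.exit(check_ceph())`.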

