Category Archives: Dev notes

Updates to Ceph tgt (iSCSI) support

In a previous blog post I introduced work we’ve done on the user-space tgt iSCSI project to allow exporting RADOS block device (rbd) images as iSCSI targets. I’ve recently taken a short break from working on the Calamari project to update that support, removing some limitations and adding functionality. The tgt-admin utility now [...]
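For context, an rbd-backed tgt target is typically described in targets.conf, which tgt-admin reads at startup. This is a minimal sketch assuming a tgt build with the rbd backing store compiled in; the IQN, pool, and image names are placeholders:

```
<target iqn.2013-07.com.example:rbd.myimage>
    driver iscsi
    bs-type rbd
    backing-store rbdpool/myimage
</target>
```

With a stanza like this in place, `tgt-admin --execute` creates the target and exposes the rbd image as a LUN.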

On the Road to a Better Ceph-Deploy

Ceph-deploy is the easy deployment tool for Ceph, but for a while it caused more than one headache: almost no logging and no clear error messages when something went wrong. There has been a *lot* of effort trying to get those (and other) issues ironed out and making ceph-deploy way better. Like, ridiculously better. This [...]

New Ceph Backend to Lower Disk Requirements

I get a fair number of questions on the current Ceph blueprints, especially those coming from the community. Loic Dachary, one of the owners of the Erasure Encoding blueprint, has done a great job taking a look at some of the issues at hand. When evaluating Ceph to run a new storage service, the replication factor [...]

Incremental Snapshots with RBD

While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as a data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some [...]
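As a sketch of the incremental-backup workflow the post discusses, the rbd command line can export only the blocks that changed between two snapshots and replay them elsewhere. Pool, image, snapshot, and path names below are placeholders, and the commands assume a running cluster:

```shell
# Take a baseline snapshot and export it in full
rbd snap create rbdpool/myimage@base
rbd export rbdpool/myimage@base /backup/myimage-base.img

# Later, snapshot again and export only the changed extents
rbd snap create rbdpool/myimage@daily1
rbd export-diff --from-snap base rbdpool/myimage@daily1 /backup/base-to-daily1.diff

# Replay the diff onto a copy of the image at the backup site
rbd import-diff /backup/base-to-daily1.diff backuppool/myimage
```

Because the diff file contains only changed extents plus snapshot markers, it is usually far smaller than a full export, which is what makes periodic off-site replication practical.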

Adding Support for RBD to stgt

tgt, the Linux SCSI target framework (well, one of them), is an iSCSI target implementation whose goals include implementing a large portion of the SCSI emulation code in userland. tgt can provide iSCSI over Ethernet or iSER (iSCSI extensions for RDMA) over Infiniband. It can emulate various SCSI target types (really “command sets”): SBC (normal [...]
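To illustrate what the rbd backing store enables, creating an rbd-backed target at runtime looks roughly like the following. This assumes a tgt build with rbd support and a running tgtd; the target ID, IQN, pool, and image names are placeholders:

```shell
# Create a new iSCSI target
tgtadm --lld iscsi --mode target --op new --tid 1 \
       --targetname iqn.2013-06.com.example:rbd.myimage

# Attach an rbd image as LUN 1 using the rbd backing-store type
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
       --bstype rbd --backing-store rbdpool/myimage

# Allow initiators to connect
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL
```

Since tgt runs in userspace, the rbd backing store can link against librbd directly, so no kernel rbd module is needed on the target host.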

Ceph’s New Monitor Changes

Back in May 2012, after numerous hours confined to a couple of planes since departing Lisbon, I arrived in Los Angeles to meet most of the folks from Inktank. During my stay I had the chance to meet everybody on the team, attend the company’s launch party and start a major and well-deserved rework [...]

CephFS MDS Status Discussion

There have been a lot of questions lately about the current status of the Ceph MDS and when to expect a stable release. Inktank has been having some internal discussions around CephFS release development, and I’d like to share them with you and ask for feedback! A couple quick notes: first, this blog post is [...]

Deploying Ceph with Juju

NOTE: This guide is out of date. Please see the included documentation on the more recent charms in the charmstore. The last few weeks have been very exciting for Inktank and Ceph. There have been a number of community examples of how people are deploying or using Ceph in the wild. From the ComodIT orchestration [...]

What’s New in the Land of OSD?

It’s been a few months since the last named release, Argonaut, and we’ve been busy! Well, in retrospect, most of the time was spent on finding a cephalopod name that starts with “b”, but once we got that done, we still had a few weeks left to devote to technical improvements. In particular, the OSD [...]

Atomicity of RESTful radosgw operations

A while back we worked on making radosgw reads and writes atomic. The first issue was making sure that two or more concurrent writers that write to the same object don’t end up with an inconsistent object. That is the “atomic PUT” issue. We also wanted to be able to make sure that when one [...]
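The general shape of the atomic PUT problem can be illustrated with the classic write-aside-then-rename pattern. This is an analogy only: radosgw builds its atomicity out of RADOS operations, not local files:

```shell
# Write the complete new version aside, then rename it into place.
# A concurrent reader sees either the old object or the new one,
# never a partial write, because rename(2) is atomic on POSIX.
put() {
    obj=$1; data=$2
    tmp=$(mktemp "./.put.XXXXXX")
    printf '%s' "$data" > "$tmp"
    mv -f "$tmp" "$obj"
}

put ./obj "version-1"
put ./obj "version-2"
cat ./obj
```

The key property is the same one the post is after: the object is never observable in a half-written state, even with concurrent writers racing each other.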

© 2013, Inktank Storage, Inc. All rights reserved.