- Posted by sage
- May 29th, 2013
Another sprint, and v0.63 is here. This release features librbd improvements, mon fixes, osd robustness work, and packaging fixes.
Notable features in this release include:
- librbd: parallelize delete, rollback, flatten, copy, resize
- librbd: ability to read from local replicas
- osd: resurrect partially deleted PGs
- osd: prioritize recovery for degraded PGs
- osd: fix internal heartbeat timeouts when scrubbing very large objects
- osd: close narrow journal race
- rgw: fix usage log scanning for large, untrimmed logs
- rgw: fix locking issue and user operation mask
- initscript: fix osd crush weight calculation when using -a
- initscript: fix enumeration of local daemons
- mon: several fixes to paxos, sync
- mon: new ‘--extract-monmap’ option to aid disaster recovery
- mon: fix leveldb compression, trimming
- add ‘config get’ admin socket command (see the sketch after this list)
- rados: clonedata command for cli
- debian: stop daemons on uninstall; fix dependencies
- debian wheezy: fix udev rules
- many many small fixes from coverity scan
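If you want to poke at the new ‘config get’ admin socket command, a quick way is through the ceph CLI’s --admin-daemon mode. The sketch below is illustrative only: the socket path and option name are placeholders, not something taken from this release’s notes.

```python
import subprocess

# Ask a running daemon for the current value of one config option over
# its admin socket. The socket path and option name are placeholders;
# point them at whichever daemon and setting you care about.
sock = "/var/run/ceph/ceph-osd.0.asok"
out = subprocess.check_output(
    ["ceph", "--admin-daemon", sock, "config", "get", "osd_journal_size"])
print(out.decode())
```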
You can get v0.63 from the usual places:
Since last month saw huge amounts of OpenStack news coming out of the Developer Summit in Portland, I thought it might be worth spending some time on CloudStack and its ecosystem this month. With the Citrix Synergy event in full swing, a ‘State of the Union’ with respect to Ceph and Citrix is probably the easiest way to look at all the great things going on.
There are a number of products that Ceph plugs into, and many of them are built on top of open source projects. One of the great things about Ceph is that a single cluster can service all of your data storage needs, especially as it relates to the Citrix portfolio. As with Linux in the datacenter, it’s only a matter of time before open source becomes the dominant force in this last bastion of proprietary-driven infrastructure.
If you have deployed Ceph recently without the assistance of an orchestration tool like Chef or Juju, you may have noticed there has been a lot of attention on ceph-deploy. Ceph-deploy is the new stand-alone way to deploy Ceph (replacing mkcephfs) that relies only on ssh, sudo, and some Python to get the job done. If you are experimenting with Ceph, or find yourself deploying and tearing down Ceph clusters a lot and don’t want the added overhead of an orchestration framework, this is probably the tool for you.
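To give a feel for how lightweight that is, here is a minimal sketch of the usual bootstrap sequence, scripted from Python. The hostnames and disk are placeholders, and the walkthrough below is the authoritative reference for the exact steps.

```python
import subprocess

# Minimal ceph-deploy bootstrap: one monitor, one OSD. "mon1", "osd1",
# and /dev/sdb are placeholders for your own hosts and disk; each step
# shells out to ceph-deploy, which handles ssh and sudo itself.
steps = [
    ["new", "mon1"],                     # seed ceph.conf and the initial monmap
    ["install", "mon1", "osd1"],         # install ceph on the target hosts
    ["mon", "create", "mon1"],           # bring up the monitor
    ["gatherkeys", "mon1"],              # collect the bootstrap keys
    ["osd", "create", "osd1:/dev/sdb"],  # prepare and activate the OSD
]
for args in steps:
    subprocess.check_call(["ceph-deploy"] + args)
```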
Since this tool has undergone a lot of work lately, we wanted to publish a nice, simple walkthrough to help people get up and running. However, since we also love it when brilliance comes from our community instead of us hogging the microphone all the time, we thought it would be better to replicate a blog post from community contributor Loic Dachary. Read on for his ceph-deploy walkthrough and give it a shot!
- Posted by sage
- May 14th, 2013
This is the first release after cuttlefish. Since most of this window was spent on stabilization, there isn’t a lot of new stuff here aside from cleanups and fixes (most of which are backported to v0.61). v0.63 is due out in 2 weeks and will have more goodness.
- mon: fix validation of mds ids from CLI commands
- osd: fix for an op ordering bug
- osd, mon: optionally dump leveldb transactions to a log
- osd: fix handling for split after upgrade from bobtail
- debian, specfile: packaging cleanups
- radosgw-admin: create keys for new users by default
- librados python binding cleanups
- misc code cleanups
You can get v0.62 from the usual places:
While Ceph has a wide range of use cases, the most frequent application that we are seeing is that of block devices as a data store for public and private clouds managed by OpenStack, CloudStack, Eucalyptus, and OpenNebula. This means that we frequently get questions about things like geographic replication, backup, and disaster recovery (or some combination thereof, given the amount of overlap between these topics). While a full-featured, robust solution to geo-replication is still being hammered out, there are a number of different approaches already being tinkered with (like Sebastien Han’s setup with DRBD or the upcoming work using RGW).
However, since one of the primary focuses in managing a cloud is the manipulation of images, the solution to disaster recovery and general backup can often be quite simple. Incremental snapshots can fill this role, and several others, quite well. To that end, I wanted to share a few thoughts from RBD developer Josh Durgin for those of you who may have missed his great talk at the OpenStack Developer Summit a few weeks ago.
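To make that concrete, here is a rough sketch of a single incremental backup cycle using RBD’s export-diff/import-diff commands (the mechanism behind the incremental backups feature that arrived with Cuttlefish). The pool, image, and snapshot names are placeholders.

```python
import subprocess

# One incremental backup cycle for an RBD image. Pool, image, and
# snapshot names are placeholders. export-diff writes only the blocks
# that changed between the two snapshots, so the diff file stays small
# and can be shipped off-site and replayed with 'rbd import-diff'.
image = "rbd/vm-disk-1"
subprocess.check_call(["rbd", "snap", "create", image + "@snap2"])
subprocess.check_call(["rbd", "export-diff", "--from-snap", "snap1",
                       image + "@snap2", "vm-disk-1.diff"])
# On the backup cluster: rbd import-diff vm-disk-1.diff rbd/vm-disk-1
```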
- Posted by sage
- May 14th, 2013
This release has only two changes: it disables, by default, a debug log that consumes disk space on the monitor, and it fixes a bug when upgrading bobtail monitor stores with duplicated GV values. We urge all v0.61.1 users to upgrade to avoid filling the monitor data disks.
- mon: fix conversion of stores with duplicated GV values
- mon: disable ‘mon debug dump transactions’ by default
You can get v0.61.2 from the usual places:
This week marked the very first Ceph Developer Summit where the community gathered to discuss development efforts focused on the next stable release ‘Dumpling.’ There was quite a turnout for such a boutique event! We hit over 50 concurrent participants in the live video stream and had almost 400 unique visitors to the relatively new Ceph wiki during that window. Participants included folks from all over the world:
- United States
- United Kingdom
There was a ton of work proposed by the community, and almost all of it was accepted and discussed for inclusion in Dumpling. We were incredibly pleased with both the turnout and the general caliber of the participants. Having an awesome community makes it really easy to stay excited about what we do.
Below you will find each of the session videos split out with a brief description and links to the blueprint, etherpad, and IRC logs as they appeared during the session. The original summit page has also been updated with the appropriate links for posterity. We plan to leave these pages up in order to give people the ability to look back at Ceph development as far back as possible. If you have questions or feedback, please email the community team.
We will be doing a developer summit for each stable release (quarterly), so if you are interested in participating, feel free to post a blueprint on the wiki for consideration. The sessions for each developer summit are selected directly from submitted blueprints.
If you are interested in contributing to Ceph on a smaller scale, feel free to dive right in: clone our GitHub repository and submit a pull request for any changes you make.
Now, on to the summit!
- Posted by sage
- May 9th, 2013
This release is a small update to Cuttlefish that fixes a problem when upgrading a bobtail cluster that had snapshots. Please use this instead of v0.61 if you are upgrading, to avoid possible ceph-osd daemon crashes. There is also a fix for a problem with deploying monitors and generating new authentication keys.
- osd: handle upgrade when legacy snap collections are present; repair from previous failed restart
- ceph-create-keys: fix race with ceph-mon startup (which broke ‘ceph-deploy gatherkeys …’)
- ceph-create-keys: gracefully handle bad response from ceph-osd
- sysvinit: do not assume default osd_data when automatically weighting OSD
- osd: avoid crash from ill-behaved classes using getomapvals
- debian: fix squeeze dependency
- mon: debug options to log or dump leveldb transactions
You can get v0.61.1 from the usual places:
- Posted by sage
- May 7th, 2013
Spring has arrived (at least for some of us), and a new stable release of Ceph is ready. Thank you to everyone who has contributed to this release!
Bigger ticket items since v0.56.x “Bobtail”:
- ceph-deploy: our new deployment tool to replace ‘mkcephfs’
- robust RHEL/CentOS support
- ceph-disk: many improvements to support hot-plugging devices via chef and ceph-deploy
- ceph-disk: dm-crypt support for OSD disks
- ceph-disk: ‘list’ command to see available (and used) disks
- rbd: incremental backups
- rbd-fuse: access RBD images via fuse
- librbd: autodetection of VM flush support to allow safe enablement of the writeback cache
- osd: improved small write, snap trimming, and overall performance
- osd: PG splitting
- osd: per-pool quotas (object and byte)
- osd: tool for importing, exporting, removing PGs from OSD data store
- osd: improved clean-shutdown behavior
- osd: noscrub, nodeepscrub options
- osd: more robust scrubbing, repair, ENOSPC handling
- osd: improved memory usage, log trimming
- osd: improved journal corruption detection
- ceph: new ‘df’ command
- mon: new storage backend (leveldb)
- mon: config-keys service
- mon, crush: new commands to manage CRUSH entirely via CLI
- mon: avoid marking entire subtrees (e.g., racks) out automatically
- rgw: REST API for user management
- rgw: CORS support
- rgw: misc API fixes
- rgw: ability to listen to fastcgi on a port
- sysvinit, upstart: improved support for standardized data locations
- mds: backpointers on all data and metadata objects
- mds: faster fail-over
- mds: many many bug fixes
- ceph-fuse: many stability improvements
Notable changes since v0.60:
- Posted by sage
- May 3rd, 2013
Behold, another Bobtail update! This one serves three main purposes: it fixes a small issue with monitor features that is important when upgrading from argonaut -> bobtail -> cuttlefish; it backports many changes to the ceph-disk helper scripts so that bobtail clusters can be deployed with the new ceph-deploy tool or our chef cookbooks; and it fixes several important bugs in librbd. There is also, of course, the usual collection of important bug fixes in other parts of the system.
Notable changes include:
- mon: fix recording of quorum feature set (important for argonaut -> bobtail -> cuttlefish mon upgrades)
- osd: minor peering bug fixes
- osd: fix a few bugs when pools are renamed
- osd: fix occasionally corrupted pg stats
- osd: fix behavior when broken v0.56[.0] clients connect
- rbd: avoid FIEMAP ioctl on import (it is broken on some kernels)
- librbd: fixes for several request/reply ordering bugs
- librbd: only set STRIPINGV2 feature on new images when needed
- librbd: new async flush method to resolve QEMU hangs (requires a QEMU update as well)
- librbd: a few fixes to flatten
- ceph-disk: support for dm-crypt
- ceph-disk: many backports to allow bobtail deployments with ceph-deploy, chef
- sysvinit: do not stop starting daemons on first failure
- udev: fixed rules for redhat-based distros
- build fixes for Raring (Ubuntu 13.04)
For more detailed information, see the complete changelog.
You can get v0.56.5 from the usual places: