The Ceph Blog

Featured Post

OSCON has arrived (although if you came in for the Ceph tutorial session that’s old news to you)! As a part of our participation in OSCON, and as a way to celebrate the fact that Ceph turned 10 years old this year, we have decided to have our party be a distributed one.

We would love to have our users send us pictures of whatever they might be doing to celebrate the 10th anniversary of Ceph. Are you busy racking up 3 petabytes of storage to add to your Ceph cluster? Did you create a culinary masterpiece in the form of a squid cake? Are you sitting alone in the middle of the OSCON show floor with a party hat and a cupcake? We want to see! As thanks for sharing your birthday celebration efforts with the community, we’ll be picking one lucky winner to receive a desktop Ceph test cluster built by our very own Mark Nelson (Ceph performance guru extraordinaire!).

While the cluster won’t break any speed records, and only a madman would use it for anything even remotely production-ready, it will give you a Ceph cluster to play with and can sit on your desk to evoke feelings of envy in your coworkers. For more details check out the (new) contest page on the Ceph wiki. If you have any questions, please contact me or just tweet @Ceph. Thanks, and happy birthday to Ceph!

scuttlemonkey out
Earlier Posts

Ceph is coming back to OSCON next week (July 20-24 in Portland, OR). The difference, however, is that this year we need two digits to tell people how old we are. Stop by for some mild festivities at the Ceph booth (P2) as we share cupcakes and t-shirts that salute the hard work of all our committers since day one.

Photo credit: picphotos.net

Originally we had much bigger plans for a guerrilla show floor birthday spectacle, but summoning Cthulhu just seemed like far too much work and could have been potentially disruptive to our fellow attendees. So instead we’re just going to enjoy hanging out with our community and sharing memories and calories.


v0.80.4 Firefly released

This Firefly point release fixes a potential data corruption problem that occurs when ceph-osd daemons run on top of XFS and service Firefly librbd clients. A recently added allocation hint that RBD utilizes triggers an XFS bug on some kernels (Linux 3.2, and likely others) that leads to data corruption and deep-scrub errors (and inconsistent PGs). This release avoids the situation by disabling the allocation hint until we can validate which kernels are affected and/or which are known to be safe to use the hint on.
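
For clusters that cannot be upgraded right away, the same effect can be achieved by turning the hint off explicitly. This is a minimal sketch only; the option name is assumed from the changelog entry below, so verify it against the documentation for your exact version:

    # Assumed workaround: disable the XFS allocation (extsize) hint on all OSDs.
    # Persist the setting in the [osd] section of ceph.conf...
    printf '\n[osd]\nfilestore xfs extsize = false\n' | sudo tee -a /etc/ceph/ceph.conf
    # ...and push it into the running OSDs without restarting them.
    ceph tell 'osd.*' injectargs '--filestore-xfs-extsize=false'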

We recommend that all v0.80.x Firefly users urgently upgrade, especially if they are using RBD.
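
A hedged sketch of what that upgrade looks like on Ubuntu hosts using the Upstart jobs shipped with Firefly (adjust the package and service commands for your distribution, and restart monitors before OSDs):

    # Upgrade the packages on each node in turn.
    sudo apt-get update && sudo apt-get install -y ceph
    # Restart daemons: monitors first, then OSDs one host at a time.
    sudo restart ceph-mon-all     # on monitor hosts
    sudo restart ceph-osd-all     # on OSD hosts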

NOTABLE CHANGES

  • osd: disable XFS extsize hint by default (#8830, Samuel Just)
  • rgw: fix extra data pool default name (Yehuda Sadeh)

For more detailed information, see the complete changelog.

GETTING CEPH

 

v0.80.3 Firefly released

V0.80.3 FIREFLY

This is the third Firefly point release. It includes a single fix for a radosgw regression that was discovered in v0.80.2 right after it was released.

We recommend that all v0.80.x Firefly users upgrade.

NOTABLE CHANGES

  • radosgw: fix regression in manifest decoding (#8804, Sage Weil)

For more detailed information, see the complete changelog.

V0.80.2 FIREFLY

This is the second Firefly point release. It contains a range of important fixes, including several bugs in OSD cache tiering, some compatibility checks that affect upgrade situations, several radosgw bugs, and an irritating and unnecessary feature bit check that prevents older clients from communicating with a cluster that has any erasure-coded pools.

One notable change in this point release is that the ceph RPM package has been split into ceph and ceph-common packages, similar to the Debian packaging. The ceph-common package contains just the client libraries, without any of the server-side daemons.
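
In practice the split mostly matters for client-only hosts. A minimal sketch, assuming a yum-based system with the Ceph repositories already configured (package names as described above):

    # Client-only hosts (librados/librbd consumers, CLI tools) need just ceph-common.
    sudo yum install ceph-common
    # OSD/MON hosts install the full ceph package, which typically pulls in ceph-common.
    sudo yum install ceph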

We recommend that all v0.80.x Firefly users skip this release and use v0.80.3.

NOTABLE CHANGES

  • ceph-disk: better debug logging (Alfredo Deza)
  • ceph-disk: fix preparation of OSDs with dmcrypt (#6700, Stephen F Taylor)
  • ceph-disk: partprobe on prepare to fix dm-crypt (#6966, Eric Eastman)
  • do not require ERASURE_CODE feature from clients (#8556, Sage Weil)
  • libcephfs-java: build with older JNI headers (Greg Farnum)
  • libcephfs-java: fix build with gcj-jdk (Dmitry Smirnov)
  • librados: fix osd op tid for redirected ops (#7588, Samuel Just)
  • librados: fix rados_pool_list buffer bounds checks (#8447, Sage Weil)
  • librados: resend ops when pool overlay changes (#8305, Sage Weil)
  • librbd, ceph-fuse: reduce CPU overhead for clean object check in cache (Haomai Wang)
  • mon: allow deletion of cephfs pools (John Spray)
  • mon: fix default pool ruleset choice (#8373, John Spray)
  • mon: fix health summary for mon low disk warning (Sage Weil)
  • mon: fix ‘osd pool set <pool> cache_target_full_ratio’ (Geoffrey Hartz)
  • mon: fix quorum feature check (Greg Farnum)
  • mon: fix request forwarding in mixed firefly+dumpling clusters (#8727, Joao Eduardo Luis)
  • mon: fix rule vs ruleset check in ‘osd pool set … crush_ruleset’ command (John Spray)
  • mon: make osd ‘down’ count accurate (Sage Weil)
  • mon: set ‘next commit’ in primary-affinity reply (Ilya Dryomov)
  • mon: verify CRUSH features are supported by all mons (#8738, Greg Farnum)
  • msgr: fix sequence negotiation during connection reset (Guang Yang)
  • osd: block scrub on blocked objects (#8011, Samuel Just)
  • osd: call XFS hint ioctl less often (#8241, Ilya Dryomov)
  • osd: copy xattr spill out marker on clone (Haomai Wang)
  • osd: fix flush of snapped objects (#8334, Samuel Just)
  • osd: fix hashindex restart of merge operation (#8332, Samuel Just)
  • osd: fix osdmap subscription bug causing startup hang (Greg Farnum)
  • osd: fix potential null deref (#8328, Sage Weil)
  • osd: fix shutdown race (#8319, Sage Weil)
  • osd: handle ‘none’ in CRUSH results properly during peering (#8507, Samuel Just)
  • osd: set no spill out marker on new objects (Greg Farnum)
  • osd: skip op ordering debug checks on tiered pools (#8380, Sage Weil)
  • rados: enforce ‘put’ alignment (Lluis Pamies-Juarez)
  • rest-api: fix for ‘rx’ commands (Ailing Zhang)
  • rgw: calc user manifest etag and fix check (#8169, #8436, Yehuda Sadeh)
  • rgw: fetch attrs on multipart completion (#8452, Yehuda Sadeh, Sylvain Munaut)
  • rgw: fix buffer overflow for long instance ids (#8608, Yehuda Sadeh)
  • rgw: fix entity permission check on metadata put (#8428, Yehuda Sadeh)
  • rgw: fix multipart retry race (#8269, Yehuda Sadeh)
  • rpm: split ceph into ceph and ceph-common RPMs (Sandon Van Ness, Dan Mick)
  • sysvinit: continue starting daemons after failure doing mount (#8554, Sage Weil)

For more detailed information, see the complete changelog.

GETTING CEPH

v0.82 released

This is the second post-firefly development release. It includes a range of bug fixes and some usability improvements. There are some MDS debugging and diagnostic tools, an improved ‘ceph df’, and some OSD backend refactoring and cleanup.
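
For those curious about the ‘ceph df’ improvement, the commands below show where it surfaces; the exact columns vary between releases, so treat the output shape as illustrative rather than definitive:

    # Cluster-wide and per-pool usage summary (now including per-pool 'max avail').
    ceph df
    # More detailed per-pool breakdown.
    ceph df detail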

NOTABLE CHANGES

  • ceph-brag: add tox tests (Alfredo Deza)
  • common: perfcounters now use atomics and go faster (Sage Weil)
  • doc: CRUSH updates (John Wilkins)
  • doc: osd primary affinity (John Wilkins)
  • doc: pool quotas (John Wilkins)
  • doc: pre-flight doc improvements (Kevin Dalley)
  • doc: switch to an unencumbered font (Ross Turk)
  • doc: update openstack docs (Josh Durgin)
  • fix hppa arch build (Dmitry Smirnov)
  • init-ceph: continue starting other daemons on crush or mount failure (#8343, Sage Weil)
  • keyvaluestore: fix hint crash (#8381, Haomai Wang)
  • libcephfs-java: build against older JNI headers (Greg Farnum)
  • librados: fix rados_pool_list bounds checks (Sage Weil)
  • mds: cephfs-journal-tool (John Spray)
  • mds: improve Journaler on-disk format (John Spray)
  • mds, libcephfs: use client timestamp for mtime/ctime (Sage Weil)
  • mds: misc encoding improvements (John Spray)
  • mds: misc fixes for multi-mds (Yan, Zheng)
  • mds: OPTracker integration, dump_ops_in_flight (Greg Farnum)
  • misc cleanup (Christophe Courtaut)
  • mon: fix default replication pool ruleset choice (#8373, John Spray)
  • mon: fix set cache_target_full_ratio (#8440, Geoffrey Hartz)
  • mon: include per-pool ‘max avail’ in df output (Sage Weil)
  • mon: prevent EC pools from being used with cephfs (Joao Eduardo Luis)
  • mon: restore original weight when auto-marked out OSDs restart (Sage Weil)
  • mon: use msg header tid for MMonGetVersionReply (Ilya Dryomov)
  • osd: fix bogus assert during OSD shutdown (Sage Weil)
  • osd: fix clone deletion case (#8334, Sam Just)
  • osd: fix filestore removal corner case (#8332, Sam Just)
  • osd: fix hang waiting for osdmap (#8338, Greg Farnum)
  • osd: fix interval check corner case during peering (#8104, Sam Just)
  • osd: fix journal-less operation (Sage Weil)
  • osd: include backend information in metadata reported to mon (Sage Weil)
  • rest-api: fix help (Ailing Zhang)
  • rgw: check entity permission for put_metadata (#8428, Yehuda Sadeh)

GETTING CEPH

Wow, the last few weeks have been very busy for the Ceph team! While it may have been a few weeks ago, many of us are still feeling the excitement of the most recent OpenStack Summit, and many of us weren’t even there! With the Firefly release still cooling in the packaging repos, there certainly was a lot for the Ceph community to be excited about. However, it was the voices of the users that once again made us the most excited. The results from the OpenStack Foundation’s latest OpenStack User Survey were made public at the conference, and they are extremely encouraging for Ceph, open source, and software-defined storage alike.

The OpenStack Foundation survey results showed that, among the OpenStack users surveyed, Ceph is a leader across the board for block storage in clouds of all stages. Ceph is cited as one of the leading distributed block storage technologies in all three categories.


v0.81 released

This is the first development release since Firefly. It includes a lot of work that we delayed merging while stabilizing Firefly: lots of new functionality, as well as several fixes that are baking a bit before being backported.

UPGRADING

  • CephFS support for the legacy anchor table has finally been removed. Users with file systems created before Firefly should ensure that inodes with multiple hard links are modified prior to the upgrade so that their backtraces are written properly. For example:
    sudo find /mnt/cephfs -type f -links +1 -exec touch \{\} \;
  • Disallow nonsensical ‘tier cache-mode’ transitions. From this point onward, ‘writeback’ can only transition to ‘forward’, and ‘forward’ can transition to (1) ‘writeback’ if there are dirty objects, or (2) any mode if there are no dirty objects. (A brief command sketch follows this list.)
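
As a rough illustration of the transitions described above (the pool name ‘hot-pool’ is hypothetical):

    # Move an existing cache tier from 'writeback' to 'forward'...
    ceph osd tier cache-mode hot-pool forward
    # ...and back to 'writeback'; other target modes are only accepted once the
    # cache pool holds no dirty objects.
    ceph osd tier cache-mode hot-pool writeback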


Ceph Developer Summit: G/H

Hard to believe another quarter has gone by in the world of Ceph development! So much has happened with the release of Firefly (late, but well worth the wait!), another smashing OpenStack Developer Summit, and of course the acquisition of Inktank. As the dust settles from all of the hustle and bustle we’re looking to get back on track with development.

So, since it has been a while since our last summit we thought it would be a good idea to have another “interim” summit to discuss work in flight, anything new that may have come up, and look forward to both our “Giant” and “Hammer” releases. In the spirit of organization I’m opening blueprint submissions for this “Ceph Developer Summit G/H” but only as “General” blueprints (which are open all the time). If you have new work, or things you would like to see in either Giant or Hammer, please enter them as such (and make a note on the blueprint) and we’ll make sure they are tagged accordingly.

Now, on to the details!

Date            Milestone
02 June         Blueprint submissions begin
16 June         Blueprint submissions end
20 June         Summit agenda announced
24 June         Ceph Developer Summit: Day 1
25 June         Ceph Developer Summit: Day 2 (if needed)
September 2014  Giant release

As always, this event will be an online event (probably Google Hangouts again) so that everyone can attend from their own timezone. If you are interested in submitting a blueprint, collaborating on an existing blueprint, or just attending to learn more about Ceph, read on!

 



Today Red Hat is following through on one of my favorite promises from the acquisition: we’re open sourcing Calamari, the management platform for Ceph. Originally delivered as a proprietary dashboard included with Inktank Ceph Enterprise, Calamari has some really great visualization stuff for your cluster, as well as the long-term goal of being the all-in-wonder management system that can configure and analyze a Ceph cluster. We’re really glad that we can share this work with the community, with the added benefit of aggregating several different groups of folks working on a management GUI for Ceph. Now, on to the details!

Photo Credit: Flickr


v0.67.9 Dumpling released

This Dumpling point release fixes several minor bugs. The most prevalent in the field is one that occasionally prevents OSDs from starting on recently created clusters.

We recommend that all Dumpling users upgrade at their convenience.

NOTABLE CHANGES

  • ceph-fuse, libcephfs: client admin socket command to kick and inspect MDS sessions (#8021, Zheng Yan)
  • monclient: fix failure detection during mon handshake (#8278, Sage Weil)
  • mon: set tid on no-op PGStatsAck messages (#8280, Sage Weil)
  • msgr: fix a rare bug with connection negotiation between OSDs (Guang Yang)
  • osd: allow snap trim throttling with simple delay (#6278, Sage Weil)
  • osd: check for splitting when processing recover/backfill reservations (#6565, Samuel Just)
  • osd: fix backfill position tracking (#8162, Samuel Just)
  • osd: fix bug in backfill stats (Samuel Just)
  • osd: fix bug preventing OSD startup for infant clusters (#8162, Greg Farnum)
  • osd: fix rare PG resurrection race causing an incomplete PG (#7740, Samuel Just)
  • osd: only complete replicas count toward min_size (#7805, Samuel Just)
  • rgw: allow setting ACLs with empty owner (#6892, Yehuda Sadeh)
  • rgw: send user manifest header field (#8170, Yehuda Sadeh)

For more detailed information, see the complete changelog.

GETTING CEPH
