Aggregated news from external sources
If you’re in Atlanta on the evening of Sunday, May 11th, 2014, for the OpenStack Summit or any other reason, join us to celebrate the OpenStack Icehouse release and the Ceph Firefly release. There will be both OpenStack and Ceph developers present and … Continue reading →
The jerasure library is the default erasure code plugin of Ceph. The gf-complete companion library supports SSE optimizations at compile time, when the compiler provides them (-msse4.2 etc.). The jerasure plugin (and gf-complete with it) is compiled multiple times with … Continue reading →
$ rbd create --size $((1024 * 1024 * 1024 * 1024)) tiny
$ rbd info tiny
rbd image 'tiny':
 size 1024 PB in 274877906944 objects
 order 22 (4096 kB objects)
 block_name_prefix: rb.0.1009.6b8b4567
 format: 1
Note: rbd rm tiny will take … Continue reading →
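The numbers in that output can be checked directly: rbd create --size takes a size in megabytes, so $((1024 * 1024 * 1024 * 1024)) MB is 2^60 bytes, and order 22 means 2^22-byte (4096 kB) objects, which is where the 274877906944 object count comes from. A minimal sketch of the arithmetic:

```python
# rbd create --size takes megabytes: 1024^4 MB requested.
size_mb = 1024 * 1024 * 1024 * 1024
size_bytes = size_mb * 1024 * 1024     # 2^60 bytes

# 2^60 bytes is 1024 PB (in the power-of-two sense used by rbd info).
size_pb = size_bytes // 1024**5
print(size_pb)                          # → 1024

# order 22 means each RADOS object backing the image is 2^22 bytes = 4096 kB.
object_size = 2 ** 22
print(size_bytes // object_size)        # → 274877906944
```

Deleting the image walks all of those objects, which is why the note warns that rbd rm tiny will take a long time.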
The gf-complete and jerasure libraries implement the erasure code functions used in Ceph. They were copied into Ceph in 2013 because there were no reference repositories at the time. The copy was removed from the Ceph repository and replaced by … Continue reading →
Come chat with me about OpenStack, Swift and Ceph in Montreal on March 17th.
It’s with great pleasure that I accepted an invitation from my colleague
Rafael Rosa (@rafaelrosafu) to talk about Ceph in the context of OpenStack.
Our friends at eNovance will be talking about Swift, the object storage
project in OpenStack.
It should definitely be fun; it will in fact be my first public talk ever 😀
Come join us and register – time is running short:
Edit: It was fun! The presentation I did about Ceph is available on
Proxmox has today released a new version of Proxmox VE, Proxmox 3.2, which is available either as a downloadable ISO or from the Proxmox repository. Highlights of this release include: Ceph has now been integrated into the Proxmox web GUI, and a new CLI command has been created for creating Ceph clusters. See my post … Continue reading Proxmox 3.2 is now available with SPICE, Ceph and updated QEMU →
Replacing a Failed Disk in a Ceph Cluster
Do you have a Ceph cluster? Great, you are awesome; sooner or later you will face this. Check your cluster health:
# ceph status
 cluster c452b7df-0c0b-4005-8feb-fc3bb92407f5
 health HEALTH_WARN 6 pgs pe…
The Ceph erasure code plugin benchmarks for jerasure version 1 are compared with those after an upgrade to jerasure version 2, using the same command on the same hardware. Encoding: 5.2GB/s, which is ~20% better than 4.2GB/s. Decoding: no processing necessary (because … Continue reading →
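The ~20% figure depends on which throughput is taken as the baseline: relative to the new 5.2GB/s, the old 4.2GB/s is about 19% slower, while relative to the old baseline the new version is about 24% faster. A quick check of both readings:

```python
old, new = 4.2, 5.2  # GB/s encoding throughput, jerasure v1 vs v2

# Improvement relative to the old throughput: how much faster v2 encodes.
print(round((new - old) / old * 100))  # → 24

# Deficit relative to the new throughput: how much slower v1 was.
print(round((new - old) / new * 100))  # → 19
```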
The addition of erasure code in Ceph started in April 2013 and was discussed during the first Ceph Developer Summit. The implementation reached an important milestone a few days ago and is now ready for alpha testing. For the … Continue reading →
Since running benchmarks against Ceph was a topic in the “Best Practices with Ceph as Distributed, Intelligent, Unified Cloud Storage (Dieter Kasper, Fujitsu)” talk at Ceph Day in Frankfurt today, I would like to point you to a blog post about…