The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • February 28, 2018
    Ceph Community Newsletter, Feb 2018 edition

    Hey Cephers! We are starting this new section on the Ceph website to share project highlights in a monthly newsletter. We hope you enjoy it! Project updates The SUSE OpenAttic team is porting their management dashboard upstream into ceph-mgr, where it will replace the current ‘dashboard’ module and be expanded to include greater management …Read more

  • February 28, 2018
    v12.2.4 Luminous Released

    This is the fourth bugfix release of the Luminous v12.2.x long-term stable release series. It primarily fixes a few build, ceph-volume/ceph-disk, and RGW issues. We recommend that all users of the 12.2.x series update. Notable Changes ceph-volume: adds support to zap encrypted devices (issue#22878, pr#20545, Andrew Schoen) ceph-volume: log the current running …Read more

  • February 21, 2018
    v12.2.3 Luminous released

    This is the third bugfix release of the Luminous v12.2.x long-term stable release series. It contains a range of bug fixes and a few features across BlueStore, CephFS, RBD, and RGW. We recommend that all users of the 12.2.x series update. Notable Changes CephFS: The CephFS client now checks for older kernels’ inability to reliably clear …Read more

  • December 11, 2017
    Want to Install Ceph, but afraid of Ansible?

    There is no doubt that Ansible is a pretty cool automation engine for provisioning and configuration management. ceph-ansible builds on this versatility to deliver what is probably the most flexible Ceph deployment tool out there. However, some of you may not want to get to grips with Ansible before you install Ceph…weird right? No, not really. …Read more

  • December 1, 2017
    v12.2.2 Luminous released

    This is the second bugfix release of the Luminous v12.2.x long-term stable release series. It contains a range of bug fixes and a few features across BlueStore, CephFS, RBD, and RGW. We recommend that all users of the 12.2.x series update. For more detailed information, see the complete changelog. Notable Changes Standby ceph-mgr daemons now redirect …Read more

  • October 25, 2017
    New in Luminous: PG overdose protection

    Choosing the right number of PGs (“placement groups”) for your cluster is a bit of a black art, and a usability nightmare. Getting a reasonable value can have a big impact on a cluster’s performance and reliability, for better or for worse. Unfortunately, over the past few years we’ve seen our share of …Read more
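
    As a rough illustration (not taken from the post itself), Luminous adds a monitor-side guard, mon_max_pg_per_osd, that refuses pool creation or pg_num increases which would push any OSD past the limit. The option name is real; the commands and values below are only a hedged sketch of how one might inspect and adjust it:

      # Show how many PGs each OSD currently holds
      ceph osd df
      # Raise the per-OSD PG ceiling at runtime (300 is an arbitrary example value)
      ceph tell mon.* injectargs '--mon_max_pg_per_osd 300'
      # To persist it, set mon_max_pg_per_osd = 300 under [global] in ceph.conf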

  • October 23, 2017
    New in Luminous: Zabbix

    The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new Python modules. One of these is a module that exports overall cluster status and performance to Zabbix. Enabling the Zabbix module The Zabbix module is included in the ceph-mgr package, so if you’ve …Read more
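
    For readers who just want the gist, here is a minimal sketch of turning the module on. It assumes zabbix_sender is installed on the manager host; the hostname and identifier are placeholders, not values from the post:

      # Enable the ceph-mgr Zabbix module
      ceph mgr module enable zabbix
      # Point it at the Zabbix server and give this cluster an identifier
      ceph zabbix config-set zabbix_host zabbix.example.com
      ceph zabbix config-set identifier ceph-prod
      # Push the current cluster status once to verify the setup
      ceph zabbix send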

  • October 19, 2017
    New in Luminous: RADOS improvements

    RADOS is the reliable autonomic distributed object store that underpins Ceph, providing a reliable, highly available, and scalable storage service to other components.  As with every Ceph release, Luminous includes a range of improvements to the RADOS core code (mostly in the OSD and monitor) that benefit all object, block, and file users. Parallel monitor …Read more

  • October 16, 2017
    New in Luminous: Erasure Coding for RBD and CephFS

    Luminous now fully supports overwrites for erasure coded (EC) RADOS pools, allowing RBD and CephFS (as well as RGW) to directly consume erasure coded pools.  This has the potential to dramatically reduce the overall cost per terabyte of Ceph systems since the usual 3x storage overhead of replication can be reduced to more like 1.2x …Read more
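
    As a hedged sketch of what this enables (pool names, PG counts, and sizes below are placeholders, and EC overwrites require BlueStore OSDs), an RBD image can keep its metadata in a replicated pool while its data objects land on an erasure coded pool:

      # Create an EC pool and allow partial overwrites on it
      ceph osd pool create rbd_data 64 64 erasure
      ceph osd pool set rbd_data allow_ec_overwrites true
      # Create an image in the replicated 'rbd' pool, with its data placed on the EC pool
      rbd create --size 10G --data-pool rbd_data rbd/myimage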

  • October 10, 2017
    New in Luminous: CephFS metadata server memory limits

    The Ceph file system uses a cluster of metadata servers to provide an authoritative cache for the CephFS metadata stored in RADOS. The most basic reason for this is to maintain a hot set of metadata in memory without talking to the metadata pool in RADOS. Another important reason is to allow clients to also …Read more
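
    To make the idea concrete, Luminous introduces mds_cache_memory_limit, a memory target for the MDS cache. The snippet below is only an illustrative sketch; the 4 GiB value is an example, not a recommendation:

      # Raise the MDS cache memory target to 4 GiB at runtime
      ceph tell mds.* injectargs '--mds_cache_memory_limit 4294967296'
      # To persist it, set mds_cache_memory_limit = 4294967296 under [mds] in ceph.conf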

  • October 6, 2017
    v10.2.10 Jewel released

    This point release brings a number of important bugfixes to all major components of Ceph; we recommend that all Jewel 10.2.x users upgrade. Notable Changes build/ops: Add fix subcommand to ceph-disk, fix SELinux denials, and speed up upgrade from non-SELinux-enabled Ceph to an SELinux-enabled one (issue#20077, issue#20184, issue#19545, pr#14346, Boris Ranto) build/ops: deb: …Read more

  • October 2, 2017
    New in Luminous: CephFS subtree pinning

    The Ceph file system (CephFS) allows for portions of the file system tree to be carved up into subtrees which can be managed authoritatively by multiple MDS ranks. This empowers the cluster to scale performance with the size and usage of the file system by simply adding more MDS servers into the cluster. Where possible, …Read more
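
    A minimal sketch of pinning in practice follows; the mount point, directory, and file system name are placeholders, and the file system needs more than one active MDS rank (on some releases you may first need to explicitly allow multiple active daemons):

      # Allow two active MDS daemons for the 'cephfs' file system
      ceph fs set cephfs max_mds 2
      # Pin a directory and everything beneath it to MDS rank 1
      setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects/build
      # A value of -1 removes the pin and returns the subtree to the default balancer
      setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects/build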
