Ceph is a unified, distributed storage system designed for excellent performance, reliability and scalability.
Ceph provides seamless access to objects using native language bindings or radosgw (RGW), a REST interface that’s compatible with applications written for S3 and Swift.
Ceph’s RADOS Block Device (RBD) provides access to block device images that are striped and replicated across the entire storage cluster.
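To make the striping concrete, here is a minimal sketch of how a byte offset in an RBD image maps to a backing RADOS object under the default layout (fixed-size objects, 4 MiB by default, without "fancy" striping). The image id `abc123` is a made-up example, not a real cluster value:

```python
# Sketch: map a byte offset in an RBD image to its backing RADOS object,
# assuming the default layout: the image is carved into fixed-size
# objects (4 MiB by default, i.e. object order 22).

OBJECT_SIZE = 4 * 1024 * 1024  # default RBD object size

def rados_object_for_offset(image_id: str, offset: int) -> tuple[str, int]:
    """Return (RADOS object name, offset within that object)."""
    object_no = offset // OBJECT_SIZE
    # Data objects are named rbd_data.<image id>.<object number as 16 hex digits>
    return f"rbd_data.{image_id}.{object_no:016x}", offset % OBJECT_SIZE

# A write at 10 MiB lands 2 MiB into the third 4 MiB object:
name, within = rados_object_for_offset("abc123", 10 * 1024 * 1024)
print(name, within)  # rbd_data.abc123.0000000000000002 2097152
```

Because each object is placed independently by CRUSH, consecutive stripes of one image end up spread across many OSDs, which is what gives RBD its parallelism.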
Prior to Nautilus, Ceph storage administrators did not have access to any built-in RBD performance monitoring or metrics-gathering tools. While a storage administrator could monitor high-level cluster or OSD I/O metrics, these were often too coarse-grained to determine the source of noisy-neighbor workloads running on top of RBD...