Cephalocon returns as a co-located event with KubeCon + CloudNativeCon in Barcelona on May 19-20. Join hundreds of technologists and adopters from across the globe to explore Ceph’s history and its future, see real-world applications demonstrated, and learn about vendor solutions.
The latest release of Ceph includes a beautiful dashboard, placement group merging, automatic placement group management, and more!
Ceph is a unified, distributed storage system designed for excellent performance, reliability and scalability.
Ceph provides seamless access to objects using native language bindings or radosgw (RGW), a REST interface that’s compatible with applications written for S3 and Swift.
Ceph’s RADOS Block Device (RBD) provides access to block device images that are striped and replicated across the entire storage cluster.
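The striping mentioned above means an RBD image is split into fixed-size RADOS objects spread over the cluster. The sketch below illustrates that mapping under the default layout (4 MiB objects, stripe count 1) and the v2 image format's `rbd_data.<id>.<hex>` object naming; it is an illustration of the idea, not the librbd implementation.

```python
# Sketch of how RBD maps an image offset to a RADOS object, assuming the
# default layout: 4 MiB objects (order 22) and stripe_count = 1.
# The image id "abc123" is a made-up example.

OBJECT_SIZE = 4 * 1024 * 1024  # 2**22 bytes, the default object size

def rbd_object_for_offset(image_id: str, offset: int) -> tuple[str, int]:
    """Return (RADOS object name, byte offset within that object)."""
    obj_no = offset // OBJECT_SIZE          # which object the offset falls in
    return f"rbd_data.{image_id}.{obj_no:016x}", offset % OBJECT_SIZE

# A write 10 MiB into the image lands 2 MiB into the third object:
name, off = rbd_object_for_offset("abc123", 10 * 1024 * 1024)
print(name, off)
```

Because consecutive objects hash to different placement groups and OSDs, I/O against one image is served by many disks in parallel, which is where RBD's aggregate performance comes from.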
Introduction Recap: In Blog Episode 3 we covered RHCS cluster scale-out performance and observed that adding 60% more hardware resources yielded 95% higher IOPS, demonstrating the scale-out nature of Red Hat Ceph Storage. This is the fourth episode of the performance blog series...
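A quick back-of-the-envelope check on the figures quoted in the recap: 95% more IOPS from 60% more hardware means each unit of added hardware delivered better-than-proportional throughput in that test.

```python
# Arithmetic behind the recap's scale-out claim (figures from the post).
hw_growth = 1.60    # 60% additional hardware resources
iops_growth = 1.95  # 95% higher IOPS

# IOPS gain per unit of hardware added; > 1.0 means better than linear scaling.
per_resource_gain = iops_growth / hw_growth
print(round(per_resource_gain, 2))  # prints 1.22
```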