Share With Us!

Do you have a use case or a reference architecture that you would like to share with the Ceph community? We’re always happy to help show others how the world is using Ceph. Feel free to send your information to ceph-community@ceph.com.

Understanding a Multi-Site Ceph Gateway Installation

With the major rework of the Ceph gateway software in the Jewel release, it became necessary to revisit the installation and configuration process for S3 and Swift deployments. Although some documentation is already available on the Internet, most of it does not convey a deeper understanding of the various configuration parameters. This applies in particular to the failover and fallback process in case of a disaster. This whitepaper aims to improve this situation.
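
To make the failover discussion concrete, the sketch below shows what fallback can look like from the client’s side: an S3 client, built with the boto3 library, that tries the master-zone endpoint first and falls back to a secondary-zone endpoint if it is unreachable. This is a minimal illustration, not the procedure from the whitepaper; the endpoint URLs and credentials are hypothetical placeholders, and a real disaster recovery also involves promoting the secondary zone to master on the cluster side.

    import boto3
    from botocore.exceptions import BotoCoreError, ClientError

    # Hypothetical zone endpoints: master first, secondary as fallback.
    ENDPOINTS = [
        "http://rgw-master.example.com:8080",
        "http://rgw-secondary.example.com:8080",
    ]

    def s3_client_with_fallback(access_key, secret_key):
        """Return a boto3 S3 client bound to the first reachable endpoint."""
        last_error = None
        for endpoint in ENDPOINTS:
            client = boto3.client(
                "s3",
                endpoint_url=endpoint,
                aws_access_key_id=access_key,
                aws_secret_access_key=secret_key,
            )
            try:
                client.list_buckets()  # cheap liveness probe
                return client
            except (BotoCoreError, ClientError) as err:
                last_error = err  # endpoint down or mid-failover; try the next
        raise RuntimeError("no gateway endpoint reachable") from last_error

    s3 = s3_client_with_fallback("ACCESS_KEY", "SECRET_KEY")  # placeholder credentials
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])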

Hyper Converged Red Hat OpenStack Platform 10 and Red Hat Ceph Storage 2

This reference architecture describes how to deploy Red Hat OpenStack Platform and Red Hat Ceph Storage in a way that both the OpenStack Nova Compute services and the Ceph Object Storage Daemon (OSD) services reside on the same node. A server that runs both compute and storage processes is known as a hyper-converged node. Interest in hyper-converged cloud deployments (NFVi and enterprise alike) is growing in the field, for reasons that include smaller initial deployment footprints, a lower cost of entry, and maximized capacity utilization.
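
A central tuning question in such a deployment is how much memory Nova must leave untouched so that the colocated OSDs are not starved. As a rough illustration (the per-OSD and per-guest figures below are assumptions, not values from the reference architecture), a sizing helper for Nova’s reserved_host_memory_mb setting might look like this:

    MB_PER_GB = 1024

    def reserved_host_memory_mb(num_osds, max_guests,
                                osd_overhead_gb=5, guest_overhead_mb=500):
        """Estimate memory (MB) Nova should reserve on a hyper-converged node.

        num_osds          -- Ceph OSD daemons colocated on the node
        max_guests        -- expected maximum number of guests on the node
        osd_overhead_gb   -- assumed memory each OSD may need under recovery load
        guest_overhead_mb -- assumed per-guest hypervisor overhead
        """
        osd_memory = num_osds * osd_overhead_gb * MB_PER_GB
        guest_overhead = max_guests * guest_overhead_mb
        return osd_memory + guest_overhead

    # Example: 12 OSDs and up to 50 guests -> reserve roughly 85 GB for the host.
    print(reserved_host_memory_mb(num_osds=12, max_guests=50))

The resulting value would be set as reserved_host_memory_mb in nova.conf on each hyper-converged node.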

Dell EMC DSS 7000 with Red Hat Ceph Storage 2

This technical white paper provides performance and sizing guidelines for Red Hat Ceph Storage 2 running on Dell EMC servers, specifically the Dell EMC DSS 7000, based on extensive testing performed by Dell EMC engineering teams. The DSS 7000 is a cost-optimized, scale-out storage server platform that provides high capacity and scalability along with an optimal balance of storage utilization, performance, and cost.

High-Performance Cluster Storage for IOPS Workloads

As Ceph takes on high-performance, IOPS-intensive workloads, solid-state drives (SSDs) become a critical component of Ceph clusters. To help Ceph users effectively deploy all-flash Ceph clusters optimized for performance, Samsung Semiconductor Inc. and Red Hat have performed extensive testing to characterize optimized configurations for Red Hat® Ceph Storage on Samsung NVMe SSDs in a Samsung NVMe reference design.

Ceph on NetApp E-Series

This technical report describes how to build a Ceph cluster using a tested E-Series reference architecture. It also describes the performance benchmarking methodologies used, along with the test results.

MySQL Databases on Red Hat Ceph Storage

As the number of deployed MySQL databases has grown, database administrators (DBAs) are increasingly seeking public or private cloud-based storage solutions that complement their successful non-cloud deployments. To meet this growing need, Ceph storage can provide resilient, elastic storage pools. This hardware guide offers recommendations for deploying IOPS-intensive workloads using Percona MySQL Server and Red Hat® Ceph Storage on Ceph-optimized Supermicro storage servers.

Red Hat Ceph Storage on Dell PowerEdge R730xd

This technical white paper provides performance and sizing guidelines for Red Hat Ceph Storage running on Dell servers, specifically the Dell PowerEdge R730xd, based on extensive testing performed by Red Hat and Dell engineering teams. The PowerEdge R730xd is an award-winning server and storage platform that provides high capacity and scalability, offers an optimal balance of storage utilization, performance, and cost, and supports optional in-server hybrid hard disk drive and solid-state drive (HDD/SSD) storage configurations.

Red Hat Ceph Storage on Supermicro Storage Servers

Ceph users frequently request simple, recommended cluster configurations for different workload types. Common requests are for throughput-optimized and capacity-optimized workloads, but IOPS-intensive workloads on Ceph are also emerging. To address the need for real-world performance, capacity, and sizing guidance, Red Hat and Supermicro have performed extensive testing to characterize Red Hat Ceph Storage deployments on a range of Supermicro storage servers in optimized configurations.

Red Hat Ceph Storage on Intel Processors and SSDs

Ceph users frequently request simple, optimized cluster configurations for different workload types. Common requests are for throughput-optimized and capacity-optimized workloads, but IOPS-intensive workloads on Ceph are also emerging. Based on extensive testing by Red Hat and Intel with a variety of hardware providers, this document provides general performance, capacity, and sizing guidance for servers based on Intel® Xeon® processors, optionally equipped with Intel® Solid State Drive Data Center (Intel® SSD DC) Series drives.

Accelerating Ceph for Database Workloads with an all PCIe SSD Cluster

PCIe SSDs are becoming increasingly popular for deploying latency-sensitive workloads such as databases and big data in enterprise and service provider environments. Customers are exploring low-latency workloads on Ceph using PCIe SSDs to meet their performance needs. In this presentation, Intel looks at a high-IOPS, low-latency workload deployment on Ceph, performance analysis of all-PCIe configurations, best practices, and recommendations.
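
To give a flavor of how low-latency behavior on such a cluster can be probed, the sketch below times small-object writes through the python-rados bindings and reports simple latency figures. It is a minimal illustration under assumed names (a pool called testpool and the default /etc/ceph/ceph.conf path), not the benchmarking methodology used in Intel’s presentation.

    import time
    import statistics

    import rados  # python-rados bindings shipped with Ceph

    # Assumed configuration path and pool name; adjust for a real cluster.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("testpool")

    payload = b"x" * 4096  # 4 KiB objects, typical of small-block database I/O
    latencies = []
    for i in range(1000):
        start = time.monotonic()
        ioctx.write_full("probe-%d" % i, payload)
        latencies.append((time.monotonic() - start) * 1000.0)  # milliseconds

    latencies.sort()
    print("median %.2f ms, p99 %.2f ms"
          % (statistics.median(latencies), latencies[int(len(latencies) * 0.99)]))

    ioctx.close()
    cluster.shutdown()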

Red Hat Ceph Storage on QCT Servers

Running Red Hat® Ceph Storage on QCT servers provides open interaction with a community-based software development model, backed by the 24×7 support of the world’s most experienced open-source software company. Use of standard hardware components helps ensure low costs, while QCT’s innovative development model enables organizations to iterate more rapidly on a family of server designs optimized for different types of Ceph workloads. Unlike scale-up storage solutions, Red Hat Ceph Storage on QCT servers lets organizations scale out to thousands of nodes, with the ability to scale storage performance and capacity independently, depending on the needs of the application and the chosen storage server platform.

Ceph@HOME: the domestication of a wild cephalopod

I’ve long looked for a distributed and replicated filesystem to store my data. I’ve also been the sysadmin at the university, in the distributed systems lab, and for some time for the entire computing institute. In both positions, I took care of backups and worried about potential data loss due to disk failures and about keeping the network going in the presence of hardware failures.