Ceph Days Korea
Save the Date - Ceph is coming to Korea!
A full-day event dedicated to sharing Ceph’s transformative power and fostering the vibrant Ceph community in South Korea.
The expert Ceph team, Ceph's customers and partners, and the Ceph community join forces to discuss the status of the Ceph project, recent improvements and the roadmap, and Ceph community news. The day ends with a networking reception to foster further Ceph learning.
Important Dates
- CFP Opens: 2023-02-24
- CFP Closes: 2023-04-28
- Speakers receive confirmation of acceptance: 2023-05-12
- Schedule Announcement: 2023-05-16
- Sponsorship Deadline: 2023-04-28
- Event Date: 2023-06-14
Schedule
| Time | Session | Presenter |
| --- | --- | --- |
| 9:00 | Opening | |
| 9:15 | **Keynote: The journey of memory innovation with Ceph** | Samsung Electronics |
| 9:50 | **Keynote: The present and future of hard drives and storage** | Seagate |
| 10:20 | **Keynote: IBM and IBM Storage Ceph's future.** Congratulations to Ceph on becoming a member of the family in the open source community; this talk covers plans for Ceph through synergy with IBM. | IBM Korea |
| 10:50 | Break | |
| 11:10 | **Distributed storage system architecture and Ceph's strengths.** From local storage systems to conventional NAS, this talk examines reliability from a structural standpoint and discusses considerations from a distributed-system perspective. Finally, it covers the advantages and disadvantages of Ceph and the workloads for which it is useful. | Samsung |
| 11:50 | **Role of RocksDB in Ceph.** In Ceph, RocksDB is used by default as the metadata store for stored objects. It not only provides critical features to upper-layer components such as RadosGW and MDS, but also has a critical impact on performance. Surprisingly, however, many people tend to treat RocksDB as a black box and pay no attention to it. Looking at the internal logic of Ceph and RocksDB, I would like to examine RocksDB's impact and introduce some points to note. | LINE |
| 12:20 | Lunch | |
| 13:50 | **Ceph case study and large-scale cluster operation at NAVER.** In this presentation, we will look at NAVER's experience with Ceph and how we operate our storage. We will explain the problems we struggled with when introducing Ceph and their solutions, providing useful information for companies that want to adopt Ceph. | NAVER |
| 14:30 | **A New MDS Partitioning for CephFS.** This talk will present a new MDS partitioning strategy for CephFS that combines static pinning and dynamic partitioning with the bal_rank_mask option, based on user metadata workload analysis. We will also share our experiences implementing these optimizations in our production service and the results of our experiments. Finally, we will discuss how we can contribute our work to the Ceph community. | LINE |
| 15:00 | Break | |
| 15:20 | **Revisiting S3 features on Ceph RADOS.** In this presentation, we will first explore the S3 API execution path from a client to the Ceph Object Storage Daemon (OSD), covering how RGW translates S3 requests into internal RADOS requests and how the OSD stores S3 objects and metadata in the case of BlueStore. Second, we will analyze S3 performance with and without versioning-related features on three different S3-compatible storage platforms: Ceph, MinIO, and OpenStack Swift with Swift3. We conducted a synthetic benchmark to measure S3 performance, especially ListObjects performance, while considering versioning-related features. | Seoul National Univ. |
| 16:00 | Technical discussion | |
| 16:50 | Networking session | |
| 17:20 | Closing | |
Join the Ceph announcement list or follow Ceph on social media for Ceph event updates.