The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • March 6, 2019
    Deploying a Ceph+NFS Server Cluster with Rook

With rook.io it’s possible to deploy a Ceph cluster on top of Kubernetes (also known as k8s). The Ceph cluster can use storage on each individual k8s cluster node just as it would when deployed on regular hosts. Newer versions of Rook and Ceph also support the deployment of a CephFS-to-NFS gateway …
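A minimal sketch of what such a Rook cluster definition can look like. This is illustrative only: the Ceph image tag, namespace, and storage settings are assumptions, not taken from the post.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph        # assumes the Rook operator runs in this namespace
spec:
  cephVersion:
    image: ceph/ceph:v14      # illustrative image tag
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                  # three monitors for quorum
  storage:
    useAllNodes: true         # consume storage on every k8s node
    useAllDevices: true       # use any unformatted devices Rook discovers
```

Applied with `kubectl apply -f`, this asks the Rook operator to bring up monitors and OSDs using the local storage of the k8s nodes, mirroring a bare-host deployment.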

  • October 10, 2017
    New in Luminous: CephFS metadata server memory limits

The Ceph file system uses a cluster of metadata servers to provide an authoritative cache for the CephFS metadata stored in RADOS. The most basic reason for this is to maintain a hot set of metadata in memory without talking to the metadata pool in RADOS. Another important reason is to allow clients to also …
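The Luminous release controls this cache by memory footprint rather than inode count, via the `mds_cache_memory_limit` option. A ceph.conf fragment sketch; the 4 GiB value is an illustrative assumption, not a recommendation:

```ini
[mds]
# Target memory footprint of the MDS cache, in bytes (4 GiB here).
# The MDS trims its cache to stay near this limit.
mds_cache_memory_limit = 4294967296
```

The limit can also be adjusted at runtime, e.g. with `ceph tell mds.* injectargs '--mds_cache_memory_limit=4294967296'`.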

  • October 2, 2017
    New in Luminous: CephFS subtree pinning

The Ceph file system (CephFS) allows for portions of the file system tree to be carved up into subtrees which can be managed authoritatively by multiple MDS ranks. This empowers the cluster to scale performance with the size and usage of the file system by simply adding more MDS servers into the cluster. Where possible, …
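Pinning is done with an extended attribute on a directory; the mount point and rank below are illustrative assumptions:

```shell
# Pin the subtree rooted at /mnt/cephfs/projects to MDS rank 1.
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects

# A value of -1 removes the pin and returns the subtree to the
# default, balancer-managed behavior.
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects
```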

  • September 20, 2017
    New in Luminous: Multiple Active Metadata Servers in CephFS

The Ceph file system (CephFS) is the file storage solution for Ceph. Since the Jewel release it has been deemed stable in configurations using a single active metadata server (with one or more standbys for redundancy). Now in Luminous, configurations with multiple active metadata servers are stable and ready for deployment! This allows the CephFS metadata …
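Enabling more than one active MDS is a matter of raising `max_mds` on the file system; the file system name `cephfs` below is an illustrative assumption:

```shell
# Allow two active MDS ranks for the file system "cephfs".
# A standby daemon is promoted to fill the new rank.
ceph fs set cephfs max_mds 2

# Inspect the resulting MDS map and rank assignments.
ceph fs status cephfs
```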
