The Ceph Blog

Ceph blog stories provide high-level spotlights on our customers all over the world

  • October 10, 2017
    New in Luminous: CephFS metadata server memory limits

    The Ceph file system uses a cluster of metadata servers to provide an authoritative cache for the CephFS metadata stored in RADOS. The most basic reason for this is to maintain a hot set of metadata in memory without talking to the metadata pool in RADOS. Another important reason is to allow clients to also …Read more
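
    For reference (an illustrative sketch, not from the post itself), the new byte-based limit is the mds_cache_memory_limit option, which replaces the old inode-count mds cache size; in ceph.conf it might be set like this for a 4 GB cache budget:

    [mds]
    mds cache memory limit = 4294967296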

  • September 20, 2017
    New in Luminous: Multiple Active Metadata Servers in CephFS

    The Ceph file system (CephFS) is the file storage solution for Ceph. Since the Jewel release it has been deemed stable in configurations using a single active metadata server (with one or more standbys for redundancy). Now in Luminous, multiple active metadata servers configurations are stable and ready for deployment! This allows the CephFS metadata …Read more
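
    As a quick sketch (assuming a filesystem named cephfs), additional active metadata servers are enabled by raising max_mds; on Luminous the allow_multimds flag may need to be set first:

    # ceph fs set cephfs allow_multimds true
    # ceph fs set cephfs max_mds 2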

  • January 10, 2014
    CephFS with a dedicated pool

    CephFS with a Dedicated Pool: This blog is about configuring a dedicated (user-defined) pool for CephFS. If you are looking to configure CephFS, please visit the CephFS Step by Step blog. Create a new pool for CephFS (obviousl…

  • December 23, 2013
    Ceph Filesystem (CephFS) :: Step by Step Configuration

    Ceph Filesystem is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. It is the only Ceph component that is not yet ready for production; I would call it ready for pre-production.

    Internals

    [CephFS architecture diagram; image courtesy of http://ceph.com/docs/master/cephfs/]

    Requirements for CephFS

    • You need a running Ceph cluster with at least one MDS node; an MDS is required for CephFS to work (see the quick check after this list).
    • If you don't have an MDS, configure one:
      • # ceph-deploy mds create <MDS-NODE-ADDRESS>
    Note: If you are running short of hardware or want to save hardware, you can run the MDS service on existing monitor nodes; the MDS does not need many resources.
    • A Ceph client to mount CephFS
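
    Before proceeding, you can confirm the cluster and MDS are up (an illustrative check, not part of the original steps; ceph mds stat should report at least one MDS as up:active):
    # ceph -s
    # ceph mds stat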

    Configuring CephFS
    • Install Ceph on the client node
    [root@storage0101-ib ceph]# ceph-deploy install na_fedora19
    [ceph_deploy.cli][INFO ] Invoked (1.3.2): /usr/bin/ceph-deploy install na_fedora19
    [ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts na_csc_fedora19
    [ceph_deploy.install][DEBUG ] Detecting platform for host na_fedora19 ...
    [na_csc_fedora19][DEBUG ] connected to host: na_csc_fedora19
    [na_csc_fedora19][DEBUG ] detect platform information from remote host
    [na_csc_fedora19][DEBUG ] detect machine type
    [ceph_deploy.install][INFO ] Distro info: Fedora 19 Schrödinger’s Cat
    [na_csc_fedora19][INFO ] installing ceph on na_fedora19
    [na_csc_fedora19][INFO ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    [na_csc_fedora19][INFO ] Running command: rpm -Uvh --replacepkgs --force --quiet http://ceph.com/rpm-emperor/fc19/noarch/ceph-release-1-0.fc19.noarch.rpm
    [na_csc_fedora19][DEBUG ] ########################################
    [na_csc_fedora19][DEBUG ] Updating / installing...
    [na_csc_fedora19][DEBUG ] ########################################
    [na_csc_fedora19][INFO ] Running command: yum -y -q install ceph

    [na_csc_fedora19][ERROR ] Warning: RPMDB altered outside of yum.
    [na_csc_fedora19][DEBUG ] No Presto metadata available for Ceph
    [na_csc_fedora19][INFO ] Running command: ceph --version
    [na_csc_fedora19][DEBUG ] ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
    [root@storage0101-ib ceph]#
    • Create a new pool for CephFS
    # rados mkpool cephfs
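    rados mkpool creates the pool with default settings; an equivalent sketch with an explicit placement-group count (128 here is an assumption sized for a small cluster) uses the ceph CLI:
    # ceph osd pool create cephfs 128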
    • Create a new keyring (client.cephfs) for CephFS
    # ceph auth get-or-create client.cephfs mon 'allow r' osd 'allow rwx pool=cephfs' -o /etc/ceph/client.cephfs.keyring
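    To double-check the capabilities you just granted (an illustrative step, not in the original post):
    # ceph auth get client.cephfs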
    • Extract secret key from keyring
    # ceph-authtool -p -n client.cephfs /etc/ceph/client.cephfs.keyring > /etc/ceph/client.cephfs
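    The resulting /etc/ceph/client.cephfs file contains only the base64 secret on a single line, along these lines (fake key for illustration):
    AQBIh9NSkDkKHRAAxXxXxXxXxXxXxXxXxXxXxX==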
    • Copy the secret file to the client node under /etc/ceph. This allows the filesystem to mount when cephx authentication is enabled
    # scp client.cephfs na_fedora19:/etc/ceph
    client.cephfs 100% 41 0.0KB/s 00:00
    • List all the keys on the Ceph cluster
    # ceph auth list                                               
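    In that listing, the new entry should look roughly like this (key truncated for illustration):
    client.cephfs
        key: AQBIh9NS...
        caps: [mon] allow r
        caps: [osd] allow rwx pool=cephfs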

    Option-1: Mount CephFS with the Kernel Driver

    • On the client machine, add a mount point entry in /etc/fstab. Provide the IP address of your Ceph monitor node and the path of the secret key file that we created above (an equivalent one-off mount command is sketched after the df output below)
    192.168.200.101:6789:/ /cephfs ceph name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime 0 2    
    • Mount the CephFS mount point. You might see a “mount: error writing /etc/mtab: Invalid argument” message, but you can ignore it and check df -h
    [root@na_fedora19 ceph]# mount /cephfs
    mount: error writing /etc/mtab: Invalid argument

    [root@na_fedora19 ceph]#
    [root@na_fedora19 ceph]# df -h
    Filesystem              Size  Used Avail Use% Mounted on
    /dev/vda1               7.8G  2.1G  5.4G  28% /
    devtmpfs                3.9G     0  3.9G   0% /dev
    tmpfs                   3.9G     0  3.9G   0% /dev/shm
    tmpfs                   3.9G  288K  3.9G   1% /run
    tmpfs                   3.9G     0  3.9G   0% /sys/fs/cgroup
    tmpfs                   3.9G  2.6M  3.9G   1% /tmp
    192.168.200.101:6789:/  419T  8.5T  411T   3% /cephfs
    [root@na_fedora19 ceph]#
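
    If you would rather test before editing /etc/fstab, the same kernel mount can be done as a one-off command (a sketch reusing the monitor address and secret file from above; mkdir is only needed if the mount point does not exist yet):
    # mkdir -p /cephfs
    # mount -t ceph 192.168.200.101:6789:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime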

    Option-2: Mount CephFS as FUSE

    • Copy the Ceph configuration file (ceph.conf) from the monitor node to the client node and make sure it has permissions of 644
    # scp ceph.conf na_fedora19:/etc/ceph
    # chmod 644 ceph.conf
    • Copy the secret file from the monitor node to the client node under /etc/ceph. This allows the filesystem to mount when cephx authentication is enabled (we have done this earlier)
    # scp client.cephfs na_fedora19:/etc/ceph
    client.cephfs 100% 41 0.0KB/s 00:00
    • Make sure you have the “ceph-fuse” package installed on the client machine
    # rpm -qa | grep -i ceph-fuse
    ceph-fuse-0.72.2-0.fc19.x86_64
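    If it is missing, it can be installed from the repository that ceph-deploy configured (assuming yum on Fedora 19):
    # yum -y install ceph-fuse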
    • To mount the Ceph filesystem as FUSE, use the ceph-fuse command
    [root@na_fedora19 ceph]# ceph-fuse -m 192.168.100.101:6789  /cephfs
    ceph-fuse[3256]: starting ceph client
    ceph-fuse[3256]: starting fuse
    [root@na_csc_fedora19 ceph]#

    [root@na_fedora19 ceph]# df -h
    Filesystem  Size  Used Avail Use% Mounted on
    /dev/vda1   7.8G  2.1G  5.4G  28% /
    devtmpfs    3.9G     0  3.9G   0% /dev
    tmpfs       3.9G     0  3.9G   0% /dev/shm
    tmpfs       3.9G  292K  3.9G   1% /run
    tmpfs       3.9G     0  3.9G   0% /sys/fs/cgroup
    tmpfs       3.9G  2.6M  3.9G   1% /tmp
    ceph-fuse   419T  8.5T  411T   3% /cephfs
    [root@na_fedora19 ceph]#
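
    To unmount either variant when you are done (standard commands, not from the original post):
    # umount /cephfs
    # fusermount -u /cephfs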
