Planet Ceph

Aggregated news from external sources

  • August 9, 2013
    Samba Shadow_copy and Ceph RBD

    I added a script to create snapshots on RBD for use with the Samba shadow_copy2 module.
    For more details, see https://github.com/ksperis/autosnap-rbd-shadow-copy

    How to use:

    Before starting, you need a running Ceph cluster and Samba installed.
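
    To confirm that Samba is installed (a quick check that is not in the original post; smbd simply prints its version):

    $ smbd -V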

    Verify admin access to the Ceph cluster (this should not return an error):

    $ rbd ls

    Get the script:

    $ mkdir -p /etc/ceph/scripts/
    $ cd /etc/ceph/scripts/
    $ wget https://raw.github.com/ksperis/autosnap-rbd-shadow-copy/master/autosnap.conf
    $ wget https://raw.github.com/ksperis/autosnap-rbd-shadow-copy/master/autosnap.sh
    $ chmod +x autosnap.sh

    Create a block device:

    $ rbd create myshare --size=1024
    $ echo "myshare" >> /etc/ceph/rbdmap
    $ /etc/init.d/rbdmap reload
    [ ok ] Starting RBD Mapping: rbd/myshare.
    [ ok ] Mounting all filesystems...done.

    Format the block device:

    $ mkfs.xfs /dev/rbd/rbd/myshare
    log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
    log stripe unit adjusted to 32KiB
    meta-data=/dev/rbd/rbd/myshare   isize=256    agcount=9, agsize=31744 blks
             =                       sectsz=512   attr=2, projid32bit=0
    data     =                       bsize=4096   blocks=262144, imaxpct=25
             =                       sunit=1024   swidth=1024 blks
    naming   =version 2              bsize=4096   ascii-ci=0
    log      =internal log           bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=8 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0

    Mount the share:

    $ mkdir /myshare
    $ echo "/dev/rbd/rbd/myshare /myshare xfs defaults 0 0" >> /etc/fstab
    $ mount /myshare

    Add this section to your /etc/samba/smb.conf:

    [myshare]
        path = /myshare
        writable = yes
        vfs objects = shadow_copy2
        shadow:snapdir = .snapshots
        shadow:sort = desc
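
    Before reloading, you can optionally sanity-check the configuration with testparm (a suggestion beyond the original post; testparm parses smb.conf and reports any errors):

    $ testparm -s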

    Reload Samba:

    $ /etc/init.d/samba reload

    Create the snapshot directory and run the script:

    $ mkdir -p /myshare/.snapshots
    $ /etc/ceph/scripts/autosnap.sh
    * Create snapshot for myshare: @GMT-2013.08.09-10.16.10-autosnap
    synced, no cache, snapshot created.
    * Shadow Copy to mount for rbd/myshare :
    GMT-2013.08.09-10.14.44

    Verify that the first snapshot is correctly mounted:

    $ mount | grep myshare
    /dev/rbd1 on /myshare type xfs (rw,relatime,attr2,inode64,sunit=8192,swidth=8192,noquota)
    /dev/rbd2 on /myshare/.snapshots/@GMT-2013.08.09-10.14.44 type xfs (ro,relatime,nouuid,norecovery,attr2,inode64,sunit=8192,swidth=8192,noquota)

    You can also add this to the crontab to run the script every day:

    $ echo "00 0    * * *   root    /bin/bash /etc/ceph/scripts/autosnap.sh" >> /etc/crontab
  • August 3, 2013
    Test Ceph Persistent Rbd Device

    Create a persistent rbd device

    Create a block device and map it with /etc/ceph/rbdmap:

    $ rbd create rbd/myrbd --size=1024
    $ echo "rbd/myrbd" >> /etc/ceph/rbdmap
    $ service rbdmap reload
    [ ok ] Starting RBD Mapping: rbd/myrbd.
    [ ok ] Mounting all filesystems...done.

    View mapped RBD devices:

    $ rbd showmapped
    id pool image snap device    
    1  rbd  myrbd -    /dev/rbd1

    Create the filesystem and mount it:

    $ mkfs.xfs /dev/rbd/rbd/myrbd 
    log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
    log stripe unit adjusted to 32KiB
    meta-data=/dev/rbd/rbd/myrbd     isize=256    agcount=9, agsize=31744 blks
             =                       sectsz=512   attr=2, projid32bit=0
    data     =                       bsize=4096   blocks=262144, imaxpct=25
             =                       sunit=1024   swidth=1024 blks
    naming   =version 2              bsize=4096   ascii-ci=0
    log      =internal log           bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=8 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    
    $ mkdir -p /mnt/myrbd
    $ blkid | grep rbd1
    /dev/rbd1: UUID="a07e969e-bb1a-4921-9171-82cf7a737a69" TYPE="xfs"
    $ echo "UUID=a07e969e-bb1a-4921-9171-82cf7a737a69 /mnt/myrbd xfs defaults 0 0" >> /etc/fstab
    $ mount -a

    Check:

    $ mount | grep rbd1
    /dev/rbd1 on /mnt/myrbd type xfs (rw,relatime,attr2,inode64,sunit=8192,swidth=8192,noquota)

    Test snapshot

    $ touch /mnt/myrbd/v1

    Make a snapshot:

    $ sync && xfs_freeze -f /mnt/
    $ rbd snap create rbd/myrbd@snap1
    $ xfs_freeze -u /mnt/

    Change a file:

    $ mv /mnt/myrbd/v1 /mnt/myrbd/v2

    Mount the snapshot read-only:

    $ mkdir -p /mnt/myrbd@snap1
    $ rbd map rbd/myrbd@snap1
    $ mount -t xfs -o ro,norecovery,nouuid "/dev/rbd/rbd/myrbd@snap1" "/mnt/myrbd@snap1"

    $ ls "/mnt/myrbd"
    total 0
    v2

    OK.

    $ ls "/mnt/myrbd@snap1"
    total 0

    Nothing? Did something go wrong with the sync?

    Try again:

    $ sync && xfs_freeze -f /mnt/
    $ rbd snap create rbd/myrbd@snap2
    $ xfs_freeze -u /mnt/
    $ mkdir -p /mnt/myrbd@snap2
    $ rbd map rbd/myrbd@snap2
    $ mount -t xfs -o ro,norecovery,nouuid "/dev/rbd/rbd/myrbd@snap2" "/mnt/myrbd@snap2"

    Move the file again:

    $ mv /mnt/myrbd/v2 /mnt/myrbd/v3

    $ ls /mnt/myrbd@snap2
    total 0
    v2
    $ ls /mnt/myrbd
    total 0
    v3

    All right.

    Stop rbdmap (this will unmap all mapped RBD devices):

    $ service rbdmap remove

    Remove the line added to /etc/ceph/rbdmap.
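
    For example, with sed (a one-line sketch; adjust the pattern if your rbdmap file contains other entries):

    $ sed -i '/^rbd\/myrbd/d' /etc/ceph/rbdmap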

    Remove myrbd:

    $ rbd snap purge rbd/myrbd
    Removing all snapshots: 100% complete...done.
    $ rbd rm rbd/myrbd
    Removing image: 100% complete...done.
  • August 2, 2013
    Don’t Forget to Unmap Before Removing Rbd

    $ rbd rm rbd/myrbd
    Removing image: 99% complete...failed.
    2013-08-02 14:07:17.530470 7f3ba2692760 -1 librbd: error removing header: (16) Device or resource busy
    rbd: error: image still has watchers
    This means the image is still open or the clien…
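
    The fix implied by the title is to unmap the device before removing the image; a minimal sketch, assuming the image was mapped with the kernel client on this host:

    $ rbd unmap /dev/rbd/rbd/myrbd
    $ rbd rm rbd/myrbd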

  • July 30, 2013
    Convert RBD to Format V2

    Simple Import / Export

    Don’t forget to stop I/O before the sync, and unmap the rbd before renaming.

    $ rbd export rbd/myrbd - | rbd import --image-format 2 - rbd/myrbd_v2
    $ rbd mv rbd/myrbd rbd/myrbd_old
    $ rbd mv rbd/myrbd_v2 rbd/myrbd

    Check:
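
    One plausible check (a sketch assuming the standard rbd info output, which includes a format line):

    $ rbd info rbd/myrbd | grep format
    format: 2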

  • July 30, 2013
    Remove Snapshots Before Removing Rbd

    $ rbd rm rbd/myrbd
    2013-07-30 14:10:13.341184 7f9e11922760 -1 librbd: image has snapshots - not removing
    Removing image: 0% complete...failed.
    rbd: image has snapshots - these must be deleted with 'rbd snap purge' before the …

  • July 30, 2013
    Using Ceph-deploy

    Install the Ceph cluster

    On each node:

    Create a user "ceph" and configure sudo for passwordless access:

    $ useradd -d /home/ceph -m ceph
    $ passwd ceph
    $ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    $ chmod 0440 /e…

  • July 26, 2013
    Ceph: update Cephx Keys

    It’s not really clear from the command line

    Generate a dummy key for the exercise:

    $ ceph auth get-or-create client.dummy mon 'allow r' osd 'allow rwx pool=dummy'

    [client.dummy]
    key = AQAPiu1RCMb4CxAAmP7rrufwZP…
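
    The actual update is normally done with ceph auth caps, which replaces the capabilities of an existing key; a hedged example using the dummy client above (the new pool name is illustrative):

    $ ceph auth caps client.dummy mon 'allow r' osd 'allow rwx pool=anotherpool'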

  • July 11, 2013
    Inktank Presenting on Ceph at FinTech Demo Day!

    It’s been quite a year for Inktank and the Ceph community. We are super excited to announce another major milestone for Inktank – our participation in the third annual FinTech Innovation Lab in New York City. The goal of the Lab – established in 2010 by Accenture and the Partnership Fund for New York City […]

  • June 23, 2013
    What I think about CephFS in OpenStack

    I recently had some really interesting questions that led to some nice discussions.
    Since I received the same question twice, I thought it might be good to share the matter with the community.

    The question was pretty simple and obviously the context…

  • June 11, 2013
    Ceph RBD Online Resize

    Extend an rbd drive with libvirt and XFS

    First, resize the device on the physical host.

    Get the current size:

    $ qemu-img info -f rbd "rbd:rbd/myrbd"

    Be careful: you must specify a larger size; shrinking a volume is destructive for the filesystem.

    $ qemu-img resize -f rbd "rbd:rbd/myrbd" 600G

    List the devices defined for myVM:

    $ virsh domblklist myVM

    Resize the libvirt block device:

    $ virsh blockresize --domain myVM --path vdb --size 600G
    $ rbd info rbd/myrbd

    Extend XFS on the guest:

    $ xfs_growfs /mnt/rbd/myrbd

    Extend rbd with the kernel module

    You need at least kernel 3.10 on the Ceph client to support resizing.
    For previous versions, see http://dachary.org/?p=2179
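
    A quick way to check the client's kernel version:

    $ uname -r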

    Get the current size:

    $ rbd info rbd/myrbd

    Just do:

    $ rbd resize rbd/myrbd --size 600000
    $ xfs_growfs /mnt/rbd/myrbd

    Also, since Cuttlefish you can’t shrink a block device without specifying an additional option (--allow-shrink).
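
    For example (a sketch; as noted above, shrinking below the size used by the filesystem is destructive):

    $ rbd resize rbd/myrbd --size 500000 --allow-shrink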

  • June 3, 2013
    Ceph integration in OpenStack: Grizzly update and roadmap for Havana

    What a perfect picture, a Cephalopod smoking a cigar! Updates!
    The OpenStack developer summit was great and obviously one of the most exciting sessions was the one about the Ceph integration with OpenStack.
    I had the great pleasure to attend this sess…

  • May 16, 2013
    ViPR: A software-defined storage mullet?

    Almost every few weeks, new storage products are announced by competitors and I generally avoid commenting on them. But EMC’s ViPR announcement contains attempts to perform both marketing and technical sleight of hand around software-defined storage that potentially do much to slow down the inevitable change that is coming to the storage market. While EMC […]
