Planet Ceph

Aggregated news from external sources

  • July 14, 2015
    OpenStack Summit Tokyo: Call for Speakers

    The next OpenStack Summit will take place in Tokyo, Japan from 27-30 October 2015. The Call for Speakers period has been open for a few days and will close on July 15th, 2015, 11:59 PM PDT (July 16th, 08:59 CEST). You can submit your presentations here. I my…

  • July 13, 2015
    Calamari Packages for Community

    Recently I have been playing around with Ceph Calamari, which is a management and monitoring system for Ceph storage clusters. It provides a beautiful Dashboard User Interface that makes Ceph cluster monitoring amazingly simple and handy.

    History

    Cal…

  • July 13, 2015
    oneliner to deploy teuthology on OpenStack

    Note: this is obsoleted by Ceph integration tests made simple with OpenStack. Teuthology can be installed as a dedicated OpenStack instance on OVH using the OpenStack backend with: nova boot \ --image 'Ubuntu 14.04' \ --flavor 'vps-ssd-1' \ --key-name … Continue reading

  • July 12, 2015
    Ceph Calamari Packages for Community

    Recently I have been playing around with Ceph Calamari, which is a management and monitoring system for Ceph storage clusters. It provides a beautiful Dashboard User Interface that makes Ceph cluster monitoring amazingly simple and handy.

    History

    Calamar…

  • July 11, 2015
    Deploying OpenStack KILO Using RDO


    Getting OpenStack up and running with RDO is fairly straightforward. However, many people have asked how to deploy OpenStack with an existing external network. This method should allow any machine on the network to access launched instances via their floating IPs.

    Environment

    • CentOS 7
    • OpenStack RDO KILO
    • Vagrant (optional)

    In this demo, we will use Vagrant to spin up two CentOS 7 VMs, node1 and node2. You can also use other machines or even physical servers.

    Step 1 – Creating virtual machines for OpenStack deployment

    • Get my version of Vagrantfile
    
    # wget https://gist.githubusercontent.com/ksingh7/85d887b92a448a042ca8/raw/372be2527bad24045b3a1764dee31e91074ecb50/Vagrantfile --output-document=Vagrantfile
    
    • Bring up virtual machines using Vagrant
    
    # vagrant up node1 node2
    
    • Once both machines are up, SSH into them and become root with sudo su -, for example:
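
    A minimal sketch of that login sequence, assuming the Vagrant machine names node1 and node2 from the Vagrantfile above:

    $ vagrant ssh node1            # log in to the first VM as the vagrant user
    [vagrant@node1 ~]$ sudo su -   # become root inside the VM
    [root@node1 ~]#

    Repeat the same two commands for node2.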

    Step 2 – Setting up OpenStack nodes

    • On both nodes, disable the CentOS 7 NetworkManager service and update the system packages
    
    # systemctl stop NetworkManager;systemctl disable NetworkManager;chkconfig network on;systemctl start network;yum update -y
    

    Step 3 – Setting up RDO

    • On node1, set up the RDO repositories and install Packstack
    
    # yum install -y https://rdoproject.org/repos/rdo-release.rpm ; yum install -y openstack-packstack
    

    Step 4 – Modify Packstack answerfile

    • Next, generate the Packstack answer file, keeping a few options we do not need disabled and enabling the Neutron ML2 type drivers.
    
    packstack \
    --provision-demo=n  \
    --nagios-install=n \
    --os-swift-install=n \
    --os-ceilometer-install=n \
    --os-neutron-ml2-type-drivers=vxlan,flat,vlan \
    --gen-answer-file=answerfile.cfg
    
    • Edit answerfile.cfg to add the IP addresses of the controller, compute, network and storage hosts, as well as of the AMQP and database services.
    
    CONFIG_CONTROLLER_HOST=10.0.1.10
    CONFIG_COMPUTE_HOSTS=10.0.1.10,10.0.1.11
    CONFIG_NETWORK_HOSTS=10.0.1.10
    CONFIG_STORAGE_HOST=10.0.1.10
    CONFIG_AMQP_HOST=10.0.1.10
    CONFIG_MARIADB_HOST=10.0.1.10
    CONFIG_MONGODB_HOST=10.0.1.10
    
    • Next, edit answerfile.cfg to add the public and private interface names
    
    CONFIG_NOVA_COMPUTE_PRIVIF=enp0s9
    CONFIG_NOVA_NETWORK_PUBIF=enp0s8
    CONFIG_NOVA_NETWORK_PRIVIF=enp0s9
    
    • Since we have multiple nodes to deploy OpenStack on, let's set up passwordless SSH between the nodes.
    
    # ssh-keygen
    # ssh-copy-id root@node1
    # ssh-copy-id root@node2
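
    A quick sanity check before running Packstack, which relies on passwordless root SSH to every host in the answer file (this assumes node1 and node2 resolve, as in the ssh-copy-id step above):

    # ssh root@node1 hostname     # should print node1 without prompting for a password
    # ssh root@node2 hostname     # should print node2 without prompting for a password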
    

    Step 5 – Installing OpenStack

    • Finally, start deploying OpenStack
    
    # packstack --answer-file=answerfile.cfg
    
    • Once the deployment is complete:
      • Get your OpenStack username and password from the keystonerc_admin file: # cat keystonerc_admin
      • Point your web browser to http://10.0.1.10/dashboard and log in to the OpenStack dashboard
      • You can also source the keystonerc_admin file to use the OpenStack CLI
    
    # source keystonerc_admin
    # openstack server list
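
    For reference, a keystonerc_admin file generated by Packstack looks roughly like the sketch below; the password is generated during deployment and the exact set of variables can differ slightly between RDO releases:

    export OS_USERNAME=admin
    export OS_TENANT_NAME=admin
    export OS_PASSWORD=replace-with-the-generated-password
    export OS_AUTH_URL=http://10.0.1.10:5000/v2.0/
    export PS1='[\u@\h \W(keystone_admin)]\$ '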
    

    Step 6 – Configure OVS external bridge (for floating IPs)

    • Create the OVS bridge interface by creating the file /etc/sysconfig/network-scripts/ifcfg-br-ex with the following contents
    
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=10.0.1.10   # IP address of enp0s8 interface
    NETMASK=255.255.255.0
    GATEWAY=10.0.1.1
    DNS1=8.8.8.8
    ONBOOT=yes
    
    • Configure enp0s8 for OVS bridging by editing /etc/sysconfig/network-scripts/ifcfg-enp0s8 so that it contains the following
    
    DEVICE=enp0s8
    TYPE=OVSPort
    DEVICETYPE=ovs
    OVS_BRIDGE=br-ex
    ONBOOT=yes
    
    • Modify the Neutron Open vSwitch plugin configuration to map "extnet", a logical name for our external physical L2 segment, to the br-ex bridge
    
    # openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
    
    • Restart networking services
    
    # service network restart
    # service neutron-openvswitch-agent restart
    # service neutron-server restart
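
    Optionally, verify that the bridge is up and that enp0s8 is attached to it (output and formatting will vary with your Open vSwitch version):

    # ovs-vsctl list-ports br-ex     # enp0s8 should show up as a port of the bridge
    # ip addr show br-ex             # 10.0.1.10/24 should now live on br-ex rather than on enp0s8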
    

    Step 7 – Create OpenStack networks for Instances

    • Create the Public (External) network
    
    # neutron net-create public_network --provider:network_type flat --provider:physical_network extnet  --router:external --shared
    
    • Create the Public (External) network subnet
    
    # neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=10.0.1.100,end=10.0.1.110 --gateway=10.0.1.1 public_network 10.0.1.0/24 --dns-nameservers list=true 8.8.8.8 4.2.2.2
    
    • Create the Private (Tenant) network
    
    # neutron net-create private_network
    
    • Create the Private (Tenant) network subnet
    
    # neutron subnet-create --name private_subnet private_network 10.15.15.0/24
    
    • Create Router
    
    # neutron router-create router1
    
    • Set the router gateway to the public network
    
    # neutron router-gateway-set router1 public_network
    
    • Add a router interface on the private network subnet
    
    # neutron router-interface-add router1 private_subnet
    
    • At this point you have configured the OpenStack networks, and your network topology should look like this:

    [Screenshot: network topology as shown in the OpenStack dashboard]
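
    You can also sanity-check the same topology from the CLI with the Kilo-era Neutron client (the IDs in your output will differ):

    # neutron net-list                     # should list public_network and private_network
    # neutron subnet-list                  # should list public_subnet and private_subnet
    # neutron router-port-list router1     # lists the ports attached to router1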

    Step 8 – Launch Instance

    • Add a glance image
    
    # curl http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img | glance image-create --name='cirros image' --is-public=true  --container-format=bare --disk-format=qcow2
    
    • From the OpenStack dashboard

      • Add key pair
        Projects --> Compute --> Access & Security --> Key Pairs --> Import Key Pair

        • Key Pair Name --> node1_key
        • Public Key --> contents of # cat /root/.ssh/id_rsa.pub
      • Create security group rules for ICMP and SSH
        Projects --> Compute --> Access & Security --> Security Groups --> default --> Manage Rules
    • Launch Instance

      • Get the private_network ID using # openstack network list
      • Create the instance (replace net-id with the network ID obtained above)
        # openstack server create --image="cirros image" --flavor=m1.tiny --key-name=node1 --nic net-id="288f9b1f-7453-4132-9dd4-8829a6844d73" Demo_Instance
      • Check instance status # openstack server list

    Step 9 – Accessing Instance

    • From the OpenStack dashboard, assign a floating IP to the instance: Projects --> Compute --> Instances --> Actions --> Associate Floating IP
    • Ping the floating IP address from node1: # ping 10.0.1.101
    • SSH into Demo_Instance: # ssh cirros@10.0.1.101
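
    If you prefer to stay on the command line, a rough equivalent with the Kilo-era clients looks like this; the allocated address comes from the 10.0.1.100-110 pool defined earlier and will likely differ in your run:

    # neutron floatingip-create public_network        # allocates a floating IP, e.g. 10.0.1.101
    # nova floating-ip-associate Demo_Instance 10.0.1.101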

    Tadaa… you are done!!! Play around, create several instances and test them against your workloads 😉

  • July 11, 2015
    Running your own Ceph integration tests with OpenStack

    Note: this is obsoleted by Ceph integration tests made simple with OpenStack The Ceph lab has hundreds of machines continuously running integration and upgrade tests. For instance, when a pull request modifies the Ceph core, it goes through a run … Continue reading

  • July 8, 2015
    configuring ansible for teuthology

    As of July 8th, 2015, teuthology (the Ceph integration test software) switched from using Chef to using Ansible. To keep it working, two files must be created. The /etc/ansible/hosts/group_vars/all.yml file with: modify_fstab: false The modify_fstab is necessary for OpenStack provisioned … Continue reading

  • July 8, 2015
    See what the Ceph client sees

    The title is probably weird and misleading, but I could not find a better one :).
    The idea here is to dive a little bit into what the kernel client sees for each client that has an RBD device mapped.
    In this article, we are focusing on the Kernel R…

  • July 6, 2015
    Ceph enable the object map feature

    The Hammer release brought support for a new feature for RBD images called object map.
    The object map tracks which blocks of the image are actually allocated and where.
    This is especially useful for operations on clones like resize, import, export…
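
    For illustration only, and assuming a Hammer-era rbd CLI that accepts the numeric --image-features bitmask (13 = layering 1 + exclusive-lock 4 + object-map 8), creating a format 2 image with the object map enabled and checking its feature list could look roughly like this (test-image is just a placeholder name):

    $ rbd create test-image --size 1024 --image-format 2 --image-features 13
    $ rbd info test-image | grep features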

  • July 3, 2015
    First China Ceph Day – Beijing Ceph Day

    Ceph is becoming more and more popular in China. Intel and Red Hat jointly held the Beijing Ceph Day at the Intel RYC office on June 6th, 2015. It attracted ~200 developers and end users from 120+ companies. Ten technical sessions were delivered during the event to share Ceph’s transformative power; it also focused on current problems of …Read more

  • June 28, 2015
    Bring persistent storage for your containers with KRBD on Kubernetes

    [Image: Bring persistent storage for your containers with KRBD on Kubernetes (http://sebastien-han.fr/images/kubernetes-ceph-krbd.png)]

    Use an RBD device to provide persistent storage to your containers.
    This work was initiated by a colleague of mine, Huamin Chen.
    I would like to take the opportunity to thank him for the troubleshooting sessions we had.
    Having the ability to use persistent volumes for your containers is critical: containers can be ephemeral since they are immutable.
    If they die on a machine, they can be bootstrapped on another host without any problem.
    The only thing we need to ensure is that the data that comes with a container follows it no matter where it goes.
    This is exactly what we want to achieve with this implementation.

    Prerequisites

    This article assumes that your Kubernetes environment is up and running.
    First, install Ceph on your host:

    $ sudo yum install -y ceph-common
    

    Important: the version of ceph-common must be >= 0.87.
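
    A quick way to double-check which version you got:

    $ rpm -q ceph-common    # version of the installed package
    $ ceph --version        # version reported by the ceph CLI itself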

    Set up your Ceph environment:

    $ sudo docker run -d \
    --net=host \
    -v /var/lib/ceph:/var/lib/ceph \
    -v /etc/ceph:/etc/ceph \
    -e MON_IP=192.168.0.1 \
    -e CEPH_NETWORK=192.168.0.0/24 \
    ceph/demo
    

    Several actions are not handled by Kubernetes, such as:

    • Creating the RBD volume
    • Creating a filesystem on that volume

    So let’s do this first:

    $ sudo rbd create foo -s 1024
    $ sudo rbd map foo
    /dev/rbd0
    $ sudo mkfs.ext4 /dev/rbd0
    $ sudo rbd unmap /dev/rbd0
    

    Configure Kubernetes

    First, we clone Kubernetes repository to get some handy file examples:

    $ git clone https://github.com/GoogleCloudPlatform/kubernetes.git
    $ cd kubernetes/examples/rbd
    

    Get your client.admin key and encode it in base64:

    $ sudo ceph auth get-key client.admin
    AQBAMo1VqE1OMhAAVpERPcyQU5pzU6IOJ22x1w==
    
    $ echo "AQBAMo1VqE1OMhAAVpERPcyQU5pzU6IOJ22x1w==" | base64
    QVFCQU1vMVZxRTFPTWhBQVZwRVJQY3lRVTVwelU2SU9KMjJ4MXc9PQo=
    

    Note: it is not mandatory to use the client.admin key; you can use any key you want as long as it has the appropriate permissions on the given pool.

    Edit secret/ceph-secret.yaml with the base64-encoded key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    data:
      key: QVFCQU1vMVZxRTFPTWhBQVZwRVJQY3lRVTVwelU2SU9KMjJ4MXc9PQo=
    

    Add your secret to Kubernetes:

    $ kubectl create -f secret/ceph-secret.yaml
    $ kubectl get secret
    NAME                  TYPE                                  DATA
    ceph-secret           Opaque                                1
    

    Now, we edit our rbd-with-secret.json pod file.
    This file describes the content of your pod:

    {
        "apiVersion": "v1beta3",
        "id": "rbdpd2",
        "kind": "Pod",
        "metadata": {
            "name": "rbd2"
        },
        "spec": {
            "containers": [
                {
                    "name": "rbd-rw",
                    "image": "kubernetes/pause",
                    "volumeMounts": [
                        {
                            "mountPath": "/mnt/rbd",
                            "name": "rbdpd"
                        }
                    ]
                }
            ],
            "volumes": [
                {
                    "name": "rbdpd",
                    "rbd": {
                        "monitors": [
                            "192.168.0.1:6789"
                        ],
                        "pool": "rbd",
                        "image": "foo",
                        "user": "admin",
                        "secretRef": {
                            "name": "ceph-secret"
                        },
                        "fsType": "ext4",
                        "readOnly": true
                    }
                }
            ]
        }
    }
    

    The relevant sections are:

    • mountPath: where to mount the RBD image; this mountpoint must exist
    • monitors: addresses of the monitors (you can have as many as you want)
    • pool: the pool used to store your image
    • image: name of the image
    • secretRef: name of the secret
    • fsType: filesystem type of the image

    Now it’s time to fire up your pod:

    $ kubectl create -f rbd-with-secret.json
    $ kubectl get pods
    NAME      READY     REASON    RESTARTS   AGE
    rbd2      1/1       Running   0          1m
    

    Check the running containers:

    $ docker ps
    CONTAINER ID        IMAGE                                  COMMAND             CREATED             STATUS              PORTS               NAMES
    61e12752d0e9        kubernetes/pause:latest                "/pause"            18 minutes ago      Up 18 minutes                           k8s_rbd-rw.1d89132d_rbd2_default_bd8b2bb0-1c0d-11e5-9dcf-b4b52f63c584_f9954e16
    e7b1c2645e8f        gcr.io/google_containers/pause:0.8.0   "/pause"            18 minutes ago      Up 18 minutes                           k8s_POD.e4cc795_rbd2_default_bd8b2bb0-1c0d-11e5-9dcf-b4b52f63c584_ac64e07c
    e9dfc079809f        ceph/demo:latest                       "/entrypoint.sh"    3 hours ago         Up 3 hours                              mad_ardinghelli
    

    Everything seems to be working well; let’s check the device status on the Kubernetes host:

    $ sudo rbd showmapped
    id pool image snap device
    0  rbd  foo   -    /dev/rbd0
    

    The image got mapped; now let’s check where it got mounted:

    $ mount |grep kube
    /dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/rbd-image-foo type ext4 (ro,relatime,stripe=1024,data=ordered)
    /dev/rbd0 on /var/lib/kubelet/pods/bd8b2bb0-1c0d-11e5-9dcf-b4b52f63c584/volumes/kubernetes.io~rbd/rbdpd type ext4 (ro,relatime,stripe=1024,data=ordered)
    

    Further work and known issue

    The current implementation is here, and it is good to see that such a thing got merged.
    It will be easier in the future to follow up on that original work.
    The “v2” will ease operators’ lives, since they won’t need to pre-populate RBD images and filesystems.

    There is currently a bug where pod creation fails if the mount point does not exist.
    This is fixed in Kubernetes 0.20.

    I hope you will enjoy this as much as I do 🙂

  • June 26, 2015
    Jewel – Ceph Developer Summit

    The next (virtual) Ceph Developer Summit is coming. The agenda has finally been announced for the 1st and 2nd of July 2015. The first day starts at 07:00 PDT (16:00 CEST) and the second day starts at 18:00 PDT on 2 July, or rather at 03:00 CEST on 3 July…
