Planet Ceph

Aggregated news from external sources

  • May 24, 2018
    How to Survive an OpenStack Cloud Meltdown with Ceph

    Los Tres Caballeros —sans sombreros— descended on Vancouver this week to participate in the “Rocky” OpenStack Summit. For the assembled crowd of clouderati, Sébastien Han, Sean Cohen and yours truly had one simple question: what if your datacenter was wiped out in its entirety, but your users hardly even noticed? We have touched on the …Read more

  • May 22, 2018
    OpenStack Summit Vancouver: How to Survive an OpenStack Cloud Meltdown with Ceph

    Date: 22/05/18 Video: Source: Sebastian Han (OpenStack Summit Vancouver: How to Survive an OpenStack Cloud Meltdown with Ceph)

  • May 17, 2018
    See you at the OpenStack Summit

    Next week is the OpenStack Summit. I will be attending and giving a talk, How to Survive an OpenStack Cloud Meltdown with Ceph. See you there! Source: Sebastian Han (See you at the OpenStack Summit)

  • May 7, 2018
    Crypto Unleashed

    Cryptography made easy…er Cryptography does not have to be mysterious, as Jean-Philippe Aumasson, author of Serious Cryptography, points out. It is meant to be fiendishly complex to break, and it remains very challenging to implement (see the jokes about rolling your own crypto found all over the Net), but it is well within the grasp …Read more

  • May 6, 2018
    See you at the Red Hat summit

    I will be attending the Red Hat summit, as I’m co-presenting a lab. The goal of the lab is to deploy a hyperconverged (HCI) OpenStack environment with Ceph. See you in San Francisco! Source: Sebastian Han (See you at the Red Hat summit)

  • April 30, 2018
    Ceph Nano big updates

    With its two latest versions (v1.3.0 and v1.4.0), Ceph Nano brings some nifty new functionality that I’d like to highlight in this article. Multi-cluster support This feature has been available since v1.3.0. You can now run more than a single instance of cn; run as many as your system allows (CPU …Read more
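    The multi-cluster support can be driven straight from the CLI. A minimal sketch, assuming the cn binary (v1.3.0 or newer) and a running Docker daemon; the cluster names are made up, and the exact subcommand spelling is from my recollection of the cn CLI and may vary by version:

    ```shell
    # start two independent Ceph Nano clusters side by side;
    # each one runs in its own container with its own ports
    cn cluster start demo-a
    cn cluster start demo-b

    # list the cn clusters currently running on this host
    cn cluster ls

    # remove one cluster without touching the other
    cn cluster purge demo-a
    ```

    Running these requires a working Docker setup, so treat them as an illustration of the workflow rather than a copy-paste recipe.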

  • April 11, 2018

    Preface: cosbench is very powerful, but how to configure it is not always obvious. This post walks through the configuration process for this kind of test, along with a few caveats, so you do not end up unable to express your own workload model. Installation: cosbench runs as one controller driving several drivers that issue requests against the rgw backend. Download the latest release: [root@lab102 cosbench]# unzip [root@lab102 cosbench]# yum install java-1.7.0-openjdk nmap-ncat The number of workloads that may run at the same time is set by the controller parameter concurrency=1; the default of one keeps a single workload per machine, which ensures a single host’s hardware resources are sufficient. Create an S3 user: [root@lab101 ~]# radosgw-admin user create --uid=test1 --display-name="test1" --access-key=test1 --secret-key=test1 { "user_id": "test1", "display_name": "test1", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": [], "keys": [ { "user": "test1", "access_key": "test1", "secret_key": "test1" } ], "swift_keys": [], …Read more
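    For reference, a cosbench workload file is what ties the controller to the rgw S3 endpoint. A minimal sketch, assuming the test1 user created above and a made-up endpoint host; the selector syntax follows the cosbench workload schema as I recall it and should be checked against the bundled examples:

    ```xml
    <workload name="rgw-s3-test" description="hypothetical smoke test against rgw">
      <!-- credentials match the radosgw-admin user created above -->
      <storage type="s3"
               config="accesskey=test1;secretkey=test1;endpoint=http://lab101:7480" />
      <workflow>
        <workstage name="init">
          <!-- create two buckets named s3test1, s3test2 -->
          <work type="init" workers="1" config="cprefix=s3test;containers=r(1,2)" />
        </workstage>
        <workstage name="main">
          <!-- 50/50 read/write mix of 4 KB objects for 60 seconds -->
          <work name="mixed" workers="4" runtime="60">
            <operation type="write" ratio="50"
                       config="cprefix=s3test;containers=u(1,2);objects=u(1,100);sizes=c(4)KB" />
            <operation type="read" ratio="50"
                       config="cprefix=s3test;containers=u(1,2);objects=u(1,100)" />
          </work>
        </workstage>
        <workstage name="cleanup">
          <work type="cleanup" workers="1"
                config="cprefix=s3test;containers=r(1,2);objects=r(1,100)" />
        </workstage>
      </workflow>
    </workload>
    ```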

  • April 11, 2018

    Preface: I first came across this in the Luminous monitoring platform, where an iSCSI gateway showed up, but I could not find much of an introduction to it, so I tracked down some material through the interfaces. At the time a lot of it required a new kernel and newer package versions, so I never managed to configure it. Now that the kernel has iterated through a few minor releases, I have verified that it can be brought up. This post only gets things running; performance comparisons will have to be done separately. Walkthrough Architecture The diagram (borrowed from Red Hat) can be read as a multipath implementation. What is different from before is mainly the new tcmu-runner, a daemon that handles the userspace side of LIO TCM backstores. It adds a userspace driver layer on top of the kernel, so a backend only needs to implement the tcmu interface instead of talking to the kernel directly. Required software: a Ceph Luminous (or newer) cluster; RHEL/CentOS 7.5, or a Linux kernel v4.16 or newer; plus the control software: targetcli-2.1.fb47 or newer, python-rtslib-2.1.fb64 or newer, tcmu-runner-1.3.0 or newer, ceph-iscsi-config-2.4 or newer, ceph-iscsi-cli-2.5 or newer. Those are what this environment needs; the versions I used, bundled under a single download path, are: kernel-4.16.0-0.rc5.git0.1 targetcli-fb-2.1.fb48 python-rtslib-2.1.67 tcmu-runner-1.3.0-rc4 ceph-iscsi-config-2.5 ceph-iscsi-cli-2.6 Download link: password: m09k. If other versions were installed previously, remove them first, and deploy an up-to-date Luminous cluster beforehand. Parameters the official documentation recommends tuning: # ceph tell osd.* injectargs '--osd_client_watch_timeout 15' # ceph tell osd.* injectargs '--osd_heartbeat_grace 20' # ceph …Read more
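    The tuning quoted at the end of the excerpt can also be made persistent. The injectargs values come from the post itself; the ceph.conf stanza is my assumption of the equivalent persistent form, so verify the option names against your release:

    ```shell
    # runtime change on all OSDs, as recommended for the iSCSI gateway
    ceph tell osd.* injectargs '--osd_client_watch_timeout 15'
    ceph tell osd.* injectargs '--osd_heartbeat_grace 20'

    # assumed persistent equivalent, placed in /etc/ceph/ceph.conf under [osd]:
    #   osd client watch timeout = 15
    #   osd heartbeat grace = 20
    ```

    These commands need a live Luminous cluster with admin credentials, so they are a sketch of the procedure rather than something runnable standalone.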

  • April 9, 2018
    Ceph Dashboard v2 update

    It’s been a little over a month now since we reached Milestone 1 (feature parity with Dashboard v1), which was merged into the Ceph master branch on 2018-03-06. After the initial merge, we had to resolve a few build- and packaging-related issues to streamline the ongoing development, testing, and packaging of the new dashboard …Read more

  • April 1, 2018
    Ansible module to manage CephX Keys

    Following our recent initiative on writing more Ceph modules for Ceph Ansible, I’d like to introduce one that I recently wrote: ceph_key. The module is pretty straightforward to use and will ease your day-two operations for managing CephX keys. It has several capabilities, such as: create: will create the key on the filesystem with …Read more
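    A playbook task using the module might look like the following. This is only a sketch built from the description above; the exact parameter names (caps, cluster) and the capability strings are assumptions about the ceph_key interface, so check them against the ceph-ansible module documentation:

    ```yaml
    # hypothetical task: ensure a CephX key exists for an RBD client
    - name: create a CephX key for an application
      ceph_key:
        name: client.rbd-app
        state: present
        cluster: ceph
        caps:
          mon: "profile rbd"
          osd: "profile rbd pool=rbd"
    ```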

  • March 28, 2018
    The Ceph MON synchronization (election)

    Recently I was asked a question about Ceph that I wasn’t entirely sure how to answer. It had to do with how the synchronization (election) process works between monitors. I had an idea, but wasn’t quite sure, so here is a quick synopsis of what I found out. …Read more
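    While the post digs into the internals, the outcome of an election can be observed from the standard CLI (output shape varies by release, and both commands need a running cluster):

    ```shell
    # show which monitors are in quorum and which one won the
    # election (reported as the quorum leader)
    ceph quorum_status --format json-pretty

    # quick view of the monitors and their ranks;
    # the lowest rank is the current leader
    ceph mon stat
    ```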

  • March 26, 2018
    Handling app signals in containers

    A year ago, I described how we were debugging our Ceph containers; today I’m back with yet another great thing we wrote :). Sometimes, when a process running within a container receives a signal, you might want to do something before or after its termination. That’s what we are going …Read more
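    As a trivial illustration of the general idea (not the authors’ actual tooling), a shell entrypoint can install a trap so that a hook runs before the process honors SIGTERM; the hook name and messages here are made up:

    ```shell
    #!/bin/sh
    # hypothetical pre-stop hook: in a real container this might flush
    # state or deregister the daemon before termination
    cleanup() { echo "pre-stop hook ran"; }

    # run the hook when the container runtime delivers SIGTERM
    trap cleanup TERM

    # simulate the runtime stopping the container: the trap fires,
    # the hook runs, and the script then continues past the signal
    kill -TERM $$
    echo "main continues after the hook"
    ```

    In a real image this trap would live in the entrypoint script wrapping the daemon, so the hook runs no matter how the container is stopped.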