Planet Ceph

Aggregated news from external sources

  • July 12, 2017
    Choosing the right storage for your OpenStack cloud

    Choosing a storage solution for OpenStack is an interesting problem, and the complexity of that choice ripples across the entire design of your cloud. I was honored to be able to share Red Hat’s views on the matter in a very well attended webinar this week. My colleagues Rahul Vijayan and Sadique …Read more

  • July 7, 2017
    Update on the state of Ceph Support in openATTIC 3.x (July 2017)

    A little less than a month ago, Lenz Grimmer gave an overview of the current state of our development in openATTIC 3.x. We have made a lot of good progress in the meantime, and I’m very proud to announce that NFS Gateway Management, RGW Bucket Management and Prometheus/Grafana made it into our newest openATTIC 3.3.0 …Read more

  • June 28, 2017
    Ceph Luminous new feature: smart disk grouping (CRUSH device classes)

    Preface: This post introduces a new Luminous feature for smart disk grouping, which Ceph calls the CRUSH class. I call it smart disk grouping myself, because what it does is associate an attribute with each disk according to its type and then classify the disks, which removes a lot of manual work. Previously, when we wanted to separate SSDs from HDDs, we had to make extensive changes to the CRUSH map and then bind different pools to different CRUSH trees; that logic is now much simpler. ceph osd crush class {create,rm,ls} manage the new CRUSH device class feature. ceph osd crush set-device-class will set the class for a particular device. Each OSD can now have a device class associated with it (e.g., hdd or ssd), allowing CRUSH rules to trivially map data to a subset of devices in the …Read more
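    A minimal sketch of the device class workflow described above; the OSD IDs, rule name and pool name are illustrative placeholders, not values taken from the post:

        ceph osd crush set-device-class ssd osd.0 osd.1               # tag OSDs by media type
        ceph osd crush class ls                                       # list known device classes
        ceph osd crush rule create-replicated fast default host ssd   # replicated rule limited to ssd OSDs
        ceph osd pool set mypool crush_rule fast                      # point a pool at that rule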

  • June 27, 2017
    The new Ceph container demo is super dope!

    I have recently been working on refactoring our Ceph container images. We used to have two separate images for daemon and demo. Recently, for Luminous, I decided to merge the demo container into daemon. It makes everything easier: the code is in a single place, and we only have a single image to test with the CI …Read more
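    A hedged sketch of launching the all-in-one demo from the merged daemon image; the monitor IP and public network below are placeholders for your environment, based on the ceph-container documentation rather than on this post:

        docker run -d --net=host \
          -v /etc/ceph:/etc/ceph \
          -e MON_IP=192.168.0.20 \
          -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
          ceph/daemon demo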

  • June 26, 2017
    Using the new dashboard in ceph-mgr

    The upcoming Ceph Luminous (12.2.0) release features the new ceph-mgr daemon, which ships with a few default plugins. One of these plugins is a dashboard that gives you a graphical overview of your cluster. Enabling the module: to enable the dashboard, you have to enable the module in your /etc/ceph/ceph.conf on all machines running the ceph-mgr daemon. …Read more
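    The excerpt cuts off before the actual configuration snippet, so as a hedged sketch (not the post's exact ceph.conf change), this is how the module can be enabled at runtime on Luminous 12.2.x and where to find its URL:

        ceph mgr module enable dashboard   # load the dashboard plugin in ceph-mgr
        ceph mgr services                  # prints the URL it listens on (port 7000 by default)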

  • June 25, 2017
    Ceph Luminous new feature: the built-in dashboard

    Preface: The Ceph Luminous release adds a lot of interesting features, and since it is also a long-term support release these new features are well worth looking forward to. From the reworked underlying storage to the new messaging layer and the completion of features that had not been implemented before, all of this makes Ceph stronger, and many of the core modules come from developers in China. I plan to give a short introduction to these new features in a series of posts, which is also a learning process for myself.
    Configuration: set up a Chinese Ceph mirror by editing /etc/yum.repos.d/ceph.repo:

        [ceph]
        name=ceph
        baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
        gpgcheck=0
        [ceph-noarch]
        name=cephnoarch
        baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
        gpgcheck=0

    After adding it, refresh the cache with yum makecache. A while ago the Ceph packages disappeared from the 163 mirror, probably by accident; the mirror has since been restored. The repository added above points at the latest Luminous release, and the feature covered in this post was only added in that version.
    Install the Ceph packages and check the version:

        [root@lab8106 ~]# yum install ceph-deploy ceph
        [root@lab8106 ~]# ceph -v
        ceph version 12.1.0 (262617c9f16c55e863693258061c5b25dea5b086) luminous (dev)

    Set up a cluster: I won't describe the cluster setup steps here, since there is plenty of material online and it is a very basic operation. A few important changes in Luminous are worth noting: the default messenger changed from simple to async (ms_type = async+posix), the default object store changed from filestore to bluestore, and the output of the ceph -s command has changed, as shown here:

        [root@lab8106 ceph]# ceph -s
          cluster:
            id:     49ee8a7f-fb7c-4239-a4b7-acf0bc37430d
            health: HEALTH_OK
          services:
            mon: 1 daemons, quorum lab8106
            mgr: lab8106(active)
            osd: 2 …Read more

  • June 21, 2017
    openATTIC 3.4.1 has been released

    We are very happy to announce the release of openATTIC version 3.4.1. In this version we have completely removed Nagios/PNP4Nagios graphs from the UI and installation in favor of Prometheus/Grafana. We’ve continued with the integration of Ceph Luminous features. The ‘allow_ec_overwrites’ flag can now be set when creating erasure coded pools via the REST API. The …Read more
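    The excerpt does not show the REST call itself; as a hedged sketch, this is the underlying Ceph Luminous setting the flag corresponds to, on a hypothetical erasure-coded pool:

        ceph osd pool create ecpool 64 64 erasure           # create an EC pool (name and PG counts are placeholders)
        ceph osd pool set ecpool allow_ec_overwrites true   # allow overwrites so RBD/CephFS can use the EC pool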

  • June 19, 2017
    OpenStack Cinder configure replication API with Ceph

    I just figured out that there hasn’t been much coverage of that functionality, even though we presented it last year at the OpenStack Summit. I. Rationale: what follows is useful in the context of disaster recovery. This functionality was implemented during the Ocata cycle for the v2.1 replication in the RBD driver. In …Read more
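    A hedged sketch of what a v2.1 replication setup typically looks like for the RBD driver; the backend name, secondary cluster conf path, user and host are placeholders, not values from the post:

        # add to the RBD backend section of /etc/cinder/cinder.conf:
        [ceph]
        volume_driver = cinder.volume.drivers.rbd.RBDDriver
        rbd_pool = volumes
        replication_device = backend_id:secondary, conf:/etc/ceph/secondary.conf, user:cinder

        # then fail the backend over to the replication target (v2.1 replication API):
        cinder failover-host controller@ceph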

  • June 14, 2017
    Ceph Docker better support for Bluestore

    I have been working extensively on ceph-docker for the last few months and it’s getting better. With the upcoming arrival of Ceph Luminous (the next LTS), Bluestore will be the default backend to store objects. Thus, I had to spend some time improving the support for Bluestore. Now, if you want to prepare a …Read more
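    A hedged sketch of preparing a Bluestore OSD with the ceph/daemon image; the device path, the osd_ceph_disk scenario and the OSD_DEVICE/OSD_BLUESTORE variables are assumptions based on the ceph-docker README, not commands quoted from the post:

        docker run -d --privileged=true --pid=host \
          -v /etc/ceph:/etc/ceph \
          -v /var/lib/ceph:/var/lib/ceph \
          -v /dev:/dev \
          -e OSD_DEVICE=/dev/sdb \
          -e OSD_BLUESTORE=1 \
          ceph/daemon osd_ceph_disk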

  • June 14, 2017
    Adjusting PGs in multiple steps vs. in one go: an analysis of the migration difference

    Preface: this question came from our development team: when adjusting the PG count, is it better to adjust it in one go or in several smaller steps? When adjusting in several steps, could some PGs be moved back and forth repeatedly, making the total migration larger than a single adjustment? My own project also recently needed a PG adjustment; this need usually arises when the PGs were sized for the original cluster and nodes are added later, so the PG count has to be increased. This post uses concrete data to analyse the difference between the two approaches. Since the post is fairly long, the conclusions come first.
    Data, stepwise adjustment:

        PG change     PGs migrated   Objects migrated
        1200->1440    460            27933
        1440->1680    458            27730
        1680->1920    465            27946
        1920->2160    457            21141
        2160->2400    458            13938
        Total         2305           132696

    Data, single adjustment:

        PG change     PGs migrated   Objects migrated
        1200->2400    2299           115361

    Conclusion: the stepwise adjustment migrates 6 more PGs than the single adjustment (0.2% more) and 17335 more objects (15% more). In terms of PG count the two are essentially the same, but the data volume is 15% higher, because in the early steps, while the PG count is still small, each migrated PG contains more objects than it would after the later splits, which is where the extra data comes from. Overall both approaches migrate roughly the same number of PGs; the stepwise approach moves about 15% more data but can be scheduled periodically and spread across different time windows, so each has its advantages.
    Practice, environment setup: this test uses a development environment, which makes it quick to deploy what is needed; here a single machine simulates 4 machines with 48 4 TB OSDs. Build the cluster:

        ./vstart.sh -n --mon_num 1 --osd_num 48 --mds_num 1 --short -d

    All subsequent operations are run in the src directory of the source tree. Set the pool replica size to 2 …Read more
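    A hedged sketch of the two adjustment strategies being compared; the pool name is a placeholder, the step sizes mirror the numbers above, and in a real cluster you would wait for recovery to finish between steps:

        # stepwise: raise pg_num/pgp_num in increments, letting the cluster settle in between
        for pgs in 1440 1680 1920 2160 2400; do
            ceph osd pool set rbd pg_num  $pgs
            ceph osd pool set rbd pgp_num $pgs
        done

        # single adjustment: go straight to the target
        ceph osd pool set rbd pg_num 2400
        ceph osd pool set rbd pgp_num 2400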

  • June 11, 2017
    SUSE Enterprise Storage 5 Beta Program

    openATTIC 3.x will be part of the upcoming SUSE Enterprise Storage 5 release, which is currently in beta testing. It will be based on the upstream Ceph “Luminous” release and will also ship with openATTIC 3.x and Salt/DeepSea for orchestration, deployment and management. If you would like to take a look at this release …Read more

  • June 11, 2017
    Update on the State of Ceph Support in openATTIC 3.x (June 2017)

    A bit over a month ago, I posted about a few new Ceph management features that we have been working on in openATTIC 3.x after we finished refactoring the code base. These have since been merged into trunk, and the team has started working on additional features. In this post, I’d like …Read more
