Planet Ceph

Aggregated news from external sources

  • August 31, 2018
    mountpoint presentation of Ceph Nano

    Date: 28/08/18 Source: Sebastian Han (mountpoint presentation of Ceph Nano)

  • August 18, 2018
    cephfs: showing df capacity per storage pool

    Preface: If you use cephfs much, you probably know that after a cephfs client mounts the filesystem, the capacity it shows is the total capacity of the cluster; that is, whatever your total disk space is, that is what gets displayed there. It has always been shown this way. Back in the hammer days, A'mao and Dahuang implemented this feature together internally at our company, and the community has gradually been integrating similar features aimed at commercial users. The community has already developed a version with the interfaces mostly done, so with a few small changes you can get the behavior you want. The change in this post is made against the kernel client code; it is small and should be easy to follow. The change process: first find this patch: Improve accuracy of statfs reporting for Ceph filesystems comprising exactly one data pool. In this case, the Ceph monitor can now report the space usage for the single data pool instead of the global data for the entire Ceph cluster. Include support for this message in mon_client and …Read more

  • August 8, 2018
    openATTIC 3.7.0 has been released

    We’re happy to announce version 3.7.0 of openATTIC! Version 3.7.0 is the first bugfix release of the 3.7 stable branch, containing fixes for multiple issues, mainly ones reported by users. There had been an issue with self-signed certificates in combination with the RGW proxy; this is now configurable. We also improved the openATTIC user …Read more

  • July 17, 2018
    Quickly building a Ceph visual monitoring system

    Preface: There are many visualization options for Ceph. This post introduces one of the simpler ones, with the packages repackaged, so a visual monitoring system can be built in very little time. The system components are: ceph (jewel), the jewel version of ceph_exporter, prometheus 2.3.2, grafana 5.2.1, and the Ceph grafana plugin - Cluster by Cristian Calin. The target system is CentOS 7. Resources:
      http://static.zybuluo.com/zphj1987/jiwx305b8q1hwc5uulo0z7ft/ceph_exporter-2.0.0-1.x86_64.rpm
      http://static.zybuluo.com/zphj1987/1nu2k4cpcery94q2re3u6s1t/ceph-cluster_rev1.json
      http://static.zybuluo.com/zphj1987/7ro7up6r03kx52rkwy1qjuwm/prometheus-2.3.2-1.x86_64.rpm
      http://7xweck.com1.z0.glb.clouddn.com/grafana-5.2.1-1.x86_64.rpm
    All of the above can be fetched directly with wget and installed as-is. Monitoring architecture: ceph_exporter scrapes the Ceph data and listens locally on port 9128; prometheus scrapes ceph_exporter on port 9128 and stores the data locally under /var/lib/prometheus/; grafana pulls the data from prometheus and renders it into web pages, with the grafana Ceph template plugin as the page template. With that architecture, we set the system up step by step. Configuring the monitoring system - installing ceph_exporter:
      [root@lab101 install]# wget http://static.zybuluo.com/zphj1987/jiwx305b8q1hwc5uulo0z7ft/ceph_exporter-2.0.0-1.x86_64.rpm
      [root@lab101 install]# rpm -qpl ceph_exporter-2.0.0-1.x86_64.rpm
      /usr/bin/ceph_exporter
      /usr/lib/systemd/system/ceph_exporter.service
      [root@lab101 install]# rpm -ivh ceph_exporter-2.0.0-1.x86_64.rpm
      Preparing… ################################# [100%]
      Updating / installing…
      1:ceph_exporter-2:2.0.0-1 ################################# [100%]
      [root@lab101 install]# systemctl start ceph_exporter
      [root@lab101 install]# systemctl enable ceph_exporter
      [root@lab101 install]# netstat …Read more

  • June 27, 2018
    Using s3-tests for Ceph interface compatibility testing

    Preface: Ceph's RGW provides an S3-compatible interface. Being compatible, it naturally cannot cover every interface, so we need a tool to verify and test the interfaces; among other testing tools there are similar POSIX interface verification tools. Tools of this kind run test cases and output a list of what passes and what does not. The nice thing about such a tool is that it verifies the interfaces, guarding against interface breakage introduced by version updates. Installation: just clone the official repository; there are not many files in total, so the download is fast.
      [root@lab101 s3]# git clone https://github.com/ceph/s3-tests.git
      [root@lab101 s3]# cd s3-tests/
    Note that the tests are versioned: you need to use the branch matching the version under test. Since we are testing jewel here, switch to the jewel branch (a key step):
      [root@lab101 s3-tests]# git branch -a
      [root@lab101 s3-tests]# git checkout -b jewel remotes/origin/ceph-jewel
      [root@lab101 s3-tests]# ./bootstrap
    In the directory, run ./bootstrap to do the initialization work; it downloads the related libraries and packages and creates a Python virtualenv. If the code was copied over from somewhere else, it is best to delete the Python virtualenv and let the program recreate the environment itself. Once that finishes, create the test configuration file test.conf:
      [DEFAULT]
      ## this section is just used as default for all the "s3 *"
      ## sections, you can place these variables also directly there
      ## replace with e.g. …Read more

  • June 11, 2018
    Analyzing the default min_size of Ceph erasure pools

    Introduction: I recently dealt with two clusters that both use erasure code, one running hammer and one running luminous. Both hit incomplete PGs, and the triggers were similar: in both cases OSDs had gone offline. While preparing to reproduce this locally, I found a difference from the erasure setups I had worked with before, so I am recording it here to head off the same problem later. Analysis: I prepared a luminous cluster and created a pool using the default erasure profile:
      [root@lab102 ~]# ceph osd erasure-code-profile get default
      k=2
      m=1
      plugin=jerasure
      technique=reed_sol_van
    The default is a 2+1 erasure-code configuration. After creation the pool looks like this:
      [root@lab102 ~]# ceph osd dump | grep pool
      pool 1 'rbd' erasure size 3 min_size 3 crush_rule 2 object_hash rjenkins pg_num 256 pgp_num 256 last_change 41 flags hashpspool stripe_width 8192 application rbd
    Then, after stopping one OSD, the status became:
      [root@lab102 ~]# ceph -s
        cluster:
          id: 9ec7768a-5e7c-4f8e-8a85-89895e338cca
          health: HEALTH_WARN
                  1 osds …Read more

  • May 30, 2018
    Storage for Data Platforms in 10 minutes

    Kyle Bader and I teamed up to deliver a quick (and hopefully painless) review of what types of storage your Big Data strategy needs to succeed alongside the better-understood (and more traditional) existing approaches to structured data. Data platform engineers need to receive support from both the Compute and the Storage infrastructure teams to deliver. …Read more

  • May 29, 2018
    Recovering a cephfs metadata pool from failure

    Preface: cephfs has become fairly stable as of the L release, and to me that stability lies in the maturity of its failure recovery: being recoverable is a property a file system must have before it can be called stable. This post follows the official documentation to walk through that recovery process. Walkthrough: deploy a Ceph Luminous cluster:
      [root@lab102 ~]# ceph -v
      ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
    Create a filestore OSD:
      ceph-deploy osd create lab102 --filestore --data /dev/sdb1 --journal /dev/sdb2
    If you want to test with filestore, create the OSD as above. Load test data: doc, pic, video. Download link: https://pan.baidu.com/s/19tlFi4butA2WjnPAdNEMwg password: ugjo. This is template data downloaded from the web, to make the simulation use realistic files; dd produces empty files, which can sometimes affect the test. If you need more test documents, they can be downloaded from the sites below. Videos: https://videos.pexels.com/popular-videos Images: https://www.pexels.com/ Documents: http://office.mmais.com.cn/Template/Home.shtml Simulating a metadata failure: metadata-related failures come down to the mds failing to start or metadata PGs being damaged. Here we simulate the rather extreme case of wiping all of the metadata objects from the metadata pool, which should cover the most severe failure; damage to the data itself is outside the scope of metadata damage. Empty the metadata pool:
      for object in `rados -p metadata ls`; do rados -p metadata rm $object; done …Read more

  • May 28, 2018
    Ceph and Ceph Manager Dashboard presentations at openSUSE Conference 2018

    Last weekend, the openSUSE Conference 2018 took place in Prague (Czech Republic). Our team was present to talk about Ceph and our involvement in developing the Ceph manager dashboard, which will be available as part of the upcoming Ceph “Mimic” release. The presentations were held by Laura Paduano and Kai Wagner from our team – …Read more

  • May 24, 2018
    How to Survive an OpenStack Cloud Meltdown with Ceph

    Los Tres Caballeros —sans sombreros— descended on Vancouver this week to participate in the “Rocky” OpenStack Summit. For the assembled crowd of clouderati, Sébastien Han, Sean Cohen and yours truly had one simple question: what if your datacenter was wiped out in its entirety, but your users hardly even noticed? We have touched on the …Read more

  • May 22, 2018
    OpenStack Summit Vancouver: How to Survive an OpenStack Cloud Meltdown with Ceph

    Date: 22/05/18 Source: Sebastian Han (OpenStack Summit Vancouver: How to Survive an OpenStack Cloud Meltdown with Ceph)

  • May 17, 2018
    See you at the OpenStack Summit

    Next week is the OpenStack Summit. I will be attending and giving a talk, How to Survive an OpenStack Cloud Meltdown with Ceph. See you there! Source: Sebastian Han (See you at the OpenStack Summit)
