Planet Ceph

Aggregated news from external sources

  • March 15, 2018
    Ansible module to create CRUSH hierarchy

    First post of the year after a long time with no articles – three months… I know it has been a while; I wish I had more time for blogging. I have tons of draft articles that never made it through, and I need to make up for lost time. So for this first post, …Read more

  • March 13, 2018
    The initial Ceph Dashboard v2 pull request has been merged!

    It actually happened exactly one week ago while I was on vacation: it’s our great pleasure and honor to announce that we have reached our first milestone – the initial Ceph Dashboard v2 pull request has now been merged into the upstream Ceph master git branch, so it will become part of the upcoming Ceph …Read more

  • March 7, 2018
    openATTIC 3.6.2 has been released

    We’re happy to announce version 3.6.2 of openATTIC! Version 3.6.2 is the second bugfix release of the 3.6 stable branch, containing fixes for multiple issues that were reported by users. One new feature that we want to point out is internationalization: openATTIC has been translated into Chinese and German to be present on other …Read more

  • March 6, 2018
    CephFS Admin Tips – Create a new user and share

    Hi, my name is Stephen McElroy, and in this guide I will show how to create a new user, set permissions, set quotas, mount the share, and make it persistent on the client. Creating the user: on the Ceph admin node, let's create a basic user and give it capabilities to read the / and …Read more
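A minimal sketch of what creating such a restricted CephFS client can look like. The client name "client.newuser", the /share path, and the quota value are hypothetical examples, and the `ceph`/`setfattr` stubs below only echo the commands so the sketch runs without a live cluster; drop them on a real admin node.

```shell
#!/bin/bash
# Stubs so this sketch runs without a Ceph cluster; remove on a real admin node.
ceph()     { echo "ceph $*"; }
setfattr() { echo "setfattr $*"; }

# Create a client with read access to / and read/write access to /share
# (capability syntax from the upstream Ceph docs; names here are examples).
ceph auth get-or-create client.newuser \
    mon 'allow r' \
    mds 'allow r path=/, allow rw path=/share' \
    osd 'allow rw'

# Set a 100 GB quota on the shared directory (requires a mounted CephFS path).
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/share
```

On a real cluster the `auth get-or-create` output is the keyring you copy to the client for mounting.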

  • March 1, 2018
    The Ceph Dashboard v2 pull request is ready for review!

    About a month ago, we shared the news that we started working on a replacement for the Ceph dashboard, to set the stage for creating a full-fledged, built-in web-based management tool for Ceph. We’re happy to announce that we have now finalized the preparations for the initial pull request, which marks our first milestone in …Read more

  • February 19, 2018
    How to do a Ceph cluster maintenance/shutdown

    Last week someone asked on the ceph-users ML how to shut down a Ceph cluster, and I would like to summarize the steps that are necessary to do that. Stop the clients from using your cluster (this step is only necessary if you want to shut down your whole cluster). Important – Make sure that your cluster …Read more
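The usual sequence can be sketched as below: verify health, set the standard cluster flags so Ceph does not start rebalancing while nodes go offline, then stop daemons in order. The flag names are the standard upstream ones; the `ceph` stub only echoes commands so the sketch runs without a cluster.

```shell
#!/bin/bash
# Stub so this sketch runs without a live cluster; remove on a real node.
ceph() { echo "ceph $*"; }

# 1. Check that the cluster is healthy before starting.
ceph -s

# 2. Prevent recovery/rebalancing while OSDs go offline.
for flag in noout norecover norebalance nobackfill nodown pause; do
    ceph osd set "$flag"
done

# 3. Shut down nodes: OSDs first, then MDS/MGR, monitors last.
# 4. On power-up, start monitors first, then the rest, and clear each
#    flag again with "ceph osd unset <flag>".
```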

  • February 10, 2018
    The new VDO feature in RHEL 7.5 beta

    Preface – About VDO: VDO's technology comes from Permabit, an acquired company that specialized in deduplication, so the reliability of the technology is not in question. VDO is a kernel module whose purpose is to reduce disk space usage through deduplication and to reduce replication bandwidth. It sits on top of the block device layer: a mapper virtual device is created over the original device and then used directly. The implementation rests on three techniques. Zero-block elimination: during the initial phase, blocks consisting entirely of zeroes are recorded in metadata; think of filtering a mix of water and sand through filter paper (zero-block elimination) to separate out the sand (non-zero data) for the next stage. Deduplication: in the second phase, incoming data is checked for redundancy before it is written, using the UDS kernel module (Universal Deduplication Service); data judged to be a duplicate is not written again, and the metadata is simply updated to point at the block already stored. Compression: once zeroing and deduplication are done, LZ4 compression is applied to each individual block, and compressed blocks are stored on the media in fixed 4 KB blocks; since one physical block can hold many compressed blocks, this can also speed up reads. These techniques sound easy to understand, but turning them into a product is quite hard – there is a large gap between a technical idea and a shipped implementation, otherwise Red Hat would have written its own rather than acquiring the technology. How to get VDO: there are two main ways – request the RHEL 7.5 beta ISO, which allows one month of testing, or request the beta and build from source on the ISO you are already using, which makes comparison on your own system easier; since redistributing builds of Red Hat's source raises legal issues, no rpm packages are provided here – request the beta yourself. Practice – installing VDO: the operating system is CentOS Linux release 7.4.1708; lsb_release -a reports CentOS Linux release 7.4.1708 (Core), and uname -a shows kernel 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 …Read more
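Once the packages are installed, creating and inspecting a VDO volume uses the upstream `vdo` CLI; a sketch, where the device path, volume name, and logical size are hypothetical examples, and the stub lets it run without the kernel module:

```shell
#!/bin/bash
# Stub so this sketch runs without the VDO kernel module; remove on a real host.
vdo() { echo "vdo $*"; }

# Create a dedup/compression volume on top of /dev/sdb; the logical size can
# exceed the physical size since dedup/compression reclaim space.
vdo create --name=vdo0 --device=/dev/sdb --vdoLogicalSize=100G

# The new mapper device is then formatted and mounted like any block device:
#   mkfs.xfs -K /dev/mapper/vdo0 && mount /dev/mapper/vdo0 /mnt/vdo0

# Inspect the volume's configuration and state.
vdo status --name=vdo0
```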

  • February 2, 2018
    Ceph Manager Dashboard v2

    The original Ceph Manager Dashboard that was introduced in Ceph “Luminous” started out as a simple, read-only view into various run-time information and performance data of a Ceph cluster, without authentication or any administrative functionality. However, as it turns out, there is a growing demand for adding more web-based management capabilities, to make it easier …Read more

  • January 29, 2018
    Building Ceph master with C++17 support on openSUSE Leap 42.3

    Ceph now requires C++17 support, which is available with modern compilers such as gcc-7. openSUSE Leap 42.3, my current OS of choice, includes gcc-7. However, it’s not used by default. Using gcc-7 for the Ceph build is a simple matter of:
    > sudo zypper in gcc7-c++
    > CC=gcc-7 CXX=/usr/bin/g++-7 ./do_cmake.sh …
    > cd build && make -j …Read more

  • January 29, 2018
    Placement Groups with Ceph Luminous stay in activating state

    Placement Groups stuck in activating. When migrating from FileStore to BlueStore with Ceph Luminous you might run into the problem that certain Placement Groups stay stuck in the activating state: 44 activating+undersized+degraded+remapped. PG Overdose: this is a side-effect of the new PG overdose protection in Ceph Luminous. Too many PGs on your OSDs can cause …Read more
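The overdose protection trips when an OSD would hold more PGs than mon_max_pg_per_osd (default 200 in Luminous). A back-of-envelope check of the average, using hypothetical numbers:

```shell
#!/bin/bash
# Rough average-PGs-per-OSD check; the numbers are hypothetical examples,
# substitute your own pool totals, replication size, and OSD count.
pgs=4096        # total PGs across pools
replicas=3      # replication size
osds=40         # number of OSDs

pg_per_osd=$(( pgs * replicas / osds ))
echo "average PGs per OSD: $pg_per_osd"

# Luminous' default mon_max_pg_per_osd is 200; above that, PGs can stay
# in "activating" until the limit is raised or the PG count is reduced.
if [ "$pg_per_osd" -gt 200 ]; then
    echo "over the default limit"
fi
```

Note this is only the average; an unbalanced CRUSH map can push individual OSDs over the limit even when the average looks fine.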

  • January 16, 2018
    Tracking down a Ceph anomaly caused by a network problem

    Preface: A Ceph environment showed an anomaly – recovery was abnormally slow, although all data was still flowing, just very slowly. This post records the process of finding the problem, to give a starting point for handling similar issues. Troubleshooting: the symptom was slow recovery with no other anomalies; monitoring the disks with iostat showed no cases of 100% utilization, which for the moment ruled out slow disks underneath the OSDs. Check the overall write speed with rados bench: rados -p rbd bench 5 write. Writes were fine at first, but soon afterwards the throughput would stay at zero, so something abnormal was happening when writing certain objects. Generate some local files: seq 0 30 | xargs -i dd if=/dev/zero of=benchmarkzp{} bs=4M count=2. Put the objects in with rados put: for a in `ls ./`; do time rados -p rbd put $a $a; echo $a; ceph osd map rbd $a; done. Some of the results were fine and some took a very long time, so I filtered the results into bad and good. I began to suspect that specific drives were at fault, so I grouped the disk combinations and excluded every disk that was entirely problem-free; in the end all disks were excluded, so the disks themselves were fine. Classified by host according to each PG's OSD set: 1 2 4 ok; 3 1 2 bad; 2 …Read more
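The per-object probe from the post can be sketched as below: put each test object with rados put, record how long it took, and print its PG/OSD mapping so slow objects can be correlated with OSD combinations. The `rados`/`ceph` stubs only echo commands so the sketch runs without a cluster, and the object count and sizes are scaled down from the post's.

```shell
#!/bin/bash
# Stubs so this sketch runs without a live cluster; remove on a real node.
rados() { echo "rados $*"; }
ceph()  { echo "ceph $*"; }

# Generate small test objects (the post uses 31 objects of bs=4M count=2).
for i in $(seq 0 3); do
    dd if=/dev/zero of="benchmarkzp$i" bs=1M count=1 status=none
done

# Put each object, record the elapsed time, and print its OSD mapping.
for f in benchmarkzp*; do
    start=$SECONDS
    rados -p rbd put "$f" "$f"
    echo "$f took $(( SECONDS - start ))s"
    ceph osd map rbd "$f"
done
```

Sorting the timings into "good" and "bad" and grouping by the OSD sets printed by `ceph osd map` is what lets the problem be narrowed down to hosts rather than disks.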

  • January 11, 2018
    How to create a vagrant VM from a libvirt vm/image

    It cost me some nerves and time to figure out how to create a vagrant image from a libvirt KVM VM and how to modify an existing one. Thanks to pl_rock from stackexchange for the awesome start. First of all you have to install a new VM as usual. I’ve installed a new VM with …Read more
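One way to package a libvirt qcow2 disk as a Vagrant box, following the vagrant-libvirt provider's box layout (a gzipped tarball of box.img, metadata.json, and a Vagrantfile). This is a sketch under assumptions: the placeholder image stands in for your real qcow2 disk, and the box name "custom.box" and virtual_size are arbitrary examples.

```shell
#!/bin/bash
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Placeholder for the real libvirt disk image, e.g.
# /var/lib/libvirt/images/myvm.qcow2 – copy yours to box.img instead.
truncate -s 1M box.img

# vagrant-libvirt box metadata: provider, image format, and size in GB.
cat > metadata.json <<'EOF'
{"provider": "libvirt", "format": "qcow2", "virtual_size": 40}
EOF

# Minimal Vagrantfile shipped inside the box.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |lv|
    lv.driver = "kvm"
  end
end
EOF

tar czf custom.box metadata.json Vagrantfile box.img
echo "created $workdir/custom.box"
# Then import it with: vagrant box add myvm custom.box
```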
