Planet Ceph

Aggregated news from external sources

  • February 23, 2017
    Catastrophe Hits Your Datacenter – But Users Don’t Notice

    Many large, network-dependent organizations are deploying OpenStack together with Red Hat Ceph Storage because they are inherently highly available solutions. What if you lost your datacenter completely in a catastrophe, but your users hardly noticed? Sounds like a mirage, but it’s absolutely possible, and major datacenters with high demand for full availability are already accomplishing …Read more

  • February 20, 2017
    No more privileged containers for Ceph OSDs

    I’m really sorry for being so quiet lately; I know I promised to release articles more regularly and I clearly failed… Many things are going on, and since motivation is key to writing articles, I’ve been having a hard time finding the right motivation to write :/ However, I am not giving up and …Read more

  • February 16, 2017
    Importing an existing Ceph RBD image into Glance

    The normal process of uploading an image into Glance is straightforward: you use glance image-create or openstack image create, or the Horizon dashboard. Whichever process you choose, you select a local file, which you upload into the Glance image store. This process can be unpleasantly time-consuming when your Glance service is backed by Ceph RBD, …Read more
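
    For reference, the standard upload path the excerpt describes looks roughly like this (a minimal sketch; the image name and file below are placeholders, not taken from the article):

        # Stream a local file into the Glance image store (here assumed to be
        # backed by Ceph RBD); "myimage.raw" and "myimage" are placeholder values.
        openstack image create \
            --disk-format raw \
            --container-format bare \
            --file myimage.raw \
            myimage

    Raw format is the usual choice when Glance is backed by RBD, since it lets Nova and Cinder clone the image instead of downloading it.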

  • February 16, 2017
    Ceph and RBD mirroring, upcoming enhancements

    I’ve been getting a lot of questions about the RBD mirroring daemon, so I thought I would write a blog post similar to an FAQ. Most of the features described in this article will likely be released in Ceph Luminous. Luminous should land this fall, so be patient :). HA support: A crucial part of …Read more
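
    For context, the basic journaling-based setup that the daemon replays looks roughly like this today (a sketch with placeholder pool, image and cluster names; none of the Luminous-era enhancements discussed in the post are shown):

        # Journaling must be enabled on an image before it can be mirrored
        rbd feature enable rbd/myimage journaling

        # Mirror every image in the pool (as opposed to per-image mode)
        rbd mirror pool enable rbd pool

        # Register the remote cluster as a peer ("site-b" is a placeholder name)
        rbd mirror pool peer add rbd client.admin@site-b

        # An rbd-mirror daemon running on the backup site then replays the journals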

  • February 13, 2017
    Do not use SMR disks with Ceph

    Many new disks like the Seagate He8 disks are using a technique called Shingled Magnetic Recording to increase capacity. As these disks offer a very low price per gigabyte, they seem interesting to use in a Ceph cluster. Performance: Due to the nature of SMR, these disks are very, very, very bad when it comes …Read more

  • February 10, 2017
    Judging emotion from a piece of text

    Introduction: I came across a fun project, a sentiment monitor for a girlfriend's Weibo posts, which judges the emotion of a piece of text. I remember seeing somewhere the idea that in the future everything will be an API: a lot of things get wrapped up for you and you only need to call them. This is a good example of that. You don't need to understand semantic analysis or word segmentation; none of that matters, as long as you hand over your text and let the API do the rest. Code:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        import sys
        import json
        import requests

        def main():
            if len(sys.argv) != 2:
                help()
            else:
                printpromotion(sys.argv[1])

        def help():
            print """Usage: qingxu.py [-h] [word]
            Sentiment check - judge the emotion of a piece of text
            OPTIONS
            ========
            sample:
            [root@host ~]# python qingxu.py 开心
            Input text: word
            Positive emotion: 98.3%
            Negative emotion: 1.7%
            ========
            """

        def printpromotion(word):
            weburl = 'https://api.prprpr.me/emotion/wenzhi?password=DIYgod&text=' + word
            r = requests.get('%s' % weburl)
            json_str = json.loads(r.text)

    …Read more

  • February 8, 2017
    Estimating how much data Ceph will migrate

    Introduction: When we add or remove OSDs during Ceph maintenance we run into data migration, but how do we usually answer the question of how much data will actually move? Generally the answer is "a lot", or "it depends on the environment". Is there a precise answer for exactly how much data has to migrate? I had thought about this before; my idea at the time was to compare the PG distribution before and after the change and compute it from that. While digging through some material I came across a blog post by alram, a programmer at Inktank (the company sage works for), built around a Python script. This post walks through the idea behind the comparison and shows how it runs. Estimating the migration only requires a modified crushmap, and the calculation is done offline, so it has no impact on the cluster.

    How it runs: prepare the modified crushmap. Get the current crushmap:

        ceph osd getcrushmap -o crushmap

    Decompile the crushmap:

        crushtool -d crushmap -o crushmap.txt

    Edit crushmap.txt as needed, turning it into the crushmap you want to end up with; that can mean adding or removing items.

    Calculation for removing a node: suppose we delete osd.5 and want to know how much data must migrate. Set the weight of osd.5 in the crushmap to 0, then recompile:

        crushtool -c crushmap.txt -o crushmapnew

    Run the calculation script:

        [root@lab8106 ceph]# python jisuan.py --crushmap-file crushmapnew
        POOL       REMAPPED OSDs   BYTES REBALANCE   OBJECTS REBALANCE
        rbd        59              6157238296        1469
        data       54              5918162968        1412
        metadata   53              5825888280        1390

    …Read more

  • January 29, 2017
    Edit the Ceph CRUSHmap

    The CRUSHmap, as suggested by the name, is a map of your storage cluster. This map is necessary for the CRUSH algorithm to determine data placements. But Ceph’s CRUSHmap is stored in binary form. So how do you change it easily? Native tools: Ceph comes with a couple of native commands to do basic customizations to …Read more
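
    Besides the native commands, the manual round trip for editing the map is the same getcrushmap/crushtool workflow that appears in the migration-estimation post above (file names are placeholders):

        # Fetch the binary CRUSHmap and decompile it into editable text
        ceph osd getcrushmap -o crushmap.bin
        crushtool -d crushmap.bin -o crushmap.txt

        # ... edit crushmap.txt ...

        # Recompile the map and inject it back into the cluster
        crushtool -c crushmap.txt -o crushmap.new
        ceph osd setcrushmap -i crushmap.new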

  • January 27, 2017
    Erasure Code on Small Clusters

    Erasure coding is really designed for clusters of a sufficient size. However, if you want to use it with a small number of hosts, you can also adapt the crushmap so the distribution better matches your needs. Here is a first example for distributing data with a fault tolerance of 1 host OR 2 drives, with k=4, …Read more
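
    As an illustration of the idea (not necessarily the exact rule from the post), a crushmap rule for k=4, m=2 on three hosts could place two chunks per host, so losing one host or any two drives still leaves the four chunks needed to read the data:

        rule ecpool_k4m2 {
            ruleset X
            type erasure
            min_size 6
            max_size 6
            step take default
            step choose indep 3 type host
            step chooseleaf indep 2 type osd
            step emit
        }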

  • January 24, 2017
    Testing Ceph BlueStore with the Kraken release

    Ceph version Kraken (11.2.0) has been released and the Release Notes tell us that the new BlueStore backend for the OSDs is now available. BlueStore: The current backend for the OSDs is the FileStore, which mainly uses the XFS filesystem to store its data. To overcome several limitations of XFS and POSIX in general, the …Read more
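
    A rough sketch of how one might create a BlueStore OSD for such a test, assuming the ceph-disk tool of that era and its --bluestore switch (the device is a placeholder, and additional experimental-feature settings may be required depending on the release):

        # Prepare a disk with the BlueStore backend instead of the default FileStore
        ceph-disk prepare --bluestore /dev/sdb
        # Activate the resulting data partition (partition numbering may vary)
        ceph-disk activate /dev/sdb1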

  • January 24, 2017
    How much does upgrading the Linux kernel and enabling TCP BBR help?

    If you follow some tech news, you have probably seen that kernel 4.9 added a new algorithm to keep bandwidth stable when there is a certain amount of packet loss. This is a gift from Google: the new TCP congestion control algorithm BBR (Bottleneck Bandwidth and RTT). Google's usual approach is to run something in production first, then publish a paper, and then possibly open source it, so this has already been merged into the 4.9 kernel branch. The published test reports show the effect of the algorithm in great detail, but after reading enough of them you may still not know how noticeable the change really is, especially for your own scenario.

    So this post is a hands-on test to see how big the change is in some common situations. To spoil the result: it really is very big.

    Practice: As usual I use my two machines, lab8106 and lab8107. lab8106 runs a web server and lab8107 plays the client, testing with a simple wget; both are servers with 10-gigabit NICs on the same switch.

    This test only covers a single packet-loss rate, 1%; if you are interested you can run tests with other loss rates yourself. Most write-ups say that above roughly 20% loss the effect may not be as good, but such a high loss rate is not what we are discussing here, since it is not a common scenario.

    Install the new kernel: you can build 4.9 or later yourself, or install it with yum; since this is only a test, install it directly with yum:

        yum --enablerepo=elrepo-kernel install kernel-ml

    Update the boot entry:

        grub2-editenv list
        grub2-set-default 'CentOS Linux (4.9.5-1.el7.elrepo.x86_64) 7 (Core)'
        grub2-editenv list

    Prepare the download data: set up a web server and drop an ISO into its document root for the client to fetch with wget.

    Set the packet-loss rate: this is controlled with tc, so a single command is enough (tc can do many other kinds of shaping, which you can explore on your own):

        tc qdisc add dev enp2s0f0 root netem loss 1%

    To remove the limit:

        tc qdisc del root dev enp2s0f0

    Enable the new algorithm: add the following two settings to /etc/sysctl.conf:

        net.ipv4.tcp_congestion_control=bbr
        net.core.default_qdisc=fq

    Then run sysctl -p to make them take effect. Check whether the parameters are in effect: [root@lab8106 rpmbuild]# …Read more
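
    The excerpt is cut off at the verification step; a typical check (not taken from the article) would be:

        # Confirm the congestion control algorithm and default qdisc now in use
        sysctl -n net.ipv4.tcp_congestion_control   # expected: bbr
        sysctl -n net.core.default_qdisc            # expected: fq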

  • January 23, 2017
    Crushmap for 2 DC

    An example crushmap for replication across 2 datacenters:

        rule replicated_ruleset {
            ruleset X
            type replicated
            min_size 2
            max_size 3
            step take default
            step choose firstn 2 type datacenter
            step chooseleaf firstn -1 type host
            step emit
        }

    This works well with pool size=2 (not …Read more
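
    One way to sanity-check such a rule before injecting it is crushtool's test mode (a sketch; the compiled map file name and rule number are placeholders):

        # Show which OSDs the rule would select for sample PGs with 2 replicas
        crushtool -i crushmap.compiled --test --rule 1 --num-rep 2 --show-mappings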
