Planet Ceph

Aggregated news from external sources

  • May 27, 2017
    A tool to rebalance uneven Ceph pools

    The algorithm to fix uneven CRUSH distributions in Ceph was implemented as the crush optimize subcommand. Given the output of ceph report, crush analyze can show buckets that are over/under filled: $ ceph report > ceph_report.json $ crush analyze --crushmap ceph_report.json --pool 3 prints one row per bucket with its id, weight, PGs, and over/under filled % (e.g. cloud3-1363: id -6, weight 419424, 1084 PGs, 7.90% over) …Read more
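    A sketch of that round trip, assuming the python-crush package providing the crush command is installed. The analyze flags come from the excerpt; the optimize output flag (--out-path) is my assumption, so check crush optimize --help:

        # Export the cluster state and look for over/under-filled buckets
        ceph report > ceph_report.json
        crush analyze --crushmap ceph_report.json --pool 3

        # Compute a rebalanced crushmap for the same pool
        # (--out-path is assumed here, not confirmed by the excerpt)
        crush optimize --crushmap ceph_report.json --pool 3 --out-path optimized.crush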

  • May 23, 2017
    Fixing a broken environment caused by upgrading pkg on FreeBSD 10.2

    Preface: On a FreeBSD 10.2 system, installing a new package prompted an upgrade of pkg to 1.10.1; after accepting the upgrade, the entire pkg environment stopped working. Notes: the first error after the upgrade was FreeBSD: /usr/local/lib/libpkg.so.3: Undefined symbol “utimensat”. That symbol only exists in FreeBSD 10.3’s libraries, while this environment runs 10.2. One fix found online: update the repository (# cat /usr/local/etc/pkg/repos/FreeBSD.conf shows FreeBSD: { url: “pkg+http://pkg.FreeBSD.org/${ABI}/release_2”, enabled: yes }), check the current version (# pkg --version reports 1.10.1), refresh the cache (# pkg update), force-remove pkg (# pkg delete -f pkg), reinstall it (# pkg install -y pkg, then # pkg2ng), and check the version again (# pkg --version now reports 1.5.4). This did not work in my environment. Another option: the pkg-static command is still usable, together with the packages cached in /var/cache/pkg. Run: # pkg-static install -f /var/cache/pkg/pkg-1.5.4.txz. In my environment this also reported an error: root@mkiso:/usr/ports/ports-mgmt/pkg # …Read more
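    For readability, here are the two workarounds the post tries, as plain commands (paths and versions taken verbatim from the excerpt; per the post, neither succeeded in the author’s environment):

        # Workaround 1: after repointing the repo, force-reinstall pkg
        pkg update                 # refresh the package cache
        pkg delete -f pkg          # force-remove the broken pkg
        pkg install -y pkg         # reinstall pkg from the repository
        pkg2ng                     # convert the package database

        # Workaround 2: pkg-static is statically linked, so it still runs
        # when libpkg.so.3 is broken; reinstall the cached older package
        pkg-static install -f /var/cache/pkg/pkg-1.5.4.txz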

  • May 12, 2017
    An algorithm to fix uneven CRUSH distributions in Ceph

    The current CRUSH implementation in Ceph does not always provide an even distribution. The most common cause of unevenness is when only a few thousand PGs, or fewer, are mapped. That is not enough samples, and the variation can be as high as 25%. For instance, when there are two OSDs with the same weight, …Read more
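    A rough way to see why so few samples hurts (my own back-of-the-envelope, not from the post): mapping N PGs onto two OSDs of equal weight behaves like N fair coin flips, so one OSD’s PG count fluctuates around its mean with

        \sigma = \sqrt{N\,p\,(1-p)}, \qquad \frac{\sigma}{Np} = \sqrt{\frac{1-p}{Np}}

    With N = 64 and p = 1/2 the relative deviation is 12.5%, so a two-sigma fluctuation already reaches the 25% quoted above; with tens of thousands of PGs the relative noise drops by an order of magnitude.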

  • May 11, 2017
    Ceph space lost due to overweight CRUSH items

    When a CRUSH bucket contains five Ceph OSDs with the following weights: osd.0 = 5, osd.1 = 1, osd.2 = 1, osd.3 = 1, osd.4 = 1, then 20% of the space in osd.0 will never be used by a pool with two replicas. osd.0 gets 55% of the values for the first replica (i.e. 5/9), as …Read more
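    One constraint behind that figure, spelled out (my observation, not from the post): in a two-replica pool the two replicas of each PG must land on distinct OSDs, so osd.0 can hold at most one replica per PG, that is at most half of all values, while its weight asks for more:

        \underbrace{\tfrac{5}{9} \approx 55.6\%}_{\text{weight share of osd.0}} \;>\; \underbrace{\tfrac{1}{2}}_{\text{ceiling with two replicas}}

    However the second replicas end up spread across the smaller OSDs, osd.0 can never reach its weight-proportional share; the post works out the exact loss.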

  • May 8, 2017
    OpenStack Summit Boston: Deterministic Storage Performance

    Deterministic Storage Performance – ‘The AWS Way’ for capacity based QoS with OpenStack and Ceph Date: 08/05/17 Video: http://sebastien-han.fr/viewer/web/viewer.html?val=http://www.sebastien-han.fr/down/Deterministic_Storage_Performance_-_The_AWS_Way_for_capacity_based_QoS_with_OpenStack_and_Ceph_-_OS_SUMMIT17.pdf Source: Sébastien Han (OpenStack Summit Boston: Deterministic Storage Performance)

  • May 6, 2017
    Recovering from a complete node failure

    Recovering an entire OSD node: A Ceph Recovery Story Note: This will be a very lengthy and detailed account of my experience. If you want to skip it, please just scroll down to the TL;DR section at the bottom. I wanted to share with everyone a situation that happened to me over the weekend. This …Read more

  • May 5, 2017
    Sneak preview: Upcoming Ceph Management Features

    Despite the number of disruptive changes we went through in the past few weeks, e.g. moving our code base from Mercurial to git, relocating our infrastructure to a new data center, and refactoring our code base for version 3.0, our developers have been busy working on expanding the Ceph management capabilities in openATTIC. I’d like …Read more

  • May 4, 2017
    Ceph full ratio and uneven CRUSH distributions

    A common CRUSH rule in Ceph is step chooseleaf firstn 0 type host, meaning Placement Groups (PGs) will place replicas on different hosts so the cluster can sustain the failure of any host without losing data. The missing replicas are then restored from the surviving replicas (via a process called “backfilling”) and placed on the …Read more
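    For context, that step normally sits inside a full replicated ruleset, like the stock one of that era (a sketch; rule and bucket names vary per cluster):

        rule replicated_ruleset {
                ruleset 0
                type replicated
                min_size 1
                max_size 10
                step take default                     # start from the root bucket
                step chooseleaf firstn 0 type host    # up to pool size replicas, one per host
                step emit
        }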

  • May 3, 2017
    How to convert a Ceph OSD from filestore to bluestore

    Preface: A while back, 豪迈’s public WeChat account mentioned this offline conversion tool; recently someone asked about it in a chat group, and since I could not find any related documentation, I wrote this up myself for reference. Steps: fetch the code and install it: git clone https://github.com/ceph/ceph.git; cd ceph; git submodule update --init --recursive; ./make-dist; rpm -bb ceph.spec. Once the RPM packages are built, install them; I will not go into detail here, just install the latest version following the usual documentation. This code was merged only recently, roughly last month. Configure the cluster: first set up a filestore cluster, which is also simple; my environment is a single host with three OSDs: [root@lab8106 ceph]# ceph -s cluster 3daaf51a-eeba-43a6-9f58-c26c5796f928 health HEALTH_WARN mon.lab8106 low disk space monmap e2: 1 mons at {lab8106=192.168.8.106:6789/0} election epoch 4, quorum 0 lab8106 mgr active: lab8106 osdmap e16: 3 osds: 3 up, 3 in pgmap v34: 64 …Read more
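    A quick way to check which backend an OSD is actually using before and after such a conversion (my addition, not from the post; assumes the default data directory layout and OSD id 0):

        # The backend is recorded in a plain "type" file in the OSD's data dir;
        # prints "filestore" or "bluestore"
        cat /var/lib/ceph/osd/ceph-0/type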

  • May 3, 2017
    openATTIC 2.0.20 has been released

    It is our great pleasure to announce the release of openATTIC version 2.0.20. This is a minor bugfix release, which also provides a number of small selected improvements, e.g. in the WebUI (styling, usability), installation and logging (now adds PID and process name to logs). Furthermore, we updated our documentation – especially the installation instructions …Read more

  • May 3, 2017
    How to go from multiple active MDS daemons back to a single MDS

    Preface: An existing Ceph environment was running two active MDS daemons and needed to go back to a single MDS; the latest version supports this operation. Method: set the maximum number of MDS daemons. With multiple active MDS daemons, max_mds is greater than 1, so first set max_mds back to 1: ceph mds set max_mds 1. Deactivate the MDS: check whether the MDS to stop is rank 0 or rank 1, then run the command below. [root@server8 ~]# zbkc -s|grep mdsmap mdsmap e13: 1/1/1 up {0=lab8106=up:clientreplay} The 0 in front of lab8106 in this output is that MDS’s rank; deactivate whichever rank you need to stop: ceph mds deactivate 1. Summary: running multiple active MDS daemons is not recommended. Change log: created by 武汉-运维-磨渣, 2017-05-03. Source: zphj1987@gmail (How to go from multiple active MDS daemons back to a single MDS)
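    The procedure from the excerpt, gathered into one place (pre-Luminous command syntax, as in the post; the excerpt’s zbkc -s appears to be a local build of the ceph CLI, so ceph -s shows the same mdsmap line):

        # 1. Cap the number of active MDS ranks at one
        ceph mds set max_mds 1

        # 2. Find the rank of the MDS to stop (the number before its name)
        ceph -s | grep mdsmap

        # 3. Deactivate that rank (rank 1 in the post's example)
        ceph mds deactivate 1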

  • April 29, 2017
    Ceph hybrid storage tiers

    In a previous post I showed you how to deploy storage tiering for Ceph; today I will explain how to set up hybrid storage tiers. What is hybrid storage? Hybrid storage is a combination of two different storage tiers, like SSD and HDD. In Ceph terms that means that the copies of each object are located …Read more
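    The usual shape of such a setup is a CRUSH rule that takes the primary copy from an SSD root and the remaining copies from an HDD root. A sketch under assumed root names ssd and hdd, not necessarily the post’s exact map:

        rule hybrid {
                ruleset 1
                type replicated
                min_size 1
                max_size 10
                step take ssd                          # primary copy on an SSD host
                step chooseleaf firstn 1 type host
                step emit
                step take hdd                          # remaining copies on HDD hosts
                step chooseleaf firstn -1 type host
                step emit
        }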
