Planet Ceph

Aggregated news from external sources

  • December 15, 2017
    Learning Ceph – Second Edition

    Learning Ceph – Second Edition was published in October 2017. This is a special post to highlight a new book I’ve been helping with. Good colleagues of mine wrote that book, and I encourage anyone who wants to learn Ceph to get a copy of it. The book is available on Amazon. Learning Ceph, Second Edition will …Read more

  • December 14, 2017
    openATTIC 3.6.1 has been released

    It is our great pleasure to announce version 3.6.1 of openATTIC. 3.6.1 is a bugfix release for the 3.6 stable branch, containing fixes for multiple issues that were reported by users. In addition to that, it contains several usability enhancements and security improvements. Behind the scenes, we continued converting the WebUI code to Angular …Read more

  • December 5, 2017
    A Luminous Release

    The third one is a charm. RHCS 3 is our annual major release of Red Hat Ceph Storage and it brings great new features to customers in the areas of containers, usability and raw technology horsepower. It includes support for CephFS, giving us a complete all-in-one storage solution in Ceph spanning block, object and file …Read more

  • December 4, 2017

    Preface: The MDS is the component in Ceph that serves the file interface. Once a filesystem is in use, you inevitably hit the scenario of many directories with many files in each, and the MDS is a single-process component; multi-MDS exists now, but most stable deployments still run a single active MDS. So once a directory holds many files, listing it means traversing the whole directory, which takes a long time; caching file metadata well can also avoid some overload. This post describes a case where the kernel client behaves normally, but after exporting over NFS the MDS load stays high for a long time. Reproducing the problem: prepare test data and a monitoring setup. Monitor MDS CPU usage with pidstat -u 1 -p 27076 > /tmp/mds.cpu.log and UserParameter=mds.cpu,cat /tmp/mds.cpu.log|tail -n 1|grep -v Average|awk '{print $8}'. To keep console printing from skewing the timings, redirect all output. Test 1: kernel client writes 10000 files; record time and CPU usage: [root@nfsserver kc10000]# time seq 10000|xargs -i dd if=/dev/zero of=a{} bs=1K count=1 2>/dev/null gives real 0m30.121s, user 0m1.901s, sys 0m10.420s. Test 2: kernel client writes 20000 files; record time and CPU usage: [root@nfsserver kc20000]# time seq 20000|xargs -i dd if=/dev/zero of=a{} bs=1K count=1 2>/dev/null gives real 1m38.233s, user …Read more
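The pidstat/awk monitoring step above can also be sketched in Python. This is a minimal illustration, not part of the original post: the sample pidstat lines are hypothetical, and reading the %CPU value from the 8th whitespace-separated field assumes an AM/PM time format, matching the post's awk '{print $8}' (column positions vary with pidstat version and locale):

```python
# Hypothetical `pidstat -u 1 -p <pid>` output; fields under an AM/PM locale:
# time, AM/PM, UID, PID, %usr, %system, %guest, %CPU, CPU, command
SAMPLE = """\
10:21:03 AM   167     27076   12.00    3.00    0.00   15.00     2  ceph-mds
10:21:04 AM   167     27076   80.00   10.00    0.00   90.00     2  ceph-mds
"""

def last_cpu_percent(text):
    """Return the %CPU (8th field) of the last non-'Average' line, as a float.

    Mirrors: tail -n 1 | grep -v Average | awk '{print $8}'
    """
    lines = [l for l in text.splitlines() if l.strip() and "Average" not in l]
    return float(lines[-1].split()[7])

print(last_cpu_percent(SAMPLE))  # -> 90.0
```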

  • November 30, 2017
    How to repair a broken GRUB on CentOS

    Preface: This blog has not been updated in a long time. One reason is that the environment hosting it broke: the disk was an SSD, and as soon as a particular file was read, the whole disk stopped being recognized. Fortunately the blog environment had been backed up earlier; today I took the time to restore it and to write up a basic GRUB troubleshooting note. I also ran into a broken GRUB these past few days; I had handled this long ago but never left any documentation, so here is the procedure for dealing with a corrupted grub.cfg, or a system that will not boot. Steps: the OS may have been installed with several partitioning schemes, either plain partitions or LVM, and this post covers each case. The LVM case: [root@localhost ~]# df -h shows: /dev/mapper/centos-root 17G 927M 17G 6% /; devtmpfs 901M 0 901M 0% /dev; tmpfs 912M 0 912M 0% /dev/shm; tmpfs 912M 8.6M 904M 1% /run; tmpfs 912M 0 912M 0% /sys/fs/cgroup; /dev/sda1 1014M 143M 872M 15% /boot; tmpfs 183M 0 183M 0% /run/user/0. Simulate breaking /boot/grub2/grub.cfg: [root@localhost ~]# mv …Read more

  • November 28, 2017
    Introducing ceph-nano

    I’ve recently started a small project that aims to help developers working with the S3 API. The program is called cn, for Ceph Nano; it is available on GitHub, and let me give you a tour of what it does. I initially presented the program during my talk at the last OpenStack summit in Sydney. Originally, I …Read more

  • November 28, 2017
    Quick overview of Ceph version running on OSDs

    When checking a Ceph cluster it’s useful to know which versions your OSDs are running. There is a very simple one-line command to do this: ceph osd metadata|jq '.[].ceph_version'|sort|uniq -c Running this on a cluster which is currently being upgraded from Jewel to Luminous it shows: 10 "ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)" 1670 …Read more
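The jq pipeline above can be mirrored in a few lines of Python, which is handy on hosts without jq. A minimal sketch, assuming `ceph osd metadata` emits a JSON array with a `ceph_version` string per OSD; the sample data is made up, and the second version hash is a placeholder:

```python
import json
from collections import Counter

# Hypothetical sample shaped like `ceph osd metadata` output
# (in practice: json.loads(subprocess.check_output(["ceph", "osd", "metadata"]))).
metadata_json = """
[
  {"id": 0, "ceph_version": "ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)"},
  {"id": 1, "ceph_version": "ceph version 12.2.2 (0000000000000000000000000000000000000000)"},
  {"id": 2, "ceph_version": "ceph version 12.2.2 (0000000000000000000000000000000000000000)"}
]
"""

def count_versions(raw):
    """Count OSDs per ceph_version, like `jq '.[].ceph_version' | sort | uniq -c`."""
    return Counter(osd["ceph_version"] for osd in json.loads(raw))

for version, n in count_versions(metadata_json).most_common():
    print(n, version)
```

A mixed result like this is a quick way to see how far an upgrade has progressed.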

  • November 27, 2017
    Introducing Ceph Ansible profile library

    A couple of releases ago, in order to minimize changes within the ceph.conf.j2 Jinja template, we introduced a new module that we took from the OpenStack Ansible folks. This module is called config_template and lets you declare Ceph configuration options as variables in your group_vars files. This is extremely useful for us. Based on …Read more
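As an illustration of what that looks like, here is a sketch of a group_vars entry using ceph-ansible's `ceph_conf_overrides` variable, which config_template merges into the rendered ceph.conf section by section. The specific sections and option values below are invented examples, not recommendations:

```yaml
# group_vars/all.yml (illustrative excerpt)
# config_template merges these keys into ceph.conf on top of the
# ceph.conf.j2 template defaults, per INI section.
ceph_conf_overrides:
  global:
    mon_osd_down_out_interval: 600
  osd:
    osd_scrub_begin_hour: 22
    osd_scrub_end_hour: 7
```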

  • November 7, 2017
    OpenStack Summit Sydney: Delivering OpenStack and Ceph in containers

    DELIVERING OPENSTACK AND CEPH IN CONTAINERS Date: 07/11/17 Video: Source: Sébastien Han (OpenStack Summit Sydney: Delivering OpenStack and Ceph in containers)

  • November 1, 2017
    openATTIC 3.6.0 has been released

    We’re happy to announce version 3.6.0 of openATTIC. Given the fact that openATTIC 3.5.3 was only a bug fix release this 3.6.0 release includes all the improvements and changes since 3.5.2. We cleaned up and removed a lot of unnecessary things and also made some usability improvements to the UI. The most visible change in …Read more

  • October 30, 2017
    openATTIC 3.5.3 has been released

    We are happy to announce version 3.5.3 of openATTIC. 3.5.3 is a small bugfix release mainly containing fixes for upgrade bugs from openATTIC 2.0. This release also features fixes regarding deletion of RGW users with existing buckets and the health widget in the classic dashboard widget. Read more… (1 min remaining to read) Source: SUSE …Read more

  • October 23, 2017
    Using Erasure Coding with RadosGW

    This is going to be a quick write-up of Erasure Coding and how to use it with our RadosGW. First let’s look at our default profile for erasure coding on Ceph, understand it, and go and create our own: root> ceph osd erasure-code-profile get default k=2 m=1 plugin=jerasure crush-failure-domain=host technique=reed_sol_van Erasure coding profiles break down using the following …Read more
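To get an intuition for what k and m in that profile mean, here is a small back-of-the-envelope sketch (not from the original post): k data chunks plus m coding chunks give a raw-space overhead of (k+m)/k, and the profile tolerates the loss of up to m chunks:

```python
def ec_profile_stats(k, m, replicas=3):
    """For an erasure-code profile with k data chunks and m coding chunks,
    compute raw space written per logical byte, usable capacity fraction,
    chunk failures tolerated, and the space advantage over N-way replication."""
    overhead = (k + m) / k          # raw bytes written per logical byte stored
    return {
        "overhead": overhead,
        "usable_fraction": k / (k + m),
        "failures_tolerated": m,
        "vs_replication": replicas / overhead,
    }

# The default profile from the post: k=2, m=1
# -> 1.5x overhead and one failure tolerated, versus 3.0x for 3-way replication
print(ec_profile_stats(2, 1))
```

This is why a profile like k=2, m=1 is attractive for object storage: it halves the raw space of 3-way replication, at the cost of surviving only a single chunk failure.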