Planet Ceph

Aggregated news from external sources

  • April 19, 2017
    Where does CephFS actually store a file?

    Preface: When the RBD interface is used in Ceph, the data is stored in the backend as objects with a fixed prefix, so an image can be reassembled or repaired from the objects sharing that prefix. On the file system side things are more complicated. This post covers how files correspond to objects, how to locate them with system commands, and how that path is derived.
    Practice: locating a file with system commands. Write a test file:
        dd if=/dev/zero of=/mnt/testfile bs=4M count=10
    Check the file's mapping:
        [root@lab8106 mnt]# cephfs /mnt/testfile map
        WARNING: This tool is deprecated. Use the layout.* xattrs to query and modify layouts.
        FILE OFFSET    OBJECT                  OFFSET    LENGTH     OSD
        0              10000001188.00000000    0         4194304    1
        4194304        10000001188.00000001    0         4194304    0
        8388608        10000001188.00000002    0         4194304    1
        12582912       10000001188.00000003    0         4194304    …Read more
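    A minimal sketch of the same lookup without the deprecated cephfs tool (the cephfs_data pool name and the /mnt mount point are assumptions, not from the excerpt): take the file's inode, print it in hex, and list the backing RADOS objects by that prefix.
        # CephFS object names are <inode in hex>.<stripe index>
        ino_hex=$(printf '%x' "$(stat -c %i /mnt/testfile)")
        # list the objects backing the file; the data pool name is an assumption
        rados -p cephfs_data ls | grep "^${ino_hex}\."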

  • April 19, 2017
    Implementing a more scalable storage management framework in openATTIC 3.0

    Over the course of the last few years, we’ve been working on expanding and enhancing both our “traditional” local storage management functionality (NFS/CIFS/iSCSI on top of locally attached disks) and the Ceph management features in openATTIC. Along the way, it became more and more clear to us that our current approach for managing storage …Read more

  • April 19, 2017
    Why can a deleted Ceph object still be fetched with get?

    Preface: A long time ago, while studying a file system, I noticed something odd: no files existed, yet disk usage kept growing. After investigating for a while I found a rather particular handling logic. When that file system processes a file it places it in a temporary directory and, right after the first write request is sent, issues a delete at the operating-system level; writes continue, and the process handling the data keeps the file open the whole time. Once the file is no longer needed, no cleanup is required: the file is released automatically, so there is no worry about temporary files taking up space. In a Ceph cluster, someone ran into a case where, after rm'ing an object directly on the backend OSD, the deleted object could still be fetched from the front end with rados get, yet no longer appeared in rados ls. I suspect the same mechanism is at work; let's see how to verify it.
    Verification steps: prepare test data and put it into the cluster:
        [root@lab8106 ~]# cat zp1
        sdasdasd
        [root@lab8106 ~]# rados -p rbd put zp1 zp1
        [root@lab8106 ~]# rados -p rbd ls
        zp1
    Find the test data and delete it directly with rm:
        [root@lab8106 ~]# ceph osd map rbd zp1
        osdmap e90 pool 'rbd' (3) object 'zp1' -> pg 3.43eb7bdb (3.1b) -> up ([0], p0) acting ([0], p0)
        [root@lab8106 ~]# ll /var/lib/ceph/osd/ceph-0/current/3.1b_head/DIR_B/DIR_D/zp1__head_43EB7BDB__3
        -rw-r--r-- …Read more
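    A minimal sketch of the same verification, assuming a FileStore OSD and the object name and paths taken from the excerpt; whether the final get still succeeds depends on the OSD still holding the file open.
        # put a small object and locate its PG and backing file
        echo sdasdasd > zp1
        rados -p rbd put zp1 zp1
        ceph osd map rbd zp1
        # delete the backing file directly on the OSD (path from the excerpt)
        rm /var/lib/ceph/osd/ceph-0/current/3.1b_head/DIR_B/DIR_D/zp1__head_43EB7BDB__3
        # the object no longer shows up in a listing, but a get may still return the data
        rados -p rbd ls
        rados -p rbd get zp1 /tmp/zp1.out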

  • April 19, 2017
    Deleting an abnormal object from a Ceph OSD

    Preface: Data in Ceph is stored on the OSDs in the form of objects. Sometimes, because of disk damage or other unusual circumstances, a particular object in the cluster becomes abnormal and has to be dealt with. With a corrupted object the OSD may not even start, so a rados rm cannot reach an OSD that refuses to start and therefore cannot delete the object; another way of handling the situation is needed.
    Steps: find the object's path:
        [root@lab8106 ~]# ceph osd map rbd rbd_data.857e6b8b4567.00000000000000ba
        osdmap e53 pool 'rbd' (0) object 'rbd_data.857e6b8b4567.00000000000000ba' -> pg 0.2daee1ba (0.3a) -> up ([1], p1) acting ([1], p1)
    This identifies the OSD and PG the object lives on. Set noout on the cluster:
        [root@lab8106 ~]# ceph osd set noout
    This prevents the stopped OSD from being marked out and triggering unnecessary data movement. Stop the OSD:
        [root@lab8106 ~]# systemctl stop ceph-osd@1
    Skip this step if the OSD is already down. Use the ceph-objectstore-tool utility to remove the single object:
        [root@lab8106 ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1/ --journal-path /var/lib/ceph/osd/ceph-1/journal --pgid 0.3a rbd_data.857e6b8b4567.00000000000000ba remove …Read more
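    A minimal sketch of the typical follow-up once the object has been removed (the truncated excerpt stops before this point; OSD 1 is assumed, as above):
        # restart the OSD, clear noout, then confirm the cluster settles
        systemctl start ceph-osd@1
        ceph osd unset noout
        ceph -s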

  • April 18, 2017
    Faster Ceph CRUSH computation with smaller buckets

    The CRUSH function maps Ceph placement groups (PGs) and objects to OSDs. It is used extensively in Ceph clients and daemons as well as in the Linux kernel modules, and its CPU cost should be kept to a minimum. It is common to define the Ceph CRUSH map so that PGs use OSDs on different …Read more
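    A minimal sketch of how one might inspect and exercise CRUSH mappings for an existing cluster (file names are illustrative, not the author's):
        # dump the compiled CRUSH map and replay test mappings through it
        ceph osd getcrushmap -o crushmap.bin
        crushtool -i crushmap.bin --test --num-rep 3 --show-mappings | head
        # decompile it to inspect bucket types and sizes
        crushtool -d crushmap.bin -o crushmap.txt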

  • April 17, 2017
    Disabling scenarios in ceph-docker

    I recently completed a full resync from Kraken to Jewel in ceph-docker, in which I introduced a new feature to disable scenarios. Running an application on bleeding-edge technology can be tough and challenging for individuals and also for companies. Even I, as a developer and for bleeding-edge testers, am tempted to release unstable …Read more

  • April 13, 2017
    Test Ceph Luminous pre-release with ceph-docker

    /!\ DISCLAIMER /!\ /!\ DO NOT GET TOO EXCITED, AT THE TIME OF WRITING LUMINOUS HAS NOT OFFICIALLY BEEN RELEASED AS STABLE YET /!\ /!\ USE AT YOUR OWN RISK, DO NOT PUT PRODUCTION DATA ON THIS /!\ Luminous is just around the corner, but we have had packages available for a couple of …Read more

  • April 13, 2017
    Checking which clients are connected to a Ceph cluster

    Preface: When running a cluster we usually focus on the state of the backend cluster itself, but when building more user-friendly management features there are more details to take into account. This post looks at one of them: finding out which clients are connected to Ceph.
    Practice: In terms of interfaces, Ceph offers file, block and object access, and each interface needs its own query method. Since I mostly deal with file and block storage, and since file and block clients hold long-lived connections while object access is request based, this post focuses on querying connection information for file and block storage. My cluster state is as follows:
        [root@lab8106 ~]# ceph -s
            cluster 3daaf51a-eeba-43a6-9f58-c26c5796f928
             health HEALTH_WARN
                    mon.lab8106 low disk space
             monmap e1: 1 mons at {lab8106=192.168.8.106:6789/0}
                    election epoch 6, quorum 0 lab8106
              fsmap e20: 1/1/1 up {0=lab8106=up:active}
             osdmap e52: 2 osds: 2 up, 2 in
                    flags sortbitwise,require_jewel_osds
              pgmap v27223: 96 pgs, 3 pools, 2579 MB data, 4621 …Read more
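    A minimal sketch of one way to answer the question, not necessarily the post's approach; the MDS name lab8106 comes from the excerpt, while the image name and header object id below are hypothetical.
        # CephFS: list client sessions via the MDS admin socket (run on the MDS host)
        ceph daemon mds.lab8106 session ls
        # RBD: a client that has an image mapped or opened holds a watch on its header object
        rbd info rbd/myimage | grep block_name_prefix   # rbd_data.<id> -> the header is rbd_header.<id>
        rados -p rbd listwatchers rbd_header.857e6b8b4567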

  • April 10, 2017
    Ceph manager support in ceph-ansible and ceph-docker

    Thanks to this recent pull request, you can now bootstrap the Ceph Manager daemon. This new daemon was added during the Kraken development cycle; its main goal is to act as a hook for existing systems to get monitoring information from the Ceph cluster. It normally runs alongside monitor daemons but can be deployed to …Read more

  • April 7, 2017
    Mapping CephFS directories to other pools

    Source: Stephen McElroy (Mapping CephFS directories to other pools)
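    The excerpt itself was lost to an HTML artifact, so here is a minimal sketch of the general technique the title refers to, not necessarily the author's exact steps; the pool name, filesystem name and directory are assumptions.
        # create a pool, register it as a CephFS data pool, then pin a directory to it
        ceph osd pool create fast-pool 64
        ceph fs add_data_pool cephfs fast-pool
        setfattr -n ceph.dir.layout.pool -v fast-pool /mnt/cephfs/fastdir
        getfattr -n ceph.dir.layout /mnt/cephfs/fastdir   # confirm the new layout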

  • April 4, 2017
    The Schrodinger Ceph cluster

    Inspired by Schrödinger's famous thought experiment, this is the story of a Ceph cluster that was both full and empty until reality kicked in. The cluster: let's imagine a small 3-server cluster. All servers are identical and contain 10x 2 TB hard drives. All Ceph pools are triple replicated on three different hosts. …Read more
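    A back-of-the-envelope check of the capacity implied by that layout (ignoring filesystem overhead and full ratios), as a small shell sketch:
        # 3 servers x 10 drives x 2 TB raw, divided by the 3x replication factor
        raw_tb=$((3 * 10 * 2))        # 60 TB raw
        usable_tb=$((raw_tb / 3))     # ~20 TB usable before any overhead
        echo "raw=${raw_tb}TB usable=${usable_tb}TB"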
