Planet Ceph

Aggregated news from external sources

  • December 16, 2016
    Ceph RGW AWS4 presigned URLs working with the Minio Cloud client

    Some fellows are using the Minio Client (mc) as their primary client-side tool to work with S3 cloud storage and filesystems. As you may know, mc works with the AWS v4 signature API and provides a modern, Apache 2.0-licensed alternative to UNIX commands (ls, cat, cp, diff, etc.). In the case …Read more
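
    As a rough sketch of that setup (the alias, endpoint, and credentials below are placeholders, and the subcommands follow mc's syntax of the time):

      # Register the Ceph RGW endpoint as an S3v4-signed host.
      mc config host add myceph http://rgw.example.com:7480 ACCESS_KEY SECRET_KEY S3v4

      # Generate a presigned (shareable) download URL, valid for 24 hours.
      mc share download --expire=24h myceph/mybucket/myobject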

  • December 13, 2016
    Rapido: A Glorified Wrapper for Dracut and QEMU

    Introduction: I’ve blogged a few times about how Dracut and QEMU can be combined to greatly improve Linux kernel dev/test turnaround. My first post covered the basics of building the kernel, running dracut, and booting the resultant image with qemu-kvm. A later post took a closer look at network configuration, and focused on bridging …Read more
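
    The manual flow those posts describe looks roughly like the following sketch (paths and options are illustrative; Rapido wraps these steps):

      # Build the kernel from its source tree.
      make -j"$(nproc)"

      # Generate an initramfs for the freshly built kernel with dracut.
      dracut --kver "$(make -s kernelrelease)" /tmp/myinitrd.img

      # Boot kernel + initramfs directly with qemu-kvm, no disk image needed.
      qemu-kvm -kernel arch/x86/boot/bzImage -initrd /tmp/myinitrd.img \
               -append "console=ttyS0" -nographic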

  • December 5, 2016
    Ceph ansible is building its community

    This post just relays the initial announcement of the ceph-ansible mailing list. Hello community! ceph-ansible has been growing at a decent pace for the last couple of years. I’m glad to see that we now have so many users and contributors. We are currently implementing a release process within ceph-ansible, where we will certify stable releases …Read more

  • November 27, 2016
    The Dos and Don'ts for Ceph for OpenStack

    Ceph and OpenStack are an extremely useful and highly popular combination. Still, new Ceph/OpenStack deployments frequently come with easily avoided shortcomings — we’ll help you fix them! Do use show_image_direct_url and the Glance v2 API With Ceph RBD (RADOS Block Device), you have the ability to create clones. You can think of clones as the …Read more
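
    That first “do” amounts to a one-line Glance setting; a minimal sketch of the relevant glance-api.conf fragment (standard Glance configuration, placed in [DEFAULT]):

      # /etc/glance/glance-api.conf
      [DEFAULT]
      # Expose each image's direct RBD location so that consumers
      # can clone the image instead of copying it in full.
      show_image_direct_url = True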

  • November 25, 2016
    How to increase debug levels and harvest a detailed OSD log

    Your OSD doesn’t start and you want to find out why. Here’s how to increase the debug levels and harvest a detailed OSD log: First, rotate the OSD log, or just do “cd /var/log/ceph ; mv ceph-osd.0.log ceph-osd.0.log-foo” Then, edit /etc/ceph/ceph.conf to add the following lines to the [osd] section: [osd] debug osd = 20 …Read more
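
    Spelled out, those steps might look as follows (the OSD id 0 and the systemd unit name are illustrative and distribution-dependent):

      # 1. Rotate the old log out of the way.
      cd /var/log/ceph && mv ceph-osd.0.log ceph-osd.0.log-foo

      # 2. Raise the debug level in /etc/ceph/ceph.conf:
      #      [osd]
      #      debug osd = 20

      # 3. Restart the OSD so it logs verbosely from startup.
      systemctl restart ceph-osd@0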

  • November 25, 2016
    How to repair a leveldb database

    Repairing a corrupted leveldb database turns out to be simple, but there is no guarantee that the database state after repair will be the same as it was before the corruption occurred! First, install the leveldb Python module, e.g., using pip. Then, determine the directory path where your leveldb database is stored. For example, the …Read more
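
    A minimal sketch of such a repair, assuming the py-leveldb package (which exposes a RepairDB call) and a hypothetical store path:

      # Install the leveldb Python module.
      pip install leveldb

      # Repair the database in place (the directory path is a placeholder).
      python -c "import leveldb; leveldb.RepairDB('/var/lib/ceph/mon/ceph-a/store.db')"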

  • November 6, 2016
    Automatically deploying Ceph using Salt Open and DeepSea

    One key part of implementing Ceph management capabilities within openATTIC revolves around the ability to install, deploy and manage Ceph cluster nodes automatically. This requires remote node management capabilities that openATTIC currently does not provide out of the box. For “traditional” storage configurations, openATTIC needs to be installed on any storage node …Read more

  • November 2, 2016
    Hello Salty Goodness

    Anyone who’s ever deployed Ceph presumably knows about ceph-deploy. It’s right there in the Deployment chapter of the upstream docs, and it’s pretty easy to use to get a toy test cluster up and running. For any decent-sized cluster, though, ceph-deploy rapidly becomes cumbersome… As just one example, do you really want to have …Read more
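
    For the toy-cluster case, the workflow is only a handful of commands (the hostnames and device name are placeholders, using the ceph-deploy syntax of the time):

      # Bootstrap a cluster definition with an initial monitor host.
      ceph-deploy new node1

      # Install Ceph packages and create the initial monitor(s).
      ceph-deploy install node1 node2 node3
      ceph-deploy mon create-initial

      # Turn a spare disk on each node into an OSD.
      ceph-deploy osd create node2:sdb node3:sdb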

  • October 28, 2016
    Developing with Ceph using Docker

    As you’re probably aware, we’re putting a lot of effort into improving the Ceph management and monitoring capabilities of openATTIC in collaboration with SUSE. One of the challenges here is that Ceph is a distributed system, usually running on a number of independent nodes/hosts. This can be somewhat of a challenge for a developer who …Read more
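
    One common shortcut for a single developer box is the ceph/demo Docker image, which runs a whole cluster’s daemons in one container (the IP and network below are placeholders; the variable names follow the image’s documentation of the time):

      # Run a complete single-node Ceph cluster in one container.
      docker run -d --net=host \
        -e MON_IP=192.168.0.10 \
        -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
        ceph/demo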

  • October 27, 2016
    How DreamHost Builds Its Cloud: Deciding on the physical layout

    This is post No. 6 in a series of posts about how DreamHost builds its Cloud products. Written by Luke Odom straight from the data center operations. How many racks to use for our next-gen OpenStack cluster? We use standard 9-foot 58U racks: tall enough, and with enough power going to them, that neither …Read more

  • October 20, 2016
    Get Your Cloud Networking Done Right: Find Out How from Mellanox at OpenStack Summit Barcelona

    You want your cloud network to be software-defined, but without compromising performance or efficiency. How about 25G, 50G, or 100Gbps speeds without glitches? How about earth-shattering DPDK or virtualized network function (VNF) performance? How about unleashing your NVMe SSD speed potential with software-defined storage? The hyperscale giants are doing all of this and more …Read more

  • October 20, 2016
    How DreamHost Builds Its Cloud: To Converge or Not to Converge?

    This is post No. 5 in a series of posts about how DreamHost builds its Cloud products. Written by Luke Odom straight from the data center operations. With the first DreamCompute cluster, we used specialized hardware for both the Ceph storage nodes and the hypervisors. The Ceph storage nodes had little RAM, low frequency and low …Read more
