Planet Ceph

Aggregated news from external sources

  • December 5, 2016
    Ceph ansible is building its community

This post simply relays the initial announcement from the ceph-ansible mailing list. Hello community! ceph-ansible has been growing quite nicely for the last couple of years. I’m glad to see that we now have so many users and contributors. We are currently implementing a release process within ceph-ansible, where we will certify stable releases …Read more

  • November 27, 2016
    The Dos and Don'ts for Ceph for OpenStack

    Ceph and OpenStack are an extremely useful and highly popular combination. Still, new Ceph/OpenStack deployments frequently come with easily avoided shortcomings — we’ll help you fix them! Do use show_image_direct_url and the Glance v2 API With Ceph RBD (RADOS Block Device), you have the ability to create clones. You can think of clones as the …Read more
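    The recommendation above maps to a few Glance configuration options. A hedged sketch of a glance-api.conf fragment (option names as documented for the Glance releases current in 2016 — verify against your version; the pool and user names are examples, not the post's):

    ```ini
    [DEFAULT]
    # Expose direct image locations so Nova/Cinder can clone RBD-backed
    # images instead of downloading and re-uploading them
    show_image_direct_url = True
    enable_v2_api = True

    [glance_store]
    stores = rbd
    default_store = rbd
    # Example pool/user names — substitute your own
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    ```

    Note that `show_image_direct_url` exposes backend location details to API clients, so it is normally paired with appropriate access controls.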

  • November 25, 2016
    How to increase debug levels and harvest a detailed OSD log

    Your OSD doesn’t start and you want to find out why. Here’s how to increase the debug levels and harvest a detailed OSD log: First, rotate the OSD log, or just do “cd /var/log/ceph ; mv ceph-osd.0.log ceph-osd.0.log-foo” Then, edit /etc/ceph/ceph.conf to add the following lines to the [osd] section: [osd] debug osd = 20 …Read more
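    The steps in the excerpt can be sketched as a short session (OSD id 0 and the log suffix are the post's own examples; the restart command depends on your init system):

    ```shell
    # Rotate the old OSD log aside
    cd /var/log/ceph
    mv ceph-osd.0.log ceph-osd.0.log-foo

    # Add debug settings to the [osd] section of /etc/ceph/ceph.conf:
    #   [osd]
    #   debug osd = 20
    #
    # Then restart the OSD so it starts logging at the new level
    # (systemd example; adjust for your init system):
    systemctl restart ceph-osd@0

    # The detailed log accumulates in /var/log/ceph/ceph-osd.0.log
    ```

    Remember to lower the debug level again once you have the log you need — `debug osd = 20` is verbose and the log file grows quickly.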

  • November 25, 2016
    How to repair a leveldb database

    Repairing a corrupted leveldb database turns out to be simple, but there is no guarantee that the database state after repair will be the same as it was before the corruption occurred! First, install the leveldb Python module, e.g., using pip. Then, determine the directory path where your leveldb database is stored. For example, the …Read more
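    The repair itself boils down to a single call in the py-leveldb module. A minimal sketch, assuming `pip install leveldb` succeeded; the path shown is a hypothetical example — substitute the directory of your own database:

    ```python
    import leveldb

    # Directory holding the corrupted database — replace with your own path
    db_path = "/var/lib/ceph/mon/ceph-a/store.db"

    # RepairDB rewrites the database files in place; as the post warns,
    # the repaired state may differ from the pre-corruption state
    leveldb.RepairDB(db_path)
    ```

    Taking a copy of the database directory before running the repair is a sensible precaution, since the operation modifies the files in place.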

  • November 6, 2016
    Automatically deploying Ceph using Salt Open and DeepSea

    One key part of implementing Ceph management capabilities within openATTIC revolves around the ability to install, deploy, and manage Ceph cluster nodes automatically. This requires remote node management capabilities that openATTIC currently does not provide out of the box. For “traditional” storage configurations, openATTIC needs to be installed on any storage node …Read more

  • November 2, 2016
    Hello Salty Goodness

    Anyone who’s ever deployed Ceph presumably knows about ceph-deploy. It’s right there in the Deployment chapter of the upstream docs, and it’s pretty easy to use to get a toy test cluster up and running. For any decent-sized cluster, though, ceph-deploy rapidly becomes cumbersome… As just one example, do you really want to have …Read more

  • October 28, 2016
    Developing with Ceph using Docker

    As you’re probably aware, we’re putting a lot of effort into improving the Ceph management and monitoring capabilities of openATTIC in collaboration with SUSE. One of the challenges here is that Ceph is a distributed system, usually running on a number of independent nodes/hosts. This can be somewhat of a challenge for a developer who …Read more

  • October 27, 2016
    How DreamHost Builds Its Cloud: Deciding on the physical layout

    This is post No. 6 in a series about how DreamHost builds its Cloud products, written by Luke Odom straight from data center operations. How many racks should we use for our next-gen OpenStack cluster? We use standard 9-foot 58U racks: tall enough, and with enough power going to them, that neither …Read more

  • October 20, 2016
    Get Your Cloud Networking Done Right: Find Out How from Mellanox at OpenStack Summit Barcelona

    You want your cloud network to be software-defined, but without compromising performance or efficiency. How about 25, 50, or 100 Gb/s speeds without glitches? How about earth-shattering DPDK or virtualized network function (VNF) performance? How about unleashing your NVMe SSDs’ speed potential with software-defined storage? The hyperscale giants are doing all of this and more …Read more

  • October 20, 2016
    How DreamHost Builds Its Cloud: To Converge or Not to Converge?

    This is post No. 5 in a series about how DreamHost builds its Cloud products, written by Luke Odom straight from data center operations. With the first DreamCompute cluster, we used specialized hardware for both the Ceph storage nodes and the hypervisors. The Ceph storage nodes had little RAM, low frequency and low …Read more

  • October 12, 2016
    Multipart Upload (Copy part) goes upstream in Ceph

    The last Upload Part (Copy) patches went upstream in Ceph a few days ago. This new feature is available in the master branch now, and it will ship with the first development checkpoint for Kraken. In S3, this feature is used to copy/move data using an existing object as the data source in the storage backend instead …Read more
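    From an S3 client's perspective this is the standard UploadPartCopy operation. A hedged sketch using boto3 against a RADOS Gateway endpoint — the endpoint, bucket, and object names are made up for illustration:

    ```python
    import boto3

    # Endpoint/bucket/key names are illustrative only
    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:7480")

    # Start a multipart upload for the destination object
    mpu = s3.create_multipart_upload(Bucket="dst-bucket", Key="big-object")

    # Copy a byte range of an existing object in as part 1 — the data moves
    # inside the storage backend instead of through the client
    part = s3.upload_part_copy(
        Bucket="dst-bucket",
        Key="big-object",
        UploadId=mpu["UploadId"],
        PartNumber=1,
        CopySource={"Bucket": "src-bucket", "Key": "existing-object"},
        CopySourceRange="bytes=0-5242879",  # first 5 MiB of the source
    )

    # Complete the upload with the copied part
    s3.complete_multipart_upload(
        Bucket="dst-bucket",
        Key="big-object",
        UploadId=mpu["UploadId"],
        MultipartUpload={
            "Parts": [{"ETag": part["CopyPartResult"]["ETag"], "PartNumber": 1}]
        },
    )
    ```

    The server-side copy is what makes this attractive: the client issues a small API call while RGW moves the bytes within the cluster.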

  • October 11, 2016
    Ceph For Databases? Yes You Can, and Should

    Ceph is traditionally known for both object and block storage, but not for database storage. While its scale-out design supports both high capacity and high throughput, the stereotype is that Ceph doesn’t support the low latency and high IOPS typically required by database workloads. However, recent testing by Red Hat, Supermicro, and Percona—one of the …Read more