Aggregated news from external sources
When Ceph is built from source, make check will not run the test_rados.py tests. A minimal cluster is required and can be run from the src directory with: CEPH_NUM_MON=1 CEPH_NUM_OSD=3 ./vstart.sh -d -n -X -l mon osd The test can … Continue reading →
A Ceph cluster is run from source with CEPH_NUM_MON=1 CEPH_NUM_OSD=5 ./vstart.sh -d -n -X -l mon osd and each ceph-osd uses approximately 50 MB of resident memory USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND loic 7489 1.7 … Continue reading →
The Ceph integration tests run by teuthology are described with YAML files in the ceph-qa-suite repository. The actual work is carried out on machines provisioned by teuthology via tasks. For instance, the workunit task runs a script found in the … Continue reading →
Although it is extremely unlikely to lose an object stored in Ceph, it is not impossible. When it happens to a Cinder volume based on RBD, knowing which volume has a missing object will help with disaster recovery. The list_missing command … Continue reading →
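To relate a missing RADOS object back to an RBD-backed volume, it helps to know how RBD names its objects. A minimal sketch, assuming the standard naming scheme where each object is named block_name_prefix followed by a dot and a 16-hex-digit object index (the prefix and object size are reported by rbd info; the prefix value below is hypothetical):

```python
def rbd_object_name(block_name_prefix, byte_offset, object_size=4 << 20):
    """Return the name of the RADOS object holding the given byte offset
    of an RBD volume, assuming the default 4 MiB object size."""
    index = byte_offset // object_size
    return "%s.%016x" % (block_name_prefix, index)

# A volume offset of 12 MiB with 4 MiB objects falls in object index 3:
print(rbd_object_name("rbd_data.2ae8944a", 12 << 20))
# rbd_data.2ae8944a.0000000000000003
```

Going the other way, a missing object name reported by list_missing can be matched against each volume's block_name_prefix to identify the affected Cinder volume.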
An update on my talk submission for the OpenStack summit this year in Paris: my talk on Ceph performance analysis was not chosen by the committee for the official agenda. But there is at least one piece of good news: Marc’s talk will be part of t…
By default teuthology will clone the ceph-qa-suite repository and use the tasks it contains. If tasks have been modified locally, teuthology can be instructed to use a local directory by inserting something like: suite_path: /home/loic/software/ceph/ceph-qa-suite in the teuthology job yaml … Continue reading →
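In context, a teuthology job file with the suite_path override might look like the following sketch; the task list is purely illustrative and only the suite_path line is taken from the excerpt above:

```yaml
# Hypothetical teuthology job fragment: suite_path points at a local
# ceph-qa-suite checkout so locally modified tasks are used instead of
# a fresh clone of the repository.
suite_path: /home/loic/software/ceph/ceph-qa-suite
tasks:
- install:
- ceph:
- workunit:
    clients:
      all:
        - rados/test_python.sh
```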
I’m glad to announce that Ceph is now part of the mirrors iWeb provides.
It is available over both IPv4 and IPv6 via:
rsync on ceph.mirror.iweb.ca::ceph
The mirror provides 4 Gbps of connectivity and is located on the eastern coast of Canada, more precisely in Montreal, Quebec.
Feel free to give it a try and let me know if you see any problems!
The Call for Speakers period for the OpenStack Summit, held November 3–7, 2014 in Paris, ended this week. Voting for the submitted talks has now started and ends at 11:59 pm CDT on August 6 (6:59 am CEST on August 7). I’ve submitted a talk to the stor…
In a Ceph cluster with low bandwidth, the root disk of an OpenStack instance was extremely slow for days. When an OSD is scrubbing a placement group, it has a significant impact on performance, and this is expected, for a … Continue reading →
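When scrubbing must be throttled on a bandwidth-constrained cluster, Ceph exposes a few OSD options for this. A hedged sketch of a ceph.conf fragment; the option names come from the standard Ceph OSD configuration reference, but the values are illustrative and should be tuned per cluster:

```ini
[osd]
# Pause between consecutive scrub chunks, in seconds, to leave
# bandwidth for client I/O (default is 0, i.e. no pause).
osd scrub sleep = 0.1
# Skip starting new scrubs when the system load average is above
# this threshold.
osd scrub load threshold = 0.5
```

Scrubbing can also be suspended cluster-wide during an incident with `ceph osd set noscrub` and re-enabled later with `ceph osd unset noscrub`.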