The kernel client can re-export a Ceph mount via NFS. The only real caveat is that you need to manually specify an fsid when exporting the file system. (Note: specifying an fsid also has benefits if the NFS server is run in an HA scenario and might fail over; without a stable fsid, the mounts might go stale, so this is a GOOD thing.)
$ mount -t ceph 126.96.36.199:/ /mnt/ceph
$ exportfs -o fsid=1234,rw,no_root_squash client:/mnt/ceph
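For a persistent export, the same options can go in /etc/exports instead of being passed to exportfs by hand. A minimal sketch, assuming the same hypothetical client hostname "client" as above:

/mnt/ceph  client(fsid=1234,rw,no_root_squash)

After editing /etc/exports, reload the export table:

$ exportfs -ra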
That's about it. You'll probably need to remove the export before you can unmount, e.g.
$ exportfs -u client:/mnt/ceph
$ umount /mnt/ceph
Below is a note about the reliability of exporting Ceph via NFS, taken from the source of the kernel module. Basically, Ceph's design makes it difficult to provide a single, universally consistent identifier to the NFS export interface. This can cause stale filehandles, presumably when a cached object is released on the NFS server (the Ceph client).
/*
 * NFS export support
 *
 * NFS re-export of a ceph mount is, at present, only semireliable.
 * The basic issue is that the Ceph architecture doesn't lend itself
 * well to generating filehandles that will remain valid forever.
 *
 * So, we do our best. If you're lucky, your inode will be in the
 * client's cache. If it's not, and you have a connectable fh, then
 * the MDS server may be able to find it for you. Otherwise, you get
 * ESTALE.
 *
 * There are ways to make this more reliable, but in the
 * non-connectable fh case, we won't ever work perfectly, and in the
 * connectable case, some changes are needed on the MDS side to work
 * better.
 */
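To make the ESTALE case concrete: an application reading through the NFS re-export can see a stale filehandle when the underlying inode has fallen out of the Ceph client's cache. A minimal illustrative sketch in C (not from the Ceph or NFS sources; the helper name and the file path are hypothetical) of the usual mitigation, reopening by path and retrying:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: read from a path, retrying once if the
 * NFS filehandle goes stale between open() and read(). */
static ssize_t read_retry_estale(const char *path, void *buf, size_t len)
{
    ssize_t n = -1;

    for (int attempt = 0; attempt < 2; attempt++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        n = read(fd, buf, len);
        int saved = errno;      /* close() may clobber errno */
        close(fd);
        errno = saved;
        if (n >= 0 || errno != ESTALE)
            break;              /* success, or an error other than ESTALE */
        /* Filehandle went stale under us; reopen by path and retry. */
    }
    return n;
}

int main(void)
{
    char buf[256];
    /* Hypothetical path on the NFS client's mount of the re-export. */
    ssize_t n = read_retry_estale("/mnt/nfs/ceph/file.txt", buf, sizeof(buf));
    if (n < 0)
        perror("read_retry_estale");
    else
        printf("read %zd bytes\n", n);
    return n < 0 ? 1 : 0;
}

This only papers over the non-connectable-fh case the comment describes; if the MDS cannot find the inode either, the retry will still return ESTALE.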