Motion segmentation is a fundamental capability in many robotic applications, such as mapping and navigation in dynamic environments. In this study, we propose a novel motion segmentation approach based on online non-parametric learning with RGB-D data. The proposed approach requires no prior information, such as a hand-labelled initial segmentation. Foreground cues are derived from dense optical flow under a homography constraint. Visual and depth information of moving objects is learned on the fly to maintain a foreground model, which is incrementally updated across iterations. We evaluate the approach on public sequences, and the results demonstrate that it effectively segments moving objects with a freely moving RGB-D camera.