Sampled-Point Network for Classification of Deformed Building Element Point Clouds

Jingdao Chen1, Yong Kwon Cho2, Jun Ueda2

  • 1Mississippi State University
  • 2Georgia Institute of Technology

Details

14:30 - 17:00 | Tue 22 May | podO

Session: Vision 2

Abstract

Search-and-rescue (SAR) robots operating in post-disaster urban areas need to accurately identify physical site information to perform navigation, mapping, and manipulation tasks. This can be achieved by acquiring a 3D point cloud of the environment and performing object recognition on the point cloud data. However, this task is complicated by the unstructured environments and potentially deformed objects encountered during disaster relief operations. Current 3D object recognition methods rely on point cloud input acquired under suitable conditions and do not account for deformations such as outlier noise, bending, and truncation. This work introduces a deep learning architecture for 3D class recognition from point clouds of deformed building elements. The classification network, consisting of stacked convolution and average pooling layers applied directly to point coordinates, was trained using point clouds sampled from a database of mesh models. The proposed method achieves robustness to input variability using point sorting, resampling, and rotation normalization techniques. Experimental results on synthetically deformed object datasets show that the proposed method outperforms conventional deep learning methods in terms of classification accuracy and computational efficiency.
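As a rough illustration of the preprocessing the abstract describes, the sketch below normalizes a raw point cloud by resampling to a fixed size, rotation-normalizing about the vertical axis, and sorting the points into a canonical order. The function name normalize_point_cloud, the PCA-based horizontal alignment, and the lexicographic sort key are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def normalize_point_cloud(points, num_points=1024):
        """Resample, rotation-normalize, and sort an (M, 3) point cloud.

        Hypothetical pipeline in the spirit of the paper's point sorting,
        resampling, and rotation normalization steps.
        """
        # Resample to a fixed number of points (with replacement if M < num_points)
        idx = np.random.choice(len(points), num_points,
                               replace=len(points) < num_points)
        pts = points[idx]

        # Rotation normalization about the vertical (z) axis: align the
        # dominant horizontal direction with the x axis using PCA on x-y
        xy = pts[:, :2] - pts[:, :2].mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(xy.T @ xy)
        major = eigvecs[:, np.argmax(eigvals)]
        theta = np.arctan2(major[1], major[0])
        c, s = np.cos(-theta), np.sin(-theta)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = pts @ rot.T

        # Center at the origin and scale to the unit sphere
        pts -= pts.mean(axis=0)
        pts /= np.max(np.linalg.norm(pts, axis=1))

        # Point sorting: canonical ordering, here lexicographic by z, then y, then x
        order = np.lexsort((pts[:, 0], pts[:, 1], pts[:, 2]))
        return pts[order]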
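Likewise, a minimal PyTorch sketch of a classifier built from stacked convolutions and average pooling applied directly to point coordinates might look as follows; the layer widths, pooling placement, and class name SampledPointNet are guesses for illustration rather than the authors' exact architecture.

    import torch
    import torch.nn as nn

    class SampledPointNet(nn.Module):
        """Stacked convolutions and average pooling over sorted point
        coordinates; layer sizes are illustrative, not from the paper."""

        def __init__(self, num_classes, num_points=1024):
            super().__init__()
            self.features = nn.Sequential(
                # Treat the sorted point list as a 1-D signal with 3 channels (x, y, z)
                nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(),
                nn.AvgPool1d(kernel_size=2),
                nn.Conv1d(128, 256, kernel_size=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # global average pooling
            )
            self.classifier = nn.Linear(256, num_classes)

        def forward(self, x):              # x: (batch, num_points, 3)
            x = x.transpose(1, 2)          # -> (batch, 3, num_points)
            x = self.features(x).squeeze(-1)  # -> (batch, 256)
            return self.classifier(x)

A normalized cloud from the first sketch could then be batched and fed through the network, e.g. logits = SampledPointNet(num_classes=10)(torch.from_numpy(pts[None]).float()).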