Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary

Biao Jia1, Zhe Hu2, Jia Pan3, Dinesh Manocha4

  • 1University of North Carolina, Chapel Hill
  • 2City University of Hong Kong
  • 3University of Hong Kong
  • 4University of Maryland

Details

10:30 - 13:00 | Tue 22 May | podE

Session: Manipulation - Planning 1

Abstract

The complex physical properties of highly deformable materials such as cloth pose significant challenges for autonomous robotic manipulation systems. We present a novel visual feedback dictionary-based method for manipulating deformable objects towards a desired configuration. Our approach is based on visual servoing, and we use an efficient technique to extract key features from the RGB sensor stream in the form of a histogram of deformable model features. These histogram features serve as high-level representations of the state of the deformable material. Next, we collect manipulation data and build a visual feedback dictionary that maps the velocity in the high-dimensional feature space to the velocity of the robotic end-effectors for manipulation. We have evaluated our approach on a set of complex manipulation tasks and human-robot manipulation tasks on different cloth pieces with varying material characteristics.
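The feature-to-control mapping described in the abstract can be sketched as a nearest-neighbour lookup over recorded (feature-velocity, end-effector-velocity) pairs. The sketch below is a minimal illustrative assumption, not the authors' exact formulation: the dictionary contents are random toy data, and the dimensions, function names, and k-NN averaging are all hypothetical choices.

```python
import numpy as np

# Toy "visual feedback dictionary" (illustrative assumption): each entry pairs
# a velocity in the histogram-feature space with the end-effector velocity
# that produced it during a data-collection phase.
rng = np.random.default_rng(0)
feature_vels = rng.standard_normal((50, 16))  # 50 samples, 16-D feature velocities
ee_vels = rng.standard_normal((50, 6))        # matching 6-D end-effector velocities

def lookup_control(delta_features, k=3):
    """Map a desired feature-space velocity to an end-effector velocity
    by averaging the k nearest dictionary entries (a simple sketch of
    the dictionary lookup; the paper's method may differ)."""
    dists = np.linalg.norm(feature_vels - delta_features, axis=1)
    nearest = np.argsort(dists)[:k]
    return ee_vels[nearest].mean(axis=0)

# Desired change in the feature space: goal histogram minus current histogram.
goal_features = rng.standard_normal(16)
current_features = rng.standard_normal(16)
control = lookup_control(goal_features - current_features)
```

At run time, the servoing loop would repeat this lookup each frame, driving the measured histogram features toward the goal configuration.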