Visual Articulated Tracking in the Presence of Occlusions

Christian Rauch¹, Timothy Hospedales², Jamie Shotton³, Maurice Fallon⁴

  • ¹Robert Bosch GmbH
  • ²Queen Mary University of London
  • ³Microsoft Research
  • ⁴University of Oxford

Details

10:30 - 13:00 | Tue 22 May | Pod L

Session: Visual Tracking 1

Abstract

This paper focuses on visual tracking of a robotic manipulator during manipulation. In this situation, tracking is prone to failure when visual distractions are created by the object being manipulated and by clutter in the environment. Current state-of-the-art approaches, which typically rely on model fitting with Iterative Closest Point (ICP), fail in the presence of distracting data points and are unable to recover. Meanwhile, discriminative methods that are trained only to distinguish parts of the tracked object can also fail in these scenarios, as data points from the occlusions are incorrectly classified as belonging to the manipulator. We instead propose to use the per-pixel data-to-model associations provided by a random forest to avoid local minima during model fitting. By training the random forest with artificial occlusions, we achieve increased robustness to the occlusion and clutter present in the scene. We do this without requiring specific knowledge of the type or location of the manipulated object. Our approach is demonstrated by using dense depth data from an RGB-D camera to track a robotic manipulator during manipulation and in the presence of occlusions.
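
The abstract names two concrete ingredients: per-pixel data-to-model associations predicted by a random forest, and training with artificial occlusions. As a rough, non-authoritative illustration of how these pieces might fit together, the Python sketch below labels each depth pixel with a manipulator link (class 0 standing in for occluders and clutter), augments training frames with synthetic occluding rectangles, and then discards non-manipulator pixels before model fitting. The depth-difference features, link count, and all function names are assumptions made for this sketch; the paper's actual features, forest configuration, and fitting pipeline are not given in the abstract.

```python
"""Minimal sketch (not the paper's implementation): a random forest assigns
each depth pixel to a manipulator link or to an occluder class, and is
trained with artificially occluded frames. Features, geometry, and data
here are synthetic placeholders."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_LINKS = 7      # assumed number of manipulator links
OCCLUDER = 0     # class standing in for occluding objects and clutter

def add_artificial_occlusion(depth, labels, rng):
    """Overwrite a random rectangle with nearer depth values and relabel
    those pixels as occluder, simulating an unknown object in front of the arm."""
    h, w = depth.shape
    y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
    y1, x1 = y0 + rng.integers(10, h // 2), x0 + rng.integers(10, w // 2)
    depth_occ, labels_occ = depth.copy(), labels.copy()
    depth_occ[y0:y1, x0:x1] *= rng.uniform(0.5, 0.9)   # occluder is closer
    labels_occ[y0:y1, x0:x1] = OCCLUDER
    return depth_occ, labels_occ

def depth_difference_features(depth, pixels, offsets):
    """Depth-difference features in the spirit of Shotton et al.'s body-part
    classifier; the paper's exact features may differ."""
    h, w = depth.shape
    d = depth[pixels[:, 0], pixels[:, 1]]
    feats = []
    for o1, o2 in offsets:
        # depth-normalised probe offsets, clamped to the image bounds
        p1 = np.clip(pixels + (o1 / d[:, None]).astype(int), [0, 0], [h - 1, w - 1])
        p2 = np.clip(pixels + (o2 / d[:, None]).astype(int), [0, 0], [h - 1, w - 1])
        feats.append(depth[p1[:, 0], p1[:, 1]] - depth[p2[:, 0], p2[:, 1]])
    return np.stack(feats, axis=1)

rng = np.random.default_rng(0)

# Synthetic stand-ins for training frames (depth map + per-pixel link labels);
# a real pipeline would label pixels from a rendered model of the manipulator.
depth = rng.uniform(0.5, 2.0, size=(120, 160))
labels = rng.integers(1, N_LINKS + 1, size=(120, 160))
depth, labels = add_artificial_occlusion(depth, labels, rng)

offsets = rng.integers(-20, 21, size=(32, 2, 2))   # 32 random probe pairs
pixels = np.stack(np.meshgrid(np.arange(120), np.arange(160),
                              indexing="ij"), axis=-1).reshape(-1, 2)
X = depth_difference_features(depth, pixels, offsets)
y = labels.reshape(-1)

forest = RandomForestClassifier(n_estimators=4, max_depth=12, n_jobs=-1)
forest.fit(X, y)

# At tracking time, pixels classified as occluder are excluded and the
# remaining pixels carry a link association, so model fitting is not
# pulled toward points belonging to the manipulated object.
link = forest.predict(X)
manipulator_pixels = pixels[link != OCCLUDER]
```

One plausible reading of the abstract's claim about local minima is that these per-pixel link associations replace ICP's purely geometric nearest-neighbour correspondences during pose optimisation, though the exact fitting formulation would need to be taken from the full paper.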