Video Object Segmentation Using Teacher-Student Adaptation in a Human Robot Interaction (HRI) Setting

Mennatullah Siam1, Chen Jiang1, Steven Weikai Lu1, Laura Petrich1, Mahmoud Gamal2, Mohamed Elhoseiny3, Martin Jagersand1

  • 1University of Alberta
  • 2Cairo University
  • 3Facebook AI Research

Details

10:45 - 12:00 | Mon 20 May | Room 220 POD 02 | MoA1-02.2

Session: Object Recognition I - 1.1.02

Abstract

Video object segmentation is an essential task in robot manipulation, facilitating grasping and the learning of affordances. Incremental learning is important for robots operating in unstructured environments. Inspired by how children learn from a parent or a teacher, human robot interaction (HRI) can be used to teach a robot about its surroundings under human guidance. A human teacher shows potential objects of interest to the robot, which self-adapts to this teaching signal without requiring manual segmentation labels. We propose a novel teacher-student learning paradigm to teach robots about their surrounding environment: a two-stream motion and appearance "teacher" network provides pseudo-labels to adapt an appearance-only "student" network. The adapted student network can then segment the newly learned objects in other scenes, whether they are static or in motion. We also introduce a carefully designed dataset that serves the proposed HRI setup, denoted (I)nteractive (V)ideo (O)bject (S)egmentation (IVOS); it contains teaching videos of different objects and of manipulation tasks. Our proposed adaptation method outperforms the state of the art on DAVIS and FBMS by 6.8% and 1.2% in F-measure, respectively, and improves over the baseline on the IVOS dataset by 46.1% and 25.9% in mIoU.
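The adaptation loop described in the abstract can be sketched in a few lines. The snippet below is a minimal PyTorch-style illustration of the teacher-student pseudo-labeling idea, not the authors' released code: the `TeacherNet` and `StudentNet` classes, the confidence threshold, and the optimizer settings are assumptions for demonstration only.

```python
import torch
import torch.nn.functional as F
from itertools import cycle, islice

# Sketch of teacher-student adaptation (assumptions: both networks are
# fully convolutional segmentation models; `motion` is a motion cue such
# as optical flow; `teacher` and `student` are hypothetical stand-ins
# for the paper's two-stream and appearance-only architectures).

@torch.no_grad()
def teacher_pseudo_labels(teacher, frames, motion, conf_thresh=0.8):
    """Run the frozen teacher and keep only confident pixel predictions."""
    probs = teacher(frames, motion).softmax(dim=1)  # (B, C, H, W)
    conf, labels = probs.max(dim=1)                 # per-pixel confidence
    labels[conf < conf_thresh] = 255                # 255 = ignore index
    return labels

def adapt_student(student, teacher, loader, steps=200, lr=1e-4):
    """Fine-tune the appearance-only student on teacher pseudo-labels."""
    teacher.eval()
    student.train()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for frames, motion in islice(cycle(loader), steps):
        pseudo = teacher_pseudo_labels(teacher, frames, motion)
        logits = student(frames)                    # appearance stream only
        loss = F.cross_entropy(logits, pseudo, ignore_index=255)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```

Because the student is adapted from pseudo-labels rather than ground truth, no manual annotation is needed during teaching; and since the student uses appearance alone, it can segment the taught objects even when they are static and the motion cue would be uninformative.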