Robot Learning to Paint from Demonstrations

Younghyo Park1, Seunghun Jeon2, Taeyoon Lee3

  • 1MIT
  • 2KAIST
  • 3NAVER LABS
IROS Best Entertainment and Amusement Paper Award sponsored by JTCF

Details

16:45 - 17:00 | Mon 24 Oct | Rm1 (Room A) | MoC-1.4

Session: Award Session III

Abstract

Robotic painting tasks in the real world are often complicated by the highly complex and stochastic dynamics that underlie them, e.g., physical contact between the painting tool and the canvas, color blending between painting media, and more. Simulation-based inverse graphics algorithms, for example, cannot be directly transferred to the real world, due in large part to the considerable gap between the strokes such algorithms assume and the range of strokes a robot can accurately produce on a physical canvas. In this paper, we aim to minimize this gap with a data-driven skill-learning approach. The core idea is to let the robot learn continuous stroke-level skills that jointly encode action trajectories and painted outcomes from an extensive collection of human demonstrations. We demonstrate the efficacy of our method through extensive real-world experiments using a 4-DoF torque-controllable manipulator with a digital canvas (iPad).
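To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of one way such a continuous stroke-level skill space could be learned: a small VAE that encodes each demonstrated pair of action trajectory and painted outcome into a shared latent vector and is trained to reconstruct both. All module names, dimensions, and the choice of a VAE objective are illustrative assumptions.

```python
# A minimal sketch, assuming a VAE-style joint embedding of (trajectory, stroke image)
# pairs into a shared latent "skill" space. Shapes and names are hypothetical.
import torch
import torch.nn as nn

TRAJ_LEN, ACT_DIM, IMG = 50, 4, 32 * 32  # assumed: 50 steps, 4-DoF actions, 32x32 stroke patch

class StrokeSkillVAE(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        in_dim = TRAJ_LEN * ACT_DIM + IMG
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Two decoder heads: one reconstructs the action trajectory,
        # the other the painted outcome, from the same latent skill z.
        self.traj_dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                      nn.Linear(256, TRAJ_LEN * ACT_DIM))
        self.img_dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, IMG), nn.Sigmoid())

    def forward(self, traj, img):
        h = self.encoder(torch.cat([traj.flatten(1), img.flatten(1)], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.traj_dec(z), self.img_dec(z), mu, logvar

def loss_fn(traj, img, traj_hat, img_hat, mu, logvar, beta=1e-3):
    # Joint reconstruction of actions and painted outcome, plus KL regularizer.
    rec = nn.functional.mse_loss(traj_hat, traj.flatten(1)) \
        + nn.functional.mse_loss(img_hat, img.flatten(1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# Toy training step on random stand-ins for human demonstrations.
model = StrokeSkillVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traj = torch.randn(8, TRAJ_LEN, ACT_DIM)  # batch of demonstrated action trajectories
img = torch.rand(8, IMG)                  # corresponding painted stroke images
traj_hat, img_hat, mu, logvar = model(traj, img)
loss_fn(traj, img, traj_hat, img_hat, mu, logvar).backward()
opt.step()
```

Given such a latent space, a painting system could in principle pick a latent skill whose decoded outcome matches a desired stroke and execute the decoded trajectory on the robot; that planning layer is outside the scope of this sketch.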