Real2Sim2Real: Self-Supervised Learning of Physical Single-Step Dynamic Actions for Planar Robot Casting

Vincent Lim1, Huang Huang1, Lawrence Yunliang Chen1, Jonathan Wang1, Jeffrey Ichnowski2, Daniel Seita2, Michael Laskey1, Ken Goldberg1

  • 1University of California, Berkeley
  • 2Carnegie Mellon University

Details

10:40 - 10:45 | Thu 26 May | Room 115A | ThA06.07

Session: Deep Learning in Grasping and Manipulation II

Abstract

This paper introduces the task of Planar Robot Casting (PRC), in which a single planar motion of a robot arm holding one end of a cable causes the other end to slide across the plane toward a desired target. PRC allows the cable to reach points beyond the robot's workspace and has applications in cable management in homes, warehouses, and factories. To efficiently learn a PRC policy for a given cable, we propose Real2Sim2Real, a self-supervised framework that automatically collects physical trajectory examples to tune the parameters of a dynamics simulator using Differential Evolution, generates many simulated examples, and then learns a policy using a weighted combination of simulated and physical data. We evaluate Real2Sim2Real with three simulators (Isaac Gym-segmented, Isaac Gym-hybrid, and PyBullet), two function approximators (Gaussian Processes and Neural Networks), and three cables with differing stiffness, torsion, and friction. Results from 240 physical trials suggest that PRC policies can attain a median error distance (as % of cable length) ranging from 8% to 14%, outperforming baselines and policies trained on only real or only simulated examples. Code, data, and videos are available at https://tinyurl.com/robotcast.
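The Real2Sim2Real simulator-tuning step can be illustrated with a minimal sketch. This is not the paper's actual pipeline (which tunes Isaac Gym and PyBullet parameters against recorded cable trajectories); here a toy `simulate` function with two unknown parameters stands in for the dynamics simulator, and SciPy's Differential Evolution implementation searches for the parameters that best reproduce the "real" endpoint positions. All names, the toy dynamics, and the parameter bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative ground truth: the "real" cable's (friction, stiffness),
# which is unknown to the tuning procedure in practice.
TRUE_PARAMS = np.array([0.3, 1.2])

def simulate(params, actions):
    """Toy stand-in for a cable dynamics simulator: maps arm-motion
    magnitudes to the free endpoint's landing position. The paper's
    simulators (Isaac Gym, PyBullet) are far richer than this."""
    friction, stiffness = params
    return actions * stiffness - friction * actions**2

# Self-supervised physical data: random actions and observed endpoints.
rng = np.random.default_rng(0)
actions = rng.uniform(0.1, 1.0, size=20)
real_endpoints = simulate(TRUE_PARAMS, actions)

def sim_to_real_gap(params):
    """Mean squared distance between simulated and real endpoint
    positions -- the objective Differential Evolution minimizes."""
    return np.mean((simulate(params, actions) - real_endpoints) ** 2)

result = differential_evolution(
    sim_to_real_gap,
    bounds=[(0.0, 1.0), (0.5, 2.0)],  # assumed plausible parameter ranges
    seed=0,
)
tuned_params = result.x  # parameters that make sim match real rollouts
```

With the simulator tuned this way, many cheap simulated examples can be generated and mixed (with weights) with the physical examples to train the PRC policy.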