Learning from the Hindsight Plan -- Episodic MPC Improvement

Aviv Tamar1, Garrett Thomas2, Tianhao Zhang2, Sergey Levine2, Pieter Abbeel2

  • 1Technion
  • 2UC Berkeley

Details

10:15 - 10:20 | Tue 30 May | Room 4611/4612 | TUA6.5

Session: Learning and Adaptive Systems 1

Abstract

Model predictive control (MPC) is a popular control method that has proved effective for robotics, among other fields. MPC performs re-planning at every time step. Re-planning is done with a limited horizon due to computational and real-time constraints, and often also for robustness to potential model errors. However, the limited horizon leads to suboptimal performance. In this work, we consider the iterative learning setting, where the same task can be repeated several times, and propose a policy improvement scheme for MPC. The main idea is that between executions we can, offline, run MPC with a longer horizon, resulting in a hindsight plan. To bring the next real-world execution closer to the hindsight plan, our approach learns to re-shape the original cost function with the goal of satisfying the following property: short-horizon planning (as is realistic during real execution) with respect to the shaped cost should result in mimicking the hindsight plan. This effectively consolidates long-term reasoning into the short-horizon planning. We empirically evaluate our approach on contact-rich manipulation tasks in both simulated and real environments, such as peg insertion by a real PR2 robot.
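The core idea can be illustrated on a toy problem. The sketch below is a minimal stand-in, not the paper's method or experimental setup: it assumes a 1-D line world with a distant goal, a hand-picked linear cost-shaping term `theta * (GOAL - x)`, and a grid search over `theta` in place of the paper's learning procedure. Short-horizon MPC under the true cost is myopic and never moves; after fitting the shaping weight against the long-horizon hindsight rollout, the same short-horizon MPC reproduces the hindsight plan.

```python
from itertools import product

# Toy 1-D task (illustrative assumption, not the paper's setup):
# states 0..5 on a line, goal at x = 5. True cost: 1 per step spent
# away from the goal, plus 0.1 per unit of control.
GOAL, ACTIONS, N_STEPS = 5, (-1, 0, 1), 6

def step(x, u):
    return min(max(x + u, 0), GOAL)

def true_cost(x, u):
    return (0.0 if x == GOAL else 1.0) + 0.1 * abs(u)

def plan(x0, horizon, cost_fn):
    """Open-loop planning by exhaustive search over action sequences
    (feasible only because this toy action space is tiny)."""
    best_seq, best_val = None, float("inf")
    for seq in product(ACTIONS, repeat=horizon):
        x, val = x0, 0.0
        for u in seq:
            val += cost_fn(x, u)
            x = step(x, u)
        if val < best_val:
            best_val, best_seq = val, seq
    return best_seq

def mpc_rollout(x0, horizon, cost_fn):
    """Receding-horizon MPC: re-plan every step, execute the first action."""
    xs, x = [x0], x0
    for _ in range(N_STEPS):
        x = step(x, plan(x, horizon, cost_fn)[0])
        xs.append(x)
    return xs

# Offline hindsight plan: long-horizon MPC sees the goal and moves to it.
hindsight = mpc_rollout(0, horizon=6, cost_fn=true_cost)

# Short-horizon MPC under the true cost is myopic: moving costs 0.1
# while the goal stays out of sight, so it never leaves x = 0.
short_plain = mpc_rollout(0, horizon=2, cost_fn=true_cost)

# Shaped cost c'(x,u) = c(x,u) + theta * (GOAL - x), fit so that
# SHORT-horizon planning under c' mimics the hindsight plan.
def shaped_cost(theta):
    return lambda x, u: true_cost(x, u) + theta * (GOAL - x)

def mismatch(theta):
    rollout = mpc_rollout(0, horizon=2, cost_fn=shaped_cost(theta))
    return sum(a != b for a, b in zip(rollout, hindsight))

best_theta = min((i / 10 for i in range(11)), key=mismatch)
short_shaped = mpc_rollout(0, horizon=2, cost_fn=shaped_cost(best_theta))
```

Here the shaping term plays the role of the learned re-shaped cost: it folds the long-term value of approaching the goal into the short-horizon objective, so a horizon-2 planner makes the same decisions a horizon-6 planner would.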