Learning from the Hindsight Plan -- Episodic MPC Improvement

Aviv Tamar, Garrett Thomas, Tianhao Zhang, Sergey Levine, Pieter Abbeel

  • UC Berkeley



Regular Papers


09:55 - 11:10 | Tue 30 May | Room 4611/4612 | TUA6

Learning and Adaptive Systems 1

Model predictive control (MPC) is a popular control method that has proved effective for robotics, among other fields. MPC performs re-planning at every time step. Re-planning is done with a limited horizon due to computational and real-time constraints, and often also for robustness to potential model errors. However, the limited horizon leads to suboptimal performance. In this work, we consider the iterative learning setting, where the same task can be repeated several times, and propose a policy improvement scheme for MPC. The main idea is that between executions we can, offline, run MPC with a longer horizon, resulting in a hindsight plan. To bring the next real-world execution closer to the hindsight plan, our approach learns to re-shape the original cost function with the goal of satisfying the following property: short-horizon planning (as is realistic during real execution) with respect to the shaped cost should result in mimicking the hindsight plan. This effectively consolidates long-term reasoning into short-horizon planning. We empirically evaluate our approach on contact-rich manipulation tasks in both simulated and real environments, such as peg insertion by a real PR2 robot.
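The core idea of the abstract can be sketched on a toy problem. The gridworld, the cost bump, and the tabular perceptron-style shaping update below are all illustrative assumptions, not the paper's actual algorithm (which learns a parametric shaped cost for continuous manipulation tasks): a long-horizon "hindsight" plan is computed offline, and an additive cost-shaping term is fit so that one-step MPC under the shaped cost reproduces the hindsight actions.

```python
GOAL, LO, HI = 3, -2, 6       # goal state and state bounds (all illustrative)
ACTIONS = (-1, 0, 1)

def cost(x):
    # True per-step cost: quadratic distance to the goal, plus a bump at
    # x = 1 that makes purely myopic planning stall in front of it.
    return (x - GOAL) ** 2 + (10 if x == 1 else 0)

def step(x, u):
    return min(max(x + u, LO), HI)

def plan(x, horizon, shaping):
    # Exhaustive-search MPC: return the first action of the best action
    # sequence of the given horizon under cost + additive shaping.
    def value(s, h):
        if h == 0:
            return 0.0
        return min(cost(step(s, u)) + shaping.get(step(s, u), 0.0)
                   + value(step(s, u), h - 1) for u in ACTIONS)
    return min(ACTIONS, key=lambda u: cost(step(x, u))
               + shaping.get(step(x, u), 0.0) + value(step(x, u), horizon - 1))

HINDSIGHT_H = 8   # long offline re-planning horizon (hindsight plan)
MPC_H = 1         # short horizon available during real execution
T = 8             # episode length

theta = {}        # learned additive cost shaping, one value per state
for _ in range(50):
    mismatches, x = 0, 0
    for _ in range(T):
        a_star = plan(x, HINDSIGHT_H, {})   # hindsight action, true cost
        a = plan(x, MPC_H, theta)           # short-horizon action, shaped cost
        if a != a_star:
            # Perceptron-style update: make the hindsight successor cheaper
            # and the myopic successor dearer under the shaped cost.
            g, b = step(x, a_star), step(x, a)
            theta[g] = theta.get(g, 0.0) - 0.5
            theta[b] = theta.get(b, 0.0) + 0.5
            mismatches += 1
        x = step(x, a_star)                 # train along the hindsight path
    if mismatches == 0:
        break

def rollout(shaping):
    # Execute short-horizon MPC for a full episode; return the final state.
    x = 0
    for _ in range(T):
        x = step(x, plan(x, MPC_H, shaping))
    return x

print(rollout({}), rollout(theta))  # unshaped MPC stalls at 0; shaped reaches 3
```

The shaped cost lets the one-step planner "see through" the bump that only the long-horizon planner could justify crossing, which is the property the abstract asks of the re-shaped cost.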
