Robust Kinodynamic Motion Planning Using Model-Free Game-Theoretic Learning

George Kontoudis1, Kyriakos G. Vamvoudakis2

  • 1University of Maryland
  • 2Georgia Institute of Technology

Details

10:40 - 11:00 | Wed 10 Jul | Franklin 8 | WeA08.3

Session: Learning I

Abstract

This paper presents an online, robust, and model-free motion planning framework for kinodynamic systems. In particular, we employ a Q-learning algorithm for a two-player zero-sum dynamic game to account for worst-case disturbances and kinodynamic constraints. We use one critic and two actor approximators to solve the finite-horizon minimax problem online with a form of integral reinforcement learning. We then leverage a terminal state evaluation structure to facilitate the online implementation. A static obstacle augmentation and a local re-planning framework are presented to guarantee safe kinodynamic motion planning. Rigorous Lyapunov-based proofs guarantee closed-loop stability while maintaining robustness and optimality. Finally, we evaluate the efficacy of the proposed framework with simulations and provide a qualitative comparison of kinodynamic motion planning techniques.
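To illustrate the game-theoretic learning idea in the abstract, the sketch below implements a minimal tabular minimax Q-learning loop for a two-player zero-sum game: a controller picks an action while an adversary injects a worst-case disturbance, and the Q-function is backed up with a max-min (security) value. This is a generic toy illustration under stated assumptions (a 1-D grid, discrete action and disturbance sets, quadratic-free stage cost), not the paper's continuous-time actor-critic algorithm with integral reinforcement learning.

```python
import random

random.seed(0)

# Toy zero-sum setup (all dynamics here are assumptions for the sketch):
# the controller pushes a point along a 1-D grid toward a goal state,
# while an adversarial disturbance pushes back each step.
n_states = 5
actions = [-2, -1, 0, 1, 2]   # controller (minimizing player's moves)
disturbs = [-1, 0, 1]         # worst-case disturbance (adversary's moves)
goal = n_states - 1

# Q[s][i][j]: value of controller action i against disturbance j in state s
Q = [[[0.0 for _ in disturbs] for _ in actions] for _ in range(n_states)]
alpha, gamma, eps = 0.2, 0.9, 0.2

def clip(x):
    return max(0, min(n_states - 1, x))

def security_value(s):
    # Controller's guaranteed value: max over u of min over d
    return max(min(row) for row in Q[s])

def greedy_u(s):
    # Security strategy for the controller
    return max(range(len(actions)), key=lambda i: min(Q[s][i]))

for _ in range(3000):
    s = random.randrange(n_states - 1)
    for _ in range(50):
        if random.random() < eps:          # joint exploration
            u = random.randrange(len(actions))
            d = random.randrange(len(disturbs))
        else:                              # security strategies
            u = greedy_u(s)
            d = min(range(len(disturbs)), key=lambda j: Q[s][u][j])
        s_next = clip(s + actions[u] + disturbs[d])
        r = 0.0 if s_next == goal else -1.0
        # Minimax backup: next-state value is the security value
        target = r + gamma * (0.0 if s_next == goal else security_value(s_next))
        Q[s][u][d] += alpha * (target - Q[s][u][d])
        s = s_next
        if s == goal:
            break

# Learned security policy at each non-goal state
policy = [actions[greedy_u(s)] for s in range(n_states - 1)]
print(policy)
```

Because the adversary can undo at most one unit of motion per step, the controller's security strategy is to push with its largest action at every non-goal state, which the learned policy recovers. The paper's framework replaces this tabular backup with critic and actor approximators over continuous state and input spaces.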