A Data-Driven Policy Iteration Scheme Based on Linear Programming

Details

10:40 - 11:00 | Wed 11 Dec | Risso 8 | WeA23.3

Session: Learning-Based Controller Synthesis

Abstract

We consider the problem of learning discounted-cost optimal control policies for unknown deterministic discrete-time systems with continuous state and action spaces. We show that the policy evaluation step of the well-known policy iteration (PI) algorithm can be characterized as the solution to an infinite-dimensional linear program (LP). However, when approximating such an LP with a finite-dimensional program, the PI algorithm loses its nominal properties. We propose a data-driven PI scheme that ensures a certain monotonic behavior and allows for the incorporation of expert knowledge on the system. A numerical example illustrates the effectiveness of the proposed algorithm.
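To make the LP characterization concrete, the sketch below shows one common way a data-driven, finite-dimensional approximation of the policy evaluation step can be set up: the value function is restricted to a span of basis functions, the Bellman inequality is imposed only at observed transitions, and the resulting LP is solved with an off-the-shelf solver. This is not the authors' implementation; the scalar system, fixed policy, monomial basis, sample set, and coefficient bounds are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm) of a data-driven,
# finite-dimensional LP approximation of policy evaluation.
import numpy as np
from scipy.optimize import linprog

gamma = 0.95                        # discount factor
a, b = 1.1, 1.0                     # dynamics x+ = a*x + b*u (unknown to the learner)
policy = lambda x: -0.6 * x         # fixed policy to evaluate
stage_cost = lambda x, u: x**2 + u**2

# "Data": observed transitions (x, u, cost, x_next) collected under the policy.
rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, size=200)
us = policy(xs)
costs = stage_cost(xs, us)
xs_next = a * xs + b * us

# Value-function approximation V(x) = theta . phi(x) with a monomial basis (assumption).
phi = lambda x: np.stack([np.ones_like(x), x, x**2], axis=-1)
Phi, Phi_next = phi(xs), phi(xs_next)

# LP: maximize sum_k V(x_k)  subject to  V(x_k) <= cost_k + gamma * V(x_k+)  for all samples.
# linprog minimizes, so the objective is negated; bounds on theta keep the sampled LP bounded.
c = -Phi.sum(axis=0)
A_ub = Phi - gamma * Phi_next
b_ub = costs
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-100, 100)] * Phi.shape[1])

print("approximate V(x) coefficients for basis [1, x, x^2]:", res.x)
```

Because the inequality constraints are enforced only at the sampled states, the solution is an approximation of the policy's value function; how to carry such approximations through the iterations while preserving a monotonic behavior is the subject of the paper.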