Model Predictive Control Based on Deep Reinforcement Learning Method with Discrete-Valued Input

Yoshio Tange1, Satoshi Kiryu1, Tetsuro Matsui2

  • 1Fuji Electric Co., Ltd., Japan
  • 2Fuji Electric Co., Ltd., Japan

Details

16:10 - 16:30 | Mon 19 Aug | Lau, 6-211 | MoC4.3

Session: Reinforcement Learning

Abstract

Recent developments in artificial intelligence (AI), such as deep learning, have rapidly attracted a lot of interest from many industrial fields, including process control. In this paper, we propose a novel approach to model predictive control (MPC) combined with deep reinforcement learning (DRL) technology. The proposed method utilizes a pre-identified linear model to predict the future tracking error, and this information is fed to an RL compensator as an observed state. We consider the discrete-valued input case, which often appears in on/off control or multi-level control. Through numerical examples, we show that the proposed method is superior to direct P-based and PI-based RL approaches with respect to control performance and learning convergence.
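
The sketch below is only an illustration of the architecture described in the abstract, not the authors' implementation: a pre-identified linear model predicts future tracking errors, and those predictions form the observed state of an RL compensator that selects among discrete input levels. The plant parameters, horizon, reward, binning, and the use of simple tabular Q-learning (in place of a deep RL agent, to keep the example dependency-free) are all assumptions.

```python
# Illustrative sketch only: model, horizon, reward, and RL algorithm are
# assumptions, not the paper's actual design.
import numpy as np

# Pre-identified linear plant model (assumed first-order, discrete time):
#   x[k+1] = a*x[k] + b*u[k]
a, b = 0.9, 0.5
horizon = 5                             # prediction horizon (assumed)
levels = np.array([-1.0, 0.0, 1.0])     # discrete-valued inputs (multi-level control)

def predict_errors(x, u, r):
    """Predict future tracking errors r - x over the horizon,
    assuming the candidate input u is held constant."""
    errs = []
    for _ in range(horizon):
        x = a * x + b * u
        errs.append(r - x)
    return np.array(errs)

def obs_index(pred_errs, n_bins=7):
    """Coarsely bin the predicted-error trajectory into one state index.
    A deep RL agent would instead take the whole error vector as input."""
    e = np.clip(pred_errs.mean(), -1.0, 1.0)
    return int((e + 1.0) / 2.0 * (n_bins - 1) + 0.5)

# Tabular Q-learning compensator over the discretized observation (stand-in
# for the DRL compensator named in the abstract)
n_states = 7
Q = np.zeros((n_states, len(levels)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

r_ref = 1.0
for episode in range(200):
    x, u_prev = 0.0, 0.0
    s = obs_index(predict_errors(x, u_prev, r_ref))
    for k in range(50):
        # epsilon-greedy choice among the discrete input levels
        a_idx = rng.integers(len(levels)) if rng.random() < eps else int(Q[s].argmax())
        u = levels[a_idx]
        x = a * x + b * u                     # apply the chosen input to the plant
        reward = -abs(r_ref - x)              # penalize tracking error
        s_next = obs_index(predict_errors(x, u, r_ref))
        Q[s, a_idx] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a_idx])
        s = s_next
```

The key design point carried over from the abstract is that the RL agent observes model-based predictions of the tracking error rather than raw plant outputs, which is what ties the learned compensator to the MPC-style use of a pre-identified linear model.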