Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation

Carl-Johan Hoel1, Krister Wolff2, Leo Laine1

  • 1Volvo Group
  • 2Chalmers University of Technology


08:15 - 08:30 | Wed 11 Nov | Virtual Room 1 | W4WeAM2.2

Session: Deep Learning 4

Category: Regular
Theme: Deep Learning


Reinforcement learning (RL) can be used to create a tactical decision-making agent for autonomous driving. However, previous approaches only output decisions and provide no information about the agent's confidence in the recommended actions. This paper investigates how a Bayesian RL technique, based on an ensemble of neural networks with additional randomized prior functions (RPF), can be used to estimate the uncertainty of decisions in autonomous driving. A method for classifying whether or not an action should be considered safe is also introduced. The performance of the ensemble RPF method is evaluated by training an agent on a highway driving scenario. It is shown that the trained agent can estimate the uncertainty of its decisions and indicate an unacceptable level when it faces a situation that is far from the training distribution. Furthermore, within the training distribution, the ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study, the estimated uncertainty is used to choose safe actions in unknown situations. However, the uncertainty information could also be used to identify situations that should be added to the training process.
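The abstract's core idea can be sketched as follows: each ensemble member estimates Q-values as the sum of a trainable function and a fixed, randomly initialized prior function, and the spread of the ensemble's Q-value estimates serves as an uncertainty measure for deciding whether an action is safe. The sketch below is a minimal illustration, not the paper's implementation: the class and parameter names (`EnsembleRPFAgent`, `beta`, `safety_threshold`) are hypothetical, simple linear models stand in for the paper's neural networks, and the coefficient-of-variation safety criterion is one plausible instantiation of the classification idea.

```python
import numpy as np

class EnsembleRPFAgent:
    """Minimal sketch of an ensemble with randomized prior functions (RPF).

    Each member k estimates Q_k(s, a) = f_k(s, a) + beta * p_k(s, a),
    where f_k is trainable and p_k is a fixed random prior. Hypothetical
    linear models stand in for the neural networks used in the paper.
    """

    def __init__(self, n_members=10, state_dim=4, n_actions=5,
                 beta=1.0, safety_threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.beta = beta
        self.safety_threshold = safety_threshold  # hypothetical uncertainty limit
        # Trainable weights f_k and fixed prior weights p_k, one set per member.
        self.f = rng.normal(size=(n_members, n_actions, state_dim))
        self.p = rng.normal(size=(n_members, n_actions, state_dim))

    def q_values(self, state):
        """Return the ensemble's Q-values, shape (n_members, n_actions)."""
        return (self.f + self.beta * self.p) @ state

    def act(self, state):
        """Pick the highest-mean-Q action among those classified as safe."""
        q = self.q_values(state)
        mean_q = q.mean(axis=0)
        # Coefficient of variation across members as the uncertainty estimate.
        cv = q.std(axis=0) / (np.abs(mean_q) + 1e-8)
        safe = cv < self.safety_threshold
        if not safe.any():
            # No action is considered safe: defer to a fallback safety policy,
            # in the spirit of the paper's safe-action classification.
            return None
        masked = np.where(safe, mean_q, -np.inf)
        return int(np.argmax(masked))
```

In a trained ensemble, members agree on states seen during training (low spread, actions classified safe) and disagree on out-of-distribution states (high spread), which is what lets the agent flag unfamiliar situations.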