Marcel Menner, Karl Berntorp, Melanie N. Zeilinger, Stefano Di Cairano
10:20 - 10:40 | Wed 11 Dec | Risso 8 | WeA23.2
This paper presents a method for inverse learning of a control objective defined in terms of requirements and their probability distribution. The probability distribution, which characterizes tolerated deviations from the deterministic requirements, is modeled as Gaussian and learned from data via likelihood maximization. Further, the paper introduces both parametrized requirements for motion planning in autonomous driving applications and methods for estimating them from demonstrations. Human-in-the-loop simulations with four drivers suggest that human motion planning can be modeled with the considered probabilistic control objective, and that the inverse learning methods in this paper enable more natural and personalized automated driving.
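The core learning step described in the abstract, fitting a Gaussian to tolerated deviations by likelihood maximization, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the synthetic deviation data are illustrative assumptions, showing only the standard closed-form maximum-likelihood estimate of a Gaussian's mean and covariance:

```python
import numpy as np

def fit_gaussian_mle(deviations):
    """Maximum-likelihood Gaussian fit to observed deviations.

    deviations: (N, d) array of deviations from deterministic requirements,
                e.g. extracted from driving demonstrations.
    Returns the MLE mean and covariance of the Gaussian model.
    """
    X = np.asarray(deviations, dtype=float)
    mu = X.mean(axis=0)                         # MLE of the mean
    centered = X - mu
    sigma = centered.T @ centered / X.shape[0]  # MLE covariance (1/N normalization)
    return mu, sigma

# Usage with synthetic 2-D deviation samples (hypothetical data):
rng = np.random.default_rng(0)
samples = rng.normal(loc=[0.1, -0.2], scale=[0.5, 0.3], size=(1000, 2))
mu, sigma = fit_gaussian_mle(samples)
```

In the paper's setting, the deviations would come from human demonstrations rather than a synthetic generator, and the fitted distribution parametrizes how strictly each requirement is enforced.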