How to Improve Object Detection in a Driver Assistance System Applying Explainable Deep Learning

Details

13:30 - 17:30 | Sun 9 Jun | Room L109 | SuFT10.1

Session: FRCA-IAV: Formal Methods vs. Machine Learning Approaches for Reliable Navigation

Abstract

Reliable perception and detection of objects is one of the fundamental aspects of vehicle autonomy. Although model-based approaches perform well in the areas of planning and control, they often fail when applied to perception due to the open-world nature of the problems autonomous vehicles face. Therefore, data-driven approaches to object detection and localization are likely to be used in both self-driving cars and advanced driver assistance systems. In particular, deep neural networks have proved excellent at detecting and classifying objects in images, often achieving super-human performance. However, neural networks applied in intelligent vehicles need to be explainable, providing rationales for their decisions. In this paper, we demonstrate how such an interpretation can be provided for a deep learning system that detects specific objects (charging posts) for driver assistance in an electric bus. The interpretation, achieved by visualization of attention heat maps, has twofold use: it allows us to augment the dataset used for training, improving the results, and it may also serve as a tool when fielding the system with a given bus operator. Explaining which parts of the images triggered the decision helps to eliminate misdetections.
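The abstract does not specify how the attention heat maps are computed, but a common technique for this kind of visualization is a class activation map (CAM): a weighted sum of the network's last convolutional feature maps, upsampled to the image size. The sketch below, a hypothetical illustration in pure NumPy (the function name and shapes are assumptions, not the paper's implementation), shows the core arithmetic:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights, out_size):
    """Illustrative CAM heat map (not the paper's exact method).

    feature_maps : array of shape (C, H, W), last conv-layer activations
    class_weights: array of shape (C,), weights for the target class
    out_size     : (H_img, W_img), size of the input image

    Returns a heat map in [0, 1] at image resolution.
    """
    # Weighted sum over the channel axis -> (H, W) evidence map
    cam = np.tensordot(class_weights, feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)          # keep positive evidence only (ReLU)
    if cam.max() > 0:
        cam = cam / cam.max()           # normalize to [0, 1]
    # Nearest-neighbour upsample to the input image size
    h, w = cam.shape
    H, W = out_size
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return cam[np.ix_(rows, cols)]
```

Overlaying such a map on the input image reveals which regions drove the detection, which is the basis for both uses mentioned above: spotting spurious cues to guide dataset augmentation, and explaining individual detections to the operator.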