Investigating Low Level Features in CNN for Traffic Sign Detection and Recognition

Ee Heng Chen, Philipp Roethig¹, Joeran Zeisler¹, Darius Burschka²

  • ¹ BMW AG
  • ² Technische Universitaet Muenchen

Details

12:30 - 12:45 | Mon 28 Oct | The Great Room II | MoD-T3.3

Session: Regular Session on Object Detection and Classification (II)

Abstract

Understanding traffic signs is one of the basic requirements for a self-driving car to drive autonomously in real-world scenarios. It needs to be able to achieve this task with an error rate similar to or lower than that of a human. Currently, in the automotive industry, most vision-based algorithms still rely heavily on the geometry and color of traffic signs to detect and classify them. Although these approaches are suitable for highway scenarios, where there are few background objects, they are not robust enough to handle the vast and diverse set of objects found in urban scenarios. Inspired by the performance achieved by Convolutional Neural Network (CNN) based object detectors, we investigate the feasibility of using CNNs for the task of traffic sign detection from camera images. In this paper, we focus on two specific issues of this task: the low inter-class variation of traffic signs and their small size in images. To tackle these issues, we propose a new architecture that splits the detection and classification branches of a CNN-based object detector. This architecture exploits low-level features for classification while using high-level features for detection in a single forward pass. With this modification, we were able to improve the average precision of the detection results by 5% to 19% on public datasets.
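To make the split-branch idea concrete, below is a minimal PyTorch-style sketch, not the authors' actual network: the backbone stages, layer sizes, and head designs are assumptions chosen for illustration. It shows detection outputs computed from a late, high-level feature map while class scores are computed from an early, low-level feature map pooled at the box locations, all in a single forward pass.

```python
# Illustrative sketch only: layer names, channel counts, and strides are assumed.
import torch
import torch.nn as nn
import torchvision.ops as ops


class SplitHeadDetector(nn.Module):
    def __init__(self, num_classes: int = 43, anchors_per_cell: int = 3):
        super().__init__()
        # Backbone stages: stage1/stage2 produce low-level features,
        # stage3/stage4 produce high-level features.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.stage4 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU())
        # Detection branch on high-level features: objectness + 4 box offsets per anchor.
        self.det_head = nn.Conv2d(256, anchors_per_cell * 5, 1)
        # Classification branch on low-level features pooled per box.
        self.cls_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, images: torch.Tensor, boxes: list):
        low = self.stage2(self.stage1(images))   # low-level features (stride 4)
        high = self.stage4(self.stage3(low))     # high-level features (stride 16)
        det_out = self.det_head(high)            # detection from high-level features
        # Pool the low-level map at the box locations for classification;
        # spatial_scale maps image coordinates onto the stride-4 feature map.
        crops = ops.roi_align(low, boxes, output_size=(7, 7), spatial_scale=0.25)
        cls_out = self.cls_head(crops)           # class scores from low-level features
        return det_out, cls_out
```

In such a layout the classification branch keeps access to the fine spatial detail needed to separate visually similar, small signs, while the detection branch still benefits from the larger receptive field of the deeper layers; how the boxes fed to the classification branch are obtained (ground truth during training, detections at inference) is a design choice not specified here.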