Pedestrian Recognition through Different Cross-Modality Deep Learning Methods

Danut Ovidiu Pop1, Alexandrina Rogozan2, Fawzi Nashashibi3, Abdelaziz Bensrhair4

  • 1Inria
  • 2National Institute of Applied Sciences - Rouen
  • 3Inria
  • 4INSA de Rouen

Details

11:24 - 11:42 | Wed 28 Jun | WeBPl.4

Session: Pattern Recognition for Vehicles

Abstract

A wide variety of approaches to pedestrian detection have been proposed over the past decade, yet it remains an open challenge due to its outstanding importance in the automotive field. In recent years, deep learning classification methods, in particular convolutional neural networks, combined with multi-modality images in different fusion schemes, have achieved strong performance on computer vision tasks. For the pedestrian recognition task, the late-fusion scheme outperforms the early and intermediate integration of modalities. In this paper, we focus on improving and optimizing the late-fusion scheme for pedestrian classification on the Daimler stereo vision data set. We propose different training methods based on cross-modality deep learning of convolutional neural networks (CNNs): (1) a correlated model, (2) an incremental model, and (3) a particular cross-modality model, in which each CNN is trained on one modality but tested on a different one. The experiments show that the incremental cross-modality deep learning of CNNs achieves the best performance. It improves the classification performance not only for each modality classifier, but also for the multi-modality late-fusion scheme. The particular cross-modality model is a promising idea for the automated annotation of modality images with a classifier trained on a different modality and/or for cross-dataset training.
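The late-fusion scheme mentioned in the abstract can be illustrated with a minimal sketch: each modality classifier produces its own class scores, and fusion happens at the decision level by combining the resulting probabilities. The modality names, logits, and uniform averaging weights below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(scores_per_modality, weights=None):
    """Fuse per-modality class scores by (weighted) averaging of
    softmax probabilities -- one simple form of decision-level fusion."""
    probs = [softmax(np.asarray(s, dtype=float)) for s in scores_per_modality]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)  # uniform weighting (assumption)
    return sum(w * p for w, p in zip(weights, probs))

# Hypothetical logits from three per-modality CNN classifiers,
# ordered as [pedestrian, non-pedestrian]
intensity = [2.0, -1.0]
depth     = [0.5,  0.2]
flow      = [1.2, -0.3]

fused = late_fusion([intensity, depth, flow])
label = int(np.argmax(fused))  # 0 = pedestrian, 1 = non-pedestrian
```

Because each modality is trained independently, this scheme lets the cross-modality training methods (correlated, incremental, or train-on-one/test-on-another) improve any single-modality classifier without retraining the others.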