Can We PASS Beyond the Field of View? Panoramic Annular Semantic Segmentation for Real-World Surrounding Perception

Kailun Yang1, Xinxin Hu1, Luis M. Bergasa2, Eduardo Romera2, Xiao Huang3, Dongming Sun4, Kaiwei Wang1

  • 1Zhejiang University
  • 2University of Alcalá (UAH)
  • 3University of Arizona
  • 4Imperial College London

Details

11:00 - 12:30 | Mon 10 June | Room 5 | MoAM_P1.7

Session: [MoAM_P1] Poster 1: AV + Vision

Category: Poster Session

Abstract

Pixel-wise semantic segmentation unifies distinct scene perception tasks in a coherent way and has catalyzed notable progress in autonomous and assisted navigation, where perception of the whole surroundings is vital. However, mainstream semantic segmenters are normally benchmarked against datasets with a narrow Field of View (FoV), and most vision-based navigation systems use only a forward-facing camera. In this paper, we propose a Panoramic Annular Semantic Segmentation (PASS) framework to perceive the entire surroundings, based on a compact panoramic annular lens system and an online panorama unfolding process. To facilitate the training of PASS models, we leverage conventional narrow-FoV imaging datasets, bypassing the effort entailed in creating dense panoramic annotations. To consistently exploit the rich contextual cues in the unfolded panorama, we adapt our real-time ERF-PSPNet to predict semantically meaningful feature maps in different segments and fuse them to achieve smooth and seamless panoramic scene parsing. Beyond the enlarged FoV, we extend focal-length-related and style-transfer-based data augmentations to robustify the semantic segmenter against the distortions and blurs in panoramic imagery. A comprehensive variety of experiments demonstrates the robustness of our proposal for real-world surrounding understanding.
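The panorama unfolding step mentioned above can be illustrated with a minimal sketch: an annular image is mapped to a rectangular strip by sampling along polar coordinates around the lens center. This is only an assumed, simplified version of the process (nearest-neighbor sampling, illustrative parameter names such as `r_inner`/`r_outer`); the paper's actual unfolding pipeline may differ in interpolation and orientation conventions.

```python
import numpy as np

def unfold_panorama(annular, center, r_inner, r_outer, out_h=64, out_w=256):
    """Unfold an annular panorama into a rectangular strip.

    Each output pixel (v, u) is mapped to a polar coordinate
    (radius, angle) on the annulus and sampled with nearest-neighbor
    interpolation. All names here are illustrative assumptions, not
    the paper's exact implementation.
    """
    cy, cx = center
    # The angle spans the full 360-degree FoV across the output width.
    theta = 2.0 * np.pi * np.arange(out_w) / out_w
    # The radius runs from the inner to the outer ring down the height.
    radius = r_inner + (r_outer - r_inner) * np.arange(out_h) / max(out_h - 1, 1)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, annular.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, annular.shape[1] - 1)
    return annular[ys, xs]

# Toy annular image whose intensity encodes the angle, so the unfolded
# strip varies smoothly along its width.
h = w = 200
yy, xx = np.mgrid[0:h, 0:w]
angle = np.arctan2(yy - h // 2, xx - w // 2)
annular = ((angle + np.pi) / (2 * np.pi) * 255).astype(np.uint8)

strip = unfold_panorama(annular, center=(h // 2, w // 2), r_inner=30, r_outer=90)
print(strip.shape)  # (64, 256)
```

In a full pipeline the unfolded strip would then be split into segments, each passed through the segmenter, and the per-segment predictions fused back into one seamless panoramic parsing, as the abstract describes.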