Seeing Through Fog and Snow: Sensor Fusion & Dense Depth in Adverse Weather

Details

10:15 - 10:45 | Sun 9 Jun | Room L108 | SuDT5.3

Session: 3D-DLAD: 3D Deep Learning for Automated Driving

Abstract

Color and lidar data play a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing perception stacks exploit redundant and complementary information under good imaging conditions, they fail to do so in adverse weather and in imaging conditions where the sensory streams are distorted. These rare “edge case” scenarios are not represented in available datasets, and existing sensor stacks and fusion methods are not designed to handle them. This talk describes a deep fusion approach for robust detection and depth reconstruction in fog and snow without labeled training data for these scenarios. For robust object detection in adverse weather, we depart from proposal-level fusion and propose a real-time single-shot model that adaptively fuses features, driven by a generic measurement uncertainty signal that is not seen during training. To provide accurate depth information in these scenarios, we do not rely on scanning lidar; instead, we demonstrate that a low-cost CMOS gated imager can be turned into a dense depth camera with a range of at least 80 m by learning depth from three gated images. We validate the proposed method in simulation and on unseen real-world data acquired over 4,000 km of driving in northern Europe.
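To make the fusion idea concrete, below is a minimal PyTorch sketch of uncertainty-driven feature fusion, not the speakers' actual architecture: two backbone feature maps (camera and lidar) are reweighted per pixel by learned gates conditioned on a measurement-uncertainty map (e.g. a sensor entropy estimate), so a degraded stream is attenuated at inference time without weather labels. All module and tensor names here are hypothetical.

```python
import torch
import torch.nn as nn

class UncertaintyDrivenFusion(nn.Module):
    """Sketch of adaptive feature-level fusion: each stream is scaled by a
    per-pixel weight derived from its measurement-uncertainty map, then the
    gated streams are merged back to the detector's channel width."""

    def __init__(self, channels: int):
        super().__init__()
        # Learned mapping from a 1-channel uncertainty map to a per-pixel weight in (0, 1).
        self.cam_gate = nn.Sequential(nn.Conv2d(1, 1, kernel_size=1), nn.Sigmoid())
        self.lidar_gate = nn.Sequential(nn.Conv2d(1, 1, kernel_size=1), nn.Sigmoid())
        # 1x1 conv merges the concatenated, gated streams.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, cam_feat, lidar_feat, cam_entropy, lidar_entropy):
        # cam_feat / lidar_feat: (B, C, H, W); *_entropy: (B, 1, H, W)
        cam_w = self.cam_gate(cam_entropy)
        lidar_w = self.lidar_gate(lidar_entropy)
        fused = torch.cat([cam_feat * cam_w, lidar_feat * lidar_w], dim=1)
        return self.merge(fused)

# Usage with random tensors standing in for backbone features and entropy maps.
fusion = UncertaintyDrivenFusion(channels=64)
cam, lidar = torch.randn(1, 64, 40, 80), torch.randn(1, 64, 40, 80)
cam_H, lidar_H = torch.rand(1, 1, 40, 80), torch.rand(1, 1, 40, 80)
out = fusion(cam, lidar, cam_H, lidar_H)   # (1, 64, 40, 80)
```

Because the gates depend only on the uncertainty maps, the same trained model can downweight whichever stream degrades, even for distortions never seen during training.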
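The gated-imaging part can be sketched similarly, again only as an illustrative assumption rather than the presented network: the three gated slices integrate light from different range intervals, so per-pixel intensity ratios across the slices encode distance, and a small CNN can regress dense metric depth from the stacked slices. The class and parameter names below are invented for the example.

```python
import torch
import torch.nn as nn

class GatedDepthNet(nn.Module):
    """Toy regressor from three gated-camera slices (one channel per range
    gate) to a dense depth map, clamped to a maximum metric range."""

    def __init__(self, max_depth_m: float = 80.0):
        super().__init__()
        self.max_depth_m = max_depth_m
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # normalized depth in [0, 1]
        )

    def forward(self, gated_slices):
        # gated_slices: (B, 3, H, W) -- one channel per range gate
        x = self.encoder(gated_slices)
        return self.decoder(x) * self.max_depth_m  # metric depth in meters

net = GatedDepthNet(max_depth_m=80.0)
slices = torch.rand(1, 3, 256, 512)   # three gated exposures
depth = net(slices)                   # (1, 1, 256, 512), values in [0, 80] m
```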