DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion


11:30 - 11:45 | Mon 28 October | The Great Room I | MoC-T1.3

Session: Regular Session on Sensor Fusion (I)

Category: Regular Session


In this paper we propose a convolutional neural network designed to upsample a series of sparse range measurements based on contextual cues gleaned from a high-resolution intensity image. Our approach draws inspiration from related work on super-resolution and inpainting. We propose a novel architecture that extracts contextual cues separately from the intensity image and the depth features and then fuses them later in the network. We argue that this approach effectively exploits the relationship between the two modalities and produces accurate results while respecting salient image structures. We present experimental results demonstrating that our approach is comparable with state-of-the-art methods and generalizes well across multiple datasets.
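The core architectural idea described above — extracting features from the intensity image and the sparse depth map in separate branches, then fusing them later in the network — can be sketched in miniature as below. This is a hypothetical NumPy illustration of two-branch late fusion by channel concatenation, not the authors' implementation; all shapes, weights, and the per-pixel linear layers (stand-ins for convolutional stacks) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w):
    """Per-pixel linear feature extraction with ReLU.

    A stand-in for a convolutional branch: x has shape (H, W, C_in),
    w has shape (C_in, C_out).
    """
    return np.maximum(x @ w, 0.0)

H, W = 8, 8
rgb = rng.random((H, W, 3))            # high-resolution intensity image
depth = np.zeros((H, W, 1))            # sparse depth: mostly empty
valid = rng.random((H, W)) < 0.05      # ~5% of pixels have range measurements
depth[valid, 0] = rng.random(int(valid.sum()))

w_rgb = rng.standard_normal((3, 16))   # hypothetical RGB-branch weights
w_d = rng.standard_normal((1, 16))     # hypothetical depth-branch weights

f_rgb = branch(rgb, w_rgb)             # contextual features from intensity
f_d = branch(depth, w_d)               # features from sparse depth

# Late fusion: concatenate the two feature maps along the channel axis,
# then map to a dense depth prediction with a final linear head.
fused = np.concatenate([f_rgb, f_d], axis=-1)   # (H, W, 32)
w_fuse = rng.standard_normal((32, 1))
dense_pred = fused @ w_fuse                     # (H, W, 1) dense output
```

The point of the sketch is only the data flow: each modality gets its own feature extractor, and the network combines them after several layers rather than stacking RGB and depth at the input.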