Lucas Matias1, Marc Sons2, Jefferson Souza3, Denis Wolf4, Christoph Stiller5
09:57 - 10:08 | Mon 10 Jun | Berlioz Auditorium | MoAM2_Oral.3
The recent increase in the precision of image-based depth estimation encourages the use of this type of data for mapping. Recent work proposes different approaches to handle the occlusions caused by the differing scene perspectives of stereo cameras. However, less attention has been paid to depth estimation and inpainting for object removal and object occlusion. In this paper, we study recent inpainting approaches for RGB images and apply these methods to depth maps. We propose a Generative Adversarial Network (GAN) that extracts depth features to estimate the depth inside a masked area, in order to remove objects from disparity images. Our results show that using depth features in the loss function and in the network architecture increases precision and gives the generated image a depth distribution close to the real data. Our main contribution is a GAN that estimates depth information in a masked area of a disparity image.
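The abstract does not specify the loss used; a minimal sketch of one common ingredient of such inpainting losses — an L1 reconstruction term restricted to the masked region of the disparity map — assuming NumPy arrays, with `masked_l1_loss` and the toy maps below being illustrative names, not the paper's implementation:

```python
import numpy as np

def masked_l1_loss(pred_disp, true_disp, mask):
    """L1 reconstruction loss restricted to the masked (inpainted) region.

    pred_disp, true_disp: 2-D disparity maps of equal shape.
    mask: boolean array, True where the object was removed and
    the network must estimate depth.
    """
    diff = np.abs(pred_disp - true_disp)
    return diff[mask].mean()

# Toy example: 4x4 disparity maps with a 2x2 masked hole.
true_disp = np.full((4, 4), 10.0)
pred_disp = true_disp.copy()
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
pred_disp[mask] += 1.0  # constant 1-pixel disparity error inside the hole

print(masked_l1_loss(pred_disp, true_disp, mask))  # → 1.0
```

In a GAN setting, such a term would typically be combined with an adversarial loss; restricting it to the mask focuses the penalty on the region being hallucinated rather than on pixels the generator merely copies.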