Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?

Details

11:55 - 12:00 | Tue 30 May | Room 4311/4312 | TUB3.6

Session: Computer Vision 2

Abstract

Deep learning has rapidly transformed the state-of-the-art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human-annotated training data. This time-consuming annotation process has begun to impede the progress of these deep learning efforts. This paper describes a method that incorporates photo-realistic computer images from a simulation engine to rapidly generate annotated data for training machine learning algorithms. We demonstrate that a state-of-the-art architecture trained only on these synthetic annotations performs better than the identical architecture trained on human-annotated real-world data when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems such as those that appear in self-driving cars. The source code and data are made available to researchers.
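
The abstract outlines the pipeline at a high level: render frames (with automatically generated bounding boxes) from a simulation engine, train a detector only on those synthetic annotations, and evaluate on real KITTI imagery. The sketch below is not the authors' implementation (their own code and data are released separately); it is a minimal illustration of the train-on-synthetic idea using torchvision's Faster R-CNN, where the SyntheticVehicleDataset class, the sample layout, and the single "vehicle" class are illustrative assumptions.

    # Minimal sketch (not the authors' code): fine-tune an off-the-shelf detector
    # on synthetic, automatically annotated frames, then evaluate on real KITTI data.
    # Dataset layout, class list, and hyperparameters are illustrative assumptions.
    import torch
    from torch.utils.data import Dataset, DataLoader
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.io import read_image
    from torchvision.transforms.functional import convert_image_dtype


    class SyntheticVehicleDataset(Dataset):
        """Hypothetical loader: each sample is a rendered frame plus the boxes
        emitted by the simulation engine for it (no human labeling involved)."""

        def __init__(self, samples):
            # samples: list of (image_path, [[x1, y1, x2, y2], ...]) pairs
            self.samples = samples

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            path, boxes = self.samples[idx]
            image = convert_image_dtype(read_image(path), torch.float)
            target = {
                "boxes": torch.tensor(boxes, dtype=torch.float32),
                "labels": torch.ones(len(boxes), dtype=torch.int64),  # 1 = vehicle
            }
            return image, target


    def collate(batch):
        # Detection models expect lists of images/targets, not stacked tensors.
        return tuple(zip(*batch))


    def build_model(num_classes=2):  # background + vehicle
        model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model


    def train(model, dataset, epochs=10, lr=0.005, device="cuda"):
        model.to(device).train()
        loader = DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate)
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for images, targets in loader:
                images = [img.to(device) for img in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                losses = model(images, targets)  # dict of detection losses
                loss = sum(losses.values())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return model

In a setup like this, evaluation would run the trained model in eval mode on real KITTI frames and score the predicted boxes against KITTI's human-labeled ground truth, which is how the paper compares synthetic-only training to training on real annotated data.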