DISC: A Large-Scale Virtual Dataset for Simulating Disaster Scenarios

Hae-gon Jeon1, Sunghoon Im2, Byeong-uk Lee3, Donggul Choi3, Martial Hebert1, In So Kweon3

  • 1Carnegie Mellon University
  • 2DGIST
  • 3KAIST

Details

11:45 - 12:00 | Tue 5 Nov | L1-R5 | TuAT5.4

Session: Robot Safety

Abstract

In this paper, we present the first large-scale synthetic dataset for visual perception in disaster scenarios, and analyze state-of-the-art methods for multiple computer vision tasks with reference baselines. We simulated before-and-after disaster scenarios, such as fire and building collapse, for fifteen different locations in realistic virtual worlds. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground truth for semantic segmentation, depth, optical flow, surface normal estimation, and camera pose estimation. To create realistic disaster scenes, we manually augmented the effects with 3D models using physics-based graphics tools. We use our dataset to train state-of-the-art methods and evaluate how well these methods recognize disaster situations and produce reliable results on virtual scenes as well as real-world images. The results obtained from each task are then used as inputs to the proposed visual odometry network for generating 3D maps of buildings on fire. Finally, we discuss challenges for future research.
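Since the abstract lists the modalities bundled with each stereo pair (semantic segmentation, depth, optical flow, surface normals, camera pose), the sketch below shows one way such a dataset could be consumed for training. This is not the authors' release code: the directory layout, subfolder names (`left/`, `right/`, `depth/`, etc.), and file formats are assumptions made purely for illustration.

```python
# A minimal sketch (not the official loader) of reading per-frame stereo
# images and dense annotations. The layout and formats are hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image
from torch.utils.data import Dataset


class DisasterScenesDataset(Dataset):
    """Hypothetical loader: one sample = a stereo pair plus per-pixel labels."""

    def __init__(self, root, split="train"):
        self.root = Path(root) / split
        # Assumed layout: <root>/<split>/left/*.png with matching file names
        # under right/, semantic/, depth/, flow/, and normal/.
        self.frames = sorted((self.root / "left").glob("*.png"))

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        name = self.frames[idx].name
        return {
            "left": np.asarray(Image.open(self.root / "left" / name)),
            "right": np.asarray(Image.open(self.root / "right" / name)),
            "semantic": np.asarray(Image.open(self.root / "semantic" / name)),
            # Depth, optical flow, and surface normals are assumed to be
            # stored as .npy arrays here; the actual release may differ.
            "depth": np.load(self.root / "depth" / name.replace(".png", ".npy")),
            "flow": np.load(self.root / "flow" / name.replace(".png", ".npy")),
            "normal": np.load(self.root / "normal" / name.replace(".png", ".npy")),
        }
```

A loader structured this way lets the same index serve all of the listed tasks, so baselines for segmentation, stereo depth, flow, and normal estimation can be trained from a single split definition.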