Selective Presentation of AI Object Detection Results While Maintaining Human Reliance

Yosuke Fukuchi1, Seiji Yamada2

  • 1National Institute of Informatics
  • 2National Institute of Informatics

Details

14:06 - 14:12 | Mon 2 Oct | 330A | MoBT16.2

Session: AI-Enabled Robotics

15:30 - 17:00 | Mon 2 Oct | Hall E - Track 16 | MoBIP-16.2

Session: AI-Enabled Robotics

Abstract

Transparency in decision-making is an important factor for AI-driven autonomous systems to be trusted and relied on by users. Studies in the field of visual information processing typically attempt to make an AI system's behavior transparent by showing bounding boxes or heatmaps as explanations. However, it has also been found that an excessive amount of explanation sometimes causes information overload and has negative effects. This paper proposes SmartBBox, a method for reducing the number of bounding boxes shown while maintaining human reliance on an AI. It infers whether each bounding box is worth showing by predicting its effect on human reliance, and it can autonomously learn to make this decision from human usage data. We implemented and tested SmartBBox in an autonomous driving scenario in which a human continuously decides whether to rely on an autonomous driving system while observing the dynamic results of its object detection. The results suggest that SmartBBox can reduce the bounding boxes in object detection results by 64.8% on average while keeping human reliance at the same level as when all the bounding boxes are presented.
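The core idea in the abstract, filtering detections through a learned predictor of their effect on human reliance, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the names `ReliancePredictor`, `select_boxes`, the `BBox` fields, and the threshold are all hypothetical stand-ins.

```python
# Hypothetical sketch of the selection idea described in the abstract:
# show a bounding box only if a model trained on human usage data
# predicts that showing it helps maintain reliance. All names and the
# stand-in scoring logic are illustrative assumptions.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class BBox:
    x: float      # top-left x of the detection
    y: float      # top-left y of the detection
    w: float      # box width
    h: float      # box height
    score: float  # detector confidence
    label: str    # object class


class ReliancePredictor:
    """Placeholder for a model learned from human usage data.

    Given the current scene and one candidate box, it estimates how
    showing that box would affect the human's reliance on the system
    (positive = reliance maintained or increased).
    """

    def predict_effect(self, scene_features: Any, box: BBox) -> float:
        # A real system would use a learned regressor or classifier;
        # this trivial stand-in just rescales detector confidence.
        return box.score - 0.5


def select_boxes(scene_features: Any, boxes: List[BBox],
                 predictor: ReliancePredictor,
                 threshold: float = 0.0) -> List[BBox]:
    """Keep only boxes whose predicted effect on reliance exceeds the
    threshold, reducing visual load while preserving reliance."""
    return [b for b in boxes
            if predictor.predict_effect(scene_features, b) > threshold]
```

In this reading, the selection policy is decoupled from the detector: the detector produces candidate boxes as usual, and the learned predictor decides per box whether presenting it is worth the added visual load.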