A Collaborative Visual Assistant for Robot Operations in Unstructured or Confined Environments

Xuesu Xiao1, Robin Murphy2

  • 1George Mason University
  • 2Texas A&M

Details

10:00 - 10:30 | Mon 25 Sep | Ballroom Foyer | MoAmPo.24

Session: Monday Posters AM

Abstract

The emerging state of the practice for nuclear operations, bomb squads, disaster robots, and other domains with novel tasks or highly occluded environments is to use two robots: a primary and a secondary that acts as a visual assistant, overcoming the perceptual limitations of onboard sensors by providing an external viewpoint. However, the benefits of using an assistant have been limited for at least two reasons: users tend to choose suboptimal viewpoints, and only ground robot assistants have been considered, ignoring the rapid evolution of small unmanned aerial systems for indoor flight. Advances in small unmanned aerial vehicles (UAVs), especially tethered UAVs, suggest that flying robot helpers will soon supply the needed secondary visual perspective and increase the primary robot operator's situation awareness by enabling a richer set of candidate viewpoints. However, the bulk of research on autonomous robot assistants has been motivated by NASA's quest to automate construction in space, ignoring the challenges posed by unstructured or cluttered indoor environments. This work creates the fundamental theory for autonomously positioning either a ground or an aerial assistant to provide the best external views of the primary robot's workspace, given the complexity of the environment and the user's low tolerance of rapidly shifting viewpoints. Our approach uses the cognitive science concept of affordances, where the potential for an action can be directly perceived without knowing intent or models, and is thus universal to all robots and tasks.
Based on our prior experience with 21 disaster deployments, participation in 35 homeland security exercises, and examination of common tasks for robots at Fukushima, the development of the theory is initially restricted to five common affordances: traversability, manipulability, reachability, separability, and passability. A viewpoint v has a performance value |v| based on how well an operator can perform the action from that viewpoint, allowing the information gain of viewpoints to be ranked. Adjacent clusters of viewpoints with similar scores form a continuous volume, or manifold, M, so the entire space is divided into manifolds. Manifolds represent uniform clusters of views of affordances, and the affordances are independent of the structure of an object or of clutter in the environment. Within a manifold, each viewpoint is equally good, which simplifies navigational reachability: as long as the assistant can reach any location within the manifold, it provides the same value as any other member of the manifold. The equivalence of viewpoints within a manifold also aids visual stability, since a UAV subject to drift can be placed at the centroid of a manifold with the assurance that small flight perturbations will not visually distract the operator. The spatial relationship between manifolds is also useful. Mani
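The manifold construction described above can be illustrated with a minimal sketch. Assuming viewpoints are discretized on a 2D grid of precomputed values |v| (the paper does not specify the discretization), connected viewpoints whose scores stay within an illustrative tolerance of a seed score are grouped into one manifold, and the manifold centroid gives the drift-tolerant hover point for the UAV. All names and the tolerance parameter here are hypothetical, not from the paper.

```python
def build_manifolds(values, tol=0.05):
    """Group grid viewpoints into manifolds: 4-connected regions whose
    performance values |v| differ from the region's seed by at most tol."""
    rows, cols = len(values), len(values[0])
    labels = [[None] * cols for _ in range(rows)]
    manifolds = []
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue  # already assigned to a manifold
            seed = values[r][c]
            idx = len(manifolds)
            labels[r][c] = idx
            stack, members = [(r, c)], []
            while stack:  # flood fill over similarly scored neighbors
                i, j = stack.pop()
                members.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and labels[ni][nj] is None
                            and abs(values[ni][nj] - seed) <= tol):
                        labels[ni][nj] = idx
                        stack.append((ni, nj))
            manifolds.append(members)
    return manifolds

def centroid(members):
    """Centroid of a manifold: the hover point where small UAV drift
    keeps the view inside the same equal-value manifold."""
    n = len(members)
    return (sum(r for r, _ in members) / n,
            sum(c for _, c in members) / n)
```

Because every viewpoint inside a manifold is equally good, ranking manifolds (e.g., by their shared score) and navigating to any reachable member, ideally the centroid, is sufficient; no finer-grained viewpoint optimization is needed within a manifold.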