CoSTAR: Instructing Collaborative Robots with Behavior Trees and Vision

Chris Paxton1, Andrew Hundt1, Felix Jonathan1, Kelleher Guerin1, Gregory Hager1

  • 1Johns Hopkins University



Regular Papers


09:55 - 11:10 | Tue 30 May | Room 4813/4913 | TUA11

Intelligent and Flexible Automation


For collaborative robots to become useful, end users who are not robotics experts must be able to instruct them to perform a variety of tasks. With this goal in mind, we developed a system that lets end users create robust task plans with a broad range of capabilities. CoSTAR, the Collaborative System for Task Automation and Recognition, is our winning entry in the 2016 KUKA Innovation Award competition at the Hannover Messe trade show, which that year focused on Flexible Manufacturing. CoSTAR is unique in creating natural, perception-based abstractions that represent the world in a way users can both understand and use to author capable and robust task plans. Our Behavior Tree-based task editor combines high-level information from known-object segmentation and pose estimation with spatial reasoning and robot actions to produce robust task plans. We describe the cross-platform design and implementation of this system on multiple industrial robots and evaluate its suitability for a wide variety of use cases.
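To make the Behavior Tree abstraction mentioned in the abstract concrete, here is a minimal sketch of how a task plan can combine a perception check with a robot action. This is an illustrative toy, not CoSTAR's actual API: the node classes, the dictionary "world" state, and the pick-up example are all assumptions introduced for exposition.

```python
# Minimal behavior-tree sketch (illustrative only; not CoSTAR's implementation).
# A Sequence node ticks its children in order and fails fast; leaf nodes wrap
# perception checks (Condition) and robot commands (Action).

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    def __init__(self, children):
        self.children = children

    def tick(self, world):
        # Tick children left to right; the sequence fails as soon as one fails.
        for child in self.children:
            if child.tick(world) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate

    def tick(self, world):
        # A condition only queries the world; it never changes it.
        return SUCCESS if self.predicate(world) else FAILURE

class Action:
    def __init__(self, effect):
        self.effect = effect

    def tick(self, world):
        # An action mutates the world (here, a stand-in for a robot command).
        self.effect(world)
        return SUCCESS

# Hypothetical example: "if an object is detected, grasp it" as a two-leaf tree.
world = {"object_detected": True, "grasped": False}
tree = Sequence([
    Condition(lambda w: w["object_detected"]),
    Action(lambda w: w.update(grasped=True)),
])
result = tree.tick(world)  # SUCCESS; world["grasped"] is now True
```

The key design point this illustrates is the one the abstract emphasizes: because perception results appear as ordinary condition leaves, a non-expert can compose "sense" and "act" steps in the same tree without writing control-flow code.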
