09:00 - 18:00 | Mon 4 Nov | LG-R8 | MoW-R8
Interacting with the environment using our hands is one of the abilities that distinguishes humans from other species. This is reflected in the crucial role that object manipulation plays in the world we have shaped for ourselves. Manipulating objects autonomously and in unstructured environments is a basic skill robots need in order to support people in everyday life, outside industrial cages. However, this capability involves great complexity, especially in processing sensor information to perceive the surrounding environment. The study of autonomous manipulation in robotics aims at transferring human-like perceptual skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The complexity of this task makes autonomous manipulation one of the open problems in robotics, and it has drawn great interest from the community in recent years.

Conventional approaches attempt to reconstruct the scene using 3D vision and compute a grasping pose that satisfies force-closure constraints, or query a database of precomputed or learned poses. More recently, grasping has been addressed with end-to-end learning methods that show great performance. However, these methods require robots to perform thousands of trials, so their application is often limited to simple grippers and scenarios in which images are acquired from a top-down view. Manipulation with multi-finger hands and mobile robots is unfortunately still out of the scope of these techniques due to the problem's complexity.

The aim of this workshop is to discuss and present the different techniques proposed for addressing the same problem: object manipulation.
More than a comparison, this workshop is designed to encourage people from different research fields, such as robotics and deep learning, to share their approaches, ideas, and problems regarding autonomous manipulation.