Self-Supervised Learning of Tool Affordances from 3D Tool Representation through Parallel SOM Mapping

Tanis Mar1, Vadim Tikhanoff1, Giorgio Metta1, Lorenzo Natale1

  • 1Istituto Italiano di Tecnologia

Details

11:40 - 11:45 | Tue 30 May | Room 4611/4612 | TUB6.3

Session: Learning and Adaptive Systems 2

Abstract

Future humanoid robots will be expected to carry out a wide range of tasks for which they were not originally equipped, by learning new skills and adapting to their environment. A crucial requirement towards that goal is the ability to exploit external elements as tools to perform tasks for which their own manipulators are insufficient; robots that can autonomously learn how to use tools will be far more versatile and simpler to design. Motivated by this prospect, this paper proposes and evaluates an approach that allows robots to learn tool affordances based on the tools' 3D geometry. To this end, we use tool-pose descriptors to represent tools combined with the way in which they are grasped, and affordance vectors to represent the effect that tool-poses achieve as a function of the action performed. Tool affordance learning then consists in determining the mapping between these two representations, which is achieved in two steps. First, the dimensionality of both representations is reduced by mapping them, in an unsupervised manner, onto respective Self-Organizing Maps (SOMs). Then, the mapping between neurons in the tool-pose SOM and neurons in the affordance SOM is learned, from pairs of tool-poses and their corresponding affordance vectors, with a neural-network-based regression model. This method enables the robot to accurately predict the effect of its actions with tools, and thus to select the best action for a given goal, even with previously unseen tools.
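The two-step pipeline described in the abstract can be sketched in code. The snippet below is a minimal illustration, not the authors' implementation: it trains two small SOMs on synthetic stand-ins for tool-pose descriptors and affordance vectors (the real descriptors encode 3D tool geometry and grasp), and then learns the neuron-to-neuron mapping with a simple best-matching-unit lookup table, a hypothetical surrogate for the paper's neural regression model. All array sizes, grid shapes, and training parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5):
    """Train a small 2-D Self-Organizing Map; returns weights of shape (gx*gy, dim)."""
    gx, gy = grid
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], dtype=float)
    w = rng.normal(size=(gx * gy, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)           # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)     # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            b = np.argmin(((w - x) ** 2).sum(1))                       # best-matching unit
            h = np.exp(-((coords - coords[b]) ** 2).sum(1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                             # neighborhood update
    return w

def bmu(w, x):
    """Index of the neuron closest to sample x."""
    return int(np.argmin(((w - x) ** 2).sum(1)))

# Synthetic stand-ins: 60 tool-pose descriptors (10-D) and their affordance
# vectors (5-D); here affordances are an arbitrary function of the pose.
poses = rng.normal(size=(60, 10))
affs = poses @ rng.normal(size=(10, 5))

# Step 1: unsupervised dimensionality reduction onto two SOMs.
som_pose = train_som(poses)
som_aff = train_som(affs)

# Step 2: learn the mapping between pose-SOM and affordance-SOM neurons.
# A BMU co-occurrence table stands in for the paper's regression model.
hits = {}
for x, y in zip(poses, affs):
    hits.setdefault(bmu(som_pose, x), []).append(bmu(som_aff, y))
map_table = {k: max(set(v), key=v.count) for k, v in hits.items()}

# Predict the affordance prototype for a new, unseen tool-pose.
x_new = rng.normal(size=10)
pred_neuron = map_table.get(bmu(som_pose, x_new))
pred_affordance = som_aff[pred_neuron] if pred_neuron is not None else None
```

In this sketch, prediction amounts to finding the best-matching pose neuron and reading off its associated affordance prototype; the paper instead learns the inter-map connection with a neural regression model, which also generalizes to pose neurons not activated during training.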