Morphology-Specific Convolutional Neural Networks for Tactile Object Recognition with a Multi-Fingered Hand

Satoshi Funabashi1, Gang Yan, Andreas Geier2, Alexander Schmitz2, Tetsuya Ogata2, Shigeki Sugano2

  • 1Waseda University, Sugano Lab
  • 2Waseda University

Details

10:45 - 12:00 | Mon 20 May | Room 220 POD 02 | MoA1-02.3

Session: Object Recognition I - 1.1.02

Abstract

Distributed tactile sensors on multi-fingered hands can provide high-dimensional information for grasping objects, but it is not clear how to optimally process such abundant tactile information. This paper explores the use of a morphology-specific convolutional neural network (MS-CNN). uSkin tactile sensors are mounted on an Allegro Hand, providing 720 force measurements (15 uSkin patches with 16 triaxial force sensors each) in addition to 16 joint angle measurements. Consecutive layers in the CNN receive input from parts of one finger segment, one finger, and the whole hand. Since the sensors provide 3D (x, y, z) force vectors, the first layer uses 3-channel inputs (x, y, and z), analogous to the three-channel RGB input of camera images. The layers are combined hierarchically, building a tactile map that reflects the relative positions of the tactile sensors on the hand. Seven combination variants were evaluated, and an object recognition rate of over 95% on 20 objects was achieved, even though only a single randomly chosen time instance from a repeated squeezing motion, with the object in an unknown pose within the hand, was used as input.
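The hierarchical combination described in the abstract can be illustrated with a minimal sketch of the data organization. This is not the authors' implementation: the patch-to-finger assignment and the fixed pooling stand-in for learned convolutions are assumptions made purely for illustration; only the sensor counts (15 patches × 16 triaxial taxels = 720 readings, plus 16 joint angles) come from the paper.

```python
import numpy as np

# Paper's sensor counts: 15 uSkin patches, each a 4x4 grid of triaxial (x,y,z)
# taxels (16 sensors per patch), plus 16 joint angles.
N_PATCHES, GRID, AXES = 15, 4, 3
tactile = np.random.rand(N_PATCHES, GRID, GRID, AXES)  # channels-last, like an RGB image
joints = np.random.rand(16)
assert tactile.size == 720  # 15 * 16 * 3 force measurements

# Assumed (illustrative) patch-to-finger assignment for the 4-fingered Allegro Hand;
# the real mapping depends on where the patches are mounted.
finger_patches = {"thumb": [0, 1, 2], "index": [3, 4, 5, 6],
                  "middle": [7, 8, 9, 10], "ring": [11, 12, 13, 14]}

def segment_features(patch):
    """Stand-in for a learned convolution over one finger-segment patch:
    2x2 average pooling per axis channel, flattened to a feature vector."""
    pooled = patch.reshape(2, 2, 2, 2, AXES).mean(axis=(1, 3))  # -> (2, 2, 3)
    return pooled.reshape(-1)                                   # -> 12 features

# Layer 1: per-segment features from the 3-channel tactile patches
seg_feats = {f: [segment_features(tactile[i]) for i in idx]
             for f, idx in finger_patches.items()}
# Layer 2: per-finger features, concatenating that finger's segments
finger_feats = {f: np.concatenate(s) for f, s in seg_feats.items()}
# Layer 3: whole-hand feature, concatenating all fingers plus joint angles
hand_feat = np.concatenate(list(finger_feats.values()) + [joints])

print(hand_feat.shape)  # (196,): 15 patches * 12 features + 16 joint angles
```

In the actual MS-CNN the per-segment, per-finger, and whole-hand stages are trainable convolutional and fully connected layers rather than fixed pooling, but the grouping by hand morphology follows the same pattern.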