Vision-Based Automatic Control of a 5-Fingered Simulated Assistive Robotic Manipulator for Activities of Daily Living

Chen Wang1, Daniel Freer1, Jindong Liu2, Guang-Zhong Yang3

  • 1Imperial College London
  • 2Precision Robotics Ltd
  • 3Shanghai Jiao Tong University

Details

11:30 - 11:45 | Tue 5 Nov | LG-R16 | TuAT16.3

Session: Grasping I

Abstract

Assistive robotic manipulators (ARMs) play an important role in helping people with upper-limb disabilities and the elderly to carry out Activities of Daily Living (ADLs). However, as the objects handled in ADLs differ in size, shape, and manipulation constraints, many two- or three-fingered end-effectors of ARMs have difficulty interacting with them robustly. In this paper, we propose vision-based control of a 5-fingered manipulator (Schunk SVH), allowing it to automatically change its shape based on object classification using computer vision combined with deep learning. The control method is tested in a simulated environment and achieves a more robust grasp with the properly shaped five-fingered hand than with a comparable three-fingered gripper (Barrett Hand) using the same control sequence. In addition, the final optimal grasp pose (x, y, and theta) is learned by a deep regressor in the penultimate stage of the grasp. This method correctly identifies the optimal grasp pose in 78.35% of the test cases when all three parameters are considered.
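The final stage described above, a deep regressor mapping visual input to a grasp pose (x, y, theta), can be sketched as follows. This is a minimal illustrative example, not the authors' architecture: the feature dimension, layer sizes, and the assumption that a CNN feature vector is the input are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """Elementwise rectified linear activation."""
    return np.maximum(z, 0.0)

class GraspPoseRegressor:
    """Toy fully connected head that regresses a 3-DoF grasp pose.

    Stands in for the paper's deep regressor; weights are random here,
    whereas the real network would be trained on labelled grasp poses.
    """

    def __init__(self, feat_dim=128, hidden=64):
        self.W1 = rng.normal(0.0, 0.1, (feat_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 3))  # outputs (x, y, theta)
        self.b2 = np.zeros(3)

    def predict(self, features):
        h = relu(features @ self.W1 + self.b1)
        return h @ self.W2 + self.b2  # predicted (x, y, theta)

# Stand-in for image features extracted by an upstream CNN.
features = rng.normal(size=128)
pose = GraspPoseRegressor().predict(features)
print(pose.shape)  # one (x, y, theta) triple
```

In the paper's pipeline, such a regressor would run in the penultimate stage of the grasp, after the object class has already selected the hand preshape.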