Fast-Grasp'D: Dexterous Multi-Finger Grasp Generation through Differentiable Simulation

Dylan Turpin1, Tao Zhong1, Shutong Zhang1, Guanglei Zhu2, Eric Heiden3, Miles Macklin4, Stavros Tsogkas5, Sven Dickinson1, Animesh Garg6

  • 1University of Toronto
  • 2University of Toronto, Carnegie Mellon University
  • 3NVIDIA
  • 4University of Copenhagen, NVIDIA
  • 5Samsung / University of Toronto
  • 6Georgia Institute of Technology

Details

15:00 - 16:40 | Wed 31 May | PODS 43-46 (Posters) | WePO2S-22.15

Session: Grasping and Manipulation I

Abstract

Multi-finger grasping relies on high-quality training data, which is hard to obtain: human data is hard to transfer, and synthetic data relies on simplifying assumptions that reduce grasp quality. By making grasp simulation differentiable, and contact dynamics amenable to gradient-based optimization, we accelerate the search for high-quality grasps with fewer limiting assumptions. We present Grasp'D-1M: a large-scale dataset for multi-finger robotic grasping, synthesized with Fast-Grasp'D, a novel differentiable grasping simulator. Grasp'D-1M contains one million training examples for three robotic hands (three-, four- and five-fingered), each with multimodal visual inputs (RGB + depth + segmentation, available in mono and stereo). Grasp synthesis with Fast-Grasp'D is 10x faster than GraspIt! [1] and 20x faster than the prior Grasp'D differentiable simulator [2]. Generated grasps are more stable and contact-rich than GraspIt! grasps, regardless of the distance threshold used to determine contact. We validate the usefulness of our dataset by retraining an existing vision-based grasping pipeline [3] on Grasp'D-1M and showing a dramatic increase in model performance: predicted grasps have 30% more contact, a 33% higher epsilon metric, and 35% lower simulated displacement. Additional details at fast-graspd.github.io.
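
To illustrate the core idea of gradient-based grasp synthesis through a differentiable contact model, the sketch below optimizes fingertip placements against a toy differentiable contact cost. It is not the Fast-Grasp'D simulator: the sphere object, the identity forward-kinematics stand-in, and the loss weights are all illustrative assumptions; the paper's simulator handles full articulated hands, mesh geometry, and contact dynamics.

```python
# Minimal sketch of gradient-based grasp synthesis through a differentiable
# contact cost. Illustrative only: the sphere object, the trivial forward
# kinematics, and the loss weights are assumptions, not the paper's method.
import torch

OBJECT_RADIUS = 0.05  # a sphere at the origin stands in for the object mesh


def fingertip_positions(joint_params: torch.Tensor) -> torch.Tensor:
    """Stand-in for differentiable forward kinematics: maps (F, 3) hand
    parameters directly to (F, 3) fingertip positions."""
    return joint_params


def grasp_loss(fingertips: torch.Tensor) -> torch.Tensor:
    """Differentiable cost: fingertips should touch the surface without
    penetrating it, and their centroid should sit near the object center
    (a crude proxy for surrounding the object)."""
    dist_to_center = fingertips.norm(dim=-1)              # (F,)
    signed_dist = dist_to_center - OBJECT_RADIUS          # >0 outside, <0 inside
    contact_cost = signed_dist.clamp(min=0.0).pow(2).sum()      # pull toward surface
    penetration_cost = signed_dist.clamp(max=0.0).pow(2).sum()  # push out of object
    closure_cost = fingertips.mean(dim=0).pow(2).sum()          # surround the object
    return contact_cost + 10.0 * penetration_cost + closure_cost


# Start five "fingertips" at random positions and refine them by gradient descent.
torch.manual_seed(0)
params = (0.1 * torch.randn(5, 3)).requires_grad_(True)
optimizer = torch.optim.Adam([params], lr=1e-2)

for step in range(500):
    optimizer.zero_grad()
    loss = grasp_loss(fingertip_positions(params))
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.6f}")
print("fingertip distances to surface:",
      (fingertip_positions(params).norm(dim=-1) - OBJECT_RADIUS).detach())
```

The same loop structure applies when the loss is computed by rolling out a differentiable simulator: gradients of the contact and stability costs flow back through the simulation step to the hand pose and joint parameters, which is what lets Fast-Grasp'D search for contact-rich grasps faster than sampling-based tools such as GraspIt!.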