Inverse Reinforcement Learning Via Function Approximation for Clinical Motion Analysis

Kun Li¹, Mrinal Rath², Joel Burdick¹

  • ¹ California Institute of Technology
  • ² University of California, Los Angeles



Interactive Session


10:30 - 13:00 | Tue 22 May | podK | TuA@K


This paper introduces a new method for inverse reinforcement learning in large state spaces, where the learned reward function can be used to control high-dimensional robot systems and analyze complicated human motions. To avoid solving the computationally expensive reinforcement learning problems that arise during reward learning, we propose a function approximation method that ensures the Bellman Optimality Equation always holds, and then estimate a function that maximizes the likelihood of the observed motion. The time complexity of the proposed method is linear in the cardinality of the action set, so it handles large state spaces efficiently. We test the proposed method on reward learning in a simulated environment, showing that it is more accurate than existing methods and scales significantly better. We also show that the proposed method can extend many existing methods to large state spaces. We then apply the method to evaluating the effect of rehabilitative spinal stimulation on patients with spinal cord injuries, based on the observed patient motions.


  • Reformulate the Bellman Optimality Equation with a VR function.
  • Use a deep neural network to approximate the VR function.
  • Solve the inverse reinforcement learning problem in high-dimensional state spaces, and extend existing methods to such spaces.
  • Apply the method to clinical motion analysis.
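The learning loop summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: it substitutes a tiny tabular chain MDP for a high-dimensional state space, a per-state linear reward `theta` for a deep network, a soft Bellman backup (so the optimality condition holds by construction) for the paper's VR reformulation, and finite-difference gradient ascent on the demonstration likelihood for backpropagation. All names (`soft_q`, `log_likelihood`, `demos`) are illustrative.

```python
import numpy as np

# Tiny deterministic chain MDP: 5 states, actions 0 = left, 1 = right.
N_S, N_A, GAMMA = 5, 2, 0.9

def step(s, a):
    """Deterministic transition on the chain."""
    return max(0, s - 1) if a == 0 else min(N_S - 1, s + 1)

def logsumexp(x, axis):
    """Numerically stable log-sum-exp reduction."""
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def soft_q(theta, iters=100):
    """Soft Bellman backup: Q(s, a) = r_theta(s) + gamma * V(s'),
    V(s) = logsumexp_a Q(s, a). Each sweep costs O(|S| * |A|),
    i.e. linear in the number of actions."""
    V = np.zeros(N_S)
    for _ in range(iters):
        Q = np.array([[theta[s] + GAMMA * V[step(s, a)] for a in range(N_A)]
                      for s in range(N_S)])
        V = logsumexp(Q, axis=1)
    return Q

def log_likelihood(theta, demos):
    """Log-probability of observed (state, action) pairs under the
    softmax-over-Q policy induced by the reward parameters theta."""
    Q = soft_q(theta)
    logp = Q - logsumexp(Q, axis=1)[:, None]
    return sum(logp[s, a] for s, a in demos)

# Demonstrations from an "expert" who always moves right.
demos = [(s, 1) for s in range(N_S - 1)] * 10

# Finite-difference gradient ascent on the demonstration likelihood
# (a stand-in for backpropagation through a neural reward/value model).
theta = np.zeros(N_S)
for _ in range(80):
    grad = np.zeros(N_S)
    for i in range(N_S):
        e = np.zeros(N_S); e[i] = 1e-4
        grad[i] = (log_likelihood(theta + e, demos)
                   - log_likelihood(theta - e, demos)) / 2e-4
    theta += 0.1 * grad

print(np.argmax(theta))  # index of the state with the highest learned reward
```

Because the soft backup satisfies the optimality condition at every gradient step, no inner reinforcement-learning problem has to be solved to convergence per reward update, which is the scalability point the abstract makes.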