Low Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning

Nathan Lambert1, Daniel S. Drew2, Joseph Yaconelli3, Sergey Levine2, Roberto Calandra4, Kristofer S. J. Pister1

  • 1University of California, Berkeley
  • 2University of California, Berkeley
  • 3University of Oregon
  • 4TU Dresden

Details

11:45 - 12:00 | Tue 5 Nov | L1-R2 | TuAT2.4

Session: Deep Learning for Aerial Systems

Abstract

Designing effective low-level robot controllers often entails platform-specific implementations that require manual heuristic parameter tuning, significant system knowledge, or long design times. With the rising number of robotic and mechatronic systems deployed across areas ranging from industrial automation to intelligent toys, the need for a general approach to generating low-level controllers is increasing. To address the challenge of rapidly generating low-level controllers, we argue for using model-based reinforcement learning (MBRL) trained on relatively small amounts of automatically generated (i.e., without system simulation) data. In this paper, we explore the capabilities of MBRL on a Crazyflie centimeter-scale quadrotor with rapid dynamics to predict and control at
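The MBRL recipe the abstract argues for can be sketched in miniature: collect state-action data from the system, fit a forward dynamics model to it, then choose actions by planning against that model. The sketch below uses toy linear dynamics, a least-squares model, and random-shooting model-predictive control purely for illustration; the paper's actual implementation (a neural-network dynamics model on real Crazyflie flight data) is not reproduced here, and all dimensions and names are assumptions.

```python
import numpy as np

# Illustrative stand-in dynamics (unknown to the learner); the real system
# in the paper is a Crazyflie quadrotor, not this toy linear model.
STATE_DIM, ACTION_DIM = 4, 2
A_TRUE = 0.95 * np.eye(STATE_DIM)
B_TRUE = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.0], [0.0, 0.5]])
rng = np.random.default_rng(0)

def true_step(s, a):
    # s: (n, STATE_DIM), a: (n, ACTION_DIM) -> next states (n, STATE_DIM)
    return s @ A_TRUE.T + a @ B_TRUE.T

def collect_data(n=500):
    # Step 1: gather (state, action, next state) tuples from the system.
    S = rng.normal(size=(n, STATE_DIM))
    A = rng.normal(size=(n, ACTION_DIM))
    return S, A, true_step(S, A)

def fit_model(S, A, S_next):
    # Step 2: fit a forward model s' = [s, a] @ W.
    # (Least squares here; the paper uses a neural network.)
    X = np.hstack([S, A])
    W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
    return W

def mpc_action(W, s, horizon=5, n_candidates=200):
    # Step 3: random-shooting MPC -- sample candidate action sequences,
    # roll each out under the learned model, and return the first action
    # of the lowest-cost sequence (cost = squared distance from hover at 0).
    seqs = rng.normal(size=(n_candidates, horizon, ACTION_DIM))
    states = np.tile(s, (n_candidates, 1))
    costs = np.zeros(n_candidates)
    for t in range(horizon):
        states = np.hstack([states, seqs[:, t]]) @ W
        costs += np.sum(states**2, axis=1)
    return seqs[np.argmin(costs), 0]

S, A, S_next = collect_data()
W = fit_model(S, A, S_next)
a = mpc_action(W, np.ones(STATE_DIM))  # action for the current state
```

On noise-free linear data the least-squares fit recovers the true dynamics exactly, so the planner here acts against a perfect model; on a real quadrotor the model is approximate, which is why the paper emphasizes how much performance small amounts of data can buy.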