Gradient Compression for Communication-Limited Convex Optimization

Sarit Khirirat1, Mikael Johansson2, Dan Alistarh3

  • 1KTH Royal Institute of Technology
  • 2KTH Royal Institute of Technology
  • 3IST Austria

Details

10:20 - 10:40 | Mon 17 Dec | Flash | MoA05.2

Session: Distributed Optimization for Networked Systems I

Abstract

Data-rich applications in machine learning and control have motivated intense research on large-scale optimization. Novel algorithms have been proposed and shown to achieve order-optimal convergence rates in terms of iteration counts. In actual implementations, however, their performance is severely degraded by the cost of exchanging large gradient vectors between computing nodes. Several lossy gradient compression heuristics have recently been proposed to reduce communication, but few theoretical results exist that quantify how they impact algorithm convergence. This paper establishes and strengthens convergence guarantees for gradient descent under a family of gradient compression techniques. For convex optimization problems, we derive admissible step sizes and quantify both the number of iterations and the number of bits that need to be exchanged to reach a target accuracy. Finally, we validate the performance of different gradient compression techniques in simulations. The numerical results highlight the properties of different gradient compression algorithms and confirm that fast convergence with limited information exchange is indeed possible.
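
The abstract does not state which compression operators are analyzed, so the following is only a minimal sketch of the general setting it describes: gradient descent where each gradient is lossily compressed before the descent step. The top-K sparsifier, the quadratic objective, and the step size 1/L below are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def top_k(g, k):
    """Illustrative compressor: keep only the k largest-magnitude entries of g."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

# Convex quadratic f(x) = 0.5 x^T A x - b^T x, with gradient A x - b (assumed example).
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M.T @ M / n + np.eye(n)          # symmetric positive definite Hessian
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)       # exact minimizer, for reference only

L_smooth = np.linalg.eigvalsh(A).max()   # smoothness constant of f
x = np.zeros(n)
step = 1.0 / L_smooth                    # conservative step size (assumption)

for t in range(500):
    grad = A @ x - b
    x = x - step * top_k(grad, k=5)      # descend along the compressed gradient

print("suboptimality:", 0.5 * (x - x_star) @ A @ (x - x_star))
```

In this sketch only the k retained entries of each gradient would need to be communicated per iteration; the paper's contribution is to quantify, for such compressors, the admissible step sizes and the iteration and bit counts needed to reach a target accuracy.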