Privacy-Preserving Distributed Learning Via Obfuscated Stochastic Gradients

Shripad Gade1, Nitin H. Vaidya2

  • 1University of Illinois at Urbana-Champaign
  • 2University of Illinois at Urbana-Champaign

Details

11:20 - 11:40 | Mon 17 Dec | Flash | MoA05.5

Session: Distributed Optimization for Networked Systems I

Abstract

Distributed (federated) learning has become a popular paradigm in recent years. In this setting, private data is stored across several machines (possibly cellular or mobile devices), which collaboratively solve a distributed optimization problem over their private data to learn predictive models. The growing use of distributed learning on problems involving sensitive data has raised privacy concerns. In this paper, we present a synchronous, distributed stochastic gradient descent based algorithm that introduces privacy via gradient obfuscation in the client-server model. We prove the correctness of our algorithm, and we show that obfuscating gradients via additive and multiplicative perturbations provides privacy for local data against honest-but-curious adversaries.
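The client-server flow described above, with additive and multiplicative gradient perturbations, can be sketched as follows. This is a minimal illustration, not the paper's actual construction: the perturbation distributions, the loss function, and the function names (`obfuscate_gradient`, `client_update`, `server_step`) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate_gradient(grad, noise_scale=0.1):
    """Perturb a local gradient before sharing it with the server.

    Combines a multiplicative and an additive perturbation; the exact
    perturbation scheme here is illustrative only.
    """
    multiplicative = 1.0 + noise_scale * rng.standard_normal(grad.shape)
    additive = noise_scale * rng.standard_normal(grad.shape)
    return multiplicative * grad + additive

def client_update(w, x, y):
    """Client computes a stochastic gradient on its private sample (x, y)
    and sends only the obfuscated version to the server.

    Uses the gradient of the squared loss 0.5 * (w.x - y)^2 as a toy model.
    """
    grad = (w @ x - y) * x
    return obfuscate_gradient(grad)

def server_step(w, obfuscated_grads, lr=0.01):
    """Server averages the obfuscated client gradients and takes a
    synchronous gradient-descent step."""
    return w - lr * np.mean(obfuscated_grads, axis=0)
```

Because each client releases only a perturbed gradient, an honest-but-curious server observes neither the raw gradient nor the underlying private sample; the perturbations are designed so that their effect averages out across clients and iterations.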