Rate Analysis for Detection of Sparse Mixtures

George Moustakides1, Jonathan Ligo2, Venugopal Veeravalli3

  • 1University of Patras and Rutgers University
  • 2University of Illinois at Urbana-Champaign
  • 3University of Illinois at Urbana-Champaign

Details

13:30 - 15:30 | Tue 22 Mar | Poster Area F | SPTM-P1.2

Session: Detection

Abstract

In this paper, we study the rate of decay of the probability of error for distinguishing a sparse signal in noise, modeled as a sparse mixture, from pure noise. This problem has many applications in signal processing, evolutionary biology, bioinformatics, astrophysics and feature selection for machine learning. We let the mixture probability tend to zero as the number of observations tends to infinity and derive oracle rates at which the error probability can be driven to zero for a general class of signal and noise distributions. In contrast to the problem of detecting non-sparse signals, we see that the log-probability of error decays sublinearly rather than linearly and is characterized through the chi-squared divergence rather than the Kullback-Leibler divergence. This work provides the first characterization of the rate of decay of the error probability for this problem.
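To make the setting concrete, the sketch below simulates the sparse mixture detection problem described in the abstract and estimates the error probability of an oracle likelihood ratio test by Monte Carlo. The Gaussian noise/signal model, the choice of mixture probability eps = n^{-1/2}, the signal mean mu, and the zero threshold are illustrative assumptions not taken from the paper, which treats a general class of signal and noise distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000          # number of observations per trial (illustrative value)
eps = n ** -0.5     # mixture probability shrinking with n (assumed scaling)
mu = 2.0            # signal mean under the mixture component (assumed Gaussian model)

def sample_h0(n):
    """Pure noise: i.i.d. standard normal observations."""
    return rng.standard_normal(n)

def sample_h1(n, eps, mu):
    """Sparse mixture: each observation carries a mean-mu signal w.p. eps, else pure noise."""
    has_signal = rng.random(n) < eps
    return rng.standard_normal(n) + mu * has_signal

def log_likelihood_ratio(x, eps, mu):
    """Log-likelihood ratio of the sparse Gaussian mixture vs. standard normal noise."""
    # per-sample density ratio: (1 - eps) + eps * exp(mu*x - mu^2/2)
    return np.sum(np.log1p(eps * np.expm1(mu * x - 0.5 * mu ** 2)))

# Monte Carlo estimate of the error probability of the oracle test (threshold at 0,
# i.e., equal priors); the paper studies how fast this quantity can decay with n.
trials = 500
errors = 0
for _ in range(trials):
    errors += log_likelihood_ratio(sample_h0(n), eps, mu) > 0            # false alarm
    errors += log_likelihood_ratio(sample_h1(n, eps, mu), eps, mu) <= 0  # missed detection
print("estimated error probability:", errors / (2 * trials))
```

Re-running this with larger n (and correspondingly smaller eps) gives an empirical sense of the sublinear decay of the log-error probability that the paper characterizes analytically.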