Learning Compact Structural Representations for Audio Events Using Regressor Banks

Alfred Mertins1, Huy Phan1, Ian McLoughlin2, Lars Hertel1, Marco Maass1, Radoslaw Mazur1

  • 1University of Lübeck
  • 2University of Kent

Details

13:30 - 15:30 | Tue 22 Mar | Poster Area C | AASP-P1.1

Session: Hearing Aids and Environmental Sound Recognition

Abstract

We introduce a new learned descriptor for audio signals that is efficient for event representation. The entries of the descriptor are produced by evaluating a bank of regressors on the input signal. The regressors are class-specific and trained using the random regression forests framework. Given an input signal, each regressor estimates the onset and offset positions of its target event. The estimation confidence scores output by a regressor are then used to quantify how well the input aligns with the temporal structure of the corresponding event category. The proposed descriptor has two advantages. First, it is compact: its dimensionality equals the number of event classes. Second, we show that even simple linear classification models trained on our descriptor outperform not only nonlinear baselines but also the state-of-the-art results on the audio event classification task.
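To illustrate the idea of a regressor bank, here is a minimal sketch in Python. It uses scikit-learn's RandomForestRegressor as a stand-in for the class-specific random regression forests and a simple inverse-spread proxy for the estimation confidence; the class name, feature inputs, and confidence formula are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


class RegressorBankDescriptor:
    """Sketch of a bank of class-specific regression forests.

    Each forest maps signal features to the (onset, offset) positions of
    its target event class.  The descriptor entry for a class is a
    confidence score; here we use the inverse of the per-tree prediction
    spread as a simple proxy for estimation confidence (assumption).
    """

    def __init__(self, n_classes, n_trees=50, random_state=0):
        self.forests = [
            RandomForestRegressor(n_estimators=n_trees, random_state=random_state)
            for _ in range(n_classes)
        ]

    def fit(self, features_per_class, targets_per_class):
        # features_per_class[c]: (n_samples_c, n_features) for class c
        # targets_per_class[c]:  (n_samples_c, 2) holding (onset, offset)
        for forest, X, y in zip(self.forests, features_per_class, targets_per_class):
            forest.fit(X, y)
        return self

    def transform(self, X):
        # X: (n_signals, n_features) -> descriptors: (n_signals, n_classes)
        descriptors = []
        for forest in self.forests:
            # Per-tree (onset, offset) estimates and their disagreement.
            per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
            spread = per_tree.std(axis=0).mean(axis=1)       # (n_signals,)
            descriptors.append(1.0 / (1.0 + spread))         # agreement -> confidence
        return np.column_stack(descriptors)
```

Consistent with the abstract, the resulting compact descriptors (one dimension per event class) could then be fed to a simple linear classifier, e.g. sklearn.svm.LinearSVC, for audio event classification.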