Classification of Breath and Snore Sounds Using Audio Data Recorded with Smartphones in The Home Environment

Johannes Schneider1, Tim Fischer2, Wilhelm Stork3

  • 1FZI Forschungszentrum Informatik
  • 2FZI Research Center for Information Technology
  • 3Karlsruhe Institute of Technology (KIT)



Technical Session: Poster


Audio and Acoustic Signal Processing


13:30 - 15:30 | Tue 22 Mar | Poster Area C | AASP-P1

Hearing Aids and Environmental Sound Recognition



In this paper, classification between snore-inhale (SI), snore-exhale (SE), breathe-inhale (BI), breathe-exhale (BE) and noise (NS) sounds is performed. The database is obtained from 7 subjects, who recorded whole-night audio data in their private home environments with their own smartphones. Preprocessing is done with a modification of an adaptive noise-suppression method [1]. The classification system consists of 5 binary RobustBoost classifiers (RBs) [2] applying the one-vs.-rest strategy, followed by an artificial neural network (ANN) that votes on their outputs. ReliefF and Sequential Forward Selection (SFS) extract a 5-dimensional feature vector consisting of psychoacoustic features from the time and spectral domains. Sensitivity (Se) and specificity (Sp) in percent on a preprocessed (i.e., containing only sound-activity segments), representative 1 h 20 min dataset are: Se_{SI; SE; BI; BE; NS} = {80.91; 80.01; 34.12; 66.45; 29.53} and Sp_{SI; SE; BI; BE; NS} = {83.56; 91.70; 90.53; 83.32; 93.51}.
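The decision architecture described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the one-vs.-rest scheme only: the paper uses trained RobustBoost ensembles as the five binary classifiers and an ANN as the voter, whereas the stub below scores each class with an invented linear weighting of the 5-dimensional feature vector and votes by argmax.

```python
# Sketch of a one-vs.-rest decision over the five classes from the
# abstract. The per-class classifiers and the voter are stand-ins
# (dot-product scores and argmax); the paper trains RobustBoost
# ensembles and an ANN for these roles.

CLASSES = ["SI", "SE", "BI", "BE", "NS"]

def binary_scores(feature_vector, weights):
    """One binary score per class: each class's classifier rates how
    well the sample matches that class against the rest (stubbed as a
    dot product with per-class weights)."""
    return {
        cls: sum(f * w for f, w in zip(feature_vector, weights[cls]))
        for cls in CLASSES
    }

def vote(scores):
    """Combine the five one-vs.-rest outputs into a final label
    (argmax here; the paper uses an ANN for this voting step)."""
    return max(scores, key=scores.get)

# Illustrative weights and a sample 5-dimensional feature vector,
# both invented for this sketch.
weights = {
    "SI": [1.0, 0.2, 0.0, 0.0, 0.1],
    "SE": [0.1, 1.0, 0.0, 0.2, 0.0],
    "BI": [0.0, 0.1, 1.0, 0.0, 0.2],
    "BE": [0.2, 0.0, 0.1, 1.0, 0.0],
    "NS": [0.0, 0.0, 0.2, 0.1, 1.0],
}
x = [0.9, 0.1, 0.0, 0.3, 0.2]
label = vote(binary_scores(x, weights))  # -> "SI" for this sample
```

In the full system each binary classifier would be trained on the selected psychoacoustic features, and the voter would be fitted on the classifiers' outputs rather than hard-coded.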
