Classification of Breath and Snore Sounds Using Audio Data Recorded with Smartphones in The Home Environment

Johannes Schneider • Tim Fischer • Wilhelm Stork

13:30 - 15:30 | Tuesday 22 March 2016 | Poster Area C



In this paper, classification between snore-inhale (SI), snore-exhale (SE), breathe-inhale (BI), breathe-exhale (BE) and noise (NS) sounds is performed. The database is obtained from 7 subjects, who recorded whole-night audio data in their private home environments with their own smartphones. Preprocessing is done with a modified adaptive noise suppression method [1]. The classification system consists of 5 binary RobustBoost classifiers (RBs) [2] applying the one-vs.-rest strategy and an artificial neural network (ANN) voting on their outputs. ReliefF and Sequential Forward Selection (SFS) extract a 5-dimensional feature vector consisting of psychoacoustic features from the time and spectral domains. Sensitivity (Se) and specificity (Sp) in percent on a preprocessed (i.e. containing only sound activity segments), representative 1 h 20 min dataset are: Se_{SI; SE; BI; BE; NS} = {80.91; 80.01; 34.12; 66.45; 29.53}, Sp_{SI; SE; BI; BE; NS} = {83.56; 91.70; 90.53; 83.32; 93.51}.
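The described pipeline combines five one-vs.-rest binary classifiers with an ANN that votes on their outputs. A minimal sketch of that second stage, assuming the five binary scores are already available (RobustBoost itself is not shown; the weights, layer sizes, and scorer functions below are illustrative placeholders, not the authors' trained model):

```python
import numpy as np

# The five target classes from the paper.
CLASSES = ["SI", "SE", "BI", "BE", "NS"]

def one_vs_rest_scores(x, classifiers):
    """Run each binary one-vs.-rest classifier on the feature vector x.

    `classifiers` is a list of five callables, one per class, each
    returning a confidence score for "this class vs. the rest".
    """
    return np.array([clf(x) for clf in classifiers])

def ann_vote(scores, W1, b1, W2, b2):
    """Tiny feed-forward net that maps the 5 binary scores to a label.

    One tanh hidden layer followed by a linear output layer; the class
    with the largest output logit wins. Weights are assumed to come
    from training, which is outside the scope of this sketch.
    """
    hidden = np.tanh(W1 @ scores + b1)   # hidden-layer activations
    logits = W2 @ hidden + b2            # one logit per class
    return CLASSES[int(np.argmax(logits))]
```

A usage example with placeholder weights: with identity weight matrices and zero biases the net simply passes the binary scores through a monotone tanh, so the class whose one-vs.-rest classifier is most confident is selected.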