Facial Expression-Based Emotion Classification using Electrocardiogram and Respiration Signals

Automated emotion recognition from physiological signals is an active research area. Many studies rely on subjects' self-reported emotion scores to generate classification labels, which can introduce labeling inconsistencies due to inter-subject variability. Facial expressions provide a more consistent means of generating labels. We generate labels by selecting the time points in video recordings at which subjects either displayed a visibly averse/negative reaction or laughed. We then apply a supervised learning approach to classify these emotional responses from electrocardiogram (EKG) and respiration signal features, in an experiment where movie/video clips were used to elicit emotions such as joy, disgust, and amusement. As features, we extract wavelet-coefficient patches from the EKG RR-interval time series along with respiration waveform parameters. We use principal component analysis (PCA) for dimensionality reduction and support vector machines (SVMs) for classification, achieving an overall classification accuracy of 78.3%.
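The paper does not publish code, but the classification stage it describes (wavelet features, then PCA, then an SVM) can be sketched in a few lines. The snippet below is only an illustrative outline under stated assumptions: it uses a hand-rolled Haar transform as a stand-in for the wavelet-coefficient patches, synthetic data in place of real RR-interval windows, and arbitrary choices for the number of decomposition levels, PCA components, and SVM kernel, none of which come from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def haar_wavelet_features(x, levels=3):
    """Simple Haar DWT: concatenate detail coefficients from each
    level plus the final approximation (a stand-in for the paper's
    wavelet-coefficient patches)."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:            # drop a trailing sample if odd length
            a = a[:-1]
        pairs = a.reshape(-1, 2)
        coeffs.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # detail
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)             # approx
    coeffs.append(a)
    return np.concatenate(coeffs)

# Synthetic stand-in: 40 "RR-interval" windows of 64 samples each,
# labelled 0 (negative reaction) or 1 (laughter).
rng = np.random.default_rng(0)
X_raw = rng.normal(0.8, 0.05, size=(40, 64))
X_raw[20:] += np.linspace(0.0, 0.2, 64)   # crude class difference
y = np.array([0] * 20 + [1] * 20)

X = np.array([haar_wavelet_features(w) for w in X_raw])

# PCA for dimensionality reduction, SVM for classification,
# as in the paper; hyperparameters here are arbitrary.
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In the actual study the feature vectors would also include the respiration waveform parameters, and accuracy would be assessed on held-out data rather than the training set.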
