Detecting Driving Fatigue with Multimodal Deep Learning

Lihuan Du1, Wei Liu1, Wei-Long Zheng1, Bao-Liang Lu1

  • 1Shanghai Jiao Tong University

Details

11:30 - 13:30 | Fri 26 May | Emerald III, Rose, Narcissus & Jasmine | FrPS1T1.19

Session: Poster I

Abstract

Physiological signals such as EEG and EOG have been successfully applied to detect driving fatigue in a single modality. In this paper, we propose a multimodal approach that combines partial EEG with forehead EOG to enhance driving fatigue detection. We investigate which brain area provides the most informative EEG signals to combine with forehead EOG. Our experimental results demonstrate that six-channel temporal EEG signals yield the best performance when combined with forehead EOG to extract shared features. Furthermore, we propose a novel multimodal fusion strategy based on a deep autoencoder to learn a better shared representation. We assess our approach against other fusion strategies on 21 subjects. Our multimodal approach achieves the best performance, with an average COR of 0.85 and an average RMSE of 0.09. The experimental results demonstrate that our multimodal approach can learn an efficient shared representation across modalities and significantly improve the performance of driving fatigue detection.
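The fusion strategy described above can be illustrated with a minimal sketch: features from the two modalities are concatenated and compressed by an autoencoder into a shared hidden representation, which is then reconstructed. Everything here is an assumption for illustration (layer sizes, random data standing in for EEG/EOG features, a single-hidden-layer autoencoder rather than the paper's deep model):

```python
import numpy as np

# Illustrative sketch only: random arrays stand in for six-channel temporal
# EEG features and forehead-EOG features; dimensions are hypothetical.
rng = np.random.default_rng(0)
n_samples, eeg_dim, eog_dim, shared_dim = 200, 30, 12, 8
eeg = rng.standard_normal((n_samples, eeg_dim))
eog = rng.standard_normal((n_samples, eog_dim))
x = np.concatenate([eeg, eog], axis=1)  # concatenate both modalities

in_dim = eeg_dim + eog_dim
W_enc = rng.standard_normal((in_dim, shared_dim)) * 0.1
b_enc = np.zeros(shared_dim)
W_dec = rng.standard_normal((shared_dim, in_dim)) * 0.1
b_dec = np.zeros(in_dim)

def forward(x):
    h = np.tanh(x @ W_enc + b_enc)   # shared representation of EEG + EOG
    return h, h @ W_dec + b_dec      # linear reconstruction of both modalities

lr, losses = 0.01, []
for epoch in range(200):
    h, x_hat = forward(x)
    err = x_hat - x
    losses.append(np.mean(err ** 2))
    # gradient descent on the reconstruction error (backprop by hand)
    g_out = err / n_samples
    gW_dec, gb_dec = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ W_dec.T) * (1 - h ** 2)  # tanh derivative
    gW_enc, gb_enc = x.T @ g_h, g_h.sum(axis=0)
    W_dec -= lr * gW_dec; b_dec -= lr * gb_dec
    W_enc -= lr * gW_enc; b_enc -= lr * gb_enc

shared, _ = forward(x)
print(losses[0], losses[-1], shared.shape)
```

In the paper's setting, the learned shared representation would then feed a regressor predicting the continuous fatigue level, which is what the reported COR and RMSE evaluate.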