Regression between EEG and Speech Signals for Spoken Vowels

Debdeep Sikdar1, Rinku Roy1, Manjunatha Mahadevappa2

  • 1Indian Institute of Technology Kharagpur
  • 2Indian Institute of Technology Kharagpur

Details

09:15 - 09:30 | Wed 24 Jul | R3 - Level 3 | WeA14.4

Session: Signal Processing and Classification of Acoustic and Auditory Signals

Abstract

One of the primary differences between humans and other species is the ability to communicate verbally. Upon framing a sentence, the brain coordinates with the oro-pharyngeal-laryngeal muscle groups to produce speech with the help of the vocal cords and mouth aperture. However, some individuals, owing to congenital conditions or illness, may lose their ability to speak even though their brain still frames speech. Research on speech restoration through brain computer interface (BCI) is still at an early stage. In this study, we have explored the regression between the chaos parameters of the acoustic signal and the electroencephalography (EEG) signal for different vowels chosen from the International Phonetic Alphabet (IPA). The vowels were divided into two categories, namely, soft vowels and diphthongs. We selected the EEG channels based on their higher contribution towards the first principal component. Goodness-of-fit parameters were evaluated for the regression analysis to identify the most suitable chaos parameter.
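For readers unfamiliar with the pipeline the abstract outlines, the following is a minimal illustrative sketch, not the authors' actual implementation: it uses synthetic data, picks EEG channels by their loading on the first principal component, computes one possible chaos-style feature (Higuchi fractal dimension, an assumption; the paper may use different chaos parameters) for both EEG and speech, and reports a goodness-of-fit (R²) for a simple linear regression between the two. All array shapes, the k_max setting, and the top-8 channel cut-off are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D signal (one possible chaos parameter)."""
    n = len(x)
    log_lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalised curve length for offset m at scale k
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (len(idx) * k)
            lk.append(lm)
        log_lengths.append(np.log(np.mean(lk)))
    log_scales = np.log(1.0 / np.arange(1, k_max + 1))
    # slope of log L(k) vs log(1/k) estimates the fractal dimension
    return np.polyfit(log_scales, log_lengths, 1)[0]

# Synthetic stand-ins: 40 vowel trials, 32 EEG channels x 512 samples each,
# plus one acoustic trace per trial (real data is not reproduced here).
n_trials, n_channels, n_samples = 40, 32, 512
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
speech = rng.standard_normal((n_trials, n_samples))

# Channel selection: keep channels with the largest absolute loading on the
# first principal component of the trial-averaged EEG.
pca = PCA(n_components=1)
pca.fit(eeg.mean(axis=0).T)            # rows = samples, columns = channels
loadings = np.abs(pca.components_[0])
selected = np.argsort(loadings)[-8:]   # top-8 contributing channels (arbitrary cut-off)

# Chaos-style feature per trial: mean fractal dimension over the selected EEG
# channels, and the fractal dimension of the corresponding speech signal.
eeg_feat = np.array([[higuchi_fd(trial[ch]) for ch in selected] for trial in eeg]).mean(axis=1)
speech_feat = np.array([higuchi_fd(s) for s in speech])

# Regression between the EEG and speech features, with R² as goodness of fit.
model = LinearRegression().fit(eeg_feat.reshape(-1, 1), speech_feat)
pred = model.predict(eeg_feat.reshape(-1, 1))
print("R^2:", r2_score(speech_feat, pred))
```

In practice one would repeat such an analysis per vowel category (soft vowels vs. diphthongs) and per candidate chaos parameter, then compare the goodness-of-fit values to select the most suitable parameter, as the abstract describes.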