Speaker Cluster-Based Speaker Adaptive Training for Deep Neural Network Acoustic Modeling

Ruxin Chen1, Wei Chu2

  • 1Sony Computer Entertainment America (SCEA)
  • 2SONY

Details

13:30 - 15:30 | Tue 22 Mar | Poster Area H | SP-P1.6

Session: Acoustic Model Adaptation for Speech Recognition I

Abstract

A speaker cluster-based speaker adaptive training (SAT) method under the deep neural network-hidden Markov model (DNN-HMM) framework is presented in this paper. During training, speakers that are acoustically close to each other are hierarchically clustered using an i-vector based distance metric, and DNNs with speaker-dependent layers are then adaptively trained for each speaker cluster. Before decoding, an unseen speaker in the test set is matched to the closest speaker cluster by comparing i-vector based distances, and the previously trained DNN of the matched cluster is used to decode the test speaker's utterances. The proposed method is evaluated on a large vocabulary spontaneous speech recognition task with a training set of 1500 hours of speech and a test set of 24 speakers with 1774 utterances. Compared to a speaker independent DNN with a word error rate of 11.6%, the proposed method yields a relative improvement of 6.8%.
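The abstract outlines a two-stage procedure: hierarchical clustering of training speakers by i-vector distance, followed by matching an unseen test speaker to its nearest cluster. The sketch below illustrates that clustering and matching step only, not the authors' implementation: the abstract does not specify the distance metric, linkage, or i-vector dimensionality, so cosine distance, average linkage, and 400-dimensional i-vectors are assumptions for illustration, and i-vector extraction itself is taken as given.

```python
# Minimal sketch (assumptions noted above, not the paper's exact recipe):
# cluster training speakers by i-vector distance, then assign a test speaker
# to the closest cluster. Each speaker is represented by one i-vector.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, cosine


def cluster_speakers(ivectors, num_clusters):
    """Hierarchically cluster speaker i-vectors (one row per speaker)."""
    dists = pdist(ivectors, metric="cosine")      # assumed metric
    tree = linkage(dists, method="average")       # assumed linkage
    return fcluster(tree, t=num_clusters, criterion="maxclust")


def cluster_centroids(ivectors, labels):
    """Mean i-vector per cluster, used to match unseen test speakers."""
    return {c: ivectors[labels == c].mean(axis=0) for c in np.unique(labels)}


def match_test_speaker(test_ivector, centroids):
    """Return the cluster whose centroid is closest to the test i-vector."""
    return min(centroids, key=lambda c: cosine(test_ivector, centroids[c]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_ivectors = rng.normal(size=(100, 400))  # 100 speakers, 400-dim i-vectors (illustrative)
    labels = cluster_speakers(train_ivectors, num_clusters=8)
    centroids = cluster_centroids(train_ivectors, labels)
    # In the paper, a DNN with speaker-dependent layers is trained per cluster;
    # at test time the matched cluster's DNN decodes the unseen speaker's utterances.
    print(match_test_speaker(rng.normal(size=400), centroids))
```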