Title
Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling
Date Issued
11 February 2019
Access level
open access
Resource Type
conference paper
Author(s)
Cho J.
Baskar M.K.
Li R.
Wiesner M.
Mallidi S.H.
Karafiat M.
Watanabe S.
Hori T.
Waseda University
Publisher(s)
Institute of Electrical and Electronics Engineers Inc.
Abstract
The sequence-to-sequence (seq2seq) approach to low-resource ASR is a relatively new direction in speech research. The approach benefits from training without a lexicon or alignments; however, this poses a new problem of requiring more data than conventional DNN-HMM systems. In this work, we use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach. We also explore different architectures for improving the prior multilingual seq2seq model. The paper also discusses the effect of integrating a recurrent neural network language model (RNNLM) with the seq2seq model during decoding. Experimental results show that transfer learning from the multilingual model yields substantial gains over monolingual models across all 4 BABEL languages. Incorporating an RNNLM also brings significant improvements in %WER, and achieves recognition performance comparable to models trained with twice as much training data.
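Integrating an RNNLM with a seq2seq model during decoding is commonly realized as shallow fusion, where the language model's log-probability is added to the acoustic model's score with a tunable weight. The sketch below is a minimal illustration of that score combination under greedy decoding; the function names, the toy vocabulary, and the weight value are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Hedged sketch of shallow fusion: fused score = log P_s2s + lm_weight * log P_lm.
# All names and values here are illustrative, not taken from the paper.

def shallow_fusion_score(s2s_logprobs, lm_logprobs, lm_weight=0.3):
    """Combine per-token log-probabilities from the seq2seq model
    and an external language model (one value per candidate token)."""
    return [s + lm_weight * l for s, l in zip(s2s_logprobs, lm_logprobs)]

def pick_next_token(vocab, s2s_logprobs, lm_logprobs, lm_weight=0.3):
    """Greedy selection of the next token under the fused score."""
    fused = shallow_fusion_score(s2s_logprobs, lm_logprobs, lm_weight)
    best = max(range(len(vocab)), key=lambda i: fused[i])
    return vocab[best], fused[best]

# Toy example: the LM strongly prefers "b", overriding the seq2seq ranking.
vocab = ["a", "b", "c"]
s2s = [math.log(0.4), math.log(0.35), math.log(0.25)]
lm = [math.log(0.1), math.log(0.8), math.log(0.1)]
token, score = pick_next_token(vocab, s2s, lm, lm_weight=0.5)
print(token)  # -> b
```

In practice this scoring is applied inside beam search over partial hypotheses rather than greedily per token, and the LM weight is tuned on a development set.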
Start page
521
End page
527
Language
English
OECD Knowledge area
Linguistics
Other engineering and technologies
Subjects
Scopus EID
2-s2.0-85063077624
Resource of which it is part
2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings
ISBN of the container
978-1-5386-4334-1
Conference
2018 IEEE Spoken Language Technology Workshop, SLT 2018
Sources of information
Directorio de Producción Científica
Scopus