Conference proceeding
UNSUPERVISED PRE-TRAINING OF BIDIRECTIONAL SPEECH ENCODERS VIA MASKED RECONSTRUCTION
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, Vol.2020-, pp.6889-6893
International Conference on Acoustics Speech and Signal Processing ICASSP
01/01/2020
DOI: 10.1109/icassp40776.2020.9053541
Abstract
We propose an approach for pre-training speech representations via a masked reconstruction loss. Our pre-trained encoder networks are bidirectional and can therefore be used directly in typical bidirectional speech recognition models. The pre-trained networks can then be fine-tuned on a smaller amount of supervised data for speech recognition. Experiments with this approach on the LibriSpeech and Wall Street Journal corpora show promising results. We find that the main factors that lead to speech recognition improvements are: masking segments of sufficient width in both time and frequency, pre-training on a much larger amount of unlabeled data than the labeled data, and domain adaptation when the unlabeled and labeled data come from different domains. The gain from pre-training is additive to that of supervised data augmentation.
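The masking scheme the abstract describes — hiding contiguous blocks of sufficient width in both time and frequency, then reconstructing the hidden spectrogram entries — can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the function names, default mask widths, and the L1 loss choice are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_spectrogram(spec, num_time_masks=2, max_time_width=20,
                     num_freq_masks=2, max_freq_width=10):
    """Zero out random contiguous blocks along the time and frequency
    axes of a (frames x bins) spectrogram, returning the masked input
    and a boolean array marking which entries were hidden."""
    T, F = spec.shape
    masked = spec.copy()
    hidden = np.zeros((T, F), dtype=bool)
    for _ in range(num_time_masks):
        w = int(rng.integers(1, max_time_width + 1))
        t0 = int(rng.integers(0, T - w))
        masked[t0:t0 + w, :] = 0.0
        hidden[t0:t0 + w, :] = True
    for _ in range(num_freq_masks):
        w = int(rng.integers(1, max_freq_width + 1))
        f0 = int(rng.integers(0, F - w))
        masked[:, f0:f0 + w] = 0.0
        hidden[:, f0:f0 + w] = True
    return masked, hidden

def masked_reconstruction_loss(pred, target, hidden):
    """Reconstruction error (here L1) computed only on masked positions."""
    return float(np.abs(pred - target)[hidden].mean())

# Toy usage: 100 frames x 40 filterbank bins.
spec = rng.standard_normal((100, 40))
masked, hidden = mask_spectrogram(spec)
# A real bidirectional encoder would predict the hidden entries from
# `masked`; a trivial all-zeros prediction stands in for it here.
loss = masked_reconstruction_loss(np.zeros_like(spec), spec, hidden)
```

In pre-training, the encoder sees only `masked` and is trained to minimize the loss on the `hidden` positions; the encoder weights are then fine-tuned on labeled speech recognition data.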
Details
- Title
- UNSUPERVISED PRE-TRAINING OF BIDIRECTIONAL SPEECH ENCODERS VIA MASKED RECONSTRUCTION
- Creators
- Weiran Wang - Amazon Alexa, San Francisco, CA 94110, USA
- Qingming Tang - Toyota Technological Institute at Chicago, Chicago, IL, USA
- Karen Livescu - Toyota Technological Institute at Chicago, Chicago, IL, USA
- Resource Type
- Conference proceeding
- Publication Details
- 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, Vol.2020-, pp.6889-6893
- Publisher
- IEEE
- Series
- International Conference on Acoustics Speech and Signal Processing ICASSP
- DOI
- 10.1109/icassp40776.2020.9053541
- ISSN
- 1520-6149
- eISSN
- 2379-190X
- Number of pages
- 5
- Language
- English
- Date published
- 01/01/2020
- Academic Unit
- Computer Science
- Record Identifier
- 9984696717202771