Lattice-Free MMI Adaptation Of Self-Supervised Pretrained Acoustic Models
In this work, we propose lattice-free MMI (LFMMI) for supervised adaptation of self-supervised pretrained acoustic models. We pretrain a Transformer model on a thousand hours of untranscribed Librispeech audio, followed by supervised adaptation with LFMMI on three different datasets. Our results show that fine-tuning with LFMMI consistently yields relative WER improvements of 10%, 35.3% on Switchboard (300h), and 4.3% over baselines trained only with supervised data.
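As a toy illustration (not the paper's implementation, which builds numerator and denominator graphs from lattices/FSTs), the MMI objective contrasts the log-score of the reference label path against the log-sum over all competing paths. The sketch below assumes per-frame log-probabilities and a fully connected denominator with no transition constraints, so the path sum factorises per frame; function and variable names are illustrative only.

```python
import numpy as np

def mmi_loss(log_probs, ref_labels):
    """Toy MMI objective for one utterance.

    log_probs:  (T, V) array of per-frame label log-probabilities.
    ref_labels: length-T reference label sequence.

    Returns -(log p(reference path) - log sum over all paths),
    where the denominator assumes an unconstrained label graph.
    """
    T, V = log_probs.shape
    # numerator: log-probability of the reference path
    num = sum(log_probs[t, ref_labels[t]] for t in range(T))
    # denominator: with no transition constraints the sum over all
    # V**T paths factorises into a per-frame log-sum-exp
    den = sum(np.logaddexp.reduce(log_probs[t]) for t in range(T))
    return -(num - den)
```

In the real LFMMI setup the denominator is a phone-level n-gram graph rather than this unconstrained sum, and the forward algorithm runs over that graph; the contrastive structure of the loss is the same.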