InterAug: Augmenting Noisy Intermediate Predictions for CTC-based ASR

04/01/2022
by Yu Nakagome, et al.

This paper proposes InterAug: a novel training method for CTC-based ASR that uses augmented intermediate representations for conditioning. The proposed method exploits the conditioning framework of self-conditioned CTC to train robust models by conditioning on "noisy" intermediate predictions. During training, intermediate predictions are corrupted into incorrect predictions and fed into the next layer for conditioning. The subsequent layers are then trained, via the intermediate losses, to correct these incorrect predictions. By repeating this augmentation-and-correction cycle, iterative refinement, which generally requires a special decoder, can be realized with the audio encoder alone. To produce noisy intermediate predictions, we also introduce new augmentations: intermediate feature-space and intermediate token-space augmentations designed to simulate typical errors. Combining the proposed InterAug framework with these augmentations allows explicit training of robust audio encoders. In experiments using augmentations that simulate deletion, insertion, and substitution errors, we confirmed that the trained model acquires robustness to each error type, boosting the speech recognition performance of the strong self-conditioned CTC baseline.
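The token-space augmentation described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, error rates, and the choice of uniformly random replacement tokens are all assumptions.

```python
import random


def augment_tokens(tokens, vocab_size, p_del=0.05, p_ins=0.05, p_sub=0.05, seed=None):
    """Simulate deletion, insertion, and substitution errors on an
    intermediate token sequence (hypothetical sketch of a token-space
    augmentation; rates and sampling scheme are assumptions).
    """
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if rng.random() < p_del:
            continue  # deletion: drop this token entirely
        if rng.random() < p_sub:
            tok = rng.randrange(vocab_size)  # substitution: replace with a random token
        out.append(tok)
        if rng.random() < p_ins:
            out.append(rng.randrange(vocab_size))  # insertion: add a spurious token
    return out
```

During training, the corrupted sequence would replace the clean intermediate prediction before being fed back for self-conditioning, while the intermediate loss is still computed against the correct targets, so the subsequent layers learn to undo these errors.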


