
Controllable Time-Delay Transformer for Real-Time Punctuation Prediction and Disfluency Detection

by Qian Chen et al., Alibaba Group

With the increasing adoption of automatic speech recognition (ASR) in recent years, it is essential to automatically insert punctuation marks and remove disfluencies from transcripts, both to improve their readability and to improve the performance of downstream applications such as machine translation and dialogue systems. In this paper, we propose a Controllable Time-delay Transformer (CT-Transformer) model that jointly performs punctuation prediction and disfluency detection in real time. The CT-Transformer allows partial outputs to be frozen with a controllable time delay, satisfying the real-time constraints of the partial decoding required by downstream applications. We further propose a fast decoding strategy to minimize latency while maintaining competitive performance. Experimental results on the IWSLT2011 benchmark dataset and an in-house annotated Chinese dataset demonstrate that the proposed approach outperforms previous state-of-the-art models on F-score and achieves competitive inference speed.
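The "controllable time delay" idea in the abstract can be illustrated with a small sketch: in a streaming setting, punctuation labels for tokens that are more than a fixed number of positions behind the newest input are frozen and emitted as final, while labels inside the delay window remain revisable as more context arrives. The predictor below is a trivial rule-based stand-in, not the CT-Transformer itself, and the delay value and helper names are hypothetical.

```python
DELAY = 3  # hypothetical delay window, in tokens

def predict_punct(tokens):
    """Stand-in predictor: a period after sentence-final cue words.
    A real system would run the CT-Transformer over the full history."""
    return ["." if t in {"today", "fine"} else "" for t in tokens]

class StreamingPunctuator:
    """Emit punctuation labels incrementally, freezing those that have
    fallen outside the controllable delay window."""

    def __init__(self, delay=DELAY):
        self.delay = delay
        self.tokens = []   # full history of received tokens
        self.frozen = 0    # number of tokens whose labels are final

    def feed(self, chunk):
        """Consume a new chunk; return newly frozen (token, punct) pairs."""
        self.tokens.extend(chunk)
        labels = predict_punct(self.tokens)  # re-predict over the history
        freeze_until = max(self.frozen, len(self.tokens) - self.delay)
        out = list(zip(self.tokens[self.frozen:freeze_until],
                       labels[self.frozen:freeze_until]))
        self.frozen = freeze_until
        return out

sp = StreamingPunctuator()
print(sp.feed(["how", "are", "you"]))         # [] - all tokens still inside the delay window
print(sp.feed(["today", "i", "am", "fine"]))  # first four tokens are now frozen
```

Labels inside the window may still change on the next chunk, which is what lets the model revise early guesses; a larger delay trades latency for accuracy, which is the "controllable" part.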



Related research:

- Lattice Transformer for Speech Translation
- Fast-MD: Fast Multi-Decoder End-to-End Speech Translation with Non-Autoregressive Hidden Intermediates
- Mutually-Constrained Monotonic Multihead Attention for Online ASR
- FunASR: A Fundamental End-to-End Speech Recognition Toolkit
- Discriminative Self-training for Punctuation Prediction