TransFusion: Transcribing Speech with Multinomial Diffusion

10/14/2022
by   Matthew Baas, et al.

Diffusion models have shown exceptional scaling properties in the image synthesis domain, and initial attempts have shown similar benefits for applying diffusion to unconditional text synthesis. Denoising diffusion models attempt to iteratively refine a sampled noise signal until it resembles a coherent signal (such as an image or written sentence). In this work we aim to see whether the benefits of diffusion models can also be realized for speech recognition. To this end, we propose a new way to perform speech recognition using a diffusion model conditioned on pretrained speech features. Specifically, we propose TransFusion: a transcribing diffusion model which iteratively denoises a random character sequence into coherent text corresponding to the transcript of a conditioning utterance. We demonstrate comparable performance to existing high-performing contrastive models on the LibriSpeech speech recognition benchmark. To the best of our knowledge, we are the first to apply denoising diffusion to speech recognition. We also propose new techniques for effectively sampling and decoding multinomial diffusion models. These are required because traditional methods of sampling from acoustic models are not possible with our new discrete diffusion approach. Code and trained models are available: https://github.com/RF5/transfusion-asr
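To make the decoding idea concrete, the sketch below shows one plausible shape of a multinomial-diffusion reverse loop for transcription: start from uniformly random characters and repeatedly resample the sequence conditioned on speech features until it converges to a transcript. This is only an illustrative assumption of how such a loop might look, not the released TransFusion code; the names (sample_transcript, model, speech_features, num_steps) and the simplified resampling step are hypothetical, and the paper's actual sampling and decoding techniques are more involved.

```python
# Hypothetical sketch of multinomial-diffusion decoding for ASR.
# All names and the simplified resampling rule are illustrative assumptions,
# not the actual TransFusion implementation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_transcript(model, speech_features, seq_len, vocab_size, num_steps):
    """Iteratively denoise a random character sequence into a transcript.

    model           -- network predicting per-position character logits for the
                       clean sequence, given the noisy sequence, the timestep,
                       and the conditioning speech features
    speech_features -- pretrained acoustic features of the conditioning utterance
    """
    # Start from pure categorical noise: uniformly random characters.
    x_t = torch.randint(0, vocab_size, (1, seq_len))

    for t in reversed(range(num_steps)):
        t_batch = torch.full((1,), t, dtype=torch.long)
        # Predict a distribution over the clean character sequence.
        logits = model(x_t, t_batch, speech_features)   # (1, seq_len, vocab_size)
        probs = F.softmax(logits, dim=-1)
        if t > 0:
            # Resample each position from the predicted categorical distribution
            # (a simplified stand-in for the exact multinomial-diffusion posterior).
            x_t = torch.multinomial(probs.view(-1, vocab_size), 1).view(1, seq_len)
        else:
            # Final step: take the most likely character at each position.
            x_t = probs.argmax(dim=-1)
    return x_t
```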


Related research

10/11/2022 · GAN You Hear Me? Reclaiming Unconditional Speech Synthesis from Diffusion Models
We propose AudioStyleGAN (ASGAN), a new generative adversarial network (...

09/13/2023 · DCTTS: Discrete Diffusion Model with Contrastive Learning for Text-to-speech Generation
In the Text-to-speech(TTS) task, the latent diffusion model has excellen...

10/02/2022 · OCD: Learning to Overfit with Conditional Diffusion Models
We present a dynamic model in which the weights are conditioned on an in...

03/23/2023 · Enhancing Unsupervised Speech Recognition with Diffusion GANs
We enhance the vanilla adversarial training method for unsupervised Auto...

04/25/2023 · CoDi: Co-evolving Contrastive Diffusion Models for Mixed-type Tabular Synthesis
With growing attention to tabular data these days, the attempt to apply ...

05/31/2023 · Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust
Watermarking the outputs of generative models is a crucial technique for...

07/27/2023 · TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis
The gradual nature of a diffusion process that synthesizes samples in sm...
