Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

by Jonathan Shen et al.

Lingvo is a TensorFlow framework offering a complete solution for collaborative deep learning research, with a particular focus on sequence-to-sequence models. Lingvo models are composed of modular building blocks that are flexible and easily extensible, and experiment configurations are centralized and highly customizable. Distributed training and quantized inference are supported directly within the framework, which also provides a large number of utilities, helper functions, and implementations of recent research ideas. Lingvo has been used in collaboration by dozens of researchers in more than 20 papers over the last two years. This document outlines the underlying design of Lingvo, serves as an introduction to the various pieces of the framework, and offers examples of advanced features that showcase its capabilities.
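To make the centralized-configuration idea concrete, here is a minimal, self-contained sketch of the pattern the abstract describes: every hyperparameter is declared once with a default and a description, and each experiment customizes a copy of the base configuration. This toy `Params` class is a hypothetical illustration written for this note, not Lingvo's actual API.

```python
import copy


class Params:
    """A toy hyperparameter container illustrating centralized, customizable configs."""

    def __init__(self):
        self._params = {}  # name -> (value, description)

    def Define(self, name, default, description):
        # Declare a parameter exactly once, with its default and documentation.
        if name in self._params:
            raise ValueError(f"Parameter {name!r} already defined")
        self._params[name] = (default, description)

    def Set(self, **kwargs):
        # Override previously declared parameters; unknown names are errors,
        # which catches typos in experiment configurations early.
        for name, value in kwargs.items():
            if name not in self._params:
                raise KeyError(f"Unknown parameter {name!r}")
            description = self._params[name][1]
            self._params[name] = (value, description)
        return self

    def Get(self, name):
        return self._params[name][0]

    def Copy(self):
        # Experiments customize a deep copy, leaving the base defaults intact.
        p = Params()
        p._params = copy.deepcopy(self._params)
        return p


# Base model configuration, declared in one central place.
base = Params()
base.Define('hidden_dim', 512, 'Size of the encoder hidden state.')
base.Define('learning_rate', 1e-3, 'Initial learning rate.')

# One experiment overrides a copy without touching the shared defaults.
exp = base.Copy().Set(hidden_dim=1024)
```

In this scheme, experiment definitions read as small diffs against a shared base, which is one way a framework can keep configurations both centralized and highly customizable.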

