DataMUX: Data Multiplexing for Neural Networks

02/18/2022
by Vishvak Murahari, et al.

In this paper, we introduce data multiplexing (DataMUX), a technique that enables deep neural networks to process multiple inputs simultaneously using a single compact representation. DataMUX demonstrates that neural networks are capable of generating accurate predictions over mixtures of inputs, resulting in increased throughput with minimal extra memory requirements. Our approach uses two key components: 1) a multiplexing layer that applies a fixed linear transformation to each input before combining them into a mixed representation of the same size as a single input, which is then processed by the base network, and 2) a demultiplexing layer that converts the base network's output back into independent representations before producing predictions for each input. We show the viability of DataMUX for different architectures (Transformers, and to a lesser extent MLPs and CNNs) across six tasks spanning sentence classification, named entity recognition and image classification. For instance, DataMUX for Transformers can multiplex up to 20x/40x inputs, achieving an 11x/18x increase in throughput with absolute performance drops of less than 2% and 4%, respectively, on MNLI, a natural language inference task. We also provide a theoretical construction for multiplexing in self-attention networks and analyze the effect of various design elements in DataMUX.
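To make the two components concrete, below is a minimal PyTorch sketch of the multiplex/demultiplex idea described in the abstract. It is illustrative only: the class names, the frozen random per-input projections, the averaging combiner, and the per-input MLP demultiplexing heads are assumptions inferred from the abstract, not the authors' released implementation.

```python
# Sketch of DataMUX-style multiplexing/demultiplexing (illustrative; names,
# dimensions, and design choices are assumptions, not the paper's exact code).
import torch
import torch.nn as nn


class Multiplexer(nn.Module):
    """Applies a fixed (frozen) linear transform to each of N inputs, then averages them."""
    def __init__(self, num_inputs: int, dim: int):
        super().__init__()
        # One fixed random projection per input position; frozen so instances remain separable.
        projections = torch.randn(num_inputs, dim, dim) / dim ** 0.5
        self.register_buffer("projections", projections)

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        # xs: (num_inputs, batch, seq_len, dim) -> mixed: (batch, seq_len, dim)
        transformed = torch.einsum("nbsd,nde->nbse", xs, self.projections)
        return transformed.mean(dim=0)


class Demultiplexer(nn.Module):
    """Recovers one representation per original input from the shared output."""
    def __init__(self, num_inputs: int, dim: int):
        super().__init__()
        # A separate learned head per input position (assumed MLP variant).
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_inputs)
        )

    def forward(self, mixed_out: torch.Tensor) -> torch.Tensor:
        # mixed_out: (batch, seq_len, dim) -> (num_inputs, batch, seq_len, dim)
        return torch.stack([head(mixed_out) for head in self.heads], dim=0)


# Usage: multiplex 4 inputs through one (hypothetical) base encoder in a single pass.
num_inputs, batch, seq_len, dim = 4, 8, 16, 64
base_network = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2
)
mux, demux = Multiplexer(num_inputs, dim), Demultiplexer(num_inputs, dim)

xs = torch.randn(num_inputs, batch, seq_len, dim)  # N separate inputs
hidden = base_network(mux(xs))                     # one forward pass over the mixture
per_input = demux(hidden)                          # (num_inputs, batch, seq_len, dim)
print(per_input.shape)
```

The key point the sketch conveys is that the base network's cost is paid once for the mixed representation rather than once per input, which is where the throughput gains reported above come from.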


