Less is More! A slim architecture for optimal language translation

05/18/2023
by Luca Herranz-Celotti, et al.

The softmax attention mechanism has emerged as a noteworthy development in Artificial Intelligence research, building on the successes of Transformer-based architectures. However, the ever-increasing size of these models demands ever more computational memory, which limits their usage. We propose KgV, a sigmoid gating mechanism that, in conjunction with softmax attention, significantly boosts performance without increasing architecture size. To reduce the size requirements, we leverage Tensor Chains to identify and prune the excess parameters. We find that this excess resides primarily within the embedding layer, not in the output linear layer. To further improve the embedding and significantly reduce the parameter count, we introduce H-SoftPOS, a hierarchical embedding layer that simultaneously enhances performance. Remarkably, on the WMT14 English-German validation set, our approach yields a threefold reduction in perplexity, surpassing the current state of the art, while also cutting the parameter count by a factor of 3. Even when we shrink the number of parameters by up to a factor of seven, we still achieve a 21% decrease in perplexity with respect to the baseline Transformer. To assess generalization, we conduct experiments on the 7 language pairs of the WMT17 dataset. Our method outperforms existing techniques in terms of test loss while halving the number of parameters, and we observe a 70-fold reduction in variance with respect to the prior state of the art. In conclusion, our proposed method yields significant improvements in performance at a much lower memory cost. We call the resulting architecture Anthe.
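The abstract describes KgV only at a high level, so below is a minimal sketch of how a sigmoid gate could be combined with standard softmax attention. It assumes "KgV" means the keys gate the values element-wise through a sigmoid before the usual scaled dot-product attention; the class name SigmoidGatedAttention, this particular gating choice, and all shapes are illustrative assumptions, not the authors' reference implementation.

    # Minimal sketch: sigmoid gating of the values, assumed from the name "KgV".
    import math
    import torch
    import torch.nn as nn

    class SigmoidGatedAttention(nn.Module):
        def __init__(self, d_model: int):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            self.out_proj = nn.Linear(d_model, d_model)
            self.d_model = d_model

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model)
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            # Assumed "K gates V" step: gate the values with a sigmoid of the keys.
            v = torch.sigmoid(k) * v
            # Standard scaled dot-product softmax attention over the gated values.
            scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_model)
            return self.out_proj(torch.softmax(scores, dim=-1) @ v)

    if __name__ == "__main__":
        attn = SigmoidGatedAttention(d_model=64)
        out = attn(torch.randn(2, 10, 64))
        print(out.shape)  # torch.Size([2, 10, 64])

Note that this gate adds only the parameters already present in the key projection, which is consistent with the paper's claim of boosting performance without increasing architecture size; the paper's exact formulation may differ.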


