Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models

03/02/2022
by Ze-Feng Gao, et al.

The state-of-the-art Mixture-of-Experts (MoE) architecture has achieved several remarkable successes in increasing model capacity. However, widespread adoption of MoE has been hindered by its complexity, communication costs, and training instability. Here we present a novel MoE architecture based on matrix product operators (MPO) from quantum many-body physics. An MPO decomposition factors an original weight matrix into a central tensor (containing the core information) and auxiliary tensors (holding only a small proportion of the parameters). With this decomposed MPO structure, we can reduce the parameters of the original MoE architecture by sharing a global central tensor across experts while keeping expert-specific auxiliary tensors. We also design a gradient mask strategy tailored to the MPO tensor structure to alleviate overfitting. Experiments on three well-known downstream natural language datasets based on GPT2 show improved performance and efficiency in increasing model capacity (7.26x fewer parameters with the same number of experts). We additionally demonstrate an improvement in the positive transfer effects of our approach for multi-task learning.
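For intuition, here is a minimal, self-contained sketch (not the authors' released code) of how a weight matrix can be factored into an MPO / tensor-train chain of cores via successive truncated SVDs. The mode shapes, the rank cap max_rank, and the helper name mpo_decompose are illustrative assumptions; the large middle core plays the role of the central tensor, and the small outer cores play the role of the auxiliary tensors that, in the proposed MoE, would stay expert-specific while the central tensor is shared.

```python
# Minimal sketch of a 3-core MPO (tensor-train) factorization of a weight
# matrix via successive truncated SVDs. Shapes and ranks are illustrative
# assumptions, not the configuration used in the paper.
import numpy as np

def mpo_decompose(W, in_shape, out_shape, max_rank):
    """Factor W of shape (prod(in_shape), prod(out_shape)) into MPO cores.

    Each core has shape (r_prev, in_k, out_k, r_next). The middle core
    carries most of the parameters (the "central tensor"); the first and
    last cores are the lightweight "auxiliary tensors".
    """
    n = len(in_shape)
    # Reshape to (i1, ..., in, o1, ..., on) and interleave input/output modes.
    T = W.reshape(*in_shape, *out_shape)
    perm = [k for pair in zip(range(n), range(n, 2 * n)) for k in pair]
    T = T.transpose(perm)

    cores, r_prev = [], 1
    for k in range(n - 1):
        # Split off one (input, output) mode pair with a truncated SVD.
        T = T.reshape(r_prev * in_shape[k] * out_shape[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, in_shape[k], out_shape[k], r))
        T = S[:r, None] * Vt[:r]   # absorb singular values into the remainder
        r_prev = r
    cores.append(T.reshape(r_prev, in_shape[-1], out_shape[-1], 1))
    return cores

# Example: split a 1024x1024 weight matrix into 3 cores.
W = np.random.randn(1024, 1024)
cores = mpo_decompose(W, in_shape=(8, 16, 8), out_shape=(8, 16, 8), max_rank=16)
print([c.shape for c in cores])  # [(1, 8, 8, 16), (16, 16, 16, 16), (16, 8, 8, 1)]
```

In this toy configuration the central core holds 65,536 of the roughly 67,600 factorized parameters, so sharing it across experts and duplicating only the two ~1,000-parameter auxiliary cores is what would yield large parameter savings per added expert; the paper's reported 7.26x reduction depends on its own decomposition settings.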

Related research

06/04/2021 - Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators
03/27/2023 - Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture
02/28/2019 - Tensor-variate Mixture of Experts
06/19/2023 - JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving
03/11/2023 - A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training
06/21/2018 - Gradient Adversarial Training of Neural Networks
09/10/2019 - Accelerating Training using Tensor Decomposition
