Efficient GPT Model Pre-training using Tensor Train Matrix Representation

06/05/2023
by   Viktoriia Chekalina, et al.

Large-scale transformer models have shown remarkable performance in language modelling tasks. However, such models feature billions of parameters, leading to difficulties in their deployment and prohibitive training costs from scratch. To reduce the number of parameters in the GPT-2 architecture, we replace the matrices of fully-connected layers with the corresponding Tensor Train Matrix (TTM) structure. Finally, we customize forward and backward operations through the TTM-based layer for simplicity and stability of further training. The resulting GPT-2-based model stores considerably fewer parameters while showing perplexity comparable to the original model. On downstream tasks, including language understanding and text summarization, the model performs similarly to the original GPT-2 model. The proposed tensorized layers could be used to efficiently pre-train other Transformer models.
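To make the idea concrete, below is a minimal sketch (not the authors' code) of a fully-connected layer whose weight matrix is stored as Tensor Train Matrix (TTM) cores in PyTorch. The class name `TTMLinear`, the mode/rank choices, and the reconstruction-based forward pass are illustrative assumptions; the paper's contribution customizes the forward and backward operations to contract the input with the TT cores directly, without materializing the full weight matrix.

```python
# Hypothetical sketch of a TTM-based linear layer; shapes, ranks and the
# naive forward pass are assumptions, not the paper's implementation.
import math
import torch
import torch.nn as nn


class TTMLinear(nn.Module):
    """Linear layer whose weight is stored as TT-matrix cores.

    The (in_features x out_features) weight is factorized so that
    in_features = prod(in_modes) and out_features = prod(out_modes);
    core k has shape (ranks[k], in_modes[k], out_modes[k], ranks[k + 1]).
    """

    def __init__(self, in_modes, out_modes, ranks):
        super().__init__()
        assert len(in_modes) == len(out_modes) == len(ranks) - 1
        assert ranks[0] == ranks[-1] == 1
        self.cores = nn.ParameterList([
            nn.Parameter(0.02 * torch.randn(ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
            for k in range(len(in_modes))
        ])
        self.bias = nn.Parameter(torch.zeros(math.prod(out_modes)))

    def full_weight(self):
        # Contract the cores into the dense (in_features x out_features) matrix.
        w = self.cores[0]                      # (1, m0, n0, r1)
        for core in self.cores[1:]:
            # Merge the running result with the next core over the shared TT rank.
            w = torch.einsum("amnb,bpqc->ampnqc", w, core)
            a, m, p, n, q, c = w.shape
            w = w.reshape(a, m * p, n * q, c)
        return w.reshape(w.shape[1], w.shape[2])

    def forward(self, x):
        # Naive forward: rebuild the weight, then apply an ordinary affine map.
        # (A custom forward/backward, as in the paper, would instead contract
        # the reshaped input with each core in turn.)
        return x @ self.full_weight() + self.bias


# Usage example: a 768 x 3072 projection (GPT-2 MLP size) stored as three small cores.
layer = TTMLinear(in_modes=(8, 12, 8), out_modes=(16, 12, 16), ranks=(1, 16, 16, 1))
y = layer(torch.randn(4, 768))   # -> shape (4, 3072)
```

The parameter count here is the sum of the core sizes rather than the full 768 x 3072 matrix, which is the source of the memory savings the abstract describes.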
