The Geometry of Multilingual Language Model Representations

05/22/2022
by Tyler A. Chang, et al.

We assess how multilingual language models maintain a shared multilingual representation space while still encoding language-sensitive information in each language. Using XLM-R as a case study, we show that languages occupy similar linear subspaces after mean-centering, evaluated based on causal effects on language modeling performance and direct comparisons between subspaces for 88 languages. The subspace means differ along language-sensitive axes that are relatively stable throughout middle layers, and these axes encode information such as token vocabularies. Shifting representations by language means is sufficient to induce token predictions in different languages. However, we also identify stable language-neutral axes that encode information such as token positions and part-of-speech. We visualize representations projected onto language-sensitive and language-neutral axes, identifying language family and part-of-speech clusters, along with spirals, toruses, and curves representing token position information. These results demonstrate that multilingual language models encode information along orthogonal language-sensitive and language-neutral axes, allowing the models to extract a variety of features for downstream tasks and cross-lingual transfer learning.
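The central manipulation described above, mean-centering representations per language and then shifting them by the difference between language means, can be sketched in a few lines. The snippet below is a minimal illustration, assuming per-language hidden-state matrices from one middle layer of a model such as XLM-R are already available as NumPy arrays; the function names, variable names, and layer choice are hypothetical and do not come from the paper's code.

```python
import numpy as np

def language_mean(hidden_states: np.ndarray) -> np.ndarray:
    """Mean representation for one language.

    hidden_states: (n_tokens, d) hidden states collected from a single
    layer of a multilingual model for text in one language.
    """
    return hidden_states.mean(axis=0)

def shift_to_language(reps: np.ndarray,
                      mean_src: np.ndarray,
                      mean_tgt: np.ndarray) -> np.ndarray:
    """Mean-center source-language representations, then move them into the
    target language's subspace by adding the target language mean.

    Per the abstract, this shift alone can be enough to induce token
    predictions in the target language when the shifted states are passed
    on to the language modeling head.
    """
    return reps - mean_src + mean_tgt

# Hypothetical usage: en_states and fr_states are (n_tokens, d) arrays of
# hidden states for English and French text from the same layer (the
# specific layer is an illustrative assumption, not a result from the paper).
# mu_en, mu_fr = language_mean(en_states), language_mean(fr_states)
# shifted = shift_to_language(en_states, mu_en, mu_fr)
```

In this framing, the language-sensitive information lives in the offset between language means, while whatever survives the mean-centering (e.g. position and part-of-speech structure) is the language-neutral component the abstract refers to.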

research (05/13/2023): The Geometry of Multilingual Language Models: An Equality Lens
Understanding the representations of different languages in multilingual...

research (09/16/2021): Locating Language-Specific Information in Contextualized Embeddings
Multilingual pretrained language models (MPLMs) exhibit multilinguality ...

research (05/25/2022): Discovering Language-neutral Sub-networks in Multilingual Language Models
Multilingual pre-trained language models perform remarkably well on cros...

research (12/23/2021): Do Multi-Lingual Pre-trained Language Models Reveal Consistent Token Attributions in Different Languages?
During the past several years, a surge of multi-lingual Pre-trained Lang...

research (04/09/2020): Learning to Scale Multilingual Representations for Vision-Language Tasks
Current multilingual vision-language models either require a large numbe...

research (07/04/2022): Unify and Conquer: How Phonetic Feature Representation Affects Polyglot Text-To-Speech (TTS)
An essential design decision for multilingual Neural Text-To-Speech (NTT...

research (09/11/2021): The Impact of Positional Encodings on Multilingual Compression
In order to preserve word-order information in a non-autoregressive sett...
