Measuring Memorization Effect in Word-Level Neural Networks Probing

06/29/2020
by Rudolf Rosa, et al.

Multiple studies have probed representations emerging in neural networks trained for end-to-end NLP tasks and examined what word-level linguistic information may be encoded in them. In classical probing, a classifier is trained on the representations to extract the target linguistic information. However, there is a risk that the classifier simply memorizes the linguistic labels of individual words instead of extracting linguistic abstractions from the representations, thus reporting false positive results. While considerable effort has been devoted to minimizing the memorization problem, the task of actually measuring the amount of memorization happening in the classifier has been understudied so far. In our work, we propose a simple general method for measuring the memorization effect, based on a symmetric selection of comparable sets of test words that were seen versus unseen in training. Our method can explicitly quantify the amount of memorization happening in a probing setup, so that an adequate setup can be chosen and the probing results can be interpreted with a reliability estimate. We exemplify this in a case study of probing for part of speech in a trained neural machine translation encoder.
