AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models

The objective of pre-trained language models is to learn contextual representations of textual data, and such models have become mainstream in natural language processing and code modeling. Using probes, a technique for studying the linguistic properties of hidden vector spaces, previous work has shown that these pre-trained language models encode simple linguistic properties in their hidden representations. However, no previous work has assessed whether these models encode the whole grammatical structure of a programming language. In this paper, we prove the existence of a syntactic subspace, lying within the hidden representations of pre-trained language models, that contains the syntactic information of the programming language. We show that this subspace can be extracted from the models' representations and define a novel probing method, the AST-Probe, that recovers the whole abstract syntax tree (AST) of an input code snippet. In our experiments, we show that this syntactic subspace exists in five state-of-the-art pre-trained language models. In addition, we highlight that the middle layers of the models are the ones that encode most of the AST information. Finally, we estimate the optimal size of this syntactic subspace and show that its dimension is substantially lower than that of the models' representation spaces. This suggests that pre-trained language models use only a small part of their representation spaces to encode the syntactic information of programming languages.
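To make the probing idea concrete, below is a minimal structural-probe-style sketch in PyTorch: a learned matrix projects frozen hidden states into a low-dimensional candidate subspace, and the probe is trained so that squared distances in that subspace match pairwise distances between tokens in the snippet's parsed AST. The class name, dimensions, objective, and placeholder data are illustrative assumptions, not the paper's exact formulation; the AST-Probe itself goes further and recovers the full labeled tree.

    import torch
    import torch.nn as nn

    class SyntacticSubspaceProbe(nn.Module):
        """Structural-probe-style sketch (names and dims are illustrative).

        A learned matrix projects a model's hidden vectors (dim n) into a
        low-dimensional candidate syntactic subspace (dim d << n) in which
        squared distances between token vectors are trained to match
        distances between the corresponding nodes in the code's AST.
        """

        def __init__(self, hidden_dim: int = 768, subspace_dim: int = 128):
            super().__init__()
            # Projection into the candidate syntactic subspace.
            self.proj = nn.Parameter(0.01 * torch.randn(hidden_dim, subspace_dim))

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            # h: (seq_len, hidden_dim) frozen hidden states for one snippet.
            z = h @ self.proj                       # (seq_len, subspace_dim)
            diff = z.unsqueeze(1) - z.unsqueeze(0)  # pairwise differences
            return (diff ** 2).sum(dim=-1)          # (seq_len, seq_len) distances

    # Illustrative training step: fit predicted distances to gold tree
    # distances derived from a parsed AST (random placeholders here).
    probe = SyntacticSubspaceProbe()
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
    hidden_states = torch.randn(20, 768)                # placeholder activations
    gold_dist = torch.randint(0, 10, (20, 20)).float()  # placeholder AST distances

    for _ in range(100):
        optimizer.zero_grad()
        loss = (probe(hidden_states) - gold_dist).abs().mean()  # L1 distance loss
        loss.backward()
        optimizer.step()

In this framing, the subspace dimension d is a hyperparameter; the paper's finding that the optimal d is far below the hidden size is what justifies calling it a small syntactic subspace. Recovering the full AST additionally requires predicting constituent labels, which this sketch omits.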


Related research

Benchmarking Language Models for Code Syntax Understanding (10/26/2022)
Pre-trained language models have demonstrated impressive performance in ...

Simple Text Detoxification by Identifying a Linear Toxic Subspace in Language Model Embeddings (12/15/2021)
Large pre-trained language models are often trained on large volumes of ...

Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing (06/04/2021)
Analysing whether neural language models encode linguistic information h...

Do Neural Language Models Show Preferences for Syntactic Formalisms? (04/29/2020)
Recent work on the interpretability of deep neural language models has c...

Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers (06/01/2023)
Recent advancements in pre-trained language models (PLMs) have demonstra...

Towards Foundational AI Models for Additive Manufacturing: Language Models for G-Code Debugging, Manipulation, and Comprehension (09/04/2023)
3D printing or additive manufacturing is a revolutionary technology that...

Overestimation of Syntactic Representation in Neural Language Models (04/10/2020)
With the advent of powerful neural language models over the last few yea...
