Code Prediction by Feeding Trees to Transformers

by Seohyun Kim, et al.
University of Wisconsin-Madison · Association for Computing Machinery · Columbia University

In this paper, we describe how to leverage the Transformer, a recent neural architecture for learning from sequential data (such as text), for code completion. As in natural language processing, Transformers surpass the prediction accuracy achievable by RNNs; we provide an experimental confirmation of this over a Python dataset. Furthermore, we show that the way to obtain even better accuracy from Transformers is to expose the syntactic structure of code, which is easily recovered by parsing, to the neural network. This works significantly better than presenting the code as a linear token sequence, which is how Transformers were originally intended to be used. To accomplish this, we propose a novel enhancement to the self-attention mechanism of the Transformer. We enable the mechanism to learn attention weights—that is, how much to focus on each preceding token in the input—not only on the basis of a token's value, but also on the basis of the spatial relationship between each pair of tokens, i.e., their relative positions in the abstract syntax tree. We provide a comprehensive experimental evaluation of our proposal, along with alternative design choices, on a standard Python dataset, as well as on a Python corpus internal to Facebook.
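The core idea above—biasing self-attention by the AST relationship between token pairs—can be illustrated with a minimal sketch. This is not the paper's exact parameterization: the function name `tree_relative_attention`, the single-head setup, and the use of a learned bias vector indexed by pairwise tree distance are all illustrative assumptions.

```python
import numpy as np

def tree_relative_attention(q, k, v, tree_dist, dist_bias):
    """Single-head causal self-attention with an additive bias derived
    from pairwise AST distances (an illustrative sketch, not the
    authors' exact formulation).

    q, k, v   : (n, d) query/key/value matrices for n tokens
    tree_dist : (n, n) integer matrix of AST distances between tokens
    dist_bias : learned bias vector, indexed by tree distance
    """
    d = q.shape[-1]
    # Content-based attention scores, as in a standard Transformer.
    scores = q @ k.T / np.sqrt(d)
    # Add a learned bias that depends only on the tree distance
    # between each pair of tokens.
    scores = scores + dist_bias[tree_dist]
    # Causal mask: each token may attend only to preceding tokens.
    n = scores.shape[0]
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores[future] = -np.inf
    # Softmax over the allowed positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical usage with random inputs.
rng = np.random.default_rng(0)
n, d = 4, 8
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
tree_dist = rng.integers(0, 5, size=(n, n))   # toy AST distances
dist_bias = rng.normal(size=5)                # one bias per distance bucket
out = tree_relative_attention(q, k, v, tree_dist, dist_bias)
```

In a trained model, `dist_bias` (or a richer pairwise relation embedding) would be learned jointly with the rest of the network, so the model can decide how much syntactic proximity in the AST should influence attention.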





Code Repositories


This repo will contain the replication package for the paper "Feeding Trees to Transformers for Code Completion".
