Can Machines Read Coding Manuals Yet? – A Benchmark for Building Better Language Models for Code Understanding

09/15/2021
by   Ibrahim Abdelaziz, et al.

Code understanding is an increasingly important application of Artificial Intelligence. A fundamental aspect of understanding code is understanding text about code, e.g., documentation and forum discussions. Pre-trained language models (e.g., BERT) are a popular approach to a variety of NLP tasks, and benchmarks such as GLUE now exist to drive the development of such models for natural language understanding. However, little is known about how well these models work on textual artifacts about code, and we are unaware of any systematic set of downstream tasks for such an evaluation. In this paper, we derive a set of benchmarks (BLANCA - Benchmarks for LANguage models on Coding Artifacts) that assess code understanding through tasks such as predicting the best answer to a question in a forum post, finding related forum posts, or predicting classes related in a hierarchy from class documentation. We evaluate the performance of current state-of-the-art language models on these tasks and show that fine-tuning yields a significant improvement on each task. We also show that multi-task training over BLANCA tasks helps build better language models for code understanding.
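As a toy illustration of the answer-ranking setup described above (not the paper's method — BLANCA evaluates pre-trained transformer language models), best-answer prediction can be framed as ranking a post's candidate answers by their similarity to the question. A minimal bag-of-words cosine baseline, with hypothetical forum data:

```python
import math
from collections import Counter

def bow_vector(text):
    """Lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_answers(question, answers):
    """Return candidate answers ordered by descending similarity to the question."""
    q = bow_vector(question)
    return sorted(answers, key=lambda ans: cosine(q, bow_vector(ans)), reverse=True)

# Hypothetical forum post: pick the most relevant answer to a coding question.
question = "how do I sort a python list in reverse order"
answers = [
    "use regular expressions to match the pattern",
    "call sorted with reverse=True to sort the list in reverse order",
    "java streams can be collected into a list",
]
best = rank_answers(question, answers)[0]
```

A fine-tuned language model replaces the bag-of-words vectors with learned contextual representations, which is where the improvements reported in the paper come from.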


