How could Neural Networks understand Programs?

05/10/2021
by Dinglan Peng, et al.

Semantic understanding of programs is a fundamental problem for programming language processing (PLP). Recent works that learn representations of code based on pre-training techniques from NLP have pushed the frontiers in this direction. However, the semantics of programming languages (PL) and natural languages (NL) differ in essential ways. If these differences are ignored, we believe it is difficult to build a model that truly understands programs, whether by directly applying off-the-shelf NLP pre-training techniques to source code or by heuristically adding features to the model. In fact, the semantics of a program can be rigorously defined by formal semantics in PL theory. For example, operational semantics describes the meaning of a valid program as a sequence of updates to the environment (i.e., the memory address-value function) through fundamental operations such as memory I/O and conditional branching. Inspired by this, we propose a novel program semantics learning paradigm in which the model learns from (1) representations that align well with the fundamental operations in operational semantics, and (2) information about environment transitions, which is indispensable for program understanding. To validate our proposal, we present OSCAR, a hierarchical Transformer-based pre-training model designed to better facilitate program understanding. OSCAR learns from intermediate representation (IR), which captures the fundamental operations, and from an encoded representation derived from static analysis, which approximates the environment transitions. Empirically, OSCAR shows a strong capability for program semantics understanding on many practical software engineering tasks.
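To make the operational-semantics view concrete, here is a minimal sketch of a small-step interpreter that treats a program's meaning as a trace of environments (address-value maps) produced by fundamental operations such as memory writes and conditional branches. The opcodes, the interpreter, and the example program are our own illustrative assumptions, not OSCAR's actual IR or the paper's formalism.

```python
from typing import Dict, List, Tuple

Env = Dict[str, int]        # environment: memory address/register name -> value
Instr = Tuple[str, ...]     # (opcode, operands...)

def step(env: Env, pc: int, prog: List[Instr]) -> Tuple[Env, int]:
    """One small-step transition: (env, pc) -> (env', pc')."""
    op = prog[pc]
    if op[0] == "store":    # store <addr> <const>: memory write
        _, addr, value = op
        return {**env, addr: int(value)}, pc + 1
    if op[0] == "add":      # add <dst> <a> <b>: read two cells, write their sum
        _, dst, a, b = op
        return {**env, dst: env[a] + env[b]}, pc + 1
    if op[0] == "br_lt":    # br_lt <a> <b> <target>: conditional branch
        _, a, b, target = op
        return env, (int(target) if env[a] < env[b] else pc + 1)
    raise ValueError(f"unknown opcode {op[0]!r}")

def run(prog: List[Instr]) -> List[Env]:
    """Execute until pc leaves the program; return the trace of environments."""
    env: Env = {}
    pc = 0
    trace = [env]
    while 0 <= pc < len(prog):
        env, pc = step(env, pc, prog)
        trace.append(env)
    return trace

# Example: accumulate 0 + 1 + 2 into "acc"; the environment trace is what the
# operational-semantics view takes as the program's meaning.
program: List[Instr] = [
    ("store", "i", "0"),
    ("store", "acc", "0"),
    ("store", "limit", "3"),
    ("add", "acc", "acc", "i"),      # loop body starts at pc = 3
    ("store", "one", "1"),
    ("add", "i", "i", "one"),
    ("br_lt", "i", "limit", "3"),    # loop while i < limit
]

if __name__ == "__main__":
    for t, e in enumerate(run(program)):
        print(t, e)
```

In this reading, two syntactically different programs that induce the same environment traces have the same meaning, which is why the paper argues that representations aligned with such fundamental operations, plus information about environment transitions, are better signals for pre-training than surface source text alone.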


